00:00:00.001 Started by upstream project "autotest-nightly" build number 4338 00:00:00.001 originally caused by: 00:00:00.002 Started by upstream project "nightly-trigger" build number 3701 00:00:00.002 originally caused by: 00:00:00.002 Started by timer 00:00:00.163 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.164 The recommended git tool is: git 00:00:00.164 using credential 00000000-0000-0000-0000-000000000002 00:00:00.166 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.223 Fetching changes from the remote Git repository 00:00:00.227 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.278 Using shallow fetch with depth 1 00:00:00.278 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.278 > git --version # timeout=10 00:00:00.318 > git --version # 'git version 2.39.2' 00:00:00.318 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.351 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.351 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:08.565 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:08.578 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:08.596 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD) 00:00:08.596 > git config core.sparsecheckout # timeout=10 00:00:08.610 > git read-tree -mu HEAD # timeout=10 00:00:08.627 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5 00:00:08.650 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag" 00:00:08.651 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10 00:00:08.756 [Pipeline] Start of Pipeline 00:00:08.771 [Pipeline] library 00:00:08.772 Loading library shm_lib@master 00:00:08.773 Library shm_lib@master is cached. Copying from home. 00:00:08.787 [Pipeline] node 00:00:08.796 Running on GP11 in /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:08.797 [Pipeline] { 00:00:08.810 [Pipeline] catchError 00:00:08.812 [Pipeline] { 00:00:08.824 [Pipeline] wrap 00:00:08.831 [Pipeline] { 00:00:08.837 [Pipeline] stage 00:00:08.841 [Pipeline] { (Prologue) 00:00:09.046 [Pipeline] sh 00:00:09.323 + logger -p user.info -t JENKINS-CI 00:00:09.345 [Pipeline] echo 00:00:09.347 Node: GP11 00:00:09.369 [Pipeline] sh 00:00:09.677 [Pipeline] setCustomBuildProperty 00:00:09.691 [Pipeline] echo 00:00:09.692 Cleanup processes 00:00:09.697 [Pipeline] sh 00:00:09.982 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:09.982 2740326 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:09.995 [Pipeline] sh 00:00:10.281 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:10.281 ++ awk '{print $1}' 00:00:10.281 ++ grep -v 'sudo pgrep' 00:00:10.281 + sudo kill -9 00:00:10.281 + true 00:00:10.297 [Pipeline] cleanWs 00:00:10.307 [WS-CLEANUP] Deleting project workspace... 00:00:10.307 [WS-CLEANUP] Deferred wipeout is used... 
00:00:10.314 [WS-CLEANUP] done 00:00:10.318 [Pipeline] setCustomBuildProperty 00:00:10.333 [Pipeline] sh 00:00:10.617 + sudo git config --global --replace-all safe.directory '*' 00:00:10.707 [Pipeline] httpRequest 00:00:11.312 [Pipeline] echo 00:00:11.315 Sorcerer 10.211.164.20 is alive 00:00:11.326 [Pipeline] retry 00:00:11.328 [Pipeline] { 00:00:11.342 [Pipeline] httpRequest 00:00:11.346 HttpMethod: GET 00:00:11.346 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:11.347 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:11.374 Response Code: HTTP/1.1 200 OK 00:00:11.374 Success: Status code 200 is in the accepted range: 200,404 00:00:11.374 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:31.029 [Pipeline] } 00:00:31.047 [Pipeline] // retry 00:00:31.056 [Pipeline] sh 00:00:31.339 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:31.354 [Pipeline] httpRequest 00:00:31.760 [Pipeline] echo 00:00:31.762 Sorcerer 10.211.164.20 is alive 00:00:31.771 [Pipeline] retry 00:00:31.773 [Pipeline] { 00:00:31.786 [Pipeline] httpRequest 00:00:31.791 HttpMethod: GET 00:00:31.792 URL: http://10.211.164.20/packages/spdk_a5e6ecf28fd8e9a86690362af173cd2cf51891ee.tar.gz 00:00:31.792 Sending request to url: http://10.211.164.20/packages/spdk_a5e6ecf28fd8e9a86690362af173cd2cf51891ee.tar.gz 00:00:31.814 Response Code: HTTP/1.1 200 OK 00:00:31.814 Success: Status code 200 is in the accepted range: 200,404 00:00:31.814 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_a5e6ecf28fd8e9a86690362af173cd2cf51891ee.tar.gz 00:01:25.780 [Pipeline] } 00:01:25.793 [Pipeline] // retry 00:01:25.801 [Pipeline] sh 00:01:26.088 + tar --no-same-owner -xf spdk_a5e6ecf28fd8e9a86690362af173cd2cf51891ee.tar.gz 00:01:28.631 [Pipeline] sh 00:01:28.917 + git -C spdk log --oneline -n5 00:01:28.917 a5e6ecf28 lib/reduce: Data copy logic in thin read operations 00:01:28.917 a333974e5 nvme/rdma: Flush queued send WRs when disconnecting a qpair 00:01:28.917 2b8672176 nvme/rdma: Prevent submitting new recv WR when disconnecting 00:01:28.917 e2dfdf06c accel/mlx5: Register post_poller handler 00:01:28.917 3c8001115 accel/mlx5: More precise condition to update DB 00:01:28.929 [Pipeline] } 00:01:28.942 [Pipeline] // stage 00:01:28.951 [Pipeline] stage 00:01:28.954 [Pipeline] { (Prepare) 00:01:28.968 [Pipeline] writeFile 00:01:28.980 [Pipeline] sh 00:01:29.262 + logger -p user.info -t JENKINS-CI 00:01:29.276 [Pipeline] sh 00:01:29.563 + logger -p user.info -t JENKINS-CI 00:01:29.576 [Pipeline] sh 00:01:29.864 + cat autorun-spdk.conf 00:01:29.864 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:29.864 SPDK_TEST_NVMF=1 00:01:29.864 SPDK_TEST_NVME_CLI=1 00:01:29.864 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:29.864 SPDK_TEST_NVMF_NICS=e810 00:01:29.864 SPDK_RUN_ASAN=1 00:01:29.864 SPDK_RUN_UBSAN=1 00:01:29.864 NET_TYPE=phy 00:01:29.873 RUN_NIGHTLY=1 00:01:29.878 [Pipeline] readFile 00:01:29.906 [Pipeline] withEnv 00:01:29.909 [Pipeline] { 00:01:29.924 [Pipeline] sh 00:01:30.214 + set -ex 00:01:30.214 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]] 00:01:30.214 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:30.214 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:30.214 ++ SPDK_TEST_NVMF=1 00:01:30.214 ++ SPDK_TEST_NVME_CLI=1 00:01:30.214 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:30.214 ++ 
SPDK_TEST_NVMF_NICS=e810 00:01:30.214 ++ SPDK_RUN_ASAN=1 00:01:30.214 ++ SPDK_RUN_UBSAN=1 00:01:30.214 ++ NET_TYPE=phy 00:01:30.214 ++ RUN_NIGHTLY=1 00:01:30.214 + case $SPDK_TEST_NVMF_NICS in 00:01:30.214 + DRIVERS=ice 00:01:30.214 + [[ tcp == \r\d\m\a ]] 00:01:30.214 + [[ -n ice ]] 00:01:30.214 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:01:30.214 rmmod: ERROR: Module mlx4_ib is not currently loaded 00:01:30.215 rmmod: ERROR: Module mlx5_ib is not currently loaded 00:01:30.215 rmmod: ERROR: Module irdma is not currently loaded 00:01:30.215 rmmod: ERROR: Module i40iw is not currently loaded 00:01:30.215 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:01:30.215 + true 00:01:30.215 + for D in $DRIVERS 00:01:30.215 + sudo modprobe ice 00:01:30.215 + exit 0 00:01:30.225 [Pipeline] } 00:01:30.243 [Pipeline] // withEnv 00:01:30.249 [Pipeline] } 00:01:30.265 [Pipeline] // stage 00:01:30.275 [Pipeline] catchError 00:01:30.277 [Pipeline] { 00:01:30.292 [Pipeline] timeout 00:01:30.292 Timeout set to expire in 1 hr 0 min 00:01:30.294 [Pipeline] { 00:01:30.308 [Pipeline] stage 00:01:30.310 [Pipeline] { (Tests) 00:01:30.325 [Pipeline] sh 00:01:30.612 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:30.612 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:30.612 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:30.612 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]] 00:01:30.612 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:30.612 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:01:30.612 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]] 00:01:30.612 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:01:30.612 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:01:30.612 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:01:30.612 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]] 00:01:30.612 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:30.613 + source /etc/os-release 00:01:30.613 ++ NAME='Fedora Linux' 00:01:30.613 ++ VERSION='39 (Cloud Edition)' 00:01:30.613 ++ ID=fedora 00:01:30.613 ++ VERSION_ID=39 00:01:30.613 ++ VERSION_CODENAME= 00:01:30.613 ++ PLATFORM_ID=platform:f39 00:01:30.613 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:01:30.613 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:30.613 ++ LOGO=fedora-logo-icon 00:01:30.613 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:01:30.613 ++ HOME_URL=https://fedoraproject.org/ 00:01:30.613 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:01:30.613 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:30.613 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:30.613 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:30.613 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:01:30.613 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:30.613 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:01:30.613 ++ SUPPORT_END=2024-11-12 00:01:30.613 ++ VARIANT='Cloud Edition' 00:01:30.613 ++ VARIANT_ID=cloud 00:01:30.613 + uname -a 00:01:30.613 Linux spdk-gp-11 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:01:30.613 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:01:31.553 Hugepages 00:01:31.553 node hugesize free / total 00:01:31.553 node0 1048576kB 0 / 0 00:01:31.553 node0 2048kB 0 / 0 00:01:31.553 node1 1048576kB 0 / 0 00:01:31.553 node1 2048kB 0 / 0 00:01:31.553 
00:01:31.553 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:31.553 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - - 00:01:31.553 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - - 00:01:31.553 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - - 00:01:31.553 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - - 00:01:31.553 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - - 00:01:31.812 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - - 00:01:31.812 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - - 00:01:31.812 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - - 00:01:31.812 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - - 00:01:31.812 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - - 00:01:31.812 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - - 00:01:31.812 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - - 00:01:31.812 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - - 00:01:31.812 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - - 00:01:31.812 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - - 00:01:31.812 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - - 00:01:31.812 NVMe 0000:88:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:01:31.812 + rm -f /tmp/spdk-ld-path 00:01:31.812 + source autorun-spdk.conf 00:01:31.812 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:31.812 ++ SPDK_TEST_NVMF=1 00:01:31.812 ++ SPDK_TEST_NVME_CLI=1 00:01:31.812 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:31.812 ++ SPDK_TEST_NVMF_NICS=e810 00:01:31.812 ++ SPDK_RUN_ASAN=1 00:01:31.812 ++ SPDK_RUN_UBSAN=1 00:01:31.812 ++ NET_TYPE=phy 00:01:31.812 ++ RUN_NIGHTLY=1 00:01:31.812 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:31.812 + [[ -n '' ]] 00:01:31.812 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:31.812 + for M in /var/spdk/build-*-manifest.txt 00:01:31.812 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:01:31.812 + cp /var/spdk/build-kernel-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:31.812 + for M in /var/spdk/build-*-manifest.txt 00:01:31.812 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:31.812 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:31.812 + for M in /var/spdk/build-*-manifest.txt 00:01:31.812 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:31.812 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:31.812 ++ uname 00:01:31.812 + [[ Linux == \L\i\n\u\x ]] 00:01:31.812 + sudo dmesg -T 00:01:31.812 + sudo dmesg --clear 00:01:31.812 + dmesg_pid=2741118 00:01:31.812 + [[ Fedora Linux == FreeBSD ]] 00:01:31.812 + sudo dmesg -Tw 00:01:31.812 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:31.812 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:31.812 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:31.812 + [[ -x /usr/src/fio-static/fio ]] 00:01:31.812 + export FIO_BIN=/usr/src/fio-static/fio 00:01:31.812 + FIO_BIN=/usr/src/fio-static/fio 00:01:31.812 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:31.812 + [[ ! 
-v VFIO_QEMU_BIN ]] 00:01:31.812 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:31.812 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:31.812 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:31.812 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:31.812 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:31.812 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:31.812 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:31.812 00:39:04 -- common/autotest_common.sh@1710 -- $ [[ n == y ]] 00:01:31.812 00:39:04 -- spdk/autorun.sh@20 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:31.812 00:39:04 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:31.812 00:39:04 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@2 -- $ SPDK_TEST_NVMF=1 00:01:31.812 00:39:04 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@3 -- $ SPDK_TEST_NVME_CLI=1 00:01:31.812 00:39:04 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@4 -- $ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:31.812 00:39:04 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@5 -- $ SPDK_TEST_NVMF_NICS=e810 00:01:31.812 00:39:04 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@6 -- $ SPDK_RUN_ASAN=1 00:01:31.812 00:39:04 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@7 -- $ SPDK_RUN_UBSAN=1 00:01:31.812 00:39:04 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@8 -- $ NET_TYPE=phy 00:01:31.812 00:39:04 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@9 -- $ RUN_NIGHTLY=1 00:01:31.812 00:39:04 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 00:01:31.812 00:39:04 -- spdk/autorun.sh@25 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autobuild.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:31.812 00:39:04 -- common/autotest_common.sh@1710 -- $ [[ n == y ]] 00:01:31.812 00:39:04 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:01:31.812 00:39:04 -- scripts/common.sh@15 -- $ shopt -s extglob 00:01:31.812 00:39:04 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:31.812 00:39:04 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:31.812 00:39:04 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:31.812 00:39:04 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:31.813 00:39:04 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:31.813 00:39:04 -- paths/export.sh@4 -- $ 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:31.813 00:39:04 -- paths/export.sh@5 -- $ export PATH 00:01:31.813 00:39:04 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:32.074 00:39:04 -- common/autobuild_common.sh@492 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:01:32.074 00:39:04 -- common/autobuild_common.sh@493 -- $ date +%s 00:01:32.074 00:39:04 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1733441944.XXXXXX 00:01:32.074 00:39:04 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1733441944.tnrZda 00:01:32.074 00:39:04 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]] 00:01:32.074 00:39:04 -- common/autobuild_common.sh@499 -- $ '[' -n '' ']' 00:01:32.074 00:39:04 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:01:32.074 00:39:04 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:01:32.074 00:39:04 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:01:32.074 00:39:04 -- common/autobuild_common.sh@509 -- $ get_config_params 00:01:32.074 00:39:04 -- common/autotest_common.sh@409 -- $ xtrace_disable 00:01:32.074 00:39:04 -- common/autotest_common.sh@10 -- $ set +x 00:01:32.074 00:39:04 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk' 00:01:32.074 00:39:04 -- common/autobuild_common.sh@511 -- $ start_monitor_resources 00:01:32.074 00:39:04 -- pm/common@17 -- $ local monitor 00:01:32.074 00:39:04 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:32.074 00:39:04 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:32.074 00:39:04 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:32.074 00:39:04 -- pm/common@21 -- $ date +%s 00:01:32.074 00:39:04 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:32.074 00:39:04 -- pm/common@21 -- $ date +%s 00:01:32.074 00:39:04 -- pm/common@25 -- $ sleep 1 00:01:32.074 00:39:04 -- pm/common@21 -- $ date +%s 00:01:32.074 00:39:04 -- pm/common@21 -- $ date +%s 00:01:32.074 00:39:04 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1733441944 00:01:32.074 00:39:04 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1733441944 00:01:32.074 00:39:04 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1733441944 00:01:32.074 00:39:04 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1733441944 00:01:32.074 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1733441944_collect-cpu-load.pm.log 00:01:32.074 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1733441944_collect-vmstat.pm.log 00:01:32.074 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1733441944_collect-cpu-temp.pm.log 00:01:32.074 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1733441944_collect-bmc-pm.bmc.pm.log 00:01:33.049 00:39:05 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT 00:01:33.049 00:39:05 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:33.049 00:39:05 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:33.049 00:39:05 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:33.049 00:39:05 -- spdk/autobuild.sh@16 -- $ date -u 00:01:33.049 Thu Dec 5 11:39:05 PM UTC 2024 00:01:33.049 00:39:05 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:33.049 v25.01-pre-303-ga5e6ecf28 00:01:33.049 00:39:05 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']' 00:01:33.049 00:39:05 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan' 00:01:33.049 00:39:05 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:01:33.049 00:39:05 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:01:33.049 00:39:05 -- common/autotest_common.sh@10 -- $ set +x 00:01:33.049 ************************************ 00:01:33.049 START TEST asan 00:01:33.049 ************************************ 00:01:33.049 00:39:05 asan -- common/autotest_common.sh@1129 -- $ echo 'using asan' 00:01:33.049 using asan 00:01:33.049 00:01:33.049 real 0m0.000s 00:01:33.049 user 0m0.000s 00:01:33.049 sys 0m0.000s 00:01:33.049 00:39:05 asan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:01:33.049 00:39:05 asan -- common/autotest_common.sh@10 -- $ set +x 00:01:33.049 ************************************ 00:01:33.049 END TEST asan 00:01:33.049 ************************************ 00:01:33.049 00:39:05 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:33.049 00:39:05 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:33.049 00:39:05 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:01:33.049 00:39:05 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:01:33.049 00:39:05 -- common/autotest_common.sh@10 -- $ set +x 00:01:33.049 ************************************ 00:01:33.049 START TEST ubsan 00:01:33.049 ************************************ 00:01:33.050 00:39:05 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan' 00:01:33.050 using ubsan 00:01:33.050 
00:01:33.050 real 0m0.000s 00:01:33.050 user 0m0.000s 00:01:33.050 sys 0m0.000s 00:01:33.050 00:39:05 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:01:33.050 00:39:05 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:01:33.050 ************************************ 00:01:33.050 END TEST ubsan 00:01:33.050 ************************************ 00:01:33.050 00:39:05 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:01:33.050 00:39:05 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:01:33.050 00:39:05 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:01:33.050 00:39:05 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:01:33.050 00:39:05 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:01:33.050 00:39:05 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:01:33.050 00:39:05 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:01:33.050 00:39:05 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:01:33.050 00:39:05 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-shared 00:01:33.050 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:01:33.050 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:01:33.308 Using 'verbs' RDMA provider 00:01:44.227 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done. 00:01:54.243 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:01:54.243 Creating mk/config.mk...done. 00:01:54.243 Creating mk/cc.flags.mk...done. 00:01:54.243 Type 'make' to build. 00:01:54.243 00:39:26 -- spdk/autobuild.sh@70 -- $ run_test make make -j48 00:01:54.243 00:39:26 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:01:54.243 00:39:26 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:01:54.243 00:39:26 -- common/autotest_common.sh@10 -- $ set +x 00:01:54.243 ************************************ 00:01:54.243 START TEST make 00:01:54.243 ************************************ 00:01:54.243 00:39:26 make -- common/autotest_common.sh@1129 -- $ make -j48 00:01:54.243 make[1]: Nothing to be done for 'all'. 
00:02:04.270 The Meson build system 00:02:04.270 Version: 1.5.0 00:02:04.270 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk 00:02:04.270 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp 00:02:04.270 Build type: native build 00:02:04.270 Program cat found: YES (/usr/bin/cat) 00:02:04.270 Project name: DPDK 00:02:04.270 Project version: 24.03.0 00:02:04.270 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:04.270 C linker for the host machine: cc ld.bfd 2.40-14 00:02:04.270 Host machine cpu family: x86_64 00:02:04.270 Host machine cpu: x86_64 00:02:04.270 Message: ## Building in Developer Mode ## 00:02:04.270 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:04.270 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:02:04.270 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:02:04.270 Program python3 found: YES (/usr/bin/python3) 00:02:04.270 Program cat found: YES (/usr/bin/cat) 00:02:04.270 Compiler for C supports arguments -march=native: YES 00:02:04.270 Checking for size of "void *" : 8 00:02:04.270 Checking for size of "void *" : 8 (cached) 00:02:04.270 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:02:04.270 Library m found: YES 00:02:04.270 Library numa found: YES 00:02:04.270 Has header "numaif.h" : YES 00:02:04.270 Library fdt found: NO 00:02:04.270 Library execinfo found: NO 00:02:04.270 Has header "execinfo.h" : YES 00:02:04.270 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:04.270 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:04.270 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:04.270 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:04.270 Run-time dependency openssl found: YES 3.1.1 00:02:04.270 Run-time dependency libpcap found: YES 1.10.4 00:02:04.270 Has header "pcap.h" with dependency libpcap: YES 00:02:04.270 Compiler for C supports arguments -Wcast-qual: YES 00:02:04.270 Compiler for C supports arguments -Wdeprecated: YES 00:02:04.270 Compiler for C supports arguments -Wformat: YES 00:02:04.270 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:04.270 Compiler for C supports arguments -Wformat-security: NO 00:02:04.270 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:04.270 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:04.270 Compiler for C supports arguments -Wnested-externs: YES 00:02:04.270 Compiler for C supports arguments -Wold-style-definition: YES 00:02:04.270 Compiler for C supports arguments -Wpointer-arith: YES 00:02:04.270 Compiler for C supports arguments -Wsign-compare: YES 00:02:04.270 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:04.270 Compiler for C supports arguments -Wundef: YES 00:02:04.270 Compiler for C supports arguments -Wwrite-strings: YES 00:02:04.270 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:04.270 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:04.270 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:04.270 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:04.270 Program objdump found: YES (/usr/bin/objdump) 00:02:04.270 Compiler for C supports arguments -mavx512f: YES 00:02:04.270 Checking if "AVX512 checking" compiles: YES 
00:02:04.270 Fetching value of define "__SSE4_2__" : 1 00:02:04.270 Fetching value of define "__AES__" : 1 00:02:04.270 Fetching value of define "__AVX__" : 1 00:02:04.270 Fetching value of define "__AVX2__" : (undefined) 00:02:04.270 Fetching value of define "__AVX512BW__" : (undefined) 00:02:04.270 Fetching value of define "__AVX512CD__" : (undefined) 00:02:04.270 Fetching value of define "__AVX512DQ__" : (undefined) 00:02:04.270 Fetching value of define "__AVX512F__" : (undefined) 00:02:04.270 Fetching value of define "__AVX512VL__" : (undefined) 00:02:04.270 Fetching value of define "__PCLMUL__" : 1 00:02:04.270 Fetching value of define "__RDRND__" : 1 00:02:04.270 Fetching value of define "__RDSEED__" : (undefined) 00:02:04.270 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:04.270 Fetching value of define "__znver1__" : (undefined) 00:02:04.270 Fetching value of define "__znver2__" : (undefined) 00:02:04.270 Fetching value of define "__znver3__" : (undefined) 00:02:04.270 Fetching value of define "__znver4__" : (undefined) 00:02:04.270 Library asan found: YES 00:02:04.270 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:04.270 Message: lib/log: Defining dependency "log" 00:02:04.270 Message: lib/kvargs: Defining dependency "kvargs" 00:02:04.270 Message: lib/telemetry: Defining dependency "telemetry" 00:02:04.270 Library rt found: YES 00:02:04.270 Checking for function "getentropy" : NO 00:02:04.270 Message: lib/eal: Defining dependency "eal" 00:02:04.270 Message: lib/ring: Defining dependency "ring" 00:02:04.270 Message: lib/rcu: Defining dependency "rcu" 00:02:04.270 Message: lib/mempool: Defining dependency "mempool" 00:02:04.270 Message: lib/mbuf: Defining dependency "mbuf" 00:02:04.270 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:04.270 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:04.270 Compiler for C supports arguments -mpclmul: YES 00:02:04.270 Compiler for C supports arguments -maes: YES 00:02:04.270 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:04.270 Compiler for C supports arguments -mavx512bw: YES 00:02:04.270 Compiler for C supports arguments -mavx512dq: YES 00:02:04.270 Compiler for C supports arguments -mavx512vl: YES 00:02:04.270 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:04.270 Compiler for C supports arguments -mavx2: YES 00:02:04.270 Compiler for C supports arguments -mavx: YES 00:02:04.270 Message: lib/net: Defining dependency "net" 00:02:04.270 Message: lib/meter: Defining dependency "meter" 00:02:04.270 Message: lib/ethdev: Defining dependency "ethdev" 00:02:04.270 Message: lib/pci: Defining dependency "pci" 00:02:04.270 Message: lib/cmdline: Defining dependency "cmdline" 00:02:04.270 Message: lib/hash: Defining dependency "hash" 00:02:04.270 Message: lib/timer: Defining dependency "timer" 00:02:04.270 Message: lib/compressdev: Defining dependency "compressdev" 00:02:04.270 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:04.270 Message: lib/dmadev: Defining dependency "dmadev" 00:02:04.270 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:04.270 Message: lib/power: Defining dependency "power" 00:02:04.270 Message: lib/reorder: Defining dependency "reorder" 00:02:04.270 Message: lib/security: Defining dependency "security" 00:02:04.270 Has header "linux/userfaultfd.h" : YES 00:02:04.270 Has header "linux/vduse.h" : YES 00:02:04.270 Message: lib/vhost: Defining dependency "vhost" 00:02:04.270 Compiler for C supports arguments 
-Wno-format-truncation: YES (cached) 00:02:04.270 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:04.270 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:04.270 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:04.270 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:02:04.270 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:02:04.270 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:02:04.270 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:02:04.270 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:02:04.270 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:02:04.270 Program doxygen found: YES (/usr/local/bin/doxygen) 00:02:04.270 Configuring doxy-api-html.conf using configuration 00:02:04.270 Configuring doxy-api-man.conf using configuration 00:02:04.270 Program mandb found: YES (/usr/bin/mandb) 00:02:04.270 Program sphinx-build found: NO 00:02:04.270 Configuring rte_build_config.h using configuration 00:02:04.270 Message: 00:02:04.270 ================= 00:02:04.270 Applications Enabled 00:02:04.270 ================= 00:02:04.270 00:02:04.270 apps: 00:02:04.270 00:02:04.270 00:02:04.270 Message: 00:02:04.270 ================= 00:02:04.270 Libraries Enabled 00:02:04.270 ================= 00:02:04.270 00:02:04.270 libs: 00:02:04.270 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:04.270 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:02:04.271 cryptodev, dmadev, power, reorder, security, vhost, 00:02:04.271 00:02:04.271 Message: 00:02:04.271 =============== 00:02:04.271 Drivers Enabled 00:02:04.271 =============== 00:02:04.271 00:02:04.271 common: 00:02:04.271 00:02:04.271 bus: 00:02:04.271 pci, vdev, 00:02:04.271 mempool: 00:02:04.271 ring, 00:02:04.271 dma: 00:02:04.271 00:02:04.271 net: 00:02:04.271 00:02:04.271 crypto: 00:02:04.271 00:02:04.271 compress: 00:02:04.271 00:02:04.271 vdpa: 00:02:04.271 00:02:04.271 00:02:04.271 Message: 00:02:04.271 ================= 00:02:04.271 Content Skipped 00:02:04.271 ================= 00:02:04.271 00:02:04.271 apps: 00:02:04.271 dumpcap: explicitly disabled via build config 00:02:04.271 graph: explicitly disabled via build config 00:02:04.271 pdump: explicitly disabled via build config 00:02:04.271 proc-info: explicitly disabled via build config 00:02:04.271 test-acl: explicitly disabled via build config 00:02:04.271 test-bbdev: explicitly disabled via build config 00:02:04.271 test-cmdline: explicitly disabled via build config 00:02:04.271 test-compress-perf: explicitly disabled via build config 00:02:04.271 test-crypto-perf: explicitly disabled via build config 00:02:04.271 test-dma-perf: explicitly disabled via build config 00:02:04.271 test-eventdev: explicitly disabled via build config 00:02:04.271 test-fib: explicitly disabled via build config 00:02:04.271 test-flow-perf: explicitly disabled via build config 00:02:04.271 test-gpudev: explicitly disabled via build config 00:02:04.271 test-mldev: explicitly disabled via build config 00:02:04.271 test-pipeline: explicitly disabled via build config 00:02:04.271 test-pmd: explicitly disabled via build config 00:02:04.271 test-regex: explicitly disabled via build config 00:02:04.271 test-sad: explicitly disabled via build config 00:02:04.271 test-security-perf: explicitly disabled via build config 00:02:04.271 00:02:04.271 libs: 00:02:04.271 argparse: explicitly 
disabled via build config 00:02:04.271 metrics: explicitly disabled via build config 00:02:04.271 acl: explicitly disabled via build config 00:02:04.271 bbdev: explicitly disabled via build config 00:02:04.271 bitratestats: explicitly disabled via build config 00:02:04.271 bpf: explicitly disabled via build config 00:02:04.271 cfgfile: explicitly disabled via build config 00:02:04.271 distributor: explicitly disabled via build config 00:02:04.271 efd: explicitly disabled via build config 00:02:04.271 eventdev: explicitly disabled via build config 00:02:04.271 dispatcher: explicitly disabled via build config 00:02:04.271 gpudev: explicitly disabled via build config 00:02:04.271 gro: explicitly disabled via build config 00:02:04.271 gso: explicitly disabled via build config 00:02:04.271 ip_frag: explicitly disabled via build config 00:02:04.271 jobstats: explicitly disabled via build config 00:02:04.271 latencystats: explicitly disabled via build config 00:02:04.271 lpm: explicitly disabled via build config 00:02:04.271 member: explicitly disabled via build config 00:02:04.271 pcapng: explicitly disabled via build config 00:02:04.271 rawdev: explicitly disabled via build config 00:02:04.271 regexdev: explicitly disabled via build config 00:02:04.271 mldev: explicitly disabled via build config 00:02:04.271 rib: explicitly disabled via build config 00:02:04.271 sched: explicitly disabled via build config 00:02:04.271 stack: explicitly disabled via build config 00:02:04.271 ipsec: explicitly disabled via build config 00:02:04.271 pdcp: explicitly disabled via build config 00:02:04.271 fib: explicitly disabled via build config 00:02:04.271 port: explicitly disabled via build config 00:02:04.271 pdump: explicitly disabled via build config 00:02:04.271 table: explicitly disabled via build config 00:02:04.271 pipeline: explicitly disabled via build config 00:02:04.271 graph: explicitly disabled via build config 00:02:04.271 node: explicitly disabled via build config 00:02:04.271 00:02:04.271 drivers: 00:02:04.271 common/cpt: not in enabled drivers build config 00:02:04.271 common/dpaax: not in enabled drivers build config 00:02:04.271 common/iavf: not in enabled drivers build config 00:02:04.271 common/idpf: not in enabled drivers build config 00:02:04.271 common/ionic: not in enabled drivers build config 00:02:04.271 common/mvep: not in enabled drivers build config 00:02:04.271 common/octeontx: not in enabled drivers build config 00:02:04.271 bus/auxiliary: not in enabled drivers build config 00:02:04.271 bus/cdx: not in enabled drivers build config 00:02:04.271 bus/dpaa: not in enabled drivers build config 00:02:04.271 bus/fslmc: not in enabled drivers build config 00:02:04.271 bus/ifpga: not in enabled drivers build config 00:02:04.271 bus/platform: not in enabled drivers build config 00:02:04.271 bus/uacce: not in enabled drivers build config 00:02:04.271 bus/vmbus: not in enabled drivers build config 00:02:04.271 common/cnxk: not in enabled drivers build config 00:02:04.271 common/mlx5: not in enabled drivers build config 00:02:04.271 common/nfp: not in enabled drivers build config 00:02:04.271 common/nitrox: not in enabled drivers build config 00:02:04.271 common/qat: not in enabled drivers build config 00:02:04.271 common/sfc_efx: not in enabled drivers build config 00:02:04.271 mempool/bucket: not in enabled drivers build config 00:02:04.271 mempool/cnxk: not in enabled drivers build config 00:02:04.271 mempool/dpaa: not in enabled drivers build config 00:02:04.271 mempool/dpaa2: not in 
enabled drivers build config 00:02:04.271 mempool/octeontx: not in enabled drivers build config 00:02:04.271 mempool/stack: not in enabled drivers build config 00:02:04.271 dma/cnxk: not in enabled drivers build config 00:02:04.271 dma/dpaa: not in enabled drivers build config 00:02:04.271 dma/dpaa2: not in enabled drivers build config 00:02:04.271 dma/hisilicon: not in enabled drivers build config 00:02:04.271 dma/idxd: not in enabled drivers build config 00:02:04.271 dma/ioat: not in enabled drivers build config 00:02:04.271 dma/skeleton: not in enabled drivers build config 00:02:04.271 net/af_packet: not in enabled drivers build config 00:02:04.271 net/af_xdp: not in enabled drivers build config 00:02:04.271 net/ark: not in enabled drivers build config 00:02:04.271 net/atlantic: not in enabled drivers build config 00:02:04.271 net/avp: not in enabled drivers build config 00:02:04.271 net/axgbe: not in enabled drivers build config 00:02:04.271 net/bnx2x: not in enabled drivers build config 00:02:04.271 net/bnxt: not in enabled drivers build config 00:02:04.271 net/bonding: not in enabled drivers build config 00:02:04.271 net/cnxk: not in enabled drivers build config 00:02:04.271 net/cpfl: not in enabled drivers build config 00:02:04.271 net/cxgbe: not in enabled drivers build config 00:02:04.271 net/dpaa: not in enabled drivers build config 00:02:04.271 net/dpaa2: not in enabled drivers build config 00:02:04.271 net/e1000: not in enabled drivers build config 00:02:04.271 net/ena: not in enabled drivers build config 00:02:04.271 net/enetc: not in enabled drivers build config 00:02:04.271 net/enetfec: not in enabled drivers build config 00:02:04.271 net/enic: not in enabled drivers build config 00:02:04.271 net/failsafe: not in enabled drivers build config 00:02:04.271 net/fm10k: not in enabled drivers build config 00:02:04.271 net/gve: not in enabled drivers build config 00:02:04.271 net/hinic: not in enabled drivers build config 00:02:04.271 net/hns3: not in enabled drivers build config 00:02:04.271 net/i40e: not in enabled drivers build config 00:02:04.271 net/iavf: not in enabled drivers build config 00:02:04.271 net/ice: not in enabled drivers build config 00:02:04.271 net/idpf: not in enabled drivers build config 00:02:04.271 net/igc: not in enabled drivers build config 00:02:04.271 net/ionic: not in enabled drivers build config 00:02:04.271 net/ipn3ke: not in enabled drivers build config 00:02:04.271 net/ixgbe: not in enabled drivers build config 00:02:04.271 net/mana: not in enabled drivers build config 00:02:04.271 net/memif: not in enabled drivers build config 00:02:04.271 net/mlx4: not in enabled drivers build config 00:02:04.271 net/mlx5: not in enabled drivers build config 00:02:04.271 net/mvneta: not in enabled drivers build config 00:02:04.271 net/mvpp2: not in enabled drivers build config 00:02:04.271 net/netvsc: not in enabled drivers build config 00:02:04.271 net/nfb: not in enabled drivers build config 00:02:04.271 net/nfp: not in enabled drivers build config 00:02:04.271 net/ngbe: not in enabled drivers build config 00:02:04.271 net/null: not in enabled drivers build config 00:02:04.271 net/octeontx: not in enabled drivers build config 00:02:04.271 net/octeon_ep: not in enabled drivers build config 00:02:04.271 net/pcap: not in enabled drivers build config 00:02:04.271 net/pfe: not in enabled drivers build config 00:02:04.271 net/qede: not in enabled drivers build config 00:02:04.271 net/ring: not in enabled drivers build config 00:02:04.271 net/sfc: not in enabled 
drivers build config 00:02:04.271 net/softnic: not in enabled drivers build config 00:02:04.271 net/tap: not in enabled drivers build config 00:02:04.271 net/thunderx: not in enabled drivers build config 00:02:04.271 net/txgbe: not in enabled drivers build config 00:02:04.271 net/vdev_netvsc: not in enabled drivers build config 00:02:04.271 net/vhost: not in enabled drivers build config 00:02:04.271 net/virtio: not in enabled drivers build config 00:02:04.271 net/vmxnet3: not in enabled drivers build config 00:02:04.271 raw/*: missing internal dependency, "rawdev" 00:02:04.271 crypto/armv8: not in enabled drivers build config 00:02:04.271 crypto/bcmfs: not in enabled drivers build config 00:02:04.271 crypto/caam_jr: not in enabled drivers build config 00:02:04.271 crypto/ccp: not in enabled drivers build config 00:02:04.271 crypto/cnxk: not in enabled drivers build config 00:02:04.271 crypto/dpaa_sec: not in enabled drivers build config 00:02:04.271 crypto/dpaa2_sec: not in enabled drivers build config 00:02:04.271 crypto/ipsec_mb: not in enabled drivers build config 00:02:04.271 crypto/mlx5: not in enabled drivers build config 00:02:04.271 crypto/mvsam: not in enabled drivers build config 00:02:04.271 crypto/nitrox: not in enabled drivers build config 00:02:04.271 crypto/null: not in enabled drivers build config 00:02:04.271 crypto/octeontx: not in enabled drivers build config 00:02:04.271 crypto/openssl: not in enabled drivers build config 00:02:04.271 crypto/scheduler: not in enabled drivers build config 00:02:04.271 crypto/uadk: not in enabled drivers build config 00:02:04.271 crypto/virtio: not in enabled drivers build config 00:02:04.271 compress/isal: not in enabled drivers build config 00:02:04.271 compress/mlx5: not in enabled drivers build config 00:02:04.271 compress/nitrox: not in enabled drivers build config 00:02:04.272 compress/octeontx: not in enabled drivers build config 00:02:04.272 compress/zlib: not in enabled drivers build config 00:02:04.272 regex/*: missing internal dependency, "regexdev" 00:02:04.272 ml/*: missing internal dependency, "mldev" 00:02:04.272 vdpa/ifc: not in enabled drivers build config 00:02:04.272 vdpa/mlx5: not in enabled drivers build config 00:02:04.272 vdpa/nfp: not in enabled drivers build config 00:02:04.272 vdpa/sfc: not in enabled drivers build config 00:02:04.272 event/*: missing internal dependency, "eventdev" 00:02:04.272 baseband/*: missing internal dependency, "bbdev" 00:02:04.272 gpu/*: missing internal dependency, "gpudev" 00:02:04.272 00:02:04.272 00:02:04.272 Build targets in project: 85 00:02:04.272 00:02:04.272 DPDK 24.03.0 00:02:04.272 00:02:04.272 User defined options 00:02:04.272 buildtype : debug 00:02:04.272 default_library : shared 00:02:04.272 libdir : lib 00:02:04.272 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:02:04.272 b_sanitize : address 00:02:04.272 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:02:04.272 c_link_args : 00:02:04.272 cpu_instruction_set: native 00:02:04.272 disable_apps : test-dma-perf,test,test-sad,test-acl,test-pmd,test-mldev,test-compress-perf,test-cmdline,test-regex,test-fib,graph,test-bbdev,dumpcap,test-gpudev,proc-info,test-pipeline,test-flow-perf,test-crypto-perf,pdump,test-eventdev,test-security-perf 00:02:04.272 disable_libs : 
port,lpm,ipsec,regexdev,dispatcher,argparse,bitratestats,rawdev,stack,graph,acl,bbdev,pipeline,member,sched,pcapng,mldev,eventdev,efd,metrics,latencystats,cfgfile,ip_frag,jobstats,pdump,pdcp,rib,node,fib,distributor,gso,table,bpf,gpudev,gro 00:02:04.272 enable_docs : false 00:02:04.272 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm 00:02:04.272 enable_kmods : false 00:02:04.272 max_lcores : 128 00:02:04.272 tests : false 00:02:04.272 00:02:04.272 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:04.272 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp' 00:02:04.272 [1/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:04.272 [2/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:04.272 [3/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:04.272 [4/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:04.272 [5/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:04.272 [6/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:04.272 [7/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:04.272 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:04.272 [9/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:04.272 [10/268] Linking static target lib/librte_kvargs.a 00:02:04.272 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:04.272 [12/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:04.272 [13/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:04.272 [14/268] Linking static target lib/librte_log.a 00:02:04.272 [15/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:04.533 [16/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:04.798 [17/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:05.058 [18/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:05.058 [19/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:05.058 [20/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:05.058 [21/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:05.058 [22/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:05.058 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:05.058 [24/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:05.058 [25/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:05.058 [26/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:05.058 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:05.058 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:05.325 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:05.325 [30/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:05.325 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:05.325 [32/268] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:05.325 [33/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:05.325 [34/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:05.325 [35/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:05.325 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:05.325 [37/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:05.325 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:05.325 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:05.325 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:05.325 [41/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:05.325 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:05.325 [43/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:05.325 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:05.325 [45/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:05.325 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:05.325 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:05.325 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:05.325 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:05.325 [50/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:05.325 [51/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:05.325 [52/268] Linking static target lib/librte_telemetry.a 00:02:05.325 [53/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:05.325 [54/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:05.325 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:05.325 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:05.325 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:05.586 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:05.586 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:05.586 [60/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:05.586 [61/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:05.586 [62/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:05.586 [63/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:05.848 [64/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:05.848 [65/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:05.848 [66/268] Linking target lib/librte_log.so.24.1 00:02:05.848 [67/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:06.112 [68/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:06.112 [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:06.112 [70/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:06.112 [71/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:06.112 [72/268] Linking static target lib/librte_pci.a 00:02:06.112 [73/268] 
Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:06.112 [74/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:06.112 [75/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:06.112 [76/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:06.112 [77/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:06.112 [78/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:06.112 [79/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:06.112 [80/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:06.375 [81/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:06.375 [82/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:06.375 [83/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:02:06.376 [84/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:06.376 [85/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:06.376 [86/268] Linking static target lib/librte_meter.a 00:02:06.376 [87/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:06.376 [88/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:06.376 [89/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:06.376 [90/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:06.376 [91/268] Linking target lib/librte_kvargs.so.24.1 00:02:06.376 [92/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:06.376 [93/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:06.376 [94/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:06.376 [95/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:02:06.376 [96/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:06.376 [97/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:06.376 [98/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:06.376 [99/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:06.376 [100/268] Linking static target lib/librte_ring.a 00:02:06.376 [101/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:06.376 [102/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:06.376 [103/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:06.376 [104/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:06.376 [105/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:06.376 [106/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:06.376 [107/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:06.376 [108/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:06.639 [109/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:06.639 [110/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:06.639 [111/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:06.639 [112/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:06.639 [113/268] Linking target lib/librte_telemetry.so.24.1 00:02:06.639 
[114/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:06.639 [115/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:02:06.639 [116/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:06.639 [117/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:06.639 [118/268] Linking static target lib/librte_mempool.a 00:02:06.639 [119/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:06.639 [120/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:06.902 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:06.902 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:06.902 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:06.902 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:06.902 [125/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:06.902 [126/268] Linking static target lib/librte_rcu.a 00:02:06.902 [127/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:06.902 [128/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:02:06.902 [129/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:06.902 [130/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:06.902 [131/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:06.902 [132/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:02:06.902 [133/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:07.165 [134/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:07.165 [135/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:07.165 [136/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:07.165 [137/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:07.165 [138/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:02:07.165 [139/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:07.165 [140/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:07.165 [141/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:07.165 [142/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:07.165 [143/268] Linking static target lib/librte_eal.a 00:02:07.165 [144/268] Linking static target lib/librte_cmdline.a 00:02:07.426 [145/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:07.426 [146/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:07.426 [147/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:07.426 [148/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:07.426 [149/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:07.426 [150/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:07.426 [151/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:07.689 [152/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:07.689 [153/268] Compiling C object 
lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:07.689 [154/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:07.689 [155/268] Linking static target lib/librte_timer.a 00:02:07.689 [156/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:07.689 [157/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:07.690 [158/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:07.690 [159/268] Linking static target lib/librte_dmadev.a 00:02:07.950 [160/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:07.950 [161/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:07.950 [162/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:07.950 [163/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:07.950 [164/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:07.950 [165/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:08.209 [166/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:08.209 [167/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:08.209 [168/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:08.209 [169/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:08.209 [170/268] Linking static target lib/librte_net.a 00:02:08.209 [171/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:08.209 [172/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:08.209 [173/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:08.209 [174/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:08.209 [175/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:08.209 [176/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:08.209 [177/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:08.209 [178/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:08.209 [179/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:08.209 [180/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:08.209 [181/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:08.209 [182/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:08.209 [183/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:08.469 [184/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:08.469 [185/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:08.469 [186/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:08.469 [187/268] Linking static target lib/librte_power.a 00:02:08.469 [188/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:08.469 [189/268] Linking static target lib/librte_hash.a 00:02:08.469 [190/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:08.469 [191/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:08.469 [192/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:08.469 [193/268] Compiling C 
object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:08.469 [194/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:08.469 [195/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:08.469 [196/268] Linking static target drivers/librte_bus_pci.a 00:02:08.469 [197/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:08.469 [198/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:08.469 [199/268] Linking static target drivers/librte_bus_vdev.a 00:02:08.729 [200/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:08.729 [201/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:08.729 [202/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:08.729 [203/268] Linking static target drivers/librte_mempool_ring.a 00:02:08.729 [204/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:08.729 [205/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:08.729 [206/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:08.729 [207/268] Linking static target lib/librte_compressdev.a 00:02:08.988 [208/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:08.988 [209/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:08.988 [210/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:08.988 [211/268] Linking static target lib/librte_reorder.a 00:02:08.988 [212/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:08.988 [213/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:09.246 [214/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:09.246 [215/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:09.505 [216/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:09.505 [217/268] Linking static target lib/librte_security.a 00:02:09.763 [218/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:10.021 [219/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:10.589 [220/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:10.589 [221/268] Linking static target lib/librte_mbuf.a 00:02:11.155 [222/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:11.155 [223/268] Linking static target lib/librte_cryptodev.a 00:02:11.155 [224/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:12.090 [225/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:12.090 [226/268] Linking static target lib/librte_ethdev.a 00:02:12.090 [227/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:13.463 [228/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:13.463 [229/268] Linking target lib/librte_eal.so.24.1 00:02:13.463 [230/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:02:13.463 [231/268] Linking 
target lib/librte_dmadev.so.24.1 00:02:13.463 [232/268] Linking target lib/librte_meter.so.24.1 00:02:13.463 [233/268] Linking target lib/librte_pci.so.24.1 00:02:13.463 [234/268] Linking target lib/librte_ring.so.24.1 00:02:13.463 [235/268] Linking target lib/librte_timer.so.24.1 00:02:13.463 [236/268] Linking target drivers/librte_bus_vdev.so.24.1 00:02:13.722 [237/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:02:13.722 [238/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:02:13.722 [239/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:02:13.722 [240/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:02:13.722 [241/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:02:13.722 [242/268] Linking target lib/librte_rcu.so.24.1 00:02:13.722 [243/268] Linking target lib/librte_mempool.so.24.1 00:02:13.722 [244/268] Linking target drivers/librte_bus_pci.so.24.1 00:02:13.722 [245/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:02:13.722 [246/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:02:13.980 [247/268] Linking target drivers/librte_mempool_ring.so.24.1 00:02:13.980 [248/268] Linking target lib/librte_mbuf.so.24.1 00:02:13.980 [249/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:02:13.980 [250/268] Linking target lib/librte_reorder.so.24.1 00:02:13.980 [251/268] Linking target lib/librte_compressdev.so.24.1 00:02:13.980 [252/268] Linking target lib/librte_net.so.24.1 00:02:13.980 [253/268] Linking target lib/librte_cryptodev.so.24.1 00:02:14.239 [254/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:02:14.239 [255/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:02:14.239 [256/268] Linking target lib/librte_cmdline.so.24.1 00:02:14.239 [257/268] Linking target lib/librte_security.so.24.1 00:02:14.239 [258/268] Linking target lib/librte_hash.so.24.1 00:02:14.497 [259/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:02:15.064 [260/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:16.000 [261/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:16.000 [262/268] Linking target lib/librte_ethdev.so.24.1 00:02:16.258 [263/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:02:16.258 [264/268] Linking target lib/librte_power.so.24.1 00:02:42.789 [265/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:42.789 [266/268] Linking static target lib/librte_vhost.a 00:02:42.789 [267/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:42.789 [268/268] Linking target lib/librte_vhost.so.24.1 00:02:42.789 INFO: autodetecting backend as ninja 00:02:42.789 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 48 00:02:43.354 CC lib/log/log.o 00:02:43.354 CC lib/log/log_flags.o 00:02:43.354 CC lib/log/log_deprecated.o 00:02:43.354 CC lib/ut_mock/mock.o 00:02:43.354 CC lib/ut/ut.o 00:02:43.612 LIB libspdk_ut.a 00:02:43.612 LIB libspdk_ut_mock.a 00:02:43.612 LIB libspdk_log.a 00:02:43.612 SO libspdk_ut_mock.so.6.0 00:02:43.612 SO libspdk_ut.so.2.0 
00:02:43.612 SO libspdk_log.so.7.1 00:02:43.612 SYMLINK libspdk_ut_mock.so 00:02:43.612 SYMLINK libspdk_ut.so 00:02:43.870 SYMLINK libspdk_log.so 00:02:43.870 CC lib/dma/dma.o 00:02:43.870 CC lib/ioat/ioat.o 00:02:43.870 CXX lib/trace_parser/trace.o 00:02:43.870 CC lib/util/base64.o 00:02:43.870 CC lib/util/bit_array.o 00:02:43.870 CC lib/util/cpuset.o 00:02:43.870 CC lib/util/crc16.o 00:02:43.870 CC lib/util/crc32.o 00:02:43.870 CC lib/util/crc32c.o 00:02:43.870 CC lib/util/crc32_ieee.o 00:02:43.870 CC lib/util/crc64.o 00:02:43.870 CC lib/util/dif.o 00:02:43.870 CC lib/util/fd.o 00:02:43.870 CC lib/util/fd_group.o 00:02:43.870 CC lib/util/file.o 00:02:43.870 CC lib/util/hexlify.o 00:02:43.870 CC lib/util/iov.o 00:02:43.870 CC lib/util/math.o 00:02:43.870 CC lib/util/net.o 00:02:43.870 CC lib/util/pipe.o 00:02:43.870 CC lib/util/strerror_tls.o 00:02:43.870 CC lib/util/string.o 00:02:43.870 CC lib/util/uuid.o 00:02:43.870 CC lib/util/xor.o 00:02:43.870 CC lib/util/zipf.o 00:02:43.870 CC lib/util/md5.o 00:02:43.870 CC lib/vfio_user/host/vfio_user_pci.o 00:02:44.129 CC lib/vfio_user/host/vfio_user.o 00:02:44.129 LIB libspdk_dma.a 00:02:44.129 SO libspdk_dma.so.5.0 00:02:44.129 SYMLINK libspdk_dma.so 00:02:44.389 LIB libspdk_vfio_user.a 00:02:44.389 SO libspdk_vfio_user.so.5.0 00:02:44.389 LIB libspdk_ioat.a 00:02:44.389 SO libspdk_ioat.so.7.0 00:02:44.389 SYMLINK libspdk_vfio_user.so 00:02:44.389 SYMLINK libspdk_ioat.so 00:02:44.647 LIB libspdk_util.a 00:02:44.906 SO libspdk_util.so.10.1 00:02:44.906 SYMLINK libspdk_util.so 00:02:45.165 CC lib/conf/conf.o 00:02:45.165 CC lib/json/json_parse.o 00:02:45.165 CC lib/rdma_utils/rdma_utils.o 00:02:45.165 CC lib/idxd/idxd.o 00:02:45.165 CC lib/env_dpdk/env.o 00:02:45.165 CC lib/vmd/vmd.o 00:02:45.165 CC lib/idxd/idxd_user.o 00:02:45.165 CC lib/json/json_util.o 00:02:45.165 CC lib/env_dpdk/memory.o 00:02:45.165 CC lib/vmd/led.o 00:02:45.165 CC lib/json/json_write.o 00:02:45.165 CC lib/idxd/idxd_kernel.o 00:02:45.165 CC lib/env_dpdk/pci.o 00:02:45.165 CC lib/env_dpdk/init.o 00:02:45.165 CC lib/env_dpdk/threads.o 00:02:45.165 CC lib/env_dpdk/pci_ioat.o 00:02:45.165 CC lib/env_dpdk/pci_virtio.o 00:02:45.165 CC lib/env_dpdk/pci_vmd.o 00:02:45.165 CC lib/env_dpdk/pci_idxd.o 00:02:45.165 CC lib/env_dpdk/pci_event.o 00:02:45.165 CC lib/env_dpdk/sigbus_handler.o 00:02:45.165 CC lib/env_dpdk/pci_dpdk.o 00:02:45.165 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:45.165 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:45.165 LIB libspdk_trace_parser.a 00:02:45.165 SO libspdk_trace_parser.so.6.0 00:02:45.423 SYMLINK libspdk_trace_parser.so 00:02:45.423 LIB libspdk_conf.a 00:02:45.423 SO libspdk_conf.so.6.0 00:02:45.423 LIB libspdk_rdma_utils.a 00:02:45.423 SYMLINK libspdk_conf.so 00:02:45.423 SO libspdk_rdma_utils.so.1.0 00:02:45.423 LIB libspdk_json.a 00:02:45.682 SO libspdk_json.so.6.0 00:02:45.682 SYMLINK libspdk_rdma_utils.so 00:02:45.682 SYMLINK libspdk_json.so 00:02:45.682 CC lib/rdma_provider/common.o 00:02:45.682 CC lib/rdma_provider/rdma_provider_verbs.o 00:02:45.682 CC lib/jsonrpc/jsonrpc_server.o 00:02:45.682 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:45.682 CC lib/jsonrpc/jsonrpc_client.o 00:02:45.682 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:45.940 LIB libspdk_rdma_provider.a 00:02:45.940 LIB libspdk_idxd.a 00:02:45.940 SO libspdk_rdma_provider.so.7.0 00:02:45.940 SO libspdk_idxd.so.12.1 00:02:46.199 LIB libspdk_vmd.a 00:02:46.199 SYMLINK libspdk_rdma_provider.so 00:02:46.199 SO libspdk_vmd.so.6.0 00:02:46.199 SYMLINK libspdk_idxd.so 00:02:46.199 LIB libspdk_jsonrpc.a 
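The tail of the DPDK submodule build above (the numbered [n/268] ninja steps) hands off to the SPDK library build proper (the CC/LIB/SO/SYMLINK lines). Outside this CI job, a minimal sketch of reproducing the same build, using the stock upstream repository and default options rather than anything taken from this log, is:

  git clone https://github.com/spdk/spdk && cd spdk
  git submodule update --init            # fetches the dpdk submodule compiled in the [n/268] steps above
  sudo scripts/pkgdep.sh                 # install build prerequisites
  ./configure                            # prepares dpdk/build-tmp for the meson/ninja backend
  make -j"$(nproc)"                      # drives both the ninja (DPDK) and make (SPDK) stages seen here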
00:02:46.199 SO libspdk_jsonrpc.so.6.0 00:02:46.199 SYMLINK libspdk_vmd.so 00:02:46.199 SYMLINK libspdk_jsonrpc.so 00:02:46.458 CC lib/rpc/rpc.o 00:02:46.717 LIB libspdk_rpc.a 00:02:46.717 SO libspdk_rpc.so.6.0 00:02:46.717 SYMLINK libspdk_rpc.so 00:02:46.717 CC lib/notify/notify.o 00:02:46.976 CC lib/notify/notify_rpc.o 00:02:46.976 CC lib/trace/trace.o 00:02:46.976 CC lib/trace/trace_flags.o 00:02:46.976 CC lib/keyring/keyring.o 00:02:46.976 CC lib/keyring/keyring_rpc.o 00:02:46.976 CC lib/trace/trace_rpc.o 00:02:46.976 LIB libspdk_notify.a 00:02:46.976 SO libspdk_notify.so.6.0 00:02:46.976 SYMLINK libspdk_notify.so 00:02:47.234 LIB libspdk_keyring.a 00:02:47.234 SO libspdk_keyring.so.2.0 00:02:47.234 LIB libspdk_trace.a 00:02:47.234 SO libspdk_trace.so.11.0 00:02:47.234 SYMLINK libspdk_keyring.so 00:02:47.234 SYMLINK libspdk_trace.so 00:02:47.492 CC lib/thread/thread.o 00:02:47.492 CC lib/thread/iobuf.o 00:02:47.492 CC lib/sock/sock.o 00:02:47.492 CC lib/sock/sock_rpc.o 00:02:48.059 LIB libspdk_sock.a 00:02:48.059 SO libspdk_sock.so.10.0 00:02:48.059 SYMLINK libspdk_sock.so 00:02:48.059 LIB libspdk_env_dpdk.a 00:02:48.059 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:48.059 CC lib/nvme/nvme_ctrlr.o 00:02:48.059 CC lib/nvme/nvme_fabric.o 00:02:48.059 CC lib/nvme/nvme_ns_cmd.o 00:02:48.059 CC lib/nvme/nvme_ns.o 00:02:48.059 CC lib/nvme/nvme_pcie_common.o 00:02:48.059 CC lib/nvme/nvme_pcie.o 00:02:48.059 CC lib/nvme/nvme_qpair.o 00:02:48.059 CC lib/nvme/nvme.o 00:02:48.059 CC lib/nvme/nvme_quirks.o 00:02:48.059 CC lib/nvme/nvme_transport.o 00:02:48.059 CC lib/nvme/nvme_discovery.o 00:02:48.059 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:48.059 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:48.059 CC lib/nvme/nvme_tcp.o 00:02:48.059 CC lib/nvme/nvme_opal.o 00:02:48.059 CC lib/nvme/nvme_io_msg.o 00:02:48.059 CC lib/nvme/nvme_poll_group.o 00:02:48.059 CC lib/nvme/nvme_zns.o 00:02:48.059 CC lib/nvme/nvme_stubs.o 00:02:48.059 CC lib/nvme/nvme_auth.o 00:02:48.059 CC lib/nvme/nvme_cuse.o 00:02:48.059 CC lib/nvme/nvme_rdma.o 00:02:48.317 SO libspdk_env_dpdk.so.15.1 00:02:48.317 SYMLINK libspdk_env_dpdk.so 00:02:49.691 LIB libspdk_thread.a 00:02:49.691 SO libspdk_thread.so.11.0 00:02:49.691 SYMLINK libspdk_thread.so 00:02:49.691 CC lib/virtio/virtio.o 00:02:49.691 CC lib/fsdev/fsdev.o 00:02:49.691 CC lib/blob/blobstore.o 00:02:49.691 CC lib/accel/accel.o 00:02:49.691 CC lib/init/json_config.o 00:02:49.691 CC lib/fsdev/fsdev_io.o 00:02:49.691 CC lib/blob/request.o 00:02:49.691 CC lib/accel/accel_rpc.o 00:02:49.691 CC lib/virtio/virtio_vhost_user.o 00:02:49.691 CC lib/init/subsystem.o 00:02:49.691 CC lib/fsdev/fsdev_rpc.o 00:02:49.691 CC lib/blob/zeroes.o 00:02:49.691 CC lib/init/subsystem_rpc.o 00:02:49.691 CC lib/blob/blob_bs_dev.o 00:02:49.691 CC lib/virtio/virtio_pci.o 00:02:49.691 CC lib/init/rpc.o 00:02:49.691 CC lib/virtio/virtio_vfio_user.o 00:02:49.691 CC lib/accel/accel_sw.o 00:02:50.257 LIB libspdk_init.a 00:02:50.257 SO libspdk_init.so.6.0 00:02:50.257 SYMLINK libspdk_init.so 00:02:50.257 LIB libspdk_virtio.a 00:02:50.257 SO libspdk_virtio.so.7.0 00:02:50.257 SYMLINK libspdk_virtio.so 00:02:50.257 CC lib/event/app.o 00:02:50.257 CC lib/event/reactor.o 00:02:50.257 CC lib/event/log_rpc.o 00:02:50.257 CC lib/event/app_rpc.o 00:02:50.257 CC lib/event/scheduler_static.o 00:02:50.515 LIB libspdk_fsdev.a 00:02:50.774 SO libspdk_fsdev.so.2.0 00:02:50.774 SYMLINK libspdk_fsdev.so 00:02:50.774 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:02:51.032 LIB libspdk_event.a 00:02:51.032 SO libspdk_event.so.14.0 00:02:51.032 
SYMLINK libspdk_event.so 00:02:51.290 LIB libspdk_nvme.a 00:02:51.290 LIB libspdk_accel.a 00:02:51.290 SO libspdk_accel.so.16.0 00:02:51.290 SO libspdk_nvme.so.15.0 00:02:51.290 SYMLINK libspdk_accel.so 00:02:51.548 CC lib/bdev/bdev.o 00:02:51.548 CC lib/bdev/bdev_rpc.o 00:02:51.548 CC lib/bdev/bdev_zone.o 00:02:51.548 CC lib/bdev/part.o 00:02:51.548 CC lib/bdev/scsi_nvme.o 00:02:51.548 SYMLINK libspdk_nvme.so 00:02:51.806 LIB libspdk_fuse_dispatcher.a 00:02:51.806 SO libspdk_fuse_dispatcher.so.1.0 00:02:51.806 SYMLINK libspdk_fuse_dispatcher.so 00:02:54.332 LIB libspdk_blob.a 00:02:54.332 SO libspdk_blob.so.12.0 00:02:54.332 SYMLINK libspdk_blob.so 00:02:54.332 CC lib/lvol/lvol.o 00:02:54.332 CC lib/blobfs/blobfs.o 00:02:54.332 CC lib/blobfs/tree.o 00:02:55.308 LIB libspdk_bdev.a 00:02:55.308 SO libspdk_bdev.so.17.0 00:02:55.308 SYMLINK libspdk_bdev.so 00:02:55.308 CC lib/nbd/nbd.o 00:02:55.308 CC lib/ublk/ublk.o 00:02:55.308 CC lib/nbd/nbd_rpc.o 00:02:55.308 CC lib/ublk/ublk_rpc.o 00:02:55.308 CC lib/nvmf/ctrlr.o 00:02:55.308 CC lib/nvmf/ctrlr_discovery.o 00:02:55.308 CC lib/nvmf/ctrlr_bdev.o 00:02:55.308 CC lib/scsi/dev.o 00:02:55.308 CC lib/nvmf/subsystem.o 00:02:55.308 CC lib/scsi/lun.o 00:02:55.308 CC lib/nvmf/nvmf.o 00:02:55.308 CC lib/nvmf/nvmf_rpc.o 00:02:55.308 CC lib/ftl/ftl_core.o 00:02:55.308 CC lib/nvmf/transport.o 00:02:55.308 CC lib/scsi/port.o 00:02:55.308 CC lib/scsi/scsi.o 00:02:55.308 CC lib/nvmf/tcp.o 00:02:55.308 CC lib/ftl/ftl_init.o 00:02:55.308 CC lib/scsi/scsi_bdev.o 00:02:55.308 CC lib/ftl/ftl_layout.o 00:02:55.308 CC lib/scsi/scsi_pr.o 00:02:55.308 CC lib/nvmf/stubs.o 00:02:55.308 CC lib/scsi/scsi_rpc.o 00:02:55.308 CC lib/ftl/ftl_debug.o 00:02:55.308 CC lib/ftl/ftl_io.o 00:02:55.308 CC lib/scsi/task.o 00:02:55.308 CC lib/ftl/ftl_sb.o 00:02:55.308 CC lib/nvmf/mdns_server.o 00:02:55.308 CC lib/nvmf/auth.o 00:02:55.308 CC lib/nvmf/rdma.o 00:02:55.308 CC lib/ftl/ftl_l2p.o 00:02:55.308 CC lib/ftl/ftl_l2p_flat.o 00:02:55.308 CC lib/ftl/ftl_nv_cache.o 00:02:55.308 CC lib/ftl/ftl_band.o 00:02:55.308 CC lib/ftl/ftl_band_ops.o 00:02:55.308 CC lib/ftl/ftl_rq.o 00:02:55.308 CC lib/ftl/ftl_writer.o 00:02:55.308 CC lib/ftl/ftl_reloc.o 00:02:55.308 CC lib/ftl/ftl_l2p_cache.o 00:02:55.308 CC lib/ftl/ftl_p2l.o 00:02:55.308 CC lib/ftl/ftl_p2l_log.o 00:02:55.308 CC lib/ftl/mngt/ftl_mngt.o 00:02:55.308 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:55.308 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:55.308 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:55.308 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:55.635 LIB libspdk_blobfs.a 00:02:55.635 SO libspdk_blobfs.so.11.0 00:02:55.916 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:55.916 LIB libspdk_lvol.a 00:02:55.916 SYMLINK libspdk_blobfs.so 00:02:55.916 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:55.916 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:55.916 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:55.916 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:55.916 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:55.916 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:55.916 SO libspdk_lvol.so.11.0 00:02:55.916 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:55.916 CC lib/ftl/utils/ftl_conf.o 00:02:55.916 CC lib/ftl/utils/ftl_md.o 00:02:55.916 CC lib/ftl/utils/ftl_mempool.o 00:02:55.916 CC lib/ftl/utils/ftl_bitmap.o 00:02:55.916 CC lib/ftl/utils/ftl_property.o 00:02:55.916 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:55.916 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:55.916 SYMLINK libspdk_lvol.so 00:02:55.916 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:55.916 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:55.916 CC 
lib/ftl/upgrade/ftl_band_upgrade.o 00:02:55.916 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:55.916 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:02:56.232 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:56.232 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:56.232 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:56.232 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:56.232 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:02:56.232 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:02:56.232 CC lib/ftl/base/ftl_base_dev.o 00:02:56.232 CC lib/ftl/base/ftl_base_bdev.o 00:02:56.232 CC lib/ftl/ftl_trace.o 00:02:56.492 LIB libspdk_nbd.a 00:02:56.492 SO libspdk_nbd.so.7.0 00:02:56.492 SYMLINK libspdk_nbd.so 00:02:56.492 LIB libspdk_scsi.a 00:02:56.492 SO libspdk_scsi.so.9.0 00:02:56.751 SYMLINK libspdk_scsi.so 00:02:56.751 LIB libspdk_ublk.a 00:02:56.751 CC lib/vhost/vhost.o 00:02:56.751 CC lib/iscsi/conn.o 00:02:56.751 CC lib/iscsi/init_grp.o 00:02:56.751 CC lib/vhost/vhost_rpc.o 00:02:56.751 CC lib/iscsi/iscsi.o 00:02:56.751 CC lib/vhost/vhost_scsi.o 00:02:56.751 CC lib/iscsi/param.o 00:02:56.751 CC lib/vhost/vhost_blk.o 00:02:56.752 CC lib/iscsi/portal_grp.o 00:02:56.752 CC lib/iscsi/tgt_node.o 00:02:56.752 CC lib/vhost/rte_vhost_user.o 00:02:56.752 CC lib/iscsi/iscsi_subsystem.o 00:02:56.752 CC lib/iscsi/iscsi_rpc.o 00:02:56.752 CC lib/iscsi/task.o 00:02:56.752 SO libspdk_ublk.so.3.0 00:02:57.011 SYMLINK libspdk_ublk.so 00:02:57.270 LIB libspdk_ftl.a 00:02:57.270 SO libspdk_ftl.so.9.0 00:02:57.837 SYMLINK libspdk_ftl.so 00:02:58.404 LIB libspdk_vhost.a 00:02:58.404 SO libspdk_vhost.so.8.0 00:02:58.404 SYMLINK libspdk_vhost.so 00:02:58.662 LIB libspdk_iscsi.a 00:02:58.662 SO libspdk_iscsi.so.8.0 00:02:58.920 SYMLINK libspdk_iscsi.so 00:02:58.920 LIB libspdk_nvmf.a 00:02:59.180 SO libspdk_nvmf.so.20.0 00:02:59.180 SYMLINK libspdk_nvmf.so 00:02:59.438 CC module/env_dpdk/env_dpdk_rpc.o 00:02:59.697 CC module/keyring/file/keyring.o 00:02:59.697 CC module/accel/dsa/accel_dsa.o 00:02:59.697 CC module/accel/dsa/accel_dsa_rpc.o 00:02:59.697 CC module/accel/error/accel_error.o 00:02:59.697 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:59.697 CC module/accel/ioat/accel_ioat.o 00:02:59.697 CC module/keyring/linux/keyring.o 00:02:59.697 CC module/keyring/linux/keyring_rpc.o 00:02:59.697 CC module/sock/posix/posix.o 00:02:59.697 CC module/accel/iaa/accel_iaa.o 00:02:59.697 CC module/fsdev/aio/fsdev_aio_rpc.o 00:02:59.697 CC module/fsdev/aio/fsdev_aio.o 00:02:59.697 CC module/accel/ioat/accel_ioat_rpc.o 00:02:59.697 CC module/accel/error/accel_error_rpc.o 00:02:59.697 CC module/accel/iaa/accel_iaa_rpc.o 00:02:59.697 CC module/keyring/file/keyring_rpc.o 00:02:59.697 CC module/blob/bdev/blob_bdev.o 00:02:59.697 CC module/scheduler/gscheduler/gscheduler.o 00:02:59.697 CC module/fsdev/aio/linux_aio_mgr.o 00:02:59.697 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:59.697 LIB libspdk_env_dpdk_rpc.a 00:02:59.697 SO libspdk_env_dpdk_rpc.so.6.0 00:02:59.697 SYMLINK libspdk_env_dpdk_rpc.so 00:02:59.955 LIB libspdk_keyring_linux.a 00:02:59.955 LIB libspdk_keyring_file.a 00:02:59.955 LIB libspdk_scheduler_gscheduler.a 00:02:59.955 LIB libspdk_scheduler_dpdk_governor.a 00:02:59.955 SO libspdk_keyring_linux.so.1.0 00:02:59.955 SO libspdk_keyring_file.so.2.0 00:02:59.955 SO libspdk_scheduler_gscheduler.so.4.0 00:02:59.955 SO libspdk_scheduler_dpdk_governor.so.4.0 00:02:59.955 LIB libspdk_accel_ioat.a 00:02:59.955 LIB libspdk_scheduler_dynamic.a 00:02:59.955 SYMLINK libspdk_keyring_linux.so 00:02:59.955 LIB libspdk_accel_iaa.a 00:02:59.955 SYMLINK libspdk_keyring_file.so 
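The SO/SYMLINK pairs above record each versioned libspdk_*.so being linked and the unversioned symlink being created next to it. A hedged way to inspect the same result in a local build (build/lib is the usual output directory, not verified from this log):

  ls -l build/lib/libspdk_log.so*                            # e.g. libspdk_log.so -> libspdk_log.so.7.1
  readelf -d build/lib/libspdk_log.so.7.1 | grep -i soname   # SONAME embedded at link time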
00:02:59.955 SYMLINK libspdk_scheduler_gscheduler.so 00:02:59.955 LIB libspdk_accel_error.a 00:02:59.955 SO libspdk_scheduler_dynamic.so.4.0 00:02:59.955 SO libspdk_accel_ioat.so.6.0 00:02:59.955 SYMLINK libspdk_scheduler_dpdk_governor.so 00:02:59.955 SO libspdk_accel_error.so.2.0 00:02:59.955 SO libspdk_accel_iaa.so.3.0 00:02:59.955 SYMLINK libspdk_scheduler_dynamic.so 00:02:59.955 SYMLINK libspdk_accel_ioat.so 00:02:59.955 SYMLINK libspdk_accel_error.so 00:02:59.955 SYMLINK libspdk_accel_iaa.so 00:02:59.955 LIB libspdk_blob_bdev.a 00:02:59.955 LIB libspdk_accel_dsa.a 00:02:59.955 SO libspdk_blob_bdev.so.12.0 00:02:59.955 SO libspdk_accel_dsa.so.5.0 00:03:00.214 SYMLINK libspdk_blob_bdev.so 00:03:00.214 SYMLINK libspdk_accel_dsa.so 00:03:00.214 CC module/blobfs/bdev/blobfs_bdev.o 00:03:00.214 CC module/bdev/lvol/vbdev_lvol.o 00:03:00.214 CC module/bdev/error/vbdev_error.o 00:03:00.473 CC module/bdev/null/bdev_null.o 00:03:00.473 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:00.473 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:00.473 CC module/bdev/null/bdev_null_rpc.o 00:03:00.473 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:00.473 CC module/bdev/delay/vbdev_delay.o 00:03:00.473 CC module/bdev/error/vbdev_error_rpc.o 00:03:00.473 CC module/bdev/gpt/gpt.o 00:03:00.473 CC module/bdev/malloc/bdev_malloc.o 00:03:00.473 CC module/bdev/gpt/vbdev_gpt.o 00:03:00.474 CC module/bdev/aio/bdev_aio.o 00:03:00.474 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:00.474 CC module/bdev/aio/bdev_aio_rpc.o 00:03:00.474 CC module/bdev/passthru/vbdev_passthru.o 00:03:00.474 CC module/bdev/split/vbdev_split.o 00:03:00.474 CC module/bdev/nvme/bdev_nvme.o 00:03:00.474 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:00.474 CC module/bdev/split/vbdev_split_rpc.o 00:03:00.474 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:00.474 CC module/bdev/nvme/nvme_rpc.o 00:03:00.474 CC module/bdev/nvme/bdev_mdns_client.o 00:03:00.474 CC module/bdev/iscsi/bdev_iscsi.o 00:03:00.474 CC module/bdev/raid/bdev_raid.o 00:03:00.474 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:00.474 CC module/bdev/nvme/vbdev_opal.o 00:03:00.474 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:00.474 CC module/bdev/raid/bdev_raid_rpc.o 00:03:00.474 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:00.474 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:00.474 CC module/bdev/raid/bdev_raid_sb.o 00:03:00.474 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:00.474 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:00.474 CC module/bdev/raid/raid0.o 00:03:00.474 CC module/bdev/ftl/bdev_ftl.o 00:03:00.474 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:00.474 CC module/bdev/raid/raid1.o 00:03:00.474 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:00.474 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:00.474 CC module/bdev/raid/concat.o 00:03:00.733 LIB libspdk_blobfs_bdev.a 00:03:00.733 LIB libspdk_bdev_error.a 00:03:00.733 SO libspdk_blobfs_bdev.so.6.0 00:03:00.733 LIB libspdk_fsdev_aio.a 00:03:00.733 SO libspdk_bdev_error.so.6.0 00:03:00.733 LIB libspdk_bdev_gpt.a 00:03:00.733 LIB libspdk_sock_posix.a 00:03:00.733 LIB libspdk_bdev_split.a 00:03:00.733 SO libspdk_bdev_gpt.so.6.0 00:03:00.733 SO libspdk_fsdev_aio.so.1.0 00:03:00.733 SO libspdk_sock_posix.so.6.0 00:03:00.733 SYMLINK libspdk_bdev_error.so 00:03:00.733 SO libspdk_bdev_split.so.6.0 00:03:00.733 SYMLINK libspdk_blobfs_bdev.so 00:03:00.993 SYMLINK libspdk_bdev_gpt.so 00:03:00.993 SYMLINK libspdk_fsdev_aio.so 00:03:00.993 SYMLINK libspdk_bdev_split.so 00:03:00.993 SYMLINK libspdk_sock_posix.so 00:03:00.993 LIB 
libspdk_bdev_ftl.a 00:03:00.993 LIB libspdk_bdev_null.a 00:03:00.993 SO libspdk_bdev_ftl.so.6.0 00:03:00.993 SO libspdk_bdev_null.so.6.0 00:03:00.993 LIB libspdk_bdev_iscsi.a 00:03:00.993 LIB libspdk_bdev_malloc.a 00:03:00.993 LIB libspdk_bdev_zone_block.a 00:03:00.993 SYMLINK libspdk_bdev_ftl.so 00:03:00.993 SO libspdk_bdev_iscsi.so.6.0 00:03:00.993 SO libspdk_bdev_malloc.so.6.0 00:03:00.993 SO libspdk_bdev_zone_block.so.6.0 00:03:00.993 SYMLINK libspdk_bdev_null.so 00:03:00.993 LIB libspdk_bdev_passthru.a 00:03:00.993 LIB libspdk_bdev_aio.a 00:03:00.993 SO libspdk_bdev_passthru.so.6.0 00:03:00.993 SO libspdk_bdev_aio.so.6.0 00:03:00.993 SYMLINK libspdk_bdev_iscsi.so 00:03:00.993 LIB libspdk_bdev_delay.a 00:03:00.993 SYMLINK libspdk_bdev_zone_block.so 00:03:00.993 SYMLINK libspdk_bdev_malloc.so 00:03:00.993 SO libspdk_bdev_delay.so.6.0 00:03:00.993 SYMLINK libspdk_bdev_passthru.so 00:03:01.252 SYMLINK libspdk_bdev_aio.so 00:03:01.252 SYMLINK libspdk_bdev_delay.so 00:03:01.252 LIB libspdk_bdev_lvol.a 00:03:01.252 LIB libspdk_bdev_virtio.a 00:03:01.252 SO libspdk_bdev_lvol.so.6.0 00:03:01.252 SO libspdk_bdev_virtio.so.6.0 00:03:01.252 SYMLINK libspdk_bdev_lvol.so 00:03:01.252 SYMLINK libspdk_bdev_virtio.so 00:03:01.818 LIB libspdk_bdev_raid.a 00:03:01.818 SO libspdk_bdev_raid.so.6.0 00:03:02.077 SYMLINK libspdk_bdev_raid.so 00:03:03.974 LIB libspdk_bdev_nvme.a 00:03:03.974 SO libspdk_bdev_nvme.so.7.1 00:03:04.232 SYMLINK libspdk_bdev_nvme.so 00:03:04.489 CC module/event/subsystems/iobuf/iobuf.o 00:03:04.489 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:04.489 CC module/event/subsystems/fsdev/fsdev.o 00:03:04.489 CC module/event/subsystems/keyring/keyring.o 00:03:04.489 CC module/event/subsystems/vmd/vmd.o 00:03:04.489 CC module/event/subsystems/scheduler/scheduler.o 00:03:04.489 CC module/event/subsystems/sock/sock.o 00:03:04.489 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:04.489 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:04.489 LIB libspdk_event_keyring.a 00:03:04.489 LIB libspdk_event_vhost_blk.a 00:03:04.489 LIB libspdk_event_scheduler.a 00:03:04.489 LIB libspdk_event_fsdev.a 00:03:04.489 LIB libspdk_event_vmd.a 00:03:04.489 LIB libspdk_event_sock.a 00:03:04.747 SO libspdk_event_keyring.so.1.0 00:03:04.747 SO libspdk_event_vhost_blk.so.3.0 00:03:04.747 LIB libspdk_event_iobuf.a 00:03:04.747 SO libspdk_event_scheduler.so.4.0 00:03:04.747 SO libspdk_event_fsdev.so.1.0 00:03:04.747 SO libspdk_event_sock.so.5.0 00:03:04.747 SO libspdk_event_vmd.so.6.0 00:03:04.747 SO libspdk_event_iobuf.so.3.0 00:03:04.747 SYMLINK libspdk_event_keyring.so 00:03:04.747 SYMLINK libspdk_event_vhost_blk.so 00:03:04.747 SYMLINK libspdk_event_scheduler.so 00:03:04.747 SYMLINK libspdk_event_fsdev.so 00:03:04.747 SYMLINK libspdk_event_sock.so 00:03:04.747 SYMLINK libspdk_event_vmd.so 00:03:04.747 SYMLINK libspdk_event_iobuf.so 00:03:05.004 CC module/event/subsystems/accel/accel.o 00:03:05.004 LIB libspdk_event_accel.a 00:03:05.004 SO libspdk_event_accel.so.6.0 00:03:05.004 SYMLINK libspdk_event_accel.so 00:03:05.263 CC module/event/subsystems/bdev/bdev.o 00:03:05.521 LIB libspdk_event_bdev.a 00:03:05.521 SO libspdk_event_bdev.so.6.0 00:03:05.521 SYMLINK libspdk_event_bdev.so 00:03:05.779 CC module/event/subsystems/scsi/scsi.o 00:03:05.779 CC module/event/subsystems/nbd/nbd.o 00:03:05.779 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:05.779 CC module/event/subsystems/ublk/ublk.o 00:03:05.779 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:05.779 LIB libspdk_event_nbd.a 00:03:05.779 LIB 
libspdk_event_ublk.a 00:03:05.779 SO libspdk_event_nbd.so.6.0 00:03:05.779 SO libspdk_event_ublk.so.3.0 00:03:05.779 LIB libspdk_event_scsi.a 00:03:05.779 SO libspdk_event_scsi.so.6.0 00:03:06.037 SYMLINK libspdk_event_nbd.so 00:03:06.037 SYMLINK libspdk_event_ublk.so 00:03:06.037 SYMLINK libspdk_event_scsi.so 00:03:06.037 LIB libspdk_event_nvmf.a 00:03:06.037 SO libspdk_event_nvmf.so.6.0 00:03:06.037 SYMLINK libspdk_event_nvmf.so 00:03:06.037 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:06.037 CC module/event/subsystems/iscsi/iscsi.o 00:03:06.295 LIB libspdk_event_vhost_scsi.a 00:03:06.296 SO libspdk_event_vhost_scsi.so.3.0 00:03:06.296 LIB libspdk_event_iscsi.a 00:03:06.296 SO libspdk_event_iscsi.so.6.0 00:03:06.296 SYMLINK libspdk_event_vhost_scsi.so 00:03:06.296 SYMLINK libspdk_event_iscsi.so 00:03:06.556 SO libspdk.so.6.0 00:03:06.556 SYMLINK libspdk.so 00:03:06.556 CC app/trace_record/trace_record.o 00:03:06.556 CXX app/trace/trace.o 00:03:06.556 CC app/spdk_top/spdk_top.o 00:03:06.556 CC test/rpc_client/rpc_client_test.o 00:03:06.556 TEST_HEADER include/spdk/accel.h 00:03:06.556 TEST_HEADER include/spdk/accel_module.h 00:03:06.556 CC app/spdk_lspci/spdk_lspci.o 00:03:06.556 TEST_HEADER include/spdk/assert.h 00:03:06.556 TEST_HEADER include/spdk/barrier.h 00:03:06.556 CC app/spdk_nvme_identify/identify.o 00:03:06.556 TEST_HEADER include/spdk/base64.h 00:03:06.556 CC app/spdk_nvme_discover/discovery_aer.o 00:03:06.556 TEST_HEADER include/spdk/bdev.h 00:03:06.556 TEST_HEADER include/spdk/bdev_module.h 00:03:06.556 TEST_HEADER include/spdk/bdev_zone.h 00:03:06.556 TEST_HEADER include/spdk/bit_array.h 00:03:06.556 TEST_HEADER include/spdk/bit_pool.h 00:03:06.556 TEST_HEADER include/spdk/blob_bdev.h 00:03:06.556 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:06.556 CC app/spdk_nvme_perf/perf.o 00:03:06.556 TEST_HEADER include/spdk/blobfs.h 00:03:06.556 TEST_HEADER include/spdk/blob.h 00:03:06.556 TEST_HEADER include/spdk/conf.h 00:03:06.556 TEST_HEADER include/spdk/config.h 00:03:06.556 TEST_HEADER include/spdk/cpuset.h 00:03:06.556 TEST_HEADER include/spdk/crc16.h 00:03:06.556 TEST_HEADER include/spdk/crc32.h 00:03:06.556 TEST_HEADER include/spdk/crc64.h 00:03:06.556 TEST_HEADER include/spdk/dif.h 00:03:06.556 TEST_HEADER include/spdk/dma.h 00:03:06.556 TEST_HEADER include/spdk/env_dpdk.h 00:03:06.556 TEST_HEADER include/spdk/endian.h 00:03:06.556 TEST_HEADER include/spdk/env.h 00:03:06.556 TEST_HEADER include/spdk/event.h 00:03:06.556 TEST_HEADER include/spdk/fd_group.h 00:03:06.556 TEST_HEADER include/spdk/fd.h 00:03:06.556 TEST_HEADER include/spdk/file.h 00:03:06.556 TEST_HEADER include/spdk/fsdev.h 00:03:06.556 TEST_HEADER include/spdk/fsdev_module.h 00:03:06.556 TEST_HEADER include/spdk/ftl.h 00:03:06.556 TEST_HEADER include/spdk/fuse_dispatcher.h 00:03:06.556 TEST_HEADER include/spdk/hexlify.h 00:03:06.556 TEST_HEADER include/spdk/gpt_spec.h 00:03:06.556 TEST_HEADER include/spdk/histogram_data.h 00:03:06.556 TEST_HEADER include/spdk/idxd.h 00:03:06.556 TEST_HEADER include/spdk/idxd_spec.h 00:03:06.556 TEST_HEADER include/spdk/init.h 00:03:06.556 TEST_HEADER include/spdk/ioat.h 00:03:06.556 TEST_HEADER include/spdk/ioat_spec.h 00:03:06.556 TEST_HEADER include/spdk/iscsi_spec.h 00:03:06.556 TEST_HEADER include/spdk/jsonrpc.h 00:03:06.556 TEST_HEADER include/spdk/json.h 00:03:06.822 TEST_HEADER include/spdk/keyring.h 00:03:06.822 TEST_HEADER include/spdk/keyring_module.h 00:03:06.822 TEST_HEADER include/spdk/likely.h 00:03:06.822 TEST_HEADER include/spdk/log.h 00:03:06.822 
TEST_HEADER include/spdk/lvol.h 00:03:06.822 TEST_HEADER include/spdk/md5.h 00:03:06.822 TEST_HEADER include/spdk/memory.h 00:03:06.822 TEST_HEADER include/spdk/mmio.h 00:03:06.822 TEST_HEADER include/spdk/nbd.h 00:03:06.822 TEST_HEADER include/spdk/net.h 00:03:06.822 TEST_HEADER include/spdk/notify.h 00:03:06.822 TEST_HEADER include/spdk/nvme.h 00:03:06.822 TEST_HEADER include/spdk/nvme_intel.h 00:03:06.822 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:06.822 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:06.822 TEST_HEADER include/spdk/nvme_spec.h 00:03:06.822 TEST_HEADER include/spdk/nvme_zns.h 00:03:06.822 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:06.822 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:06.822 TEST_HEADER include/spdk/nvmf.h 00:03:06.822 TEST_HEADER include/spdk/nvmf_spec.h 00:03:06.822 TEST_HEADER include/spdk/nvmf_transport.h 00:03:06.822 TEST_HEADER include/spdk/opal.h 00:03:06.822 TEST_HEADER include/spdk/opal_spec.h 00:03:06.822 TEST_HEADER include/spdk/pci_ids.h 00:03:06.822 TEST_HEADER include/spdk/queue.h 00:03:06.822 TEST_HEADER include/spdk/pipe.h 00:03:06.822 TEST_HEADER include/spdk/reduce.h 00:03:06.822 TEST_HEADER include/spdk/rpc.h 00:03:06.822 TEST_HEADER include/spdk/scheduler.h 00:03:06.822 TEST_HEADER include/spdk/scsi.h 00:03:06.822 TEST_HEADER include/spdk/scsi_spec.h 00:03:06.822 TEST_HEADER include/spdk/sock.h 00:03:06.822 TEST_HEADER include/spdk/stdinc.h 00:03:06.822 TEST_HEADER include/spdk/string.h 00:03:06.822 TEST_HEADER include/spdk/trace.h 00:03:06.822 TEST_HEADER include/spdk/thread.h 00:03:06.822 TEST_HEADER include/spdk/trace_parser.h 00:03:06.822 TEST_HEADER include/spdk/ublk.h 00:03:06.822 TEST_HEADER include/spdk/tree.h 00:03:06.822 TEST_HEADER include/spdk/util.h 00:03:06.822 TEST_HEADER include/spdk/uuid.h 00:03:06.822 TEST_HEADER include/spdk/version.h 00:03:06.822 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:06.822 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:06.822 TEST_HEADER include/spdk/vhost.h 00:03:06.822 TEST_HEADER include/spdk/xor.h 00:03:06.822 TEST_HEADER include/spdk/vmd.h 00:03:06.822 TEST_HEADER include/spdk/zipf.h 00:03:06.822 CXX test/cpp_headers/accel.o 00:03:06.822 CXX test/cpp_headers/accel_module.o 00:03:06.822 CXX test/cpp_headers/assert.o 00:03:06.822 CXX test/cpp_headers/barrier.o 00:03:06.822 CXX test/cpp_headers/base64.o 00:03:06.822 CXX test/cpp_headers/bdev.o 00:03:06.822 CC app/spdk_dd/spdk_dd.o 00:03:06.822 CXX test/cpp_headers/bdev_module.o 00:03:06.822 CXX test/cpp_headers/bdev_zone.o 00:03:06.822 CXX test/cpp_headers/bit_array.o 00:03:06.822 CXX test/cpp_headers/bit_pool.o 00:03:06.822 CXX test/cpp_headers/blob_bdev.o 00:03:06.822 CXX test/cpp_headers/blobfs_bdev.o 00:03:06.822 CXX test/cpp_headers/blobfs.o 00:03:06.822 CXX test/cpp_headers/blob.o 00:03:06.822 CXX test/cpp_headers/conf.o 00:03:06.822 CXX test/cpp_headers/config.o 00:03:06.822 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:06.822 CXX test/cpp_headers/cpuset.o 00:03:06.822 CXX test/cpp_headers/crc16.o 00:03:06.822 CC app/iscsi_tgt/iscsi_tgt.o 00:03:06.822 CC app/nvmf_tgt/nvmf_main.o 00:03:06.822 CC app/spdk_tgt/spdk_tgt.o 00:03:06.822 CXX test/cpp_headers/crc32.o 00:03:06.822 CC test/env/pci/pci_ut.o 00:03:06.822 CC examples/ioat/perf/perf.o 00:03:06.822 CC examples/ioat/verify/verify.o 00:03:06.822 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:06.822 CC test/app/histogram_perf/histogram_perf.o 00:03:06.822 CC test/thread/poller_perf/poller_perf.o 00:03:06.822 CC examples/util/zipf/zipf.o 00:03:06.822 CC 
test/app/jsoncat/jsoncat.o 00:03:06.822 CC test/env/vtophys/vtophys.o 00:03:06.822 CC test/env/memory/memory_ut.o 00:03:06.822 CC app/fio/nvme/fio_plugin.o 00:03:06.822 CC test/app/stub/stub.o 00:03:06.822 CC test/dma/test_dma/test_dma.o 00:03:06.822 CC test/app/bdev_svc/bdev_svc.o 00:03:06.822 CC app/fio/bdev/fio_plugin.o 00:03:07.086 LINK spdk_lspci 00:03:07.086 CC test/env/mem_callbacks/mem_callbacks.o 00:03:07.086 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:07.086 LINK rpc_client_test 00:03:07.086 LINK histogram_perf 00:03:07.086 LINK spdk_nvme_discover 00:03:07.086 LINK jsoncat 00:03:07.086 LINK interrupt_tgt 00:03:07.086 LINK poller_perf 00:03:07.086 LINK vtophys 00:03:07.086 CXX test/cpp_headers/crc64.o 00:03:07.086 CXX test/cpp_headers/dif.o 00:03:07.086 LINK env_dpdk_post_init 00:03:07.086 CXX test/cpp_headers/dma.o 00:03:07.086 LINK zipf 00:03:07.086 CXX test/cpp_headers/endian.o 00:03:07.353 CXX test/cpp_headers/env_dpdk.o 00:03:07.353 LINK iscsi_tgt 00:03:07.353 LINK nvmf_tgt 00:03:07.353 CXX test/cpp_headers/env.o 00:03:07.353 CXX test/cpp_headers/event.o 00:03:07.353 CXX test/cpp_headers/fd_group.o 00:03:07.353 CXX test/cpp_headers/fd.o 00:03:07.353 CXX test/cpp_headers/file.o 00:03:07.353 CXX test/cpp_headers/fsdev.o 00:03:07.353 CXX test/cpp_headers/fsdev_module.o 00:03:07.353 LINK stub 00:03:07.353 CXX test/cpp_headers/ftl.o 00:03:07.353 LINK spdk_tgt 00:03:07.353 CXX test/cpp_headers/fuse_dispatcher.o 00:03:07.353 LINK spdk_trace_record 00:03:07.353 CXX test/cpp_headers/gpt_spec.o 00:03:07.353 LINK bdev_svc 00:03:07.353 CXX test/cpp_headers/hexlify.o 00:03:07.353 LINK ioat_perf 00:03:07.353 CXX test/cpp_headers/histogram_data.o 00:03:07.353 LINK verify 00:03:07.353 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:07.353 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:07.353 CXX test/cpp_headers/idxd.o 00:03:07.353 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:07.353 CXX test/cpp_headers/idxd_spec.o 00:03:07.616 CXX test/cpp_headers/init.o 00:03:07.616 CXX test/cpp_headers/ioat.o 00:03:07.616 CXX test/cpp_headers/ioat_spec.o 00:03:07.616 CXX test/cpp_headers/iscsi_spec.o 00:03:07.616 CXX test/cpp_headers/json.o 00:03:07.616 CXX test/cpp_headers/jsonrpc.o 00:03:07.616 CXX test/cpp_headers/keyring.o 00:03:07.616 CXX test/cpp_headers/keyring_module.o 00:03:07.616 LINK spdk_dd 00:03:07.616 CXX test/cpp_headers/likely.o 00:03:07.616 CXX test/cpp_headers/log.o 00:03:07.616 CXX test/cpp_headers/lvol.o 00:03:07.616 CXX test/cpp_headers/md5.o 00:03:07.616 CXX test/cpp_headers/memory.o 00:03:07.616 LINK spdk_trace 00:03:07.616 CXX test/cpp_headers/mmio.o 00:03:07.616 CXX test/cpp_headers/nbd.o 00:03:07.616 CXX test/cpp_headers/net.o 00:03:07.616 CXX test/cpp_headers/notify.o 00:03:07.616 CXX test/cpp_headers/nvme.o 00:03:07.616 CXX test/cpp_headers/nvme_intel.o 00:03:07.616 CXX test/cpp_headers/nvme_ocssd.o 00:03:07.616 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:07.616 CXX test/cpp_headers/nvme_spec.o 00:03:07.616 CXX test/cpp_headers/nvme_zns.o 00:03:07.616 CXX test/cpp_headers/nvmf_cmd.o 00:03:07.616 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:07.616 CXX test/cpp_headers/nvmf.o 00:03:07.616 CXX test/cpp_headers/nvmf_spec.o 00:03:07.877 CXX test/cpp_headers/nvmf_transport.o 00:03:07.877 LINK pci_ut 00:03:07.877 CXX test/cpp_headers/opal.o 00:03:07.877 CC test/event/event_perf/event_perf.o 00:03:07.877 CXX test/cpp_headers/opal_spec.o 00:03:07.877 CC examples/sock/hello_world/hello_sock.o 00:03:07.877 CXX test/cpp_headers/pci_ids.o 00:03:07.877 CC test/event/reactor/reactor.o 
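Mixed into the object list above are the target applications this job ultimately exercises (app/nvmf_tgt, app/spdk_tgt, app/iscsi_tgt) along with the fio plugins under app/fio. As a rough usage sketch for the NVMe-oF/TCP target built here (core mask and defaults are illustrative, not taken from this job):

  sudo ./build/bin/nvmf_tgt -m 0x3 &              # start the target on two cores
  ./scripts/rpc.py nvmf_create_transport -t TCP   # enable the TCP transport before adding subsystems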
00:03:07.877 CC examples/vmd/lsvmd/lsvmd.o 00:03:07.877 CC examples/idxd/perf/perf.o 00:03:07.877 CXX test/cpp_headers/pipe.o 00:03:08.139 CC examples/thread/thread/thread_ex.o 00:03:08.139 CXX test/cpp_headers/queue.o 00:03:08.139 LINK nvme_fuzz 00:03:08.139 LINK test_dma 00:03:08.139 CC test/event/reactor_perf/reactor_perf.o 00:03:08.139 CXX test/cpp_headers/reduce.o 00:03:08.139 CXX test/cpp_headers/rpc.o 00:03:08.139 CXX test/cpp_headers/scheduler.o 00:03:08.139 CXX test/cpp_headers/scsi.o 00:03:08.139 CXX test/cpp_headers/scsi_spec.o 00:03:08.139 CXX test/cpp_headers/sock.o 00:03:08.139 CXX test/cpp_headers/stdinc.o 00:03:08.139 CXX test/cpp_headers/string.o 00:03:08.139 LINK spdk_bdev 00:03:08.139 CXX test/cpp_headers/thread.o 00:03:08.139 CC test/event/app_repeat/app_repeat.o 00:03:08.139 CXX test/cpp_headers/trace.o 00:03:08.139 CXX test/cpp_headers/trace_parser.o 00:03:08.139 CC examples/vmd/led/led.o 00:03:08.139 CXX test/cpp_headers/tree.o 00:03:08.139 CXX test/cpp_headers/ublk.o 00:03:08.139 CXX test/cpp_headers/util.o 00:03:08.139 CC test/event/scheduler/scheduler.o 00:03:08.139 CXX test/cpp_headers/uuid.o 00:03:08.139 CXX test/cpp_headers/version.o 00:03:08.139 CXX test/cpp_headers/vfio_user_pci.o 00:03:08.139 CXX test/cpp_headers/vfio_user_spec.o 00:03:08.139 CXX test/cpp_headers/vhost.o 00:03:08.139 LINK event_perf 00:03:08.139 LINK spdk_nvme 00:03:08.139 CXX test/cpp_headers/vmd.o 00:03:08.139 CXX test/cpp_headers/xor.o 00:03:08.139 LINK lsvmd 00:03:08.139 LINK mem_callbacks 00:03:08.139 CXX test/cpp_headers/zipf.o 00:03:08.139 LINK reactor 00:03:08.400 CC app/vhost/vhost.o 00:03:08.400 LINK reactor_perf 00:03:08.400 LINK led 00:03:08.400 LINK app_repeat 00:03:08.400 LINK hello_sock 00:03:08.658 LINK vhost_fuzz 00:03:08.658 LINK thread 00:03:08.658 LINK idxd_perf 00:03:08.658 LINK vhost 00:03:08.658 CC test/nvme/e2edp/nvme_dp.o 00:03:08.658 CC test/nvme/reset/reset.o 00:03:08.658 CC test/nvme/err_injection/err_injection.o 00:03:08.659 CC test/nvme/aer/aer.o 00:03:08.659 CC test/nvme/compliance/nvme_compliance.o 00:03:08.659 CC test/nvme/reserve/reserve.o 00:03:08.659 CC test/nvme/sgl/sgl.o 00:03:08.659 CC test/nvme/boot_partition/boot_partition.o 00:03:08.659 LINK spdk_nvme_perf 00:03:08.659 CC test/nvme/overhead/overhead.o 00:03:08.659 CC test/nvme/simple_copy/simple_copy.o 00:03:08.659 CC test/nvme/cuse/cuse.o 00:03:08.659 CC test/nvme/startup/startup.o 00:03:08.659 CC test/nvme/fused_ordering/fused_ordering.o 00:03:08.659 CC test/nvme/connect_stress/connect_stress.o 00:03:08.659 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:08.659 CC test/nvme/fdp/fdp.o 00:03:08.659 LINK scheduler 00:03:08.659 CC test/accel/dif/dif.o 00:03:08.659 CC test/blobfs/mkfs/mkfs.o 00:03:08.659 LINK spdk_top 00:03:08.917 CC test/lvol/esnap/esnap.o 00:03:08.917 LINK spdk_nvme_identify 00:03:08.917 LINK boot_partition 00:03:08.917 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:08.917 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:08.917 CC examples/nvme/abort/abort.o 00:03:08.917 CC examples/nvme/hotplug/hotplug.o 00:03:08.917 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:08.917 CC examples/nvme/arbitration/arbitration.o 00:03:08.917 CC examples/nvme/hello_world/hello_world.o 00:03:08.917 CC examples/nvme/reconnect/reconnect.o 00:03:08.917 LINK connect_stress 00:03:08.917 LINK fused_ordering 00:03:08.917 LINK reserve 00:03:08.917 LINK startup 00:03:09.176 LINK err_injection 00:03:09.176 LINK reset 00:03:09.176 LINK doorbell_aers 00:03:09.176 CC examples/accel/perf/accel_perf.o 
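The examples/nvme sources compiled above (hello_world, arbitration, hotplug, cmb_copy, and friends) are standalone NVMe tools; in a default build they land under build/examples/ and run directly against devices handed over by scripts/setup.sh, for instance:

  sudo ./scripts/setup.sh              # bind NVMe devices to a userspace driver first
  sudo ./build/examples/hello_world    # probes controllers, then writes and reads back one sector per namespace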
00:03:09.176 LINK aer 00:03:09.176 LINK simple_copy 00:03:09.176 CC examples/blob/cli/blobcli.o 00:03:09.176 CC examples/fsdev/hello_world/hello_fsdev.o 00:03:09.176 CC examples/blob/hello_world/hello_blob.o 00:03:09.176 LINK mkfs 00:03:09.176 LINK sgl 00:03:09.176 LINK nvme_dp 00:03:09.176 LINK pmr_persistence 00:03:09.176 LINK fdp 00:03:09.176 LINK overhead 00:03:09.176 LINK nvme_compliance 00:03:09.176 LINK cmb_copy 00:03:09.176 LINK memory_ut 00:03:09.176 LINK hello_world 00:03:09.434 LINK hotplug 00:03:09.434 LINK hello_blob 00:03:09.434 LINK reconnect 00:03:09.434 LINK abort 00:03:09.692 LINK arbitration 00:03:09.692 LINK hello_fsdev 00:03:09.692 LINK nvme_manage 00:03:09.951 LINK accel_perf 00:03:09.951 LINK blobcli 00:03:09.951 LINK dif 00:03:10.210 CC examples/bdev/hello_world/hello_bdev.o 00:03:10.210 CC examples/bdev/bdevperf/bdevperf.o 00:03:10.469 CC test/bdev/bdevio/bdevio.o 00:03:10.469 LINK hello_bdev 00:03:10.469 LINK cuse 00:03:10.469 LINK iscsi_fuzz 00:03:10.727 LINK bdevio 00:03:11.291 LINK bdevperf 00:03:11.548 CC examples/nvmf/nvmf/nvmf.o 00:03:12.116 LINK nvmf 00:03:16.326 LINK esnap 00:03:16.326 00:03:16.326 real 1m22.225s 00:03:16.326 user 13m11.883s 00:03:16.326 sys 2m34.127s 00:03:16.326 00:40:48 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:03:16.326 00:40:48 make -- common/autotest_common.sh@10 -- $ set +x 00:03:16.326 ************************************ 00:03:16.326 END TEST make 00:03:16.326 ************************************ 00:03:16.326 00:40:48 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:16.326 00:40:48 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:16.326 00:40:48 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:16.326 00:40:48 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:16.326 00:40:48 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:03:16.326 00:40:48 -- pm/common@44 -- $ pid=2741160 00:03:16.326 00:40:48 -- pm/common@50 -- $ kill -TERM 2741160 00:03:16.326 00:40:48 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:16.326 00:40:48 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:03:16.326 00:40:48 -- pm/common@44 -- $ pid=2741162 00:03:16.326 00:40:48 -- pm/common@50 -- $ kill -TERM 2741162 00:03:16.326 00:40:48 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:16.326 00:40:48 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:03:16.326 00:40:48 -- pm/common@44 -- $ pid=2741164 00:03:16.326 00:40:48 -- pm/common@50 -- $ kill -TERM 2741164 00:03:16.326 00:40:48 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:16.326 00:40:48 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:03:16.326 00:40:48 -- pm/common@44 -- $ pid=2741193 00:03:16.326 00:40:48 -- pm/common@50 -- $ sudo -E kill -TERM 2741193 00:03:16.326 00:40:48 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:03:16.326 00:40:48 -- spdk/autorun.sh@27 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:03:16.326 00:40:48 -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:03:16.326 00:40:48 -- common/autotest_common.sh@1711 -- # lcov --version 00:03:16.326 00:40:48 
-- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:03:16.585 00:40:49 -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:03:16.585 00:40:49 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:16.585 00:40:49 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:16.585 00:40:49 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:16.585 00:40:49 -- scripts/common.sh@336 -- # IFS=.-: 00:03:16.585 00:40:49 -- scripts/common.sh@336 -- # read -ra ver1 00:03:16.585 00:40:49 -- scripts/common.sh@337 -- # IFS=.-: 00:03:16.585 00:40:49 -- scripts/common.sh@337 -- # read -ra ver2 00:03:16.585 00:40:49 -- scripts/common.sh@338 -- # local 'op=<' 00:03:16.585 00:40:49 -- scripts/common.sh@340 -- # ver1_l=2 00:03:16.585 00:40:49 -- scripts/common.sh@341 -- # ver2_l=1 00:03:16.585 00:40:49 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:16.585 00:40:49 -- scripts/common.sh@344 -- # case "$op" in 00:03:16.585 00:40:49 -- scripts/common.sh@345 -- # : 1 00:03:16.585 00:40:49 -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:16.585 00:40:49 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:03:16.585 00:40:49 -- scripts/common.sh@365 -- # decimal 1 00:03:16.585 00:40:49 -- scripts/common.sh@353 -- # local d=1 00:03:16.585 00:40:49 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:16.585 00:40:49 -- scripts/common.sh@355 -- # echo 1 00:03:16.585 00:40:49 -- scripts/common.sh@365 -- # ver1[v]=1 00:03:16.585 00:40:49 -- scripts/common.sh@366 -- # decimal 2 00:03:16.585 00:40:49 -- scripts/common.sh@353 -- # local d=2 00:03:16.585 00:40:49 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:16.585 00:40:49 -- scripts/common.sh@355 -- # echo 2 00:03:16.585 00:40:49 -- scripts/common.sh@366 -- # ver2[v]=2 00:03:16.585 00:40:49 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:16.585 00:40:49 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:16.585 00:40:49 -- scripts/common.sh@368 -- # return 0 00:03:16.585 00:40:49 -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:16.585 00:40:49 -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:03:16.585 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:16.585 --rc genhtml_branch_coverage=1 00:03:16.585 --rc genhtml_function_coverage=1 00:03:16.585 --rc genhtml_legend=1 00:03:16.585 --rc geninfo_all_blocks=1 00:03:16.585 --rc geninfo_unexecuted_blocks=1 00:03:16.585 00:03:16.585 ' 00:03:16.585 00:40:49 -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:03:16.585 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:16.585 --rc genhtml_branch_coverage=1 00:03:16.585 --rc genhtml_function_coverage=1 00:03:16.585 --rc genhtml_legend=1 00:03:16.585 --rc geninfo_all_blocks=1 00:03:16.585 --rc geninfo_unexecuted_blocks=1 00:03:16.585 00:03:16.585 ' 00:03:16.585 00:40:49 -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:03:16.585 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:16.585 --rc genhtml_branch_coverage=1 00:03:16.585 --rc genhtml_function_coverage=1 00:03:16.585 --rc genhtml_legend=1 00:03:16.585 --rc geninfo_all_blocks=1 00:03:16.585 --rc geninfo_unexecuted_blocks=1 00:03:16.585 00:03:16.585 ' 00:03:16.585 00:40:49 -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:03:16.585 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:16.585 --rc genhtml_branch_coverage=1 00:03:16.585 --rc genhtml_function_coverage=1 00:03:16.585 --rc genhtml_legend=1 
00:03:16.585 --rc geninfo_all_blocks=1 00:03:16.585 --rc geninfo_unexecuted_blocks=1 00:03:16.585 00:03:16.585 ' 00:03:16.585 00:40:49 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:03:16.585 00:40:49 -- nvmf/common.sh@7 -- # uname -s 00:03:16.585 00:40:49 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:16.585 00:40:49 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:16.585 00:40:49 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:16.585 00:40:49 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:16.585 00:40:49 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:16.585 00:40:49 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:16.585 00:40:49 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:16.585 00:40:49 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:16.585 00:40:49 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:16.585 00:40:49 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:16.585 00:40:49 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:03:16.585 00:40:49 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:03:16.585 00:40:49 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:16.585 00:40:49 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:16.585 00:40:49 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:03:16.585 00:40:49 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:16.586 00:40:49 -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:03:16.586 00:40:49 -- scripts/common.sh@15 -- # shopt -s extglob 00:03:16.586 00:40:49 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:16.586 00:40:49 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:16.586 00:40:49 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:16.586 00:40:49 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:16.586 00:40:49 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:16.586 00:40:49 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:16.586 00:40:49 -- paths/export.sh@5 -- # export PATH 00:03:16.586 00:40:49 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:16.586 00:40:49 -- nvmf/common.sh@51 -- # : 0 00:03:16.586 00:40:49 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:03:16.586 00:40:49 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:03:16.586 
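The scripts/common.sh trace a few lines above is the dotted-version comparison that decides whether the installed lcov is older than 2.x before LCOV_OPTS is composed. A simplified standalone sketch of the same comparison (numeric, dot-separated components only; not the exact upstream implementation):

  ver_lt() {                                     # 0 (true) when $1 < $2, comparing dot-separated fields
    local i a b
    IFS=. read -ra a <<< "$1"
    IFS=. read -ra b <<< "$2"
    for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
      (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
      (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
    done
    return 1                                     # equal versions are not less-than
  }
  ver_lt "$(lcov --version | awk '{print $NF}')" 2 && echo "lcov is pre-2.x"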
00:40:49 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:16.586 00:40:49 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:16.586 00:40:49 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:16.586 00:40:49 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:03:16.586 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:03:16.586 00:40:49 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:03:16.586 00:40:49 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:03:16.586 00:40:49 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:03:16.586 00:40:49 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:16.586 00:40:49 -- spdk/autotest.sh@32 -- # uname -s 00:03:16.586 00:40:49 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:16.586 00:40:49 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:16.586 00:40:49 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:03:16.586 00:40:49 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:03:16.586 00:40:49 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:03:16.586 00:40:49 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:16.586 00:40:49 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:16.586 00:40:49 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:16.586 00:40:49 -- spdk/autotest.sh@48 -- # udevadm_pid=2802155 00:03:16.586 00:40:49 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:16.586 00:40:49 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:16.586 00:40:49 -- pm/common@17 -- # local monitor 00:03:16.586 00:40:49 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:16.586 00:40:49 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:16.586 00:40:49 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:16.586 00:40:49 -- pm/common@21 -- # date +%s 00:03:16.586 00:40:49 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:16.586 00:40:49 -- pm/common@21 -- # date +%s 00:03:16.586 00:40:49 -- pm/common@25 -- # sleep 1 00:03:16.586 00:40:49 -- pm/common@21 -- # date +%s 00:03:16.586 00:40:49 -- pm/common@21 -- # date +%s 00:03:16.586 00:40:49 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1733442049 00:03:16.586 00:40:49 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1733442049 00:03:16.586 00:40:49 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1733442049 00:03:16.586 00:40:49 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1733442049 00:03:16.586 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1733442049_collect-vmstat.pm.log 00:03:16.586 Redirecting to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1733442049_collect-cpu-load.pm.log 00:03:16.586 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1733442049_collect-cpu-temp.pm.log 00:03:16.586 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1733442049_collect-bmc-pm.bmc.pm.log 00:03:17.520 00:40:50 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:17.520 00:40:50 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:17.520 00:40:50 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:17.520 00:40:50 -- common/autotest_common.sh@10 -- # set +x 00:03:17.520 00:40:50 -- spdk/autotest.sh@59 -- # create_test_list 00:03:17.520 00:40:50 -- common/autotest_common.sh@752 -- # xtrace_disable 00:03:17.520 00:40:50 -- common/autotest_common.sh@10 -- # set +x 00:03:17.520 00:40:50 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:03:17.520 00:40:50 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:17.520 00:40:50 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:17.520 00:40:50 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:03:17.520 00:40:50 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:17.520 00:40:50 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:17.520 00:40:50 -- common/autotest_common.sh@1457 -- # uname 00:03:17.520 00:40:50 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:03:17.520 00:40:50 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:17.520 00:40:50 -- common/autotest_common.sh@1477 -- # uname 00:03:17.520 00:40:50 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:03:17.520 00:40:50 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:03:17.520 00:40:50 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:03:17.520 lcov: LCOV version 1.15 00:03:17.520 00:40:50 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:03:49.587 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:03:49.587 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:03:54.851 00:41:27 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:03:54.851 00:41:27 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:54.851 00:41:27 -- common/autotest_common.sh@10 -- # set +x 00:03:54.851 00:41:27 -- spdk/autotest.sh@78 -- # rm -f 00:03:54.851 00:41:27 -- spdk/autotest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:55.890 0000:88:00.0 (8086 0a54): Already using the nvme driver 00:03:55.890 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:03:55.890 
0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:03:55.890 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:03:55.890 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:03:55.890 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:03:55.890 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:03:55.890 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:03:55.890 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:03:55.890 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:03:55.890 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:03:55.890 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:03:55.890 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:03:55.890 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:03:55.890 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:03:55.890 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:03:55.890 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:03:55.890 00:41:28 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:03:55.890 00:41:28 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:03:55.890 00:41:28 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:03:55.890 00:41:28 -- common/autotest_common.sh@1658 -- # zoned_ctrls=() 00:03:55.890 00:41:28 -- common/autotest_common.sh@1658 -- # local -A zoned_ctrls 00:03:55.890 00:41:28 -- common/autotest_common.sh@1659 -- # local nvme bdf ns 00:03:55.890 00:41:28 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:03:55.890 00:41:28 -- common/autotest_common.sh@1669 -- # bdf=0000:88:00.0 00:03:55.890 00:41:28 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:03:55.890 00:41:28 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:03:55.890 00:41:28 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:03:55.890 00:41:28 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:55.890 00:41:28 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:03:55.890 00:41:28 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:03:55.890 00:41:28 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:55.890 00:41:28 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:55.890 00:41:28 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:03:55.890 00:41:28 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:03:55.890 00:41:28 -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:03:56.173 No valid GPT data, bailing 00:03:56.174 00:41:28 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:56.174 00:41:28 -- scripts/common.sh@394 -- # pt= 00:03:56.174 00:41:28 -- scripts/common.sh@395 -- # return 1 00:03:56.174 00:41:28 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:03:56.174 1+0 records in 00:03:56.174 1+0 records out 00:03:56.174 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00238619 s, 439 MB/s 00:03:56.174 00:41:28 -- spdk/autotest.sh@105 -- # sync 00:03:56.174 00:41:28 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:56.174 00:41:28 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:56.174 00:41:28 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:58.074 00:41:30 -- spdk/autotest.sh@111 -- # uname -s 00:03:58.074 00:41:30 -- spdk/autotest.sh@111 -- # [[ Linux 
== Linux ]] 00:03:58.074 00:41:30 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:03:58.074 00:41:30 -- spdk/autotest.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:03:59.008 Hugepages 00:03:59.008 node hugesize free / total 00:03:59.008 node0 1048576kB 0 / 0 00:03:59.008 node0 2048kB 0 / 0 00:03:59.008 node1 1048576kB 0 / 0 00:03:59.008 node1 2048kB 0 / 0 00:03:59.008 00:03:59.008 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:59.008 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - - 00:03:59.008 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - - 00:03:59.008 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - - 00:03:59.008 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - - 00:03:59.008 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - - 00:03:59.008 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - - 00:03:59.008 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - - 00:03:59.008 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - - 00:03:59.008 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - - 00:03:59.008 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - - 00:03:59.008 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - - 00:03:59.008 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - - 00:03:59.008 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - - 00:03:59.008 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - - 00:03:59.008 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - - 00:03:59.008 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - - 00:03:59.266 NVMe 0000:88:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:03:59.266 00:41:31 -- spdk/autotest.sh@117 -- # uname -s 00:03:59.266 00:41:31 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:03:59.266 00:41:31 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:03:59.266 00:41:31 -- common/autotest_common.sh@1516 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:00.202 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:04:00.202 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:04:00.202 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:04:00.202 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:04:00.202 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:04:00.202 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:04:00.202 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:04:00.202 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:04:00.202 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:04:00.202 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:04:00.202 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:04:00.460 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:04:00.460 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:04:00.460 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:04:00.460 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:04:00.460 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:04:01.394 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:04:01.394 00:41:33 -- common/autotest_common.sh@1517 -- # sleep 1 00:04:02.328 00:41:34 -- common/autotest_common.sh@1518 -- # bdfs=() 00:04:02.328 00:41:34 -- common/autotest_common.sh@1518 -- # local bdfs 00:04:02.328 00:41:34 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:04:02.328 00:41:34 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:04:02.328 00:41:34 -- common/autotest_common.sh@1498 -- # bdfs=() 00:04:02.328 00:41:34 -- common/autotest_common.sh@1498 -- # local bdfs 00:04:02.328 00:41:34 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:02.328 00:41:34 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 
00:04:02.328 00:41:34 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:04:02.328 00:41:35 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:04:02.328 00:41:35 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:88:00.0 00:04:02.328 00:41:35 -- common/autotest_common.sh@1522 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:03.701 Waiting for block devices as requested 00:04:03.701 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:04:03.701 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:04:03.959 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:04:03.959 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:04:03.959 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:04:04.218 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:04:04.218 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:04:04.218 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:04:04.218 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:04:04.476 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:04:04.476 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:04:04.476 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:04:04.476 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:04:04.735 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:04:04.735 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:04:04.735 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:04:04.735 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:04:04.994 00:41:37 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:04:04.994 00:41:37 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:88:00.0 00:04:04.994 00:41:37 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 00:04:04.994 00:41:37 -- common/autotest_common.sh@1487 -- # grep 0000:88:00.0/nvme/nvme 00:04:04.994 00:41:37 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 00:04:04.994 00:41:37 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 ]] 00:04:04.994 00:41:37 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 00:04:04.994 00:41:37 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:04:04.994 00:41:37 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:04:04.994 00:41:37 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:04:04.994 00:41:37 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:04:04.994 00:41:37 -- common/autotest_common.sh@1531 -- # grep oacs 00:04:04.994 00:41:37 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:04:04.994 00:41:37 -- common/autotest_common.sh@1531 -- # oacs=' 0xf' 00:04:04.994 00:41:37 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:04:04.994 00:41:37 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:04:04.994 00:41:37 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:04:04.994 00:41:37 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:04:04.994 00:41:37 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:04.994 00:41:37 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:04:04.994 00:41:37 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:04:04.995 00:41:37 -- common/autotest_common.sh@1543 -- # continue 00:04:04.995 00:41:37 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:04:04.995 00:41:37 -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:04.995 00:41:37 -- 
common/autotest_common.sh@10 -- # set +x 00:04:04.995 00:41:37 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:04:04.995 00:41:37 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:04.995 00:41:37 -- common/autotest_common.sh@10 -- # set +x 00:04:04.995 00:41:37 -- spdk/autotest.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:06.374 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:04:06.374 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:04:06.374 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:04:06.374 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:04:06.374 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:04:06.374 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:04:06.374 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:04:06.374 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:04:06.374 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:04:06.374 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:04:06.374 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:04:06.374 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:04:06.374 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:04:06.374 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:04:06.374 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:04:06.374 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:04:07.311 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:04:07.311 00:41:39 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:04:07.311 00:41:39 -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:07.311 00:41:39 -- common/autotest_common.sh@10 -- # set +x 00:04:07.311 00:41:39 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:04:07.311 00:41:39 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:04:07.311 00:41:39 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:04:07.311 00:41:39 -- common/autotest_common.sh@1563 -- # bdfs=() 00:04:07.311 00:41:39 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:04:07.311 00:41:39 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:04:07.311 00:41:39 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:04:07.311 00:41:39 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:04:07.311 00:41:39 -- common/autotest_common.sh@1498 -- # bdfs=() 00:04:07.311 00:41:39 -- common/autotest_common.sh@1498 -- # local bdfs 00:04:07.311 00:41:39 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:07.311 00:41:39 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:07.311 00:41:39 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:04:07.311 00:41:40 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:04:07.311 00:41:40 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:88:00.0 00:04:07.311 00:41:40 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:04:07.311 00:41:40 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:88:00.0/device 00:04:07.568 00:41:40 -- common/autotest_common.sh@1566 -- # device=0x0a54 00:04:07.568 00:41:40 -- common/autotest_common.sh@1567 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:04:07.568 00:41:40 -- common/autotest_common.sh@1568 -- # bdfs+=($bdf) 00:04:07.568 00:41:40 -- common/autotest_common.sh@1572 -- # (( 1 > 0 )) 00:04:07.568 00:41:40 -- common/autotest_common.sh@1573 -- # printf '%s\n' 0000:88:00.0 00:04:07.568 00:41:40 -- common/autotest_common.sh@1579 -- # [[ -z 0000:88:00.0 ]] 00:04:07.568 
00:41:40 -- common/autotest_common.sh@1584 -- # spdk_tgt_pid=2812571 00:04:07.568 00:41:40 -- common/autotest_common.sh@1583 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:07.568 00:41:40 -- common/autotest_common.sh@1585 -- # waitforlisten 2812571 00:04:07.568 00:41:40 -- common/autotest_common.sh@835 -- # '[' -z 2812571 ']' 00:04:07.568 00:41:40 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:07.568 00:41:40 -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:07.568 00:41:40 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:07.568 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:07.568 00:41:40 -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:07.568 00:41:40 -- common/autotest_common.sh@10 -- # set +x 00:04:07.569 [2024-12-06 00:41:40.150075] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 24.03.0 initialization... 00:04:07.569 [2024-12-06 00:41:40.150263] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2812571 ] 00:04:07.827 [2024-12-06 00:41:40.285426] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:07.827 [2024-12-06 00:41:40.423873] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:08.760 00:41:41 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:08.760 00:41:41 -- common/autotest_common.sh@868 -- # return 0 00:04:08.760 00:41:41 -- common/autotest_common.sh@1587 -- # bdf_id=0 00:04:08.760 00:41:41 -- common/autotest_common.sh@1588 -- # for bdf in "${bdfs[@]}" 00:04:08.760 00:41:41 -- common/autotest_common.sh@1589 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:88:00.0 00:04:12.056 nvme0n1 00:04:12.056 00:41:44 -- common/autotest_common.sh@1591 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:04:12.313 [2024-12-06 00:41:44.801507] nvme_opal.c:2063:spdk_opal_cmd_revert_tper: *ERROR*: Error on starting admin SP session with error 18 00:04:12.313 [2024-12-06 00:41:44.801581] vbdev_opal_rpc.c: 134:rpc_bdev_nvme_opal_revert: *ERROR*: Revert TPer failure: 18 00:04:12.313 request: 00:04:12.313 { 00:04:12.313 "nvme_ctrlr_name": "nvme0", 00:04:12.313 "password": "test", 00:04:12.313 "method": "bdev_nvme_opal_revert", 00:04:12.313 "req_id": 1 00:04:12.313 } 00:04:12.313 Got JSON-RPC error response 00:04:12.313 response: 00:04:12.313 { 00:04:12.313 "code": -32603, 00:04:12.313 "message": "Internal error" 00:04:12.313 } 00:04:12.313 00:41:44 -- common/autotest_common.sh@1591 -- # true 00:04:12.313 00:41:44 -- common/autotest_common.sh@1592 -- # (( ++bdf_id )) 00:04:12.313 00:41:44 -- common/autotest_common.sh@1595 -- # killprocess 2812571 00:04:12.313 00:41:44 -- common/autotest_common.sh@954 -- # '[' -z 2812571 ']' 00:04:12.313 00:41:44 -- common/autotest_common.sh@958 -- # kill -0 2812571 00:04:12.313 00:41:44 -- common/autotest_common.sh@959 -- # uname 00:04:12.313 00:41:44 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:12.313 00:41:44 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2812571 00:04:12.313 00:41:44 -- common/autotest_common.sh@960 -- # 
process_name=reactor_0 00:04:12.313 00:41:44 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:12.313 00:41:44 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2812571' 00:04:12.313 killing process with pid 2812571 00:04:12.313 00:41:44 -- common/autotest_common.sh@973 -- # kill 2812571 00:04:12.313 00:41:44 -- common/autotest_common.sh@978 -- # wait 2812571 00:04:16.504 00:41:48 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:04:16.504 00:41:48 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:04:16.504 00:41:48 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:16.504 00:41:48 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:16.504 00:41:48 -- spdk/autotest.sh@149 -- # timing_enter lib 00:04:16.504 00:41:48 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:16.504 00:41:48 -- common/autotest_common.sh@10 -- # set +x 00:04:16.504 00:41:48 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:04:16.504 00:41:48 -- spdk/autotest.sh@155 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:04:16.504 00:41:48 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:16.504 00:41:48 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:16.504 00:41:48 -- common/autotest_common.sh@10 -- # set +x 00:04:16.504 ************************************ 00:04:16.504 START TEST env 00:04:16.504 ************************************ 00:04:16.504 00:41:48 env -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:04:16.504 * Looking for test storage... 00:04:16.504 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:04:16.504 00:41:48 env -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:16.504 00:41:48 env -- common/autotest_common.sh@1711 -- # lcov --version 00:04:16.504 00:41:48 env -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:16.504 00:41:48 env -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:16.504 00:41:48 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:16.504 00:41:48 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:16.504 00:41:48 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:16.504 00:41:48 env -- scripts/common.sh@336 -- # IFS=.-: 00:04:16.504 00:41:48 env -- scripts/common.sh@336 -- # read -ra ver1 00:04:16.504 00:41:48 env -- scripts/common.sh@337 -- # IFS=.-: 00:04:16.504 00:41:48 env -- scripts/common.sh@337 -- # read -ra ver2 00:04:16.504 00:41:48 env -- scripts/common.sh@338 -- # local 'op=<' 00:04:16.504 00:41:48 env -- scripts/common.sh@340 -- # ver1_l=2 00:04:16.504 00:41:48 env -- scripts/common.sh@341 -- # ver2_l=1 00:04:16.504 00:41:48 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:16.504 00:41:48 env -- scripts/common.sh@344 -- # case "$op" in 00:04:16.504 00:41:48 env -- scripts/common.sh@345 -- # : 1 00:04:16.504 00:41:48 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:16.504 00:41:48 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:16.504 00:41:48 env -- scripts/common.sh@365 -- # decimal 1 00:04:16.504 00:41:48 env -- scripts/common.sh@353 -- # local d=1 00:04:16.504 00:41:48 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:16.504 00:41:48 env -- scripts/common.sh@355 -- # echo 1 00:04:16.504 00:41:48 env -- scripts/common.sh@365 -- # ver1[v]=1 00:04:16.504 00:41:48 env -- scripts/common.sh@366 -- # decimal 2 00:04:16.504 00:41:48 env -- scripts/common.sh@353 -- # local d=2 00:04:16.504 00:41:48 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:16.504 00:41:48 env -- scripts/common.sh@355 -- # echo 2 00:04:16.504 00:41:48 env -- scripts/common.sh@366 -- # ver2[v]=2 00:04:16.504 00:41:48 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:16.504 00:41:48 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:16.504 00:41:48 env -- scripts/common.sh@368 -- # return 0 00:04:16.504 00:41:48 env -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:16.504 00:41:48 env -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:16.504 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:16.504 --rc genhtml_branch_coverage=1 00:04:16.504 --rc genhtml_function_coverage=1 00:04:16.504 --rc genhtml_legend=1 00:04:16.504 --rc geninfo_all_blocks=1 00:04:16.504 --rc geninfo_unexecuted_blocks=1 00:04:16.504 00:04:16.504 ' 00:04:16.504 00:41:48 env -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:16.504 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:16.504 --rc genhtml_branch_coverage=1 00:04:16.504 --rc genhtml_function_coverage=1 00:04:16.504 --rc genhtml_legend=1 00:04:16.504 --rc geninfo_all_blocks=1 00:04:16.504 --rc geninfo_unexecuted_blocks=1 00:04:16.504 00:04:16.504 ' 00:04:16.504 00:41:48 env -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:16.504 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:16.504 --rc genhtml_branch_coverage=1 00:04:16.504 --rc genhtml_function_coverage=1 00:04:16.504 --rc genhtml_legend=1 00:04:16.504 --rc geninfo_all_blocks=1 00:04:16.504 --rc geninfo_unexecuted_blocks=1 00:04:16.504 00:04:16.504 ' 00:04:16.504 00:41:48 env -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:16.504 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:16.504 --rc genhtml_branch_coverage=1 00:04:16.504 --rc genhtml_function_coverage=1 00:04:16.504 --rc genhtml_legend=1 00:04:16.504 --rc geninfo_all_blocks=1 00:04:16.504 --rc geninfo_unexecuted_blocks=1 00:04:16.504 00:04:16.504 ' 00:04:16.504 00:41:48 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:04:16.504 00:41:48 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:16.504 00:41:48 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:16.504 00:41:48 env -- common/autotest_common.sh@10 -- # set +x 00:04:16.504 ************************************ 00:04:16.504 START TEST env_memory 00:04:16.504 ************************************ 00:04:16.504 00:41:48 env.env_memory -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:04:16.504 00:04:16.504 00:04:16.504 CUnit - A unit testing framework for C - Version 2.1-3 00:04:16.505 http://cunit.sourceforge.net/ 00:04:16.505 00:04:16.505 00:04:16.505 Suite: memory 00:04:16.505 Test: alloc and free memory map ...[2024-12-06 00:41:48.774347] 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:16.505 passed 00:04:16.505 Test: mem map translation ...[2024-12-06 00:41:48.816263] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:16.505 [2024-12-06 00:41:48.816325] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:16.505 [2024-12-06 00:41:48.816406] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:16.505 [2024-12-06 00:41:48.816456] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:16.505 passed 00:04:16.505 Test: mem map registration ...[2024-12-06 00:41:48.886655] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:04:16.505 [2024-12-06 00:41:48.886714] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:04:16.505 passed 00:04:16.505 Test: mem map adjacent registrations ...passed 00:04:16.505 00:04:16.505 Run Summary: Type Total Ran Passed Failed Inactive 00:04:16.505 suites 1 1 n/a 0 0 00:04:16.505 tests 4 4 4 0 0 00:04:16.505 asserts 152 152 152 0 n/a 00:04:16.505 00:04:16.505 Elapsed time = 0.240 seconds 00:04:16.505 00:04:16.505 real 0m0.261s 00:04:16.505 user 0m0.243s 00:04:16.505 sys 0m0.017s 00:04:16.505 00:41:48 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:16.505 00:41:48 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:04:16.505 ************************************ 00:04:16.505 END TEST env_memory 00:04:16.505 ************************************ 00:04:16.505 00:41:49 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:16.505 00:41:49 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:16.505 00:41:49 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:16.505 00:41:49 env -- common/autotest_common.sh@10 -- # set +x 00:04:16.505 ************************************ 00:04:16.505 START TEST env_vtophys 00:04:16.505 ************************************ 00:04:16.505 00:41:49 env.env_vtophys -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:16.505 EAL: lib.eal log level changed from notice to debug 00:04:16.505 EAL: Detected lcore 0 as core 0 on socket 0 00:04:16.505 EAL: Detected lcore 1 as core 1 on socket 0 00:04:16.505 EAL: Detected lcore 2 as core 2 on socket 0 00:04:16.505 EAL: Detected lcore 3 as core 3 on socket 0 00:04:16.505 EAL: Detected lcore 4 as core 4 on socket 0 00:04:16.505 EAL: Detected lcore 5 as core 5 on socket 0 00:04:16.505 EAL: Detected lcore 6 as core 8 on socket 0 00:04:16.505 EAL: Detected lcore 7 as core 9 on socket 0 00:04:16.505 EAL: Detected lcore 8 as core 10 on socket 0 00:04:16.505 EAL: Detected lcore 9 as core 11 on socket 0 00:04:16.505 EAL: Detected lcore 10 
as core 12 on socket 0 00:04:16.505 EAL: Detected lcore 11 as core 13 on socket 0 00:04:16.505 EAL: Detected lcore 12 as core 0 on socket 1 00:04:16.505 EAL: Detected lcore 13 as core 1 on socket 1 00:04:16.505 EAL: Detected lcore 14 as core 2 on socket 1 00:04:16.505 EAL: Detected lcore 15 as core 3 on socket 1 00:04:16.505 EAL: Detected lcore 16 as core 4 on socket 1 00:04:16.505 EAL: Detected lcore 17 as core 5 on socket 1 00:04:16.505 EAL: Detected lcore 18 as core 8 on socket 1 00:04:16.505 EAL: Detected lcore 19 as core 9 on socket 1 00:04:16.505 EAL: Detected lcore 20 as core 10 on socket 1 00:04:16.505 EAL: Detected lcore 21 as core 11 on socket 1 00:04:16.505 EAL: Detected lcore 22 as core 12 on socket 1 00:04:16.505 EAL: Detected lcore 23 as core 13 on socket 1 00:04:16.505 EAL: Detected lcore 24 as core 0 on socket 0 00:04:16.505 EAL: Detected lcore 25 as core 1 on socket 0 00:04:16.505 EAL: Detected lcore 26 as core 2 on socket 0 00:04:16.505 EAL: Detected lcore 27 as core 3 on socket 0 00:04:16.505 EAL: Detected lcore 28 as core 4 on socket 0 00:04:16.505 EAL: Detected lcore 29 as core 5 on socket 0 00:04:16.505 EAL: Detected lcore 30 as core 8 on socket 0 00:04:16.505 EAL: Detected lcore 31 as core 9 on socket 0 00:04:16.505 EAL: Detected lcore 32 as core 10 on socket 0 00:04:16.505 EAL: Detected lcore 33 as core 11 on socket 0 00:04:16.505 EAL: Detected lcore 34 as core 12 on socket 0 00:04:16.505 EAL: Detected lcore 35 as core 13 on socket 0 00:04:16.505 EAL: Detected lcore 36 as core 0 on socket 1 00:04:16.505 EAL: Detected lcore 37 as core 1 on socket 1 00:04:16.505 EAL: Detected lcore 38 as core 2 on socket 1 00:04:16.505 EAL: Detected lcore 39 as core 3 on socket 1 00:04:16.505 EAL: Detected lcore 40 as core 4 on socket 1 00:04:16.505 EAL: Detected lcore 41 as core 5 on socket 1 00:04:16.505 EAL: Detected lcore 42 as core 8 on socket 1 00:04:16.505 EAL: Detected lcore 43 as core 9 on socket 1 00:04:16.505 EAL: Detected lcore 44 as core 10 on socket 1 00:04:16.505 EAL: Detected lcore 45 as core 11 on socket 1 00:04:16.505 EAL: Detected lcore 46 as core 12 on socket 1 00:04:16.505 EAL: Detected lcore 47 as core 13 on socket 1 00:04:16.505 EAL: Maximum logical cores by configuration: 128 00:04:16.505 EAL: Detected CPU lcores: 48 00:04:16.505 EAL: Detected NUMA nodes: 2 00:04:16.505 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:04:16.505 EAL: Detected shared linkage of DPDK 00:04:16.505 EAL: No shared files mode enabled, IPC will be disabled 00:04:16.505 EAL: Bus pci wants IOVA as 'DC' 00:04:16.505 EAL: Buses did not request a specific IOVA mode. 00:04:16.505 EAL: IOMMU is available, selecting IOVA as VA mode. 00:04:16.505 EAL: Selected IOVA mode 'VA' 00:04:16.505 EAL: Probing VFIO support... 00:04:16.505 EAL: IOMMU type 1 (Type 1) is supported 00:04:16.505 EAL: IOMMU type 7 (sPAPR) is not supported 00:04:16.505 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:04:16.505 EAL: VFIO support initialized 00:04:16.505 EAL: Ask a virtual area of 0x2e000 bytes 00:04:16.505 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:16.505 EAL: Setting up physically contiguous memory... 
00:04:16.505 EAL: Setting maximum number of open files to 524288 00:04:16.505 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:16.505 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:04:16.505 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:16.505 EAL: Ask a virtual area of 0x61000 bytes 00:04:16.505 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:16.505 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:16.505 EAL: Ask a virtual area of 0x400000000 bytes 00:04:16.505 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:16.505 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:16.505 EAL: Ask a virtual area of 0x61000 bytes 00:04:16.505 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:16.505 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:16.505 EAL: Ask a virtual area of 0x400000000 bytes 00:04:16.505 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:16.505 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:16.505 EAL: Ask a virtual area of 0x61000 bytes 00:04:16.505 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:16.505 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:16.505 EAL: Ask a virtual area of 0x400000000 bytes 00:04:16.505 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:16.505 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:16.505 EAL: Ask a virtual area of 0x61000 bytes 00:04:16.505 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:16.505 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:16.505 EAL: Ask a virtual area of 0x400000000 bytes 00:04:16.505 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:16.505 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:16.505 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:04:16.505 EAL: Ask a virtual area of 0x61000 bytes 00:04:16.505 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:04:16.505 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:16.505 EAL: Ask a virtual area of 0x400000000 bytes 00:04:16.505 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:04:16.505 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:04:16.505 EAL: Ask a virtual area of 0x61000 bytes 00:04:16.505 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:04:16.505 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:16.505 EAL: Ask a virtual area of 0x400000000 bytes 00:04:16.505 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:04:16.505 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:04:16.505 EAL: Ask a virtual area of 0x61000 bytes 00:04:16.505 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:04:16.505 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:16.505 EAL: Ask a virtual area of 0x400000000 bytes 00:04:16.505 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:04:16.505 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:04:16.505 EAL: Ask a virtual area of 0x61000 bytes 00:04:16.505 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:04:16.505 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:16.505 EAL: Ask a virtual area of 0x400000000 bytes 00:04:16.505 EAL: Virtual area found 
at 0x201c01000000 (size = 0x400000000) 00:04:16.505 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:04:16.505 EAL: Hugepages will be freed exactly as allocated. 00:04:16.505 EAL: No shared files mode enabled, IPC is disabled 00:04:16.505 EAL: No shared files mode enabled, IPC is disabled 00:04:16.505 EAL: TSC frequency is ~2700000 KHz 00:04:16.505 EAL: Main lcore 0 is ready (tid=7f950ecfaa40;cpuset=[0]) 00:04:16.505 EAL: Trying to obtain current memory policy. 00:04:16.505 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:16.505 EAL: Restoring previous memory policy: 0 00:04:16.505 EAL: request: mp_malloc_sync 00:04:16.505 EAL: No shared files mode enabled, IPC is disabled 00:04:16.505 EAL: Heap on socket 0 was expanded by 2MB 00:04:16.505 EAL: No shared files mode enabled, IPC is disabled 00:04:16.505 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:16.505 EAL: Mem event callback 'spdk:(nil)' registered 00:04:16.764 00:04:16.764 00:04:16.764 CUnit - A unit testing framework for C - Version 2.1-3 00:04:16.764 http://cunit.sourceforge.net/ 00:04:16.764 00:04:16.764 00:04:16.764 Suite: components_suite 00:04:17.022 Test: vtophys_malloc_test ...passed 00:04:17.022 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:17.022 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:17.022 EAL: Restoring previous memory policy: 4 00:04:17.022 EAL: Calling mem event callback 'spdk:(nil)' 00:04:17.022 EAL: request: mp_malloc_sync 00:04:17.022 EAL: No shared files mode enabled, IPC is disabled 00:04:17.022 EAL: Heap on socket 0 was expanded by 4MB 00:04:17.022 EAL: Calling mem event callback 'spdk:(nil)' 00:04:17.022 EAL: request: mp_malloc_sync 00:04:17.022 EAL: No shared files mode enabled, IPC is disabled 00:04:17.022 EAL: Heap on socket 0 was shrunk by 4MB 00:04:17.022 EAL: Trying to obtain current memory policy. 00:04:17.022 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:17.022 EAL: Restoring previous memory policy: 4 00:04:17.022 EAL: Calling mem event callback 'spdk:(nil)' 00:04:17.022 EAL: request: mp_malloc_sync 00:04:17.022 EAL: No shared files mode enabled, IPC is disabled 00:04:17.022 EAL: Heap on socket 0 was expanded by 6MB 00:04:17.022 EAL: Calling mem event callback 'spdk:(nil)' 00:04:17.022 EAL: request: mp_malloc_sync 00:04:17.022 EAL: No shared files mode enabled, IPC is disabled 00:04:17.022 EAL: Heap on socket 0 was shrunk by 6MB 00:04:17.022 EAL: Trying to obtain current memory policy. 00:04:17.022 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:17.022 EAL: Restoring previous memory policy: 4 00:04:17.022 EAL: Calling mem event callback 'spdk:(nil)' 00:04:17.022 EAL: request: mp_malloc_sync 00:04:17.022 EAL: No shared files mode enabled, IPC is disabled 00:04:17.022 EAL: Heap on socket 0 was expanded by 10MB 00:04:17.022 EAL: Calling mem event callback 'spdk:(nil)' 00:04:17.022 EAL: request: mp_malloc_sync 00:04:17.022 EAL: No shared files mode enabled, IPC is disabled 00:04:17.022 EAL: Heap on socket 0 was shrunk by 10MB 00:04:17.022 EAL: Trying to obtain current memory policy. 
00:04:17.022 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:17.022 EAL: Restoring previous memory policy: 4 00:04:17.022 EAL: Calling mem event callback 'spdk:(nil)' 00:04:17.022 EAL: request: mp_malloc_sync 00:04:17.022 EAL: No shared files mode enabled, IPC is disabled 00:04:17.022 EAL: Heap on socket 0 was expanded by 18MB 00:04:17.022 EAL: Calling mem event callback 'spdk:(nil)' 00:04:17.022 EAL: request: mp_malloc_sync 00:04:17.023 EAL: No shared files mode enabled, IPC is disabled 00:04:17.023 EAL: Heap on socket 0 was shrunk by 18MB 00:04:17.281 EAL: Trying to obtain current memory policy. 00:04:17.281 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:17.281 EAL: Restoring previous memory policy: 4 00:04:17.281 EAL: Calling mem event callback 'spdk:(nil)' 00:04:17.281 EAL: request: mp_malloc_sync 00:04:17.281 EAL: No shared files mode enabled, IPC is disabled 00:04:17.281 EAL: Heap on socket 0 was expanded by 34MB 00:04:17.281 EAL: Calling mem event callback 'spdk:(nil)' 00:04:17.281 EAL: request: mp_malloc_sync 00:04:17.281 EAL: No shared files mode enabled, IPC is disabled 00:04:17.281 EAL: Heap on socket 0 was shrunk by 34MB 00:04:17.281 EAL: Trying to obtain current memory policy. 00:04:17.281 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:17.281 EAL: Restoring previous memory policy: 4 00:04:17.281 EAL: Calling mem event callback 'spdk:(nil)' 00:04:17.281 EAL: request: mp_malloc_sync 00:04:17.281 EAL: No shared files mode enabled, IPC is disabled 00:04:17.281 EAL: Heap on socket 0 was expanded by 66MB 00:04:17.539 EAL: Calling mem event callback 'spdk:(nil)' 00:04:17.539 EAL: request: mp_malloc_sync 00:04:17.539 EAL: No shared files mode enabled, IPC is disabled 00:04:17.539 EAL: Heap on socket 0 was shrunk by 66MB 00:04:17.539 EAL: Trying to obtain current memory policy. 00:04:17.539 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:17.539 EAL: Restoring previous memory policy: 4 00:04:17.539 EAL: Calling mem event callback 'spdk:(nil)' 00:04:17.539 EAL: request: mp_malloc_sync 00:04:17.539 EAL: No shared files mode enabled, IPC is disabled 00:04:17.539 EAL: Heap on socket 0 was expanded by 130MB 00:04:17.798 EAL: Calling mem event callback 'spdk:(nil)' 00:04:17.798 EAL: request: mp_malloc_sync 00:04:17.798 EAL: No shared files mode enabled, IPC is disabled 00:04:17.798 EAL: Heap on socket 0 was shrunk by 130MB 00:04:18.057 EAL: Trying to obtain current memory policy. 00:04:18.057 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:18.057 EAL: Restoring previous memory policy: 4 00:04:18.057 EAL: Calling mem event callback 'spdk:(nil)' 00:04:18.057 EAL: request: mp_malloc_sync 00:04:18.057 EAL: No shared files mode enabled, IPC is disabled 00:04:18.057 EAL: Heap on socket 0 was expanded by 258MB 00:04:18.624 EAL: Calling mem event callback 'spdk:(nil)' 00:04:18.624 EAL: request: mp_malloc_sync 00:04:18.624 EAL: No shared files mode enabled, IPC is disabled 00:04:18.624 EAL: Heap on socket 0 was shrunk by 258MB 00:04:18.882 EAL: Trying to obtain current memory policy. 
00:04:18.882 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:19.140 EAL: Restoring previous memory policy: 4 00:04:19.140 EAL: Calling mem event callback 'spdk:(nil)' 00:04:19.140 EAL: request: mp_malloc_sync 00:04:19.140 EAL: No shared files mode enabled, IPC is disabled 00:04:19.140 EAL: Heap on socket 0 was expanded by 514MB 00:04:20.089 EAL: Calling mem event callback 'spdk:(nil)' 00:04:20.089 EAL: request: mp_malloc_sync 00:04:20.089 EAL: No shared files mode enabled, IPC is disabled 00:04:20.089 EAL: Heap on socket 0 was shrunk by 514MB 00:04:21.024 EAL: Trying to obtain current memory policy. 00:04:21.024 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:21.283 EAL: Restoring previous memory policy: 4 00:04:21.283 EAL: Calling mem event callback 'spdk:(nil)' 00:04:21.283 EAL: request: mp_malloc_sync 00:04:21.283 EAL: No shared files mode enabled, IPC is disabled 00:04:21.283 EAL: Heap on socket 0 was expanded by 1026MB 00:04:23.185 EAL: Calling mem event callback 'spdk:(nil)' 00:04:23.442 EAL: request: mp_malloc_sync 00:04:23.443 EAL: No shared files mode enabled, IPC is disabled 00:04:23.443 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:25.347 passed 00:04:25.347 00:04:25.347 Run Summary: Type Total Ran Passed Failed Inactive 00:04:25.347 suites 1 1 n/a 0 0 00:04:25.347 tests 2 2 2 0 0 00:04:25.347 asserts 497 497 497 0 n/a 00:04:25.347 00:04:25.347 Elapsed time = 8.277 seconds 00:04:25.347 EAL: Calling mem event callback 'spdk:(nil)' 00:04:25.347 EAL: request: mp_malloc_sync 00:04:25.347 EAL: No shared files mode enabled, IPC is disabled 00:04:25.347 EAL: Heap on socket 0 was shrunk by 2MB 00:04:25.347 EAL: No shared files mode enabled, IPC is disabled 00:04:25.347 EAL: No shared files mode enabled, IPC is disabled 00:04:25.347 EAL: No shared files mode enabled, IPC is disabled 00:04:25.347 00:04:25.347 real 0m8.555s 00:04:25.347 user 0m7.429s 00:04:25.347 sys 0m1.070s 00:04:25.347 00:41:57 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:25.347 00:41:57 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:04:25.347 ************************************ 00:04:25.347 END TEST env_vtophys 00:04:25.347 ************************************ 00:04:25.347 00:41:57 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:04:25.347 00:41:57 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:25.347 00:41:57 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:25.347 00:41:57 env -- common/autotest_common.sh@10 -- # set +x 00:04:25.347 ************************************ 00:04:25.347 START TEST env_pci 00:04:25.347 ************************************ 00:04:25.347 00:41:57 env.env_pci -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:04:25.347 00:04:25.347 00:04:25.347 CUnit - A unit testing framework for C - Version 2.1-3 00:04:25.347 http://cunit.sourceforge.net/ 00:04:25.347 00:04:25.347 00:04:25.347 Suite: pci 00:04:25.347 Test: pci_hook ...[2024-12-06 00:41:57.655608] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 2814667 has claimed it 00:04:25.347 EAL: Cannot find device (10000:00:01.0) 00:04:25.347 EAL: Failed to attach device on primary process 00:04:25.347 passed 00:04:25.347 00:04:25.347 Run Summary: Type Total Ran Passed Failed Inactive 
00:04:25.347 suites 1 1 n/a 0 0 00:04:25.347 tests 1 1 1 0 0 00:04:25.347 asserts 25 25 25 0 n/a 00:04:25.347 00:04:25.347 Elapsed time = 0.042 seconds 00:04:25.347 00:04:25.347 real 0m0.091s 00:04:25.347 user 0m0.044s 00:04:25.347 sys 0m0.047s 00:04:25.347 00:41:57 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:25.347 00:41:57 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:04:25.347 ************************************ 00:04:25.347 END TEST env_pci 00:04:25.347 ************************************ 00:04:25.347 00:41:57 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:25.347 00:41:57 env -- env/env.sh@15 -- # uname 00:04:25.347 00:41:57 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:25.347 00:41:57 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:25.347 00:41:57 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:25.347 00:41:57 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:04:25.347 00:41:57 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:25.347 00:41:57 env -- common/autotest_common.sh@10 -- # set +x 00:04:25.347 ************************************ 00:04:25.347 START TEST env_dpdk_post_init 00:04:25.347 ************************************ 00:04:25.347 00:41:57 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:25.347 EAL: Detected CPU lcores: 48 00:04:25.347 EAL: Detected NUMA nodes: 2 00:04:25.347 EAL: Detected shared linkage of DPDK 00:04:25.347 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:25.347 EAL: Selected IOVA mode 'VA' 00:04:25.347 EAL: VFIO support initialized 00:04:25.348 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:25.348 EAL: Using IOMMU type 1 (Type 1) 00:04:25.348 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:00:04.0 (socket 0) 00:04:25.348 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:00:04.1 (socket 0) 00:04:25.348 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:00:04.2 (socket 0) 00:04:25.606 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:00:04.3 (socket 0) 00:04:25.606 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:00:04.4 (socket 0) 00:04:25.606 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:00:04.5 (socket 0) 00:04:25.606 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:00:04.6 (socket 0) 00:04:25.606 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:00:04.7 (socket 0) 00:04:25.606 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:80:04.0 (socket 1) 00:04:25.606 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:80:04.1 (socket 1) 00:04:25.606 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:80:04.2 (socket 1) 00:04:25.606 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:80:04.3 (socket 1) 00:04:25.606 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:80:04.4 (socket 1) 00:04:25.606 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:80:04.5 (socket 1) 00:04:25.606 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:80:04.6 (socket 1) 00:04:25.606 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:80:04.7 (socket 1) 00:04:26.544 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:88:00.0 (socket 1) 
00:04:29.829 EAL: Releasing PCI mapped resource for 0000:88:00.0 00:04:29.829 EAL: Calling pci_unmap_resource for 0000:88:00.0 at 0x202001040000 00:04:29.829 Starting DPDK initialization... 00:04:29.829 Starting SPDK post initialization... 00:04:29.829 SPDK NVMe probe 00:04:29.829 Attaching to 0000:88:00.0 00:04:29.829 Attached to 0000:88:00.0 00:04:29.829 Cleaning up... 00:04:29.829 00:04:29.829 real 0m4.575s 00:04:29.829 user 0m3.110s 00:04:29.829 sys 0m0.521s 00:04:29.829 00:42:02 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:29.829 00:42:02 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:04:29.829 ************************************ 00:04:29.829 END TEST env_dpdk_post_init 00:04:29.829 ************************************ 00:04:29.829 00:42:02 env -- env/env.sh@26 -- # uname 00:04:29.829 00:42:02 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:29.829 00:42:02 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:29.829 00:42:02 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:29.829 00:42:02 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:29.829 00:42:02 env -- common/autotest_common.sh@10 -- # set +x 00:04:29.829 ************************************ 00:04:29.829 START TEST env_mem_callbacks 00:04:29.829 ************************************ 00:04:29.829 00:42:02 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:29.829 EAL: Detected CPU lcores: 48 00:04:29.829 EAL: Detected NUMA nodes: 2 00:04:29.829 EAL: Detected shared linkage of DPDK 00:04:29.829 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:29.829 EAL: Selected IOVA mode 'VA' 00:04:29.829 EAL: VFIO support initialized 00:04:29.829 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:29.829 00:04:29.829 00:04:29.829 CUnit - A unit testing framework for C - Version 2.1-3 00:04:29.829 http://cunit.sourceforge.net/ 00:04:29.829 00:04:29.829 00:04:29.829 Suite: memory 00:04:29.829 Test: test ... 
00:04:29.829 register 0x200000200000 2097152 00:04:29.829 malloc 3145728 00:04:29.829 register 0x200000400000 4194304 00:04:29.829 buf 0x2000004fffc0 len 3145728 PASSED 00:04:29.829 malloc 64 00:04:29.829 buf 0x2000004ffec0 len 64 PASSED 00:04:29.829 malloc 4194304 00:04:29.829 register 0x200000800000 6291456 00:04:29.829 buf 0x2000009fffc0 len 4194304 PASSED 00:04:29.829 free 0x2000004fffc0 3145728 00:04:29.829 free 0x2000004ffec0 64 00:04:29.829 unregister 0x200000400000 4194304 PASSED 00:04:29.829 free 0x2000009fffc0 4194304 00:04:29.829 unregister 0x200000800000 6291456 PASSED 00:04:29.829 malloc 8388608 00:04:29.829 register 0x200000400000 10485760 00:04:29.829 buf 0x2000005fffc0 len 8388608 PASSED 00:04:29.829 free 0x2000005fffc0 8388608 00:04:30.088 unregister 0x200000400000 10485760 PASSED 00:04:30.088 passed 00:04:30.088 00:04:30.088 Run Summary: Type Total Ran Passed Failed Inactive 00:04:30.088 suites 1 1 n/a 0 0 00:04:30.088 tests 1 1 1 0 0 00:04:30.088 asserts 15 15 15 0 n/a 00:04:30.088 00:04:30.088 Elapsed time = 0.060 seconds 00:04:30.088 00:04:30.088 real 0m0.180s 00:04:30.088 user 0m0.097s 00:04:30.088 sys 0m0.082s 00:04:30.088 00:42:02 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:30.088 00:42:02 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:04:30.088 ************************************ 00:04:30.088 END TEST env_mem_callbacks 00:04:30.088 ************************************ 00:04:30.088 00:04:30.088 real 0m14.024s 00:04:30.088 user 0m11.108s 00:04:30.088 sys 0m1.934s 00:04:30.088 00:42:02 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:30.088 00:42:02 env -- common/autotest_common.sh@10 -- # set +x 00:04:30.088 ************************************ 00:04:30.088 END TEST env 00:04:30.088 ************************************ 00:04:30.088 00:42:02 -- spdk/autotest.sh@156 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:04:30.088 00:42:02 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:30.088 00:42:02 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:30.088 00:42:02 -- common/autotest_common.sh@10 -- # set +x 00:04:30.088 ************************************ 00:04:30.088 START TEST rpc 00:04:30.088 ************************************ 00:04:30.088 00:42:02 rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:04:30.088 * Looking for test storage... 
00:04:30.088 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:30.089 00:42:02 rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:30.089 00:42:02 rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:04:30.089 00:42:02 rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:30.089 00:42:02 rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:30.089 00:42:02 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:30.089 00:42:02 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:30.089 00:42:02 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:30.089 00:42:02 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:30.089 00:42:02 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:30.089 00:42:02 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:30.089 00:42:02 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:30.089 00:42:02 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:30.089 00:42:02 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:30.089 00:42:02 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:30.089 00:42:02 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:30.089 00:42:02 rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:30.089 00:42:02 rpc -- scripts/common.sh@345 -- # : 1 00:04:30.089 00:42:02 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:30.089 00:42:02 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:30.089 00:42:02 rpc -- scripts/common.sh@365 -- # decimal 1 00:04:30.089 00:42:02 rpc -- scripts/common.sh@353 -- # local d=1 00:04:30.089 00:42:02 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:30.089 00:42:02 rpc -- scripts/common.sh@355 -- # echo 1 00:04:30.089 00:42:02 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:30.089 00:42:02 rpc -- scripts/common.sh@366 -- # decimal 2 00:04:30.089 00:42:02 rpc -- scripts/common.sh@353 -- # local d=2 00:04:30.089 00:42:02 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:30.089 00:42:02 rpc -- scripts/common.sh@355 -- # echo 2 00:04:30.089 00:42:02 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:30.089 00:42:02 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:30.089 00:42:02 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:30.089 00:42:02 rpc -- scripts/common.sh@368 -- # return 0 00:04:30.089 00:42:02 rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:30.089 00:42:02 rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:30.089 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:30.089 --rc genhtml_branch_coverage=1 00:04:30.089 --rc genhtml_function_coverage=1 00:04:30.089 --rc genhtml_legend=1 00:04:30.089 --rc geninfo_all_blocks=1 00:04:30.089 --rc geninfo_unexecuted_blocks=1 00:04:30.089 00:04:30.089 ' 00:04:30.089 00:42:02 rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:30.089 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:30.089 --rc genhtml_branch_coverage=1 00:04:30.089 --rc genhtml_function_coverage=1 00:04:30.089 --rc genhtml_legend=1 00:04:30.089 --rc geninfo_all_blocks=1 00:04:30.089 --rc geninfo_unexecuted_blocks=1 00:04:30.089 00:04:30.089 ' 00:04:30.089 00:42:02 rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:30.089 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:30.089 --rc genhtml_branch_coverage=1 00:04:30.089 --rc genhtml_function_coverage=1 
00:04:30.089 --rc genhtml_legend=1 00:04:30.089 --rc geninfo_all_blocks=1 00:04:30.089 --rc geninfo_unexecuted_blocks=1 00:04:30.089 00:04:30.089 ' 00:04:30.089 00:42:02 rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:30.089 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:30.089 --rc genhtml_branch_coverage=1 00:04:30.089 --rc genhtml_function_coverage=1 00:04:30.089 --rc genhtml_legend=1 00:04:30.089 --rc geninfo_all_blocks=1 00:04:30.089 --rc geninfo_unexecuted_blocks=1 00:04:30.089 00:04:30.089 ' 00:04:30.089 00:42:02 rpc -- rpc/rpc.sh@65 -- # spdk_pid=2815568 00:04:30.089 00:42:02 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:04:30.089 00:42:02 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:30.089 00:42:02 rpc -- rpc/rpc.sh@67 -- # waitforlisten 2815568 00:04:30.089 00:42:02 rpc -- common/autotest_common.sh@835 -- # '[' -z 2815568 ']' 00:04:30.089 00:42:02 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:30.089 00:42:02 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:30.089 00:42:02 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:30.089 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:30.089 00:42:02 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:30.089 00:42:02 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:30.347 [2024-12-06 00:42:02.879959] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 24.03.0 initialization... 00:04:30.347 [2024-12-06 00:42:02.880107] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2815568 ] 00:04:30.347 [2024-12-06 00:42:03.025437] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:30.605 [2024-12-06 00:42:03.158863] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:30.605 [2024-12-06 00:42:03.158959] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 2815568' to capture a snapshot of events at runtime. 00:04:30.605 [2024-12-06 00:42:03.158989] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:30.605 [2024-12-06 00:42:03.159013] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:30.605 [2024-12-06 00:42:03.159044] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid2815568 for offline analysis/debug. 
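The app_setup_trace notices above spell out how to grab the bdev tracepoints from this run. A small sketch of both options, assuming spdk_trace sits in build/bin of the same checkout; the pid and shm path are the ones printed in the notices:

#!/usr/bin/env bash
# Option 1: snapshot the live trace of the running target (pid from the log).
SPDK_DIR=${SPDK_DIR:-/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk}
"$SPDK_DIR/build/bin/spdk_trace" -s spdk_tgt -p 2815568
# Option 2: copy the shared-memory trace file for offline analysis/debug.
cp /dev/shm/spdk_tgt_trace.pid2815568 /tmp/spdk_tgt_trace.pid2815568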
00:04:30.605 [2024-12-06 00:42:03.160748] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:31.539 00:42:04 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:31.539 00:42:04 rpc -- common/autotest_common.sh@868 -- # return 0 00:04:31.539 00:42:04 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:31.539 00:42:04 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:31.539 00:42:04 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:31.539 00:42:04 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:31.539 00:42:04 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:31.539 00:42:04 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:31.539 00:42:04 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:31.539 ************************************ 00:04:31.539 START TEST rpc_integrity 00:04:31.539 ************************************ 00:04:31.539 00:42:04 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:04:31.539 00:42:04 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:31.539 00:42:04 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:31.539 00:42:04 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:31.539 00:42:04 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:31.539 00:42:04 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:31.539 00:42:04 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:31.539 00:42:04 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:31.539 00:42:04 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:31.539 00:42:04 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:31.539 00:42:04 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:31.539 00:42:04 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:31.539 00:42:04 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:31.539 00:42:04 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:31.539 00:42:04 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:31.539 00:42:04 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:31.539 00:42:04 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:31.539 00:42:04 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:31.539 { 00:04:31.539 "name": "Malloc0", 00:04:31.539 "aliases": [ 00:04:31.539 "651f9f89-cf3d-4682-b586-63d80df01651" 00:04:31.539 ], 00:04:31.539 "product_name": "Malloc disk", 00:04:31.539 "block_size": 512, 00:04:31.539 "num_blocks": 16384, 00:04:31.539 "uuid": "651f9f89-cf3d-4682-b586-63d80df01651", 00:04:31.539 "assigned_rate_limits": { 00:04:31.539 "rw_ios_per_sec": 0, 00:04:31.539 "rw_mbytes_per_sec": 0, 00:04:31.539 "r_mbytes_per_sec": 0, 00:04:31.539 "w_mbytes_per_sec": 0 00:04:31.539 }, 
00:04:31.539 "claimed": false, 00:04:31.539 "zoned": false, 00:04:31.539 "supported_io_types": { 00:04:31.539 "read": true, 00:04:31.539 "write": true, 00:04:31.539 "unmap": true, 00:04:31.539 "flush": true, 00:04:31.539 "reset": true, 00:04:31.539 "nvme_admin": false, 00:04:31.539 "nvme_io": false, 00:04:31.539 "nvme_io_md": false, 00:04:31.539 "write_zeroes": true, 00:04:31.539 "zcopy": true, 00:04:31.539 "get_zone_info": false, 00:04:31.539 "zone_management": false, 00:04:31.539 "zone_append": false, 00:04:31.539 "compare": false, 00:04:31.539 "compare_and_write": false, 00:04:31.539 "abort": true, 00:04:31.539 "seek_hole": false, 00:04:31.539 "seek_data": false, 00:04:31.539 "copy": true, 00:04:31.539 "nvme_iov_md": false 00:04:31.539 }, 00:04:31.539 "memory_domains": [ 00:04:31.539 { 00:04:31.539 "dma_device_id": "system", 00:04:31.539 "dma_device_type": 1 00:04:31.539 }, 00:04:31.539 { 00:04:31.539 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:31.539 "dma_device_type": 2 00:04:31.539 } 00:04:31.539 ], 00:04:31.539 "driver_specific": {} 00:04:31.539 } 00:04:31.539 ]' 00:04:31.539 00:42:04 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:31.802 00:42:04 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:31.802 00:42:04 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:31.802 00:42:04 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:31.802 00:42:04 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:31.802 [2024-12-06 00:42:04.284742] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:31.802 [2024-12-06 00:42:04.284828] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:31.802 [2024-12-06 00:42:04.284891] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000022880 00:04:31.802 [2024-12-06 00:42:04.284917] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:31.802 [2024-12-06 00:42:04.287744] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:31.802 [2024-12-06 00:42:04.287783] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:31.802 Passthru0 00:04:31.802 00:42:04 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:31.802 00:42:04 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:31.802 00:42:04 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:31.802 00:42:04 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:31.802 00:42:04 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:31.802 00:42:04 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:31.802 { 00:04:31.802 "name": "Malloc0", 00:04:31.802 "aliases": [ 00:04:31.802 "651f9f89-cf3d-4682-b586-63d80df01651" 00:04:31.802 ], 00:04:31.802 "product_name": "Malloc disk", 00:04:31.802 "block_size": 512, 00:04:31.802 "num_blocks": 16384, 00:04:31.802 "uuid": "651f9f89-cf3d-4682-b586-63d80df01651", 00:04:31.802 "assigned_rate_limits": { 00:04:31.802 "rw_ios_per_sec": 0, 00:04:31.802 "rw_mbytes_per_sec": 0, 00:04:31.802 "r_mbytes_per_sec": 0, 00:04:31.802 "w_mbytes_per_sec": 0 00:04:31.802 }, 00:04:31.802 "claimed": true, 00:04:31.802 "claim_type": "exclusive_write", 00:04:31.802 "zoned": false, 00:04:31.802 "supported_io_types": { 00:04:31.802 "read": true, 00:04:31.802 "write": true, 00:04:31.802 "unmap": true, 00:04:31.802 
"flush": true, 00:04:31.802 "reset": true, 00:04:31.802 "nvme_admin": false, 00:04:31.802 "nvme_io": false, 00:04:31.802 "nvme_io_md": false, 00:04:31.802 "write_zeroes": true, 00:04:31.802 "zcopy": true, 00:04:31.802 "get_zone_info": false, 00:04:31.802 "zone_management": false, 00:04:31.802 "zone_append": false, 00:04:31.802 "compare": false, 00:04:31.802 "compare_and_write": false, 00:04:31.802 "abort": true, 00:04:31.802 "seek_hole": false, 00:04:31.802 "seek_data": false, 00:04:31.802 "copy": true, 00:04:31.802 "nvme_iov_md": false 00:04:31.802 }, 00:04:31.802 "memory_domains": [ 00:04:31.802 { 00:04:31.802 "dma_device_id": "system", 00:04:31.802 "dma_device_type": 1 00:04:31.802 }, 00:04:31.802 { 00:04:31.802 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:31.802 "dma_device_type": 2 00:04:31.802 } 00:04:31.802 ], 00:04:31.802 "driver_specific": {} 00:04:31.802 }, 00:04:31.802 { 00:04:31.802 "name": "Passthru0", 00:04:31.802 "aliases": [ 00:04:31.802 "2c4fa8da-60f5-5d8d-be28-0d523deeb948" 00:04:31.802 ], 00:04:31.802 "product_name": "passthru", 00:04:31.802 "block_size": 512, 00:04:31.802 "num_blocks": 16384, 00:04:31.802 "uuid": "2c4fa8da-60f5-5d8d-be28-0d523deeb948", 00:04:31.802 "assigned_rate_limits": { 00:04:31.802 "rw_ios_per_sec": 0, 00:04:31.802 "rw_mbytes_per_sec": 0, 00:04:31.802 "r_mbytes_per_sec": 0, 00:04:31.802 "w_mbytes_per_sec": 0 00:04:31.802 }, 00:04:31.802 "claimed": false, 00:04:31.802 "zoned": false, 00:04:31.802 "supported_io_types": { 00:04:31.802 "read": true, 00:04:31.802 "write": true, 00:04:31.802 "unmap": true, 00:04:31.802 "flush": true, 00:04:31.802 "reset": true, 00:04:31.802 "nvme_admin": false, 00:04:31.802 "nvme_io": false, 00:04:31.802 "nvme_io_md": false, 00:04:31.802 "write_zeroes": true, 00:04:31.802 "zcopy": true, 00:04:31.802 "get_zone_info": false, 00:04:31.802 "zone_management": false, 00:04:31.802 "zone_append": false, 00:04:31.802 "compare": false, 00:04:31.802 "compare_and_write": false, 00:04:31.802 "abort": true, 00:04:31.802 "seek_hole": false, 00:04:31.802 "seek_data": false, 00:04:31.802 "copy": true, 00:04:31.802 "nvme_iov_md": false 00:04:31.802 }, 00:04:31.802 "memory_domains": [ 00:04:31.802 { 00:04:31.802 "dma_device_id": "system", 00:04:31.802 "dma_device_type": 1 00:04:31.802 }, 00:04:31.802 { 00:04:31.802 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:31.802 "dma_device_type": 2 00:04:31.802 } 00:04:31.802 ], 00:04:31.802 "driver_specific": { 00:04:31.802 "passthru": { 00:04:31.802 "name": "Passthru0", 00:04:31.802 "base_bdev_name": "Malloc0" 00:04:31.802 } 00:04:31.802 } 00:04:31.802 } 00:04:31.802 ]' 00:04:31.802 00:42:04 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:31.802 00:42:04 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:31.802 00:42:04 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:31.802 00:42:04 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:31.802 00:42:04 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:31.802 00:42:04 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:31.802 00:42:04 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:31.802 00:42:04 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:31.802 00:42:04 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:31.802 00:42:04 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:31.802 00:42:04 rpc.rpc_integrity -- rpc/rpc.sh@25 
-- # rpc_cmd bdev_get_bdevs 00:04:31.802 00:42:04 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:31.802 00:42:04 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:31.802 00:42:04 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:31.802 00:42:04 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:31.802 00:42:04 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:31.802 00:42:04 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:31.802 00:04:31.802 real 0m0.266s 00:04:31.802 user 0m0.156s 00:04:31.802 sys 0m0.019s 00:04:31.802 00:42:04 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:31.802 00:42:04 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:31.802 ************************************ 00:04:31.802 END TEST rpc_integrity 00:04:31.802 ************************************ 00:04:31.802 00:42:04 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:31.802 00:42:04 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:31.802 00:42:04 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:31.802 00:42:04 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:31.802 ************************************ 00:04:31.802 START TEST rpc_plugins 00:04:31.802 ************************************ 00:04:31.802 00:42:04 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:04:31.802 00:42:04 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:31.802 00:42:04 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:31.802 00:42:04 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:31.802 00:42:04 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:31.802 00:42:04 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:31.802 00:42:04 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:31.802 00:42:04 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:31.802 00:42:04 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:31.802 00:42:04 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:31.802 00:42:04 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:31.802 { 00:04:31.802 "name": "Malloc1", 00:04:31.802 "aliases": [ 00:04:31.802 "ab004aed-cb6a-495c-a598-893ef33b2ee1" 00:04:31.802 ], 00:04:31.802 "product_name": "Malloc disk", 00:04:31.802 "block_size": 4096, 00:04:31.802 "num_blocks": 256, 00:04:31.802 "uuid": "ab004aed-cb6a-495c-a598-893ef33b2ee1", 00:04:31.802 "assigned_rate_limits": { 00:04:31.802 "rw_ios_per_sec": 0, 00:04:31.802 "rw_mbytes_per_sec": 0, 00:04:31.802 "r_mbytes_per_sec": 0, 00:04:31.802 "w_mbytes_per_sec": 0 00:04:31.802 }, 00:04:31.802 "claimed": false, 00:04:31.802 "zoned": false, 00:04:31.802 "supported_io_types": { 00:04:31.802 "read": true, 00:04:31.802 "write": true, 00:04:31.802 "unmap": true, 00:04:31.802 "flush": true, 00:04:31.802 "reset": true, 00:04:31.802 "nvme_admin": false, 00:04:31.802 "nvme_io": false, 00:04:31.802 "nvme_io_md": false, 00:04:31.802 "write_zeroes": true, 00:04:31.802 "zcopy": true, 00:04:31.802 "get_zone_info": false, 00:04:31.802 "zone_management": false, 00:04:31.802 "zone_append": false, 00:04:31.802 "compare": false, 00:04:31.802 "compare_and_write": false, 00:04:31.802 "abort": true, 00:04:31.802 "seek_hole": false, 00:04:31.802 "seek_data": false, 00:04:31.802 "copy": true, 00:04:31.802 "nvme_iov_md": 
false 00:04:31.802 }, 00:04:31.802 "memory_domains": [ 00:04:31.802 { 00:04:31.802 "dma_device_id": "system", 00:04:31.802 "dma_device_type": 1 00:04:31.802 }, 00:04:31.802 { 00:04:31.802 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:31.802 "dma_device_type": 2 00:04:31.802 } 00:04:31.802 ], 00:04:31.802 "driver_specific": {} 00:04:31.802 } 00:04:31.802 ]' 00:04:31.802 00:42:04 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:32.058 00:42:04 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:32.058 00:42:04 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:32.058 00:42:04 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:32.058 00:42:04 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:32.058 00:42:04 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:32.058 00:42:04 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:32.058 00:42:04 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:32.058 00:42:04 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:32.058 00:42:04 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:32.058 00:42:04 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:32.058 00:42:04 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:32.058 00:42:04 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:32.058 00:04:32.058 real 0m0.120s 00:04:32.058 user 0m0.077s 00:04:32.058 sys 0m0.008s 00:04:32.058 00:42:04 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:32.058 00:42:04 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:32.058 ************************************ 00:04:32.058 END TEST rpc_plugins 00:04:32.058 ************************************ 00:04:32.058 00:42:04 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:32.058 00:42:04 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:32.058 00:42:04 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:32.058 00:42:04 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:32.058 ************************************ 00:04:32.058 START TEST rpc_trace_cmd_test 00:04:32.058 ************************************ 00:04:32.058 00:42:04 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:04:32.058 00:42:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:32.058 00:42:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:32.058 00:42:04 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:32.058 00:42:04 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:32.058 00:42:04 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:32.058 00:42:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:32.058 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid2815568", 00:04:32.058 "tpoint_group_mask": "0x8", 00:04:32.058 "iscsi_conn": { 00:04:32.058 "mask": "0x2", 00:04:32.058 "tpoint_mask": "0x0" 00:04:32.058 }, 00:04:32.058 "scsi": { 00:04:32.058 "mask": "0x4", 00:04:32.058 "tpoint_mask": "0x0" 00:04:32.058 }, 00:04:32.058 "bdev": { 00:04:32.058 "mask": "0x8", 00:04:32.058 "tpoint_mask": "0xffffffffffffffff" 00:04:32.058 }, 00:04:32.058 "nvmf_rdma": { 00:04:32.058 "mask": "0x10", 00:04:32.058 "tpoint_mask": "0x0" 00:04:32.058 }, 00:04:32.058 "nvmf_tcp": { 00:04:32.058 "mask": "0x20", 00:04:32.058 
"tpoint_mask": "0x0" 00:04:32.058 }, 00:04:32.058 "ftl": { 00:04:32.058 "mask": "0x40", 00:04:32.058 "tpoint_mask": "0x0" 00:04:32.058 }, 00:04:32.058 "blobfs": { 00:04:32.058 "mask": "0x80", 00:04:32.058 "tpoint_mask": "0x0" 00:04:32.058 }, 00:04:32.058 "dsa": { 00:04:32.058 "mask": "0x200", 00:04:32.058 "tpoint_mask": "0x0" 00:04:32.058 }, 00:04:32.058 "thread": { 00:04:32.058 "mask": "0x400", 00:04:32.058 "tpoint_mask": "0x0" 00:04:32.058 }, 00:04:32.058 "nvme_pcie": { 00:04:32.058 "mask": "0x800", 00:04:32.058 "tpoint_mask": "0x0" 00:04:32.058 }, 00:04:32.058 "iaa": { 00:04:32.058 "mask": "0x1000", 00:04:32.058 "tpoint_mask": "0x0" 00:04:32.058 }, 00:04:32.058 "nvme_tcp": { 00:04:32.058 "mask": "0x2000", 00:04:32.058 "tpoint_mask": "0x0" 00:04:32.058 }, 00:04:32.058 "bdev_nvme": { 00:04:32.058 "mask": "0x4000", 00:04:32.058 "tpoint_mask": "0x0" 00:04:32.058 }, 00:04:32.058 "sock": { 00:04:32.058 "mask": "0x8000", 00:04:32.059 "tpoint_mask": "0x0" 00:04:32.059 }, 00:04:32.059 "blob": { 00:04:32.059 "mask": "0x10000", 00:04:32.059 "tpoint_mask": "0x0" 00:04:32.059 }, 00:04:32.059 "bdev_raid": { 00:04:32.059 "mask": "0x20000", 00:04:32.059 "tpoint_mask": "0x0" 00:04:32.059 }, 00:04:32.059 "scheduler": { 00:04:32.059 "mask": "0x40000", 00:04:32.059 "tpoint_mask": "0x0" 00:04:32.059 } 00:04:32.059 }' 00:04:32.059 00:42:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:32.059 00:42:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:04:32.059 00:42:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:32.059 00:42:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:32.059 00:42:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:32.059 00:42:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:32.059 00:42:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:32.316 00:42:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:32.316 00:42:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:32.316 00:42:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:32.316 00:04:32.316 real 0m0.199s 00:04:32.316 user 0m0.176s 00:04:32.316 sys 0m0.017s 00:04:32.316 00:42:04 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:32.316 00:42:04 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:32.316 ************************************ 00:04:32.316 END TEST rpc_trace_cmd_test 00:04:32.316 ************************************ 00:04:32.316 00:42:04 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:32.316 00:42:04 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:32.316 00:42:04 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:32.316 00:42:04 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:32.316 00:42:04 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:32.316 00:42:04 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:32.316 ************************************ 00:04:32.316 START TEST rpc_daemon_integrity 00:04:32.316 ************************************ 00:04:32.316 00:42:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:04:32.316 00:42:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:32.316 00:42:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:32.316 00:42:04 
rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:32.316 00:42:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:32.316 00:42:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:32.316 00:42:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:32.316 00:42:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:32.316 00:42:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:32.316 00:42:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:32.316 00:42:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:32.316 00:42:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:32.316 00:42:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:32.316 00:42:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:32.316 00:42:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:32.316 00:42:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:32.316 00:42:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:32.316 00:42:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:32.316 { 00:04:32.316 "name": "Malloc2", 00:04:32.316 "aliases": [ 00:04:32.316 "f2105ed8-2b65-490f-8550-5003a19f703a" 00:04:32.316 ], 00:04:32.316 "product_name": "Malloc disk", 00:04:32.316 "block_size": 512, 00:04:32.316 "num_blocks": 16384, 00:04:32.316 "uuid": "f2105ed8-2b65-490f-8550-5003a19f703a", 00:04:32.316 "assigned_rate_limits": { 00:04:32.316 "rw_ios_per_sec": 0, 00:04:32.316 "rw_mbytes_per_sec": 0, 00:04:32.316 "r_mbytes_per_sec": 0, 00:04:32.316 "w_mbytes_per_sec": 0 00:04:32.316 }, 00:04:32.316 "claimed": false, 00:04:32.316 "zoned": false, 00:04:32.316 "supported_io_types": { 00:04:32.316 "read": true, 00:04:32.316 "write": true, 00:04:32.316 "unmap": true, 00:04:32.316 "flush": true, 00:04:32.316 "reset": true, 00:04:32.316 "nvme_admin": false, 00:04:32.316 "nvme_io": false, 00:04:32.316 "nvme_io_md": false, 00:04:32.316 "write_zeroes": true, 00:04:32.316 "zcopy": true, 00:04:32.316 "get_zone_info": false, 00:04:32.316 "zone_management": false, 00:04:32.316 "zone_append": false, 00:04:32.316 "compare": false, 00:04:32.316 "compare_and_write": false, 00:04:32.316 "abort": true, 00:04:32.316 "seek_hole": false, 00:04:32.316 "seek_data": false, 00:04:32.316 "copy": true, 00:04:32.316 "nvme_iov_md": false 00:04:32.316 }, 00:04:32.316 "memory_domains": [ 00:04:32.316 { 00:04:32.316 "dma_device_id": "system", 00:04:32.316 "dma_device_type": 1 00:04:32.316 }, 00:04:32.316 { 00:04:32.316 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:32.316 "dma_device_type": 2 00:04:32.316 } 00:04:32.316 ], 00:04:32.316 "driver_specific": {} 00:04:32.316 } 00:04:32.316 ]' 00:04:32.316 00:42:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:32.316 00:42:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:32.316 00:42:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:32.316 00:42:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:32.316 00:42:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:32.316 [2024-12-06 00:42:05.003215] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:32.316 
[2024-12-06 00:42:05.003281] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:32.316 [2024-12-06 00:42:05.003327] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000023a80 00:04:32.317 [2024-12-06 00:42:05.003353] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:32.317 [2024-12-06 00:42:05.006147] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:32.317 [2024-12-06 00:42:05.006185] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:32.317 Passthru0 00:04:32.317 00:42:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:32.317 00:42:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:32.317 00:42:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:32.317 00:42:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:32.574 00:42:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:32.574 00:42:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:32.574 { 00:04:32.574 "name": "Malloc2", 00:04:32.574 "aliases": [ 00:04:32.574 "f2105ed8-2b65-490f-8550-5003a19f703a" 00:04:32.574 ], 00:04:32.574 "product_name": "Malloc disk", 00:04:32.574 "block_size": 512, 00:04:32.574 "num_blocks": 16384, 00:04:32.574 "uuid": "f2105ed8-2b65-490f-8550-5003a19f703a", 00:04:32.574 "assigned_rate_limits": { 00:04:32.574 "rw_ios_per_sec": 0, 00:04:32.574 "rw_mbytes_per_sec": 0, 00:04:32.574 "r_mbytes_per_sec": 0, 00:04:32.574 "w_mbytes_per_sec": 0 00:04:32.574 }, 00:04:32.574 "claimed": true, 00:04:32.574 "claim_type": "exclusive_write", 00:04:32.574 "zoned": false, 00:04:32.574 "supported_io_types": { 00:04:32.574 "read": true, 00:04:32.574 "write": true, 00:04:32.574 "unmap": true, 00:04:32.574 "flush": true, 00:04:32.574 "reset": true, 00:04:32.574 "nvme_admin": false, 00:04:32.574 "nvme_io": false, 00:04:32.574 "nvme_io_md": false, 00:04:32.574 "write_zeroes": true, 00:04:32.574 "zcopy": true, 00:04:32.574 "get_zone_info": false, 00:04:32.574 "zone_management": false, 00:04:32.574 "zone_append": false, 00:04:32.574 "compare": false, 00:04:32.574 "compare_and_write": false, 00:04:32.574 "abort": true, 00:04:32.574 "seek_hole": false, 00:04:32.574 "seek_data": false, 00:04:32.574 "copy": true, 00:04:32.574 "nvme_iov_md": false 00:04:32.574 }, 00:04:32.574 "memory_domains": [ 00:04:32.574 { 00:04:32.574 "dma_device_id": "system", 00:04:32.574 "dma_device_type": 1 00:04:32.574 }, 00:04:32.574 { 00:04:32.574 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:32.574 "dma_device_type": 2 00:04:32.574 } 00:04:32.574 ], 00:04:32.574 "driver_specific": {} 00:04:32.574 }, 00:04:32.574 { 00:04:32.574 "name": "Passthru0", 00:04:32.574 "aliases": [ 00:04:32.574 "8548ec73-c061-5ffb-89cb-ae4062389ba5" 00:04:32.574 ], 00:04:32.574 "product_name": "passthru", 00:04:32.574 "block_size": 512, 00:04:32.574 "num_blocks": 16384, 00:04:32.574 "uuid": "8548ec73-c061-5ffb-89cb-ae4062389ba5", 00:04:32.574 "assigned_rate_limits": { 00:04:32.574 "rw_ios_per_sec": 0, 00:04:32.574 "rw_mbytes_per_sec": 0, 00:04:32.574 "r_mbytes_per_sec": 0, 00:04:32.574 "w_mbytes_per_sec": 0 00:04:32.574 }, 00:04:32.574 "claimed": false, 00:04:32.574 "zoned": false, 00:04:32.574 "supported_io_types": { 00:04:32.574 "read": true, 00:04:32.574 "write": true, 00:04:32.574 "unmap": true, 00:04:32.574 "flush": true, 00:04:32.574 "reset": true, 
00:04:32.574 "nvme_admin": false, 00:04:32.574 "nvme_io": false, 00:04:32.574 "nvme_io_md": false, 00:04:32.574 "write_zeroes": true, 00:04:32.574 "zcopy": true, 00:04:32.574 "get_zone_info": false, 00:04:32.574 "zone_management": false, 00:04:32.574 "zone_append": false, 00:04:32.574 "compare": false, 00:04:32.574 "compare_and_write": false, 00:04:32.574 "abort": true, 00:04:32.574 "seek_hole": false, 00:04:32.574 "seek_data": false, 00:04:32.574 "copy": true, 00:04:32.574 "nvme_iov_md": false 00:04:32.574 }, 00:04:32.574 "memory_domains": [ 00:04:32.574 { 00:04:32.574 "dma_device_id": "system", 00:04:32.574 "dma_device_type": 1 00:04:32.574 }, 00:04:32.574 { 00:04:32.574 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:32.574 "dma_device_type": 2 00:04:32.574 } 00:04:32.574 ], 00:04:32.574 "driver_specific": { 00:04:32.574 "passthru": { 00:04:32.574 "name": "Passthru0", 00:04:32.574 "base_bdev_name": "Malloc2" 00:04:32.574 } 00:04:32.574 } 00:04:32.574 } 00:04:32.574 ]' 00:04:32.574 00:42:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:32.574 00:42:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:32.574 00:42:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:32.574 00:42:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:32.574 00:42:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:32.574 00:42:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:32.574 00:42:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:32.574 00:42:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:32.574 00:42:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:32.574 00:42:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:32.574 00:42:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:32.574 00:42:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:32.574 00:42:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:32.574 00:42:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:32.574 00:42:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:32.574 00:42:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:32.574 00:42:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:32.574 00:04:32.574 real 0m0.258s 00:04:32.574 user 0m0.147s 00:04:32.574 sys 0m0.025s 00:04:32.574 00:42:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:32.574 00:42:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:32.574 ************************************ 00:04:32.574 END TEST rpc_daemon_integrity 00:04:32.574 ************************************ 00:04:32.574 00:42:05 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:32.574 00:42:05 rpc -- rpc/rpc.sh@84 -- # killprocess 2815568 00:04:32.574 00:42:05 rpc -- common/autotest_common.sh@954 -- # '[' -z 2815568 ']' 00:04:32.574 00:42:05 rpc -- common/autotest_common.sh@958 -- # kill -0 2815568 00:04:32.574 00:42:05 rpc -- common/autotest_common.sh@959 -- # uname 00:04:32.574 00:42:05 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:32.574 00:42:05 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2815568 
00:04:32.574 00:42:05 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:32.574 00:42:05 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:32.574 00:42:05 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2815568' 00:04:32.574 killing process with pid 2815568 00:04:32.574 00:42:05 rpc -- common/autotest_common.sh@973 -- # kill 2815568 00:04:32.574 00:42:05 rpc -- common/autotest_common.sh@978 -- # wait 2815568 00:04:35.161 00:04:35.161 real 0m5.014s 00:04:35.161 user 0m5.552s 00:04:35.161 sys 0m0.825s 00:04:35.161 00:42:07 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:35.161 00:42:07 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:35.161 ************************************ 00:04:35.161 END TEST rpc 00:04:35.161 ************************************ 00:04:35.161 00:42:07 -- spdk/autotest.sh@157 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:35.161 00:42:07 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:35.161 00:42:07 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:35.161 00:42:07 -- common/autotest_common.sh@10 -- # set +x 00:04:35.161 ************************************ 00:04:35.161 START TEST skip_rpc 00:04:35.161 ************************************ 00:04:35.161 00:42:07 skip_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:35.161 * Looking for test storage... 00:04:35.161 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:35.161 00:42:07 skip_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:35.161 00:42:07 skip_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:04:35.161 00:42:07 skip_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:35.161 00:42:07 skip_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:35.161 00:42:07 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:35.161 00:42:07 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:35.161 00:42:07 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:35.161 00:42:07 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:35.161 00:42:07 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:35.161 00:42:07 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:35.161 00:42:07 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:35.161 00:42:07 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:35.161 00:42:07 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:35.161 00:42:07 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:35.161 00:42:07 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:35.161 00:42:07 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:35.161 00:42:07 skip_rpc -- scripts/common.sh@345 -- # : 1 00:04:35.161 00:42:07 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:35.161 00:42:07 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:35.162 00:42:07 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:35.162 00:42:07 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:04:35.162 00:42:07 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:35.162 00:42:07 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:04:35.162 00:42:07 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:35.162 00:42:07 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:35.162 00:42:07 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:04:35.162 00:42:07 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:35.162 00:42:07 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:04:35.162 00:42:07 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:35.162 00:42:07 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:35.162 00:42:07 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:35.162 00:42:07 skip_rpc -- scripts/common.sh@368 -- # return 0 00:04:35.162 00:42:07 skip_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:35.162 00:42:07 skip_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:35.162 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:35.162 --rc genhtml_branch_coverage=1 00:04:35.162 --rc genhtml_function_coverage=1 00:04:35.162 --rc genhtml_legend=1 00:04:35.162 --rc geninfo_all_blocks=1 00:04:35.162 --rc geninfo_unexecuted_blocks=1 00:04:35.162 00:04:35.162 ' 00:04:35.162 00:42:07 skip_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:35.162 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:35.162 --rc genhtml_branch_coverage=1 00:04:35.162 --rc genhtml_function_coverage=1 00:04:35.162 --rc genhtml_legend=1 00:04:35.162 --rc geninfo_all_blocks=1 00:04:35.162 --rc geninfo_unexecuted_blocks=1 00:04:35.162 00:04:35.162 ' 00:04:35.162 00:42:07 skip_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:35.162 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:35.162 --rc genhtml_branch_coverage=1 00:04:35.162 --rc genhtml_function_coverage=1 00:04:35.162 --rc genhtml_legend=1 00:04:35.162 --rc geninfo_all_blocks=1 00:04:35.162 --rc geninfo_unexecuted_blocks=1 00:04:35.162 00:04:35.162 ' 00:04:35.162 00:42:07 skip_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:35.162 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:35.162 --rc genhtml_branch_coverage=1 00:04:35.162 --rc genhtml_function_coverage=1 00:04:35.162 --rc genhtml_legend=1 00:04:35.162 --rc geninfo_all_blocks=1 00:04:35.162 --rc geninfo_unexecuted_blocks=1 00:04:35.162 00:04:35.162 ' 00:04:35.162 00:42:07 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:35.162 00:42:07 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:35.162 00:42:07 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:35.162 00:42:07 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:35.162 00:42:07 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:35.162 00:42:07 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:35.162 ************************************ 00:04:35.162 START TEST skip_rpc 00:04:35.162 ************************************ 00:04:35.419 00:42:07 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:04:35.419 
00:42:07 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=2816614 00:04:35.419 00:42:07 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:35.419 00:42:07 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:35.419 00:42:07 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:35.419 [2024-12-06 00:42:07.968008] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 24.03.0 initialization... 00:04:35.419 [2024-12-06 00:42:07.968167] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2816614 ] 00:04:35.419 [2024-12-06 00:42:08.110160] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:35.677 [2024-12-06 00:42:08.247974] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:40.942 00:42:12 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:40.942 00:42:12 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:04:40.942 00:42:12 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:40.942 00:42:12 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:04:40.942 00:42:12 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:40.942 00:42:12 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:04:40.942 00:42:12 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:40.942 00:42:12 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:04:40.942 00:42:12 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:40.942 00:42:12 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:40.942 00:42:12 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:04:40.942 00:42:12 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:04:40.942 00:42:12 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:40.942 00:42:12 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:40.942 00:42:12 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:40.942 00:42:12 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:40.942 00:42:12 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 2816614 00:04:40.942 00:42:12 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 2816614 ']' 00:04:40.942 00:42:12 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 2816614 00:04:40.942 00:42:12 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:04:40.942 00:42:12 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:40.942 00:42:12 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2816614 00:04:40.942 00:42:12 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:40.942 00:42:12 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:40.942 00:42:12 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2816614' 00:04:40.942 killing process with pid 2816614 00:04:40.942 00:42:12 
skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 2816614 00:04:40.942 00:42:12 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 2816614 00:04:42.846 00:04:42.846 real 0m7.480s 00:04:42.846 user 0m6.965s 00:04:42.846 sys 0m0.509s 00:04:42.846 00:42:15 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:42.846 00:42:15 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:42.846 ************************************ 00:04:42.846 END TEST skip_rpc 00:04:42.847 ************************************ 00:04:42.847 00:42:15 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:42.847 00:42:15 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:42.847 00:42:15 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:42.847 00:42:15 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:42.847 ************************************ 00:04:42.847 START TEST skip_rpc_with_json 00:04:42.847 ************************************ 00:04:42.847 00:42:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:04:42.847 00:42:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:42.847 00:42:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=2817755 00:04:42.847 00:42:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:42.847 00:42:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:42.847 00:42:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 2817755 00:04:42.847 00:42:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 2817755 ']' 00:04:42.847 00:42:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:42.847 00:42:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:42.847 00:42:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:42.847 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:42.847 00:42:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:42.847 00:42:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:42.847 [2024-12-06 00:42:15.503474] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 24.03.0 initialization... 
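The skip_rpc flow above boils down to asking the target for spdk_get_version over its UNIX domain socket: with --no-rpc-server the call must fail, while skip_rpc_with_json waits for /var/tmp/spdk.sock and expects it to succeed. A rough sketch of the equivalent manual check, assuming rpc_cmd resolves to scripts/rpc.py from the same checkout:

#!/usr/bin/env bash
# Issue spdk_get_version against the default RPC socket once spdk_tgt listens.
SPDK_DIR=${SPDK_DIR:-/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk}
"$SPDK_DIR/scripts/rpc.py" -s /var/tmp/spdk.sock spdk_get_version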
00:04:42.847 [2024-12-06 00:42:15.503612] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2817755 ] 00:04:43.105 [2024-12-06 00:42:15.647181] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:43.105 [2024-12-06 00:42:15.784479] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:44.481 00:42:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:44.481 00:42:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:04:44.481 00:42:16 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:44.481 00:42:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:44.481 00:42:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:44.481 [2024-12-06 00:42:16.758447] nvmf_rpc.c:2707:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:44.481 request: 00:04:44.481 { 00:04:44.481 "trtype": "tcp", 00:04:44.481 "method": "nvmf_get_transports", 00:04:44.481 "req_id": 1 00:04:44.481 } 00:04:44.481 Got JSON-RPC error response 00:04:44.481 response: 00:04:44.481 { 00:04:44.481 "code": -19, 00:04:44.481 "message": "No such device" 00:04:44.481 } 00:04:44.481 00:42:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:04:44.481 00:42:16 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:44.481 00:42:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:44.481 00:42:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:44.481 [2024-12-06 00:42:16.766599] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:44.481 00:42:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:44.481 00:42:16 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:44.481 00:42:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:44.481 00:42:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:44.481 00:42:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:44.481 00:42:16 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:44.481 { 00:04:44.481 "subsystems": [ 00:04:44.481 { 00:04:44.481 "subsystem": "fsdev", 00:04:44.481 "config": [ 00:04:44.481 { 00:04:44.481 "method": "fsdev_set_opts", 00:04:44.481 "params": { 00:04:44.481 "fsdev_io_pool_size": 65535, 00:04:44.481 "fsdev_io_cache_size": 256 00:04:44.481 } 00:04:44.481 } 00:04:44.481 ] 00:04:44.481 }, 00:04:44.481 { 00:04:44.481 "subsystem": "keyring", 00:04:44.481 "config": [] 00:04:44.481 }, 00:04:44.481 { 00:04:44.481 "subsystem": "iobuf", 00:04:44.481 "config": [ 00:04:44.481 { 00:04:44.481 "method": "iobuf_set_options", 00:04:44.481 "params": { 00:04:44.481 "small_pool_count": 8192, 00:04:44.481 "large_pool_count": 1024, 00:04:44.481 "small_bufsize": 8192, 00:04:44.481 "large_bufsize": 135168, 00:04:44.481 "enable_numa": false 00:04:44.481 } 00:04:44.481 } 00:04:44.481 ] 00:04:44.481 }, 00:04:44.481 { 00:04:44.481 "subsystem": "sock", 00:04:44.481 "config": [ 
00:04:44.481 { 00:04:44.481 "method": "sock_set_default_impl", 00:04:44.481 "params": { 00:04:44.481 "impl_name": "posix" 00:04:44.481 } 00:04:44.481 }, 00:04:44.481 { 00:04:44.481 "method": "sock_impl_set_options", 00:04:44.481 "params": { 00:04:44.481 "impl_name": "ssl", 00:04:44.481 "recv_buf_size": 4096, 00:04:44.481 "send_buf_size": 4096, 00:04:44.481 "enable_recv_pipe": true, 00:04:44.481 "enable_quickack": false, 00:04:44.481 "enable_placement_id": 0, 00:04:44.481 "enable_zerocopy_send_server": true, 00:04:44.481 "enable_zerocopy_send_client": false, 00:04:44.481 "zerocopy_threshold": 0, 00:04:44.481 "tls_version": 0, 00:04:44.481 "enable_ktls": false 00:04:44.481 } 00:04:44.481 }, 00:04:44.481 { 00:04:44.481 "method": "sock_impl_set_options", 00:04:44.481 "params": { 00:04:44.481 "impl_name": "posix", 00:04:44.481 "recv_buf_size": 2097152, 00:04:44.481 "send_buf_size": 2097152, 00:04:44.481 "enable_recv_pipe": true, 00:04:44.481 "enable_quickack": false, 00:04:44.481 "enable_placement_id": 0, 00:04:44.481 "enable_zerocopy_send_server": true, 00:04:44.481 "enable_zerocopy_send_client": false, 00:04:44.481 "zerocopy_threshold": 0, 00:04:44.481 "tls_version": 0, 00:04:44.481 "enable_ktls": false 00:04:44.481 } 00:04:44.481 } 00:04:44.481 ] 00:04:44.481 }, 00:04:44.481 { 00:04:44.481 "subsystem": "vmd", 00:04:44.481 "config": [] 00:04:44.481 }, 00:04:44.482 { 00:04:44.482 "subsystem": "accel", 00:04:44.482 "config": [ 00:04:44.482 { 00:04:44.482 "method": "accel_set_options", 00:04:44.482 "params": { 00:04:44.482 "small_cache_size": 128, 00:04:44.482 "large_cache_size": 16, 00:04:44.482 "task_count": 2048, 00:04:44.482 "sequence_count": 2048, 00:04:44.482 "buf_count": 2048 00:04:44.482 } 00:04:44.482 } 00:04:44.482 ] 00:04:44.482 }, 00:04:44.482 { 00:04:44.482 "subsystem": "bdev", 00:04:44.482 "config": [ 00:04:44.482 { 00:04:44.482 "method": "bdev_set_options", 00:04:44.482 "params": { 00:04:44.482 "bdev_io_pool_size": 65535, 00:04:44.482 "bdev_io_cache_size": 256, 00:04:44.482 "bdev_auto_examine": true, 00:04:44.482 "iobuf_small_cache_size": 128, 00:04:44.482 "iobuf_large_cache_size": 16 00:04:44.482 } 00:04:44.482 }, 00:04:44.482 { 00:04:44.482 "method": "bdev_raid_set_options", 00:04:44.482 "params": { 00:04:44.482 "process_window_size_kb": 1024, 00:04:44.482 "process_max_bandwidth_mb_sec": 0 00:04:44.482 } 00:04:44.482 }, 00:04:44.482 { 00:04:44.482 "method": "bdev_iscsi_set_options", 00:04:44.482 "params": { 00:04:44.482 "timeout_sec": 30 00:04:44.482 } 00:04:44.482 }, 00:04:44.482 { 00:04:44.482 "method": "bdev_nvme_set_options", 00:04:44.482 "params": { 00:04:44.482 "action_on_timeout": "none", 00:04:44.482 "timeout_us": 0, 00:04:44.482 "timeout_admin_us": 0, 00:04:44.482 "keep_alive_timeout_ms": 10000, 00:04:44.482 "arbitration_burst": 0, 00:04:44.482 "low_priority_weight": 0, 00:04:44.482 "medium_priority_weight": 0, 00:04:44.482 "high_priority_weight": 0, 00:04:44.482 "nvme_adminq_poll_period_us": 10000, 00:04:44.482 "nvme_ioq_poll_period_us": 0, 00:04:44.482 "io_queue_requests": 0, 00:04:44.482 "delay_cmd_submit": true, 00:04:44.482 "transport_retry_count": 4, 00:04:44.482 "bdev_retry_count": 3, 00:04:44.482 "transport_ack_timeout": 0, 00:04:44.482 "ctrlr_loss_timeout_sec": 0, 00:04:44.482 "reconnect_delay_sec": 0, 00:04:44.482 "fast_io_fail_timeout_sec": 0, 00:04:44.482 "disable_auto_failback": false, 00:04:44.482 "generate_uuids": false, 00:04:44.482 "transport_tos": 0, 00:04:44.482 "nvme_error_stat": false, 00:04:44.482 "rdma_srq_size": 0, 00:04:44.482 "io_path_stat": 
false, 00:04:44.482 "allow_accel_sequence": false, 00:04:44.482 "rdma_max_cq_size": 0, 00:04:44.482 "rdma_cm_event_timeout_ms": 0, 00:04:44.482 "dhchap_digests": [ 00:04:44.482 "sha256", 00:04:44.482 "sha384", 00:04:44.482 "sha512" 00:04:44.482 ], 00:04:44.482 "dhchap_dhgroups": [ 00:04:44.482 "null", 00:04:44.482 "ffdhe2048", 00:04:44.482 "ffdhe3072", 00:04:44.482 "ffdhe4096", 00:04:44.482 "ffdhe6144", 00:04:44.482 "ffdhe8192" 00:04:44.482 ] 00:04:44.482 } 00:04:44.482 }, 00:04:44.482 { 00:04:44.482 "method": "bdev_nvme_set_hotplug", 00:04:44.482 "params": { 00:04:44.482 "period_us": 100000, 00:04:44.482 "enable": false 00:04:44.482 } 00:04:44.482 }, 00:04:44.482 { 00:04:44.482 "method": "bdev_wait_for_examine" 00:04:44.482 } 00:04:44.482 ] 00:04:44.482 }, 00:04:44.482 { 00:04:44.482 "subsystem": "scsi", 00:04:44.482 "config": null 00:04:44.482 }, 00:04:44.482 { 00:04:44.482 "subsystem": "scheduler", 00:04:44.482 "config": [ 00:04:44.482 { 00:04:44.482 "method": "framework_set_scheduler", 00:04:44.482 "params": { 00:04:44.482 "name": "static" 00:04:44.482 } 00:04:44.482 } 00:04:44.482 ] 00:04:44.482 }, 00:04:44.482 { 00:04:44.482 "subsystem": "vhost_scsi", 00:04:44.482 "config": [] 00:04:44.482 }, 00:04:44.482 { 00:04:44.482 "subsystem": "vhost_blk", 00:04:44.482 "config": [] 00:04:44.482 }, 00:04:44.482 { 00:04:44.482 "subsystem": "ublk", 00:04:44.482 "config": [] 00:04:44.482 }, 00:04:44.482 { 00:04:44.482 "subsystem": "nbd", 00:04:44.482 "config": [] 00:04:44.482 }, 00:04:44.482 { 00:04:44.482 "subsystem": "nvmf", 00:04:44.482 "config": [ 00:04:44.482 { 00:04:44.482 "method": "nvmf_set_config", 00:04:44.482 "params": { 00:04:44.482 "discovery_filter": "match_any", 00:04:44.482 "admin_cmd_passthru": { 00:04:44.482 "identify_ctrlr": false 00:04:44.482 }, 00:04:44.482 "dhchap_digests": [ 00:04:44.482 "sha256", 00:04:44.482 "sha384", 00:04:44.482 "sha512" 00:04:44.482 ], 00:04:44.482 "dhchap_dhgroups": [ 00:04:44.482 "null", 00:04:44.482 "ffdhe2048", 00:04:44.482 "ffdhe3072", 00:04:44.482 "ffdhe4096", 00:04:44.482 "ffdhe6144", 00:04:44.482 "ffdhe8192" 00:04:44.482 ] 00:04:44.482 } 00:04:44.482 }, 00:04:44.482 { 00:04:44.482 "method": "nvmf_set_max_subsystems", 00:04:44.482 "params": { 00:04:44.482 "max_subsystems": 1024 00:04:44.482 } 00:04:44.482 }, 00:04:44.482 { 00:04:44.482 "method": "nvmf_set_crdt", 00:04:44.482 "params": { 00:04:44.482 "crdt1": 0, 00:04:44.482 "crdt2": 0, 00:04:44.482 "crdt3": 0 00:04:44.482 } 00:04:44.482 }, 00:04:44.482 { 00:04:44.482 "method": "nvmf_create_transport", 00:04:44.482 "params": { 00:04:44.482 "trtype": "TCP", 00:04:44.482 "max_queue_depth": 128, 00:04:44.482 "max_io_qpairs_per_ctrlr": 127, 00:04:44.482 "in_capsule_data_size": 4096, 00:04:44.482 "max_io_size": 131072, 00:04:44.482 "io_unit_size": 131072, 00:04:44.482 "max_aq_depth": 128, 00:04:44.482 "num_shared_buffers": 511, 00:04:44.482 "buf_cache_size": 4294967295, 00:04:44.482 "dif_insert_or_strip": false, 00:04:44.482 "zcopy": false, 00:04:44.482 "c2h_success": true, 00:04:44.482 "sock_priority": 0, 00:04:44.482 "abort_timeout_sec": 1, 00:04:44.482 "ack_timeout": 0, 00:04:44.482 "data_wr_pool_size": 0 00:04:44.482 } 00:04:44.482 } 00:04:44.482 ] 00:04:44.482 }, 00:04:44.482 { 00:04:44.482 "subsystem": "iscsi", 00:04:44.482 "config": [ 00:04:44.482 { 00:04:44.482 "method": "iscsi_set_options", 00:04:44.482 "params": { 00:04:44.482 "node_base": "iqn.2016-06.io.spdk", 00:04:44.482 "max_sessions": 128, 00:04:44.482 "max_connections_per_session": 2, 00:04:44.482 "max_queue_depth": 64, 00:04:44.482 
"default_time2wait": 2, 00:04:44.482 "default_time2retain": 20, 00:04:44.482 "first_burst_length": 8192, 00:04:44.482 "immediate_data": true, 00:04:44.482 "allow_duplicated_isid": false, 00:04:44.482 "error_recovery_level": 0, 00:04:44.482 "nop_timeout": 60, 00:04:44.482 "nop_in_interval": 30, 00:04:44.482 "disable_chap": false, 00:04:44.482 "require_chap": false, 00:04:44.482 "mutual_chap": false, 00:04:44.482 "chap_group": 0, 00:04:44.482 "max_large_datain_per_connection": 64, 00:04:44.482 "max_r2t_per_connection": 4, 00:04:44.482 "pdu_pool_size": 36864, 00:04:44.482 "immediate_data_pool_size": 16384, 00:04:44.482 "data_out_pool_size": 2048 00:04:44.482 } 00:04:44.482 } 00:04:44.482 ] 00:04:44.482 } 00:04:44.482 ] 00:04:44.482 } 00:04:44.482 00:42:16 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:44.482 00:42:16 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 2817755 00:04:44.482 00:42:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 2817755 ']' 00:04:44.482 00:42:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 2817755 00:04:44.482 00:42:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:04:44.482 00:42:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:44.482 00:42:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2817755 00:04:44.482 00:42:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:44.482 00:42:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:44.482 00:42:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2817755' 00:04:44.482 killing process with pid 2817755 00:04:44.482 00:42:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 2817755 00:04:44.482 00:42:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 2817755 00:04:47.016 00:42:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=2818224 00:04:47.016 00:42:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:47.016 00:42:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:52.307 00:42:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 2818224 00:04:52.307 00:42:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 2818224 ']' 00:04:52.307 00:42:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 2818224 00:04:52.307 00:42:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:04:52.307 00:42:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:52.307 00:42:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2818224 00:04:52.307 00:42:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:52.307 00:42:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:52.307 00:42:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2818224' 00:04:52.307 killing process with pid 2818224 00:04:52.307 
00:42:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 2818224 00:04:52.307 00:42:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 2818224 00:04:54.207 00:42:26 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:54.207 00:42:26 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:54.207 00:04:54.207 real 0m11.499s 00:04:54.207 user 0m10.959s 00:04:54.207 sys 0m1.093s 00:04:54.207 00:42:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:54.207 00:42:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:54.207 ************************************ 00:04:54.207 END TEST skip_rpc_with_json 00:04:54.207 ************************************ 00:04:54.464 00:42:26 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:54.464 00:42:26 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:54.464 00:42:26 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:54.464 00:42:26 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:54.464 ************************************ 00:04:54.464 START TEST skip_rpc_with_delay 00:04:54.464 ************************************ 00:04:54.464 00:42:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:04:54.464 00:42:26 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:54.465 00:42:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:04:54.465 00:42:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:54.465 00:42:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:54.465 00:42:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:54.465 00:42:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:54.465 00:42:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:54.465 00:42:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:54.465 00:42:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:54.465 00:42:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:54.465 00:42:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:54.465 00:42:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:54.465 [2024-12-06 00:42:27.046835] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC 
server is going to be started. 00:04:54.465 00:42:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:04:54.465 00:42:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:54.465 00:42:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:54.465 00:42:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:54.465 00:04:54.465 real 0m0.150s 00:04:54.465 user 0m0.081s 00:04:54.465 sys 0m0.068s 00:04:54.465 00:42:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:54.465 00:42:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:04:54.465 ************************************ 00:04:54.465 END TEST skip_rpc_with_delay 00:04:54.465 ************************************ 00:04:54.465 00:42:27 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:04:54.465 00:42:27 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:54.465 00:42:27 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:54.465 00:42:27 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:54.465 00:42:27 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:54.465 00:42:27 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:54.465 ************************************ 00:04:54.465 START TEST exit_on_failed_rpc_init 00:04:54.465 ************************************ 00:04:54.465 00:42:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:04:54.465 00:42:27 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=2819155 00:04:54.465 00:42:27 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:54.465 00:42:27 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 2819155 00:04:54.465 00:42:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 2819155 ']' 00:04:54.465 00:42:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:54.465 00:42:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:54.465 00:42:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:54.465 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:54.465 00:42:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:54.465 00:42:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:54.723 [2024-12-06 00:42:27.251576] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 24.03.0 initialization... 
00:04:54.723 [2024-12-06 00:42:27.251749] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2819155 ] 00:04:54.723 [2024-12-06 00:42:27.389935] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:54.982 [2024-12-06 00:42:27.523375] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:55.913 00:42:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:55.913 00:42:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:04:55.914 00:42:28 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:55.914 00:42:28 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:55.914 00:42:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:04:55.914 00:42:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:55.914 00:42:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:55.914 00:42:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:55.914 00:42:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:55.914 00:42:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:55.914 00:42:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:55.914 00:42:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:55.914 00:42:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:55.914 00:42:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:55.914 00:42:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:55.914 [2024-12-06 00:42:28.563248] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 24.03.0 initialization... 00:04:55.914 [2024-12-06 00:42:28.563414] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2819329 ] 00:04:56.172 [2024-12-06 00:42:28.708474] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:56.172 [2024-12-06 00:42:28.846075] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:56.172 [2024-12-06 00:42:28.846252] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:04:56.172 [2024-12-06 00:42:28.846303] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:04:56.172 [2024-12-06 00:42:28.846326] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:56.430 00:42:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:04:56.430 00:42:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:56.430 00:42:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:04:56.430 00:42:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:04:56.430 00:42:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:04:56.430 00:42:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:56.430 00:42:29 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:04:56.430 00:42:29 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 2819155 00:04:56.430 00:42:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 2819155 ']' 00:04:56.430 00:42:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 2819155 00:04:56.430 00:42:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:04:56.430 00:42:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:56.430 00:42:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2819155 00:04:56.687 00:42:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:56.687 00:42:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:56.687 00:42:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2819155' 00:04:56.687 killing process with pid 2819155 00:04:56.687 00:42:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 2819155 00:04:56.687 00:42:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 2819155 00:04:59.219 00:04:59.219 real 0m4.456s 00:04:59.219 user 0m4.875s 00:04:59.219 sys 0m0.790s 00:04:59.219 00:42:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:59.219 00:42:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:59.219 ************************************ 00:04:59.219 END TEST exit_on_failed_rpc_init 00:04:59.219 ************************************ 00:04:59.219 00:42:31 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:59.219 00:04:59.219 real 0m23.933s 00:04:59.219 user 0m23.059s 00:04:59.219 sys 0m2.649s 00:04:59.219 00:42:31 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:59.219 00:42:31 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:59.219 ************************************ 00:04:59.219 END TEST skip_rpc 00:04:59.219 ************************************ 00:04:59.219 00:42:31 -- spdk/autotest.sh@158 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:59.219 00:42:31 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:59.219 00:42:31 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:59.219 00:42:31 -- 
common/autotest_common.sh@10 -- # set +x 00:04:59.219 ************************************ 00:04:59.219 START TEST rpc_client 00:04:59.219 ************************************ 00:04:59.219 00:42:31 rpc_client -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:59.219 * Looking for test storage... 00:04:59.219 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:04:59.219 00:42:31 rpc_client -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:59.219 00:42:31 rpc_client -- common/autotest_common.sh@1711 -- # lcov --version 00:04:59.219 00:42:31 rpc_client -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:59.219 00:42:31 rpc_client -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:59.219 00:42:31 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:59.219 00:42:31 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:59.219 00:42:31 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:59.219 00:42:31 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:04:59.219 00:42:31 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:04:59.219 00:42:31 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:04:59.219 00:42:31 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:04:59.219 00:42:31 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:04:59.219 00:42:31 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:04:59.219 00:42:31 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:04:59.219 00:42:31 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:59.219 00:42:31 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:04:59.219 00:42:31 rpc_client -- scripts/common.sh@345 -- # : 1 00:04:59.219 00:42:31 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:59.219 00:42:31 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:59.219 00:42:31 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:04:59.219 00:42:31 rpc_client -- scripts/common.sh@353 -- # local d=1 00:04:59.219 00:42:31 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:59.219 00:42:31 rpc_client -- scripts/common.sh@355 -- # echo 1 00:04:59.219 00:42:31 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:04:59.219 00:42:31 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:04:59.219 00:42:31 rpc_client -- scripts/common.sh@353 -- # local d=2 00:04:59.219 00:42:31 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:59.219 00:42:31 rpc_client -- scripts/common.sh@355 -- # echo 2 00:04:59.219 00:42:31 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:04:59.219 00:42:31 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:59.219 00:42:31 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:59.219 00:42:31 rpc_client -- scripts/common.sh@368 -- # return 0 00:04:59.219 00:42:31 rpc_client -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:59.219 00:42:31 rpc_client -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:59.219 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:59.219 --rc genhtml_branch_coverage=1 00:04:59.219 --rc genhtml_function_coverage=1 00:04:59.220 --rc genhtml_legend=1 00:04:59.220 --rc geninfo_all_blocks=1 00:04:59.220 --rc geninfo_unexecuted_blocks=1 00:04:59.220 00:04:59.220 ' 00:04:59.220 00:42:31 rpc_client -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:59.220 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:59.220 --rc genhtml_branch_coverage=1 00:04:59.220 --rc genhtml_function_coverage=1 00:04:59.220 --rc genhtml_legend=1 00:04:59.220 --rc geninfo_all_blocks=1 00:04:59.220 --rc geninfo_unexecuted_blocks=1 00:04:59.220 00:04:59.220 ' 00:04:59.220 00:42:31 rpc_client -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:59.220 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:59.220 --rc genhtml_branch_coverage=1 00:04:59.220 --rc genhtml_function_coverage=1 00:04:59.220 --rc genhtml_legend=1 00:04:59.220 --rc geninfo_all_blocks=1 00:04:59.220 --rc geninfo_unexecuted_blocks=1 00:04:59.220 00:04:59.220 ' 00:04:59.220 00:42:31 rpc_client -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:59.220 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:59.220 --rc genhtml_branch_coverage=1 00:04:59.220 --rc genhtml_function_coverage=1 00:04:59.220 --rc genhtml_legend=1 00:04:59.220 --rc geninfo_all_blocks=1 00:04:59.220 --rc geninfo_unexecuted_blocks=1 00:04:59.220 00:04:59.220 ' 00:04:59.220 00:42:31 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:04:59.220 OK 00:04:59.220 00:42:31 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:59.220 00:04:59.220 real 0m0.205s 00:04:59.220 user 0m0.126s 00:04:59.220 sys 0m0.086s 00:04:59.220 00:42:31 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:59.220 00:42:31 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:04:59.220 ************************************ 00:04:59.220 END TEST rpc_client 00:04:59.220 ************************************ 00:04:59.220 00:42:31 -- spdk/autotest.sh@159 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 
00:04:59.220 00:42:31 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:59.220 00:42:31 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:59.220 00:42:31 -- common/autotest_common.sh@10 -- # set +x 00:04:59.482 ************************************ 00:04:59.482 START TEST json_config 00:04:59.482 ************************************ 00:04:59.482 00:42:31 json_config -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:04:59.482 00:42:31 json_config -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:59.482 00:42:31 json_config -- common/autotest_common.sh@1711 -- # lcov --version 00:04:59.482 00:42:31 json_config -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:59.482 00:42:32 json_config -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:59.482 00:42:32 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:59.482 00:42:32 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:59.482 00:42:32 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:59.482 00:42:32 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:04:59.482 00:42:32 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:04:59.482 00:42:32 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:04:59.482 00:42:32 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:04:59.482 00:42:32 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:04:59.482 00:42:32 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:04:59.482 00:42:32 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:04:59.482 00:42:32 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:59.482 00:42:32 json_config -- scripts/common.sh@344 -- # case "$op" in 00:04:59.482 00:42:32 json_config -- scripts/common.sh@345 -- # : 1 00:04:59.482 00:42:32 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:59.482 00:42:32 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:59.482 00:42:32 json_config -- scripts/common.sh@365 -- # decimal 1 00:04:59.482 00:42:32 json_config -- scripts/common.sh@353 -- # local d=1 00:04:59.482 00:42:32 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:59.482 00:42:32 json_config -- scripts/common.sh@355 -- # echo 1 00:04:59.482 00:42:32 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:04:59.482 00:42:32 json_config -- scripts/common.sh@366 -- # decimal 2 00:04:59.482 00:42:32 json_config -- scripts/common.sh@353 -- # local d=2 00:04:59.482 00:42:32 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:59.482 00:42:32 json_config -- scripts/common.sh@355 -- # echo 2 00:04:59.482 00:42:32 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:04:59.482 00:42:32 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:59.482 00:42:32 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:59.482 00:42:32 json_config -- scripts/common.sh@368 -- # return 0 00:04:59.482 00:42:32 json_config -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:59.482 00:42:32 json_config -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:59.482 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:59.482 --rc genhtml_branch_coverage=1 00:04:59.482 --rc genhtml_function_coverage=1 00:04:59.482 --rc genhtml_legend=1 00:04:59.482 --rc geninfo_all_blocks=1 00:04:59.482 --rc geninfo_unexecuted_blocks=1 00:04:59.482 00:04:59.482 ' 00:04:59.482 00:42:32 json_config -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:59.482 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:59.482 --rc genhtml_branch_coverage=1 00:04:59.482 --rc genhtml_function_coverage=1 00:04:59.482 --rc genhtml_legend=1 00:04:59.482 --rc geninfo_all_blocks=1 00:04:59.482 --rc geninfo_unexecuted_blocks=1 00:04:59.482 00:04:59.482 ' 00:04:59.482 00:42:32 json_config -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:59.482 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:59.482 --rc genhtml_branch_coverage=1 00:04:59.482 --rc genhtml_function_coverage=1 00:04:59.482 --rc genhtml_legend=1 00:04:59.482 --rc geninfo_all_blocks=1 00:04:59.482 --rc geninfo_unexecuted_blocks=1 00:04:59.482 00:04:59.482 ' 00:04:59.482 00:42:32 json_config -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:59.482 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:59.482 --rc genhtml_branch_coverage=1 00:04:59.482 --rc genhtml_function_coverage=1 00:04:59.482 --rc genhtml_legend=1 00:04:59.482 --rc geninfo_all_blocks=1 00:04:59.482 --rc geninfo_unexecuted_blocks=1 00:04:59.482 00:04:59.482 ' 00:04:59.482 00:42:32 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:59.482 00:42:32 json_config -- nvmf/common.sh@7 -- # uname -s 00:04:59.482 00:42:32 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:59.482 00:42:32 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:59.482 00:42:32 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:59.482 00:42:32 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:59.482 00:42:32 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:59.482 00:42:32 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:59.482 00:42:32 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 
00:04:59.482 00:42:32 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:59.482 00:42:32 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:59.482 00:42:32 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:59.482 00:42:32 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:04:59.482 00:42:32 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:04:59.482 00:42:32 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:59.482 00:42:32 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:59.482 00:42:32 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:59.482 00:42:32 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:59.482 00:42:32 json_config -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:59.482 00:42:32 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:04:59.482 00:42:32 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:59.482 00:42:32 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:59.482 00:42:32 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:59.482 00:42:32 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:59.482 00:42:32 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:59.482 00:42:32 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:59.482 00:42:32 json_config -- paths/export.sh@5 -- # export PATH 00:04:59.482 00:42:32 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:59.482 00:42:32 json_config -- nvmf/common.sh@51 -- # : 0 00:04:59.482 00:42:32 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:59.482 00:42:32 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 
00:04:59.482 00:42:32 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:59.482 00:42:32 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:59.482 00:42:32 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:59.482 00:42:32 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:59.482 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:59.482 00:42:32 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:59.482 00:42:32 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:59.482 00:42:32 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:59.482 00:42:32 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:04:59.482 00:42:32 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:04:59.482 00:42:32 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:04:59.482 00:42:32 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:04:59.482 00:42:32 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:59.482 00:42:32 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:04:59.482 00:42:32 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:04:59.482 00:42:32 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:04:59.482 00:42:32 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:04:59.482 00:42:32 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:04:59.483 00:42:32 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:04:59.483 00:42:32 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:04:59.483 00:42:32 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:04:59.483 00:42:32 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:04:59.483 00:42:32 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:59.483 00:42:32 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:04:59.483 INFO: JSON configuration test init 00:04:59.483 00:42:32 json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:04:59.483 00:42:32 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:04:59.483 00:42:32 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:59.483 00:42:32 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:59.483 00:42:32 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:04:59.483 00:42:32 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:59.483 00:42:32 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:59.483 00:42:32 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:04:59.483 00:42:32 json_config -- 
json_config/common.sh@9 -- # local app=target 00:04:59.483 00:42:32 json_config -- json_config/common.sh@10 -- # shift 00:04:59.483 00:42:32 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:59.483 00:42:32 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:59.483 00:42:32 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:59.483 00:42:32 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:59.483 00:42:32 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:59.483 00:42:32 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=2819943 00:04:59.483 00:42:32 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:59.483 Waiting for target to run... 00:04:59.483 00:42:32 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:04:59.483 00:42:32 json_config -- json_config/common.sh@25 -- # waitforlisten 2819943 /var/tmp/spdk_tgt.sock 00:04:59.483 00:42:32 json_config -- common/autotest_common.sh@835 -- # '[' -z 2819943 ']' 00:04:59.483 00:42:32 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:59.483 00:42:32 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:59.483 00:42:32 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:59.483 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:59.483 00:42:32 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:59.483 00:42:32 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:59.483 [2024-12-06 00:42:32.178204] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 24.03.0 initialization... 
00:04:59.483 [2024-12-06 00:42:32.178350] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2819943 ] 00:05:00.417 [2024-12-06 00:42:32.782980] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:00.417 [2024-12-06 00:42:32.913048] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:00.676 00:42:33 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:00.676 00:42:33 json_config -- common/autotest_common.sh@868 -- # return 0 00:05:00.676 00:42:33 json_config -- json_config/common.sh@26 -- # echo '' 00:05:00.676 00:05:00.676 00:42:33 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:05:00.676 00:42:33 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:05:00.676 00:42:33 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:00.676 00:42:33 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:00.676 00:42:33 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:05:00.676 00:42:33 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:05:00.676 00:42:33 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:00.676 00:42:33 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:00.676 00:42:33 json_config -- json_config/json_config.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:05:00.676 00:42:33 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:05:00.676 00:42:33 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:05:04.859 00:42:37 json_config -- json_config/json_config.sh@283 -- # tgt_check_notification_types 00:05:04.859 00:42:37 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:05:04.859 00:42:37 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:04.859 00:42:37 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:04.859 00:42:37 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:05:04.859 00:42:37 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:05:04.859 00:42:37 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:05:04.859 00:42:37 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:05:04.859 00:42:37 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:05:04.859 00:42:37 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:05:04.859 00:42:37 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:05:04.859 00:42:37 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:05:04.859 00:42:37 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:05:04.859 00:42:37 json_config -- json_config/json_config.sh@51 -- # local get_types 00:05:04.859 00:42:37 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:05:04.859 00:42:37 json_config -- 
json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:05:04.859 00:42:37 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:05:04.859 00:42:37 json_config -- json_config/json_config.sh@54 -- # sort 00:05:04.859 00:42:37 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:05:04.859 00:42:37 json_config -- json_config/json_config.sh@54 -- # type_diff= 00:05:04.859 00:42:37 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:05:04.859 00:42:37 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:05:04.859 00:42:37 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:04.859 00:42:37 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:04.859 00:42:37 json_config -- json_config/json_config.sh@62 -- # return 0 00:05:04.859 00:42:37 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:05:04.859 00:42:37 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:05:04.859 00:42:37 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:05:04.859 00:42:37 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:05:04.859 00:42:37 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:05:04.859 00:42:37 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:05:04.859 00:42:37 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:04.859 00:42:37 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:04.859 00:42:37 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:05:04.859 00:42:37 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]] 00:05:04.859 00:42:37 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:05:04.859 00:42:37 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:04.859 00:42:37 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:05.118 MallocForNvmf0 00:05:05.118 00:42:37 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:05.118 00:42:37 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:05.376 MallocForNvmf1 00:05:05.376 00:42:37 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:05:05.376 00:42:37 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:05:05.633 [2024-12-06 00:42:38.171629] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:05.633 00:42:38 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:05.634 00:42:38 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:05.891 00:42:38 json_config -- 
json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:05.891 00:42:38 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:06.149 00:42:38 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:06.149 00:42:38 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:06.408 00:42:39 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:06.408 00:42:39 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:06.666 [2024-12-06 00:42:39.259520] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:06.666 00:42:39 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:05:06.666 00:42:39 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:06.666 00:42:39 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:06.666 00:42:39 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:05:06.666 00:42:39 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:06.666 00:42:39 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:06.666 00:42:39 json_config -- json_config/json_config.sh@302 -- # [[ 0 -eq 1 ]] 00:05:06.666 00:42:39 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:06.666 00:42:39 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:06.924 MallocBdevForConfigChangeCheck 00:05:06.924 00:42:39 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:05:06.924 00:42:39 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:06.924 00:42:39 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:06.924 00:42:39 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:05:06.924 00:42:39 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:07.491 00:42:40 json_config -- json_config/json_config.sh@368 -- # echo 'INFO: shutting down applications...' 00:05:07.491 INFO: shutting down applications... 
00:05:07.491 00:42:40 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:05:07.491 00:42:40 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:05:07.491 00:42:40 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:05:07.491 00:42:40 json_config -- json_config/json_config.sh@340 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:05:09.392 Calling clear_iscsi_subsystem 00:05:09.392 Calling clear_nvmf_subsystem 00:05:09.392 Calling clear_nbd_subsystem 00:05:09.392 Calling clear_ublk_subsystem 00:05:09.392 Calling clear_vhost_blk_subsystem 00:05:09.392 Calling clear_vhost_scsi_subsystem 00:05:09.392 Calling clear_bdev_subsystem 00:05:09.392 00:42:41 json_config -- json_config/json_config.sh@344 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:05:09.392 00:42:41 json_config -- json_config/json_config.sh@350 -- # count=100 00:05:09.392 00:42:41 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:05:09.392 00:42:41 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:09.392 00:42:41 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:05:09.392 00:42:41 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:05:09.651 00:42:42 json_config -- json_config/json_config.sh@352 -- # break 00:05:09.651 00:42:42 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:05:09.651 00:42:42 json_config -- json_config/json_config.sh@376 -- # json_config_test_shutdown_app target 00:05:09.651 00:42:42 json_config -- json_config/common.sh@31 -- # local app=target 00:05:09.651 00:42:42 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:09.651 00:42:42 json_config -- json_config/common.sh@35 -- # [[ -n 2819943 ]] 00:05:09.651 00:42:42 json_config -- json_config/common.sh@38 -- # kill -SIGINT 2819943 00:05:09.651 00:42:42 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:09.651 00:42:42 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:09.651 00:42:42 json_config -- json_config/common.sh@41 -- # kill -0 2819943 00:05:09.651 00:42:42 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:05:10.218 00:42:42 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:05:10.218 00:42:42 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:10.218 00:42:42 json_config -- json_config/common.sh@41 -- # kill -0 2819943 00:05:10.218 00:42:42 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:05:10.477 00:42:43 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:05:10.477 00:42:43 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:10.477 00:42:43 json_config -- json_config/common.sh@41 -- # kill -0 2819943 00:05:10.477 00:42:43 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:05:11.046 00:42:43 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:05:11.046 00:42:43 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:11.046 00:42:43 json_config -- json_config/common.sh@41 -- # kill -0 2819943 00:05:11.046 00:42:43 json_config -- json_config/common.sh@42 -- # 
app_pid["$app"]= 00:05:11.046 00:42:43 json_config -- json_config/common.sh@43 -- # break 00:05:11.046 00:42:43 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:11.046 00:42:43 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:11.046 SPDK target shutdown done 00:05:11.046 00:42:43 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 00:05:11.046 INFO: relaunching applications... 00:05:11.046 00:42:43 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:11.046 00:42:43 json_config -- json_config/common.sh@9 -- # local app=target 00:05:11.046 00:42:43 json_config -- json_config/common.sh@10 -- # shift 00:05:11.046 00:42:43 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:11.046 00:42:43 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:11.046 00:42:43 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:11.046 00:42:43 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:11.046 00:42:43 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:11.046 00:42:43 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=2821409 00:05:11.046 00:42:43 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:11.046 00:42:43 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:11.046 Waiting for target to run... 00:05:11.046 00:42:43 json_config -- json_config/common.sh@25 -- # waitforlisten 2821409 /var/tmp/spdk_tgt.sock 00:05:11.046 00:42:43 json_config -- common/autotest_common.sh@835 -- # '[' -z 2821409 ']' 00:05:11.046 00:42:43 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:11.046 00:42:43 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:11.046 00:42:43 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:11.046 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:11.046 00:42:43 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:11.046 00:42:43 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:11.305 [2024-12-06 00:42:43.770911] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 24.03.0 initialization... 
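"INFO: relaunching applications..." means the saved JSON is fed straight back to a fresh target: json_config_test_start_app restarts spdk_tgt with --json pointing at spdk_tgt_config.json, and waitforlisten polls the new RPC socket (pid 2821409 here) before the comparison continues. The command from the trace above, shown on its own (a sketch; backgrounding and the wait are handled by the harness):

    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 \
        -r /var/tmp/spdk_tgt.sock \
        --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json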
00:05:11.305 [2024-12-06 00:42:43.771047] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2821409 ] 00:05:11.872 [2024-12-06 00:42:44.369073] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:11.872 [2024-12-06 00:42:44.499646] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:16.059 [2024-12-06 00:42:48.295936] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:16.059 [2024-12-06 00:42:48.328554] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:16.059 00:42:48 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:16.059 00:42:48 json_config -- common/autotest_common.sh@868 -- # return 0 00:05:16.059 00:42:48 json_config -- json_config/common.sh@26 -- # echo '' 00:05:16.059 00:05:16.059 00:42:48 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:05:16.059 00:42:48 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:05:16.059 INFO: Checking if target configuration is the same... 00:05:16.059 00:42:48 json_config -- json_config/json_config.sh@385 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:16.059 00:42:48 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:05:16.059 00:42:48 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:16.059 + '[' 2 -ne 2 ']' 00:05:16.059 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:16.059 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:05:16.059 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:16.059 +++ basename /dev/fd/62 00:05:16.059 ++ mktemp /tmp/62.XXX 00:05:16.059 + tmp_file_1=/tmp/62.mKF 00:05:16.059 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:16.059 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:16.059 + tmp_file_2=/tmp/spdk_tgt_config.json.ZcW 00:05:16.059 + ret=0 00:05:16.059 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:16.059 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:16.346 + diff -u /tmp/62.mKF /tmp/spdk_tgt_config.json.ZcW 00:05:16.346 + echo 'INFO: JSON config files are the same' 00:05:16.346 INFO: JSON config files are the same 00:05:16.346 + rm /tmp/62.mKF /tmp/spdk_tgt_config.json.ZcW 00:05:16.346 + exit 0 00:05:16.346 00:42:48 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:05:16.346 00:42:48 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:05:16.346 INFO: changing configuration and checking if this can be detected... 
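The "Checking if target configuration is the same" step just traced is a sorted-JSON diff: json_diff.sh passes both the live save_config output and spdk_tgt_config.json through config_filter.py -method sort and compares the results, exiting 0 when they match. Roughly (a sketch; the harness actually feeds the live config via /dev/fd/62, and the mktemp names will differ per run):

    FILTER=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py
    RPC="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock"
    $RPC save_config | $FILTER -method sort > /tmp/62.mKF
    $FILTER -method sort < /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json > /tmp/spdk_tgt_config.json.ZcW
    diff -u /tmp/62.mKF /tmp/spdk_tgt_config.json.ZcW && echo 'INFO: JSON config files are the same'

The pass traced below deletes MallocBdevForConfigChangeCheck first, so the same comparison is then expected to fail and is reported as "configuration change detected".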
00:05:16.346 00:42:48 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:16.346 00:42:48 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:16.628 00:42:49 json_config -- json_config/json_config.sh@394 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:16.628 00:42:49 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:05:16.628 00:42:49 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:16.628 + '[' 2 -ne 2 ']' 00:05:16.628 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:16.628 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:05:16.628 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:16.628 +++ basename /dev/fd/62 00:05:16.628 ++ mktemp /tmp/62.XXX 00:05:16.628 + tmp_file_1=/tmp/62.nQ7 00:05:16.628 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:16.628 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:16.628 + tmp_file_2=/tmp/spdk_tgt_config.json.hxH 00:05:16.628 + ret=0 00:05:16.628 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:16.886 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:16.886 + diff -u /tmp/62.nQ7 /tmp/spdk_tgt_config.json.hxH 00:05:16.886 + ret=1 00:05:16.886 + echo '=== Start of file: /tmp/62.nQ7 ===' 00:05:16.886 + cat /tmp/62.nQ7 00:05:16.886 + echo '=== End of file: /tmp/62.nQ7 ===' 00:05:16.886 + echo '' 00:05:16.886 + echo '=== Start of file: /tmp/spdk_tgt_config.json.hxH ===' 00:05:16.886 + cat /tmp/spdk_tgt_config.json.hxH 00:05:16.886 + echo '=== End of file: /tmp/spdk_tgt_config.json.hxH ===' 00:05:16.886 + echo '' 00:05:16.886 + rm /tmp/62.nQ7 /tmp/spdk_tgt_config.json.hxH 00:05:16.886 + exit 1 00:05:16.886 00:42:49 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 00:05:16.886 INFO: configuration change detected. 
00:05:16.886 00:42:49 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:05:16.886 00:42:49 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:05:16.886 00:42:49 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:16.886 00:42:49 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:16.886 00:42:49 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:05:16.886 00:42:49 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:05:16.886 00:42:49 json_config -- json_config/json_config.sh@324 -- # [[ -n 2821409 ]] 00:05:16.886 00:42:49 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:05:16.886 00:42:49 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:05:16.886 00:42:49 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:16.886 00:42:49 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:16.886 00:42:49 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:05:16.886 00:42:49 json_config -- json_config/json_config.sh@200 -- # uname -s 00:05:16.886 00:42:49 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:05:16.886 00:42:49 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:05:16.886 00:42:49 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:05:16.886 00:42:49 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:05:16.886 00:42:49 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:16.886 00:42:49 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:16.886 00:42:49 json_config -- json_config/json_config.sh@330 -- # killprocess 2821409 00:05:16.886 00:42:49 json_config -- common/autotest_common.sh@954 -- # '[' -z 2821409 ']' 00:05:16.886 00:42:49 json_config -- common/autotest_common.sh@958 -- # kill -0 2821409 00:05:16.886 00:42:49 json_config -- common/autotest_common.sh@959 -- # uname 00:05:16.886 00:42:49 json_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:16.886 00:42:49 json_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2821409 00:05:17.144 00:42:49 json_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:17.144 00:42:49 json_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:17.144 00:42:49 json_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2821409' 00:05:17.144 killing process with pid 2821409 00:05:17.144 00:42:49 json_config -- common/autotest_common.sh@973 -- # kill 2821409 00:05:17.144 00:42:49 json_config -- common/autotest_common.sh@978 -- # wait 2821409 00:05:19.668 00:42:52 json_config -- json_config/json_config.sh@333 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:19.668 00:42:52 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:05:19.668 00:42:52 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:19.668 00:42:52 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:19.668 00:42:52 json_config -- json_config/json_config.sh@335 -- # return 0 00:05:19.668 00:42:52 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:05:19.668 INFO: Success 00:05:19.668 00:05:19.668 real 0m20.098s 
00:05:19.668 user 0m21.157s 00:05:19.668 sys 0m3.232s 00:05:19.668 00:42:52 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:19.668 00:42:52 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:19.668 ************************************ 00:05:19.668 END TEST json_config 00:05:19.668 ************************************ 00:05:19.668 00:42:52 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:19.668 00:42:52 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:19.668 00:42:52 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:19.668 00:42:52 -- common/autotest_common.sh@10 -- # set +x 00:05:19.668 ************************************ 00:05:19.668 START TEST json_config_extra_key 00:05:19.668 ************************************ 00:05:19.668 00:42:52 json_config_extra_key -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:19.668 00:42:52 json_config_extra_key -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:19.668 00:42:52 json_config_extra_key -- common/autotest_common.sh@1711 -- # lcov --version 00:05:19.668 00:42:52 json_config_extra_key -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:19.668 00:42:52 json_config_extra_key -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:19.668 00:42:52 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:19.668 00:42:52 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:19.668 00:42:52 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:19.668 00:42:52 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:05:19.668 00:42:52 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:05:19.668 00:42:52 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:05:19.668 00:42:52 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:05:19.668 00:42:52 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:05:19.668 00:42:52 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:05:19.668 00:42:52 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:05:19.668 00:42:52 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:19.668 00:42:52 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:05:19.668 00:42:52 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:05:19.668 00:42:52 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:19.668 00:42:52 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:19.668 00:42:52 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:05:19.668 00:42:52 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:05:19.668 00:42:52 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:19.668 00:42:52 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:05:19.668 00:42:52 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:05:19.668 00:42:52 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:05:19.668 00:42:52 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:05:19.668 00:42:52 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:19.668 00:42:52 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:05:19.668 00:42:52 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:05:19.668 00:42:52 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:19.668 00:42:52 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:19.668 00:42:52 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:05:19.668 00:42:52 json_config_extra_key -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:19.668 00:42:52 json_config_extra_key -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:19.668 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:19.668 --rc genhtml_branch_coverage=1 00:05:19.668 --rc genhtml_function_coverage=1 00:05:19.668 --rc genhtml_legend=1 00:05:19.668 --rc geninfo_all_blocks=1 00:05:19.668 --rc geninfo_unexecuted_blocks=1 00:05:19.668 00:05:19.668 ' 00:05:19.668 00:42:52 json_config_extra_key -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:19.668 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:19.668 --rc genhtml_branch_coverage=1 00:05:19.668 --rc genhtml_function_coverage=1 00:05:19.668 --rc genhtml_legend=1 00:05:19.668 --rc geninfo_all_blocks=1 00:05:19.668 --rc geninfo_unexecuted_blocks=1 00:05:19.668 00:05:19.668 ' 00:05:19.668 00:42:52 json_config_extra_key -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:19.668 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:19.668 --rc genhtml_branch_coverage=1 00:05:19.668 --rc genhtml_function_coverage=1 00:05:19.668 --rc genhtml_legend=1 00:05:19.668 --rc geninfo_all_blocks=1 00:05:19.668 --rc geninfo_unexecuted_blocks=1 00:05:19.668 00:05:19.668 ' 00:05:19.668 00:42:52 json_config_extra_key -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:19.668 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:19.668 --rc genhtml_branch_coverage=1 00:05:19.668 --rc genhtml_function_coverage=1 00:05:19.668 --rc genhtml_legend=1 00:05:19.668 --rc geninfo_all_blocks=1 00:05:19.668 --rc geninfo_unexecuted_blocks=1 00:05:19.668 00:05:19.668 ' 00:05:19.668 00:42:52 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:19.668 00:42:52 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:05:19.668 00:42:52 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:19.668 00:42:52 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:19.668 00:42:52 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:19.668 00:42:52 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:19.668 
00:42:52 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:19.668 00:42:52 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:19.668 00:42:52 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:19.668 00:42:52 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:19.668 00:42:52 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:19.668 00:42:52 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:19.668 00:42:52 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:05:19.668 00:42:52 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:05:19.668 00:42:52 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:19.668 00:42:52 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:19.668 00:42:52 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:19.668 00:42:52 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:19.668 00:42:52 json_config_extra_key -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:19.668 00:42:52 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:05:19.668 00:42:52 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:19.668 00:42:52 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:19.668 00:42:52 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:19.669 00:42:52 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:19.669 00:42:52 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:19.669 00:42:52 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:19.669 00:42:52 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:19.669 00:42:52 json_config_extra_key -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:19.669 00:42:52 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:05:19.669 00:42:52 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:19.669 00:42:52 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:19.669 00:42:52 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:19.669 00:42:52 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:19.669 00:42:52 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:19.669 00:42:52 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:19.669 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:19.669 00:42:52 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:19.669 00:42:52 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:19.669 00:42:52 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:19.669 00:42:52 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:05:19.669 00:42:52 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:19.669 00:42:52 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:19.669 00:42:52 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:19.669 00:42:52 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:19.669 00:42:52 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:19.669 00:42:52 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:19.669 00:42:52 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:05:19.669 00:42:52 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:19.669 00:42:52 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:19.669 00:42:52 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:05:19.669 INFO: launching applications... 
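One non-fatal artifact is captured while nvmf/common.sh is sourced above: the test '[' '' -eq 1 ']' fails with "[: : integer expression expected" because the value being tested expands to an empty string in this run; the non-zero status simply falls through and the script continues. The usual defensive form gives the variable a numeric default (VAR below is a placeholder, not the script's real variable name):

    # [ '' -eq 1 ]  ->  "[: : integer expression expected" when the value is unset/empty
    [ "${VAR:-0}" -eq 1 ] && echo "flag enabled"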
00:05:19.669 00:42:52 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:19.669 00:42:52 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:19.669 00:42:52 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:19.669 00:42:52 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:19.669 00:42:52 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:19.669 00:42:52 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:19.669 00:42:52 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:19.669 00:42:52 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:19.669 00:42:52 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=2822596 00:05:19.669 00:42:52 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:19.669 00:42:52 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:19.669 Waiting for target to run... 00:05:19.669 00:42:52 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 2822596 /var/tmp/spdk_tgt.sock 00:05:19.669 00:42:52 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 2822596 ']' 00:05:19.669 00:42:52 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:19.669 00:42:52 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:19.669 00:42:52 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:19.669 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:19.669 00:42:52 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:19.669 00:42:52 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:19.669 [2024-12-06 00:42:52.315435] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 24.03.0 initialization... 00:05:19.669 [2024-12-06 00:42:52.315593] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2822596 ] 00:05:20.236 [2024-12-06 00:42:52.748005] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:20.236 [2024-12-06 00:42:52.869890] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:21.172 00:42:53 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:21.172 00:42:53 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:05:21.172 00:42:53 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:21.172 00:05:21.172 00:42:53 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:05:21.172 INFO: shutting down applications... 
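The shutdown traced next is the common.sh pattern used throughout these tests: send SIGINT to the target, then probe it with kill -0 at half-second intervals, giving up after 30 attempts. In outline (a sketch of the loop's behaviour, not the script verbatim):

    kill -SIGINT "$pid"
    for (( i = 0; i < 30; i++ )); do
        kill -0 "$pid" 2>/dev/null || break   # target has exited; shutdown is done
        sleep 0.5
    done

Pid 2822596 here takes a few probes before the loop breaks and "SPDK target shutdown done" is printed.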
00:05:21.172 00:42:53 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:21.172 00:42:53 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:21.172 00:42:53 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:21.172 00:42:53 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 2822596 ]] 00:05:21.172 00:42:53 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 2822596 00:05:21.172 00:42:53 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:21.172 00:42:53 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:21.172 00:42:53 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2822596 00:05:21.172 00:42:53 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:21.431 00:42:54 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:21.431 00:42:54 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:21.431 00:42:54 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2822596 00:05:21.431 00:42:54 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:21.997 00:42:54 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:21.997 00:42:54 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:21.997 00:42:54 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2822596 00:05:21.997 00:42:54 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:22.564 00:42:55 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:22.564 00:42:55 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:22.564 00:42:55 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2822596 00:05:22.564 00:42:55 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:23.129 00:42:55 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:23.129 00:42:55 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:23.130 00:42:55 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2822596 00:05:23.130 00:42:55 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:23.696 00:42:56 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:23.696 00:42:56 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:23.696 00:42:56 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2822596 00:05:23.696 00:42:56 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:23.955 00:42:56 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:23.955 00:42:56 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:23.955 00:42:56 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2822596 00:05:23.955 00:42:56 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:23.955 00:42:56 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:23.955 00:42:56 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:23.955 00:42:56 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:23.955 SPDK target shutdown done 00:05:23.955 00:42:56 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:23.955 Success 00:05:23.955 00:05:23.955 real 0m4.531s 00:05:23.955 user 0m4.175s 00:05:23.955 sys 0m0.660s 
00:05:23.955 00:42:56 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:23.955 00:42:56 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:23.955 ************************************ 00:05:23.955 END TEST json_config_extra_key 00:05:23.955 ************************************ 00:05:23.955 00:42:56 -- spdk/autotest.sh@161 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:23.955 00:42:56 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:23.955 00:42:56 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:23.955 00:42:56 -- common/autotest_common.sh@10 -- # set +x 00:05:23.955 ************************************ 00:05:23.955 START TEST alias_rpc 00:05:23.955 ************************************ 00:05:23.955 00:42:56 alias_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:24.214 * Looking for test storage... 00:05:24.214 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:05:24.214 00:42:56 alias_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:24.214 00:42:56 alias_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:05:24.214 00:42:56 alias_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:24.214 00:42:56 alias_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:24.214 00:42:56 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:24.214 00:42:56 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:24.214 00:42:56 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:24.214 00:42:56 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:24.214 00:42:56 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:24.214 00:42:56 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:24.214 00:42:56 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:24.214 00:42:56 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:24.214 00:42:56 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:24.214 00:42:56 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:24.214 00:42:56 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:24.214 00:42:56 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:24.214 00:42:56 alias_rpc -- scripts/common.sh@345 -- # : 1 00:05:24.214 00:42:56 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:24.214 00:42:56 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:24.214 00:42:56 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:24.214 00:42:56 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:05:24.214 00:42:56 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:24.214 00:42:56 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:05:24.214 00:42:56 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:24.214 00:42:56 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:24.214 00:42:56 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:05:24.214 00:42:56 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:24.214 00:42:56 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:05:24.214 00:42:56 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:24.214 00:42:56 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:24.214 00:42:56 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:24.214 00:42:56 alias_rpc -- scripts/common.sh@368 -- # return 0 00:05:24.214 00:42:56 alias_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:24.214 00:42:56 alias_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:24.214 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:24.214 --rc genhtml_branch_coverage=1 00:05:24.214 --rc genhtml_function_coverage=1 00:05:24.214 --rc genhtml_legend=1 00:05:24.214 --rc geninfo_all_blocks=1 00:05:24.214 --rc geninfo_unexecuted_blocks=1 00:05:24.214 00:05:24.214 ' 00:05:24.214 00:42:56 alias_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:24.214 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:24.214 --rc genhtml_branch_coverage=1 00:05:24.214 --rc genhtml_function_coverage=1 00:05:24.214 --rc genhtml_legend=1 00:05:24.214 --rc geninfo_all_blocks=1 00:05:24.214 --rc geninfo_unexecuted_blocks=1 00:05:24.214 00:05:24.214 ' 00:05:24.214 00:42:56 alias_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:24.214 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:24.214 --rc genhtml_branch_coverage=1 00:05:24.214 --rc genhtml_function_coverage=1 00:05:24.214 --rc genhtml_legend=1 00:05:24.214 --rc geninfo_all_blocks=1 00:05:24.214 --rc geninfo_unexecuted_blocks=1 00:05:24.214 00:05:24.214 ' 00:05:24.214 00:42:56 alias_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:24.214 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:24.214 --rc genhtml_branch_coverage=1 00:05:24.214 --rc genhtml_function_coverage=1 00:05:24.214 --rc genhtml_legend=1 00:05:24.214 --rc geninfo_all_blocks=1 00:05:24.214 --rc geninfo_unexecuted_blocks=1 00:05:24.214 00:05:24.214 ' 00:05:24.214 00:42:56 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:24.214 00:42:56 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=2823191 00:05:24.214 00:42:56 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:24.214 00:42:56 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 2823191 00:05:24.214 00:42:56 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 2823191 ']' 00:05:24.214 00:42:56 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:24.214 00:42:56 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:24.214 00:42:56 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock...' 00:05:24.214 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:24.214 00:42:56 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:24.214 00:42:56 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:24.214 [2024-12-06 00:42:56.889056] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 24.03.0 initialization... 00:05:24.214 [2024-12-06 00:42:56.889234] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2823191 ] 00:05:24.473 [2024-12-06 00:42:57.032169] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:24.473 [2024-12-06 00:42:57.170333] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:25.407 00:42:58 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:25.407 00:42:58 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:25.407 00:42:58 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:05:25.974 00:42:58 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 2823191 00:05:25.974 00:42:58 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 2823191 ']' 00:05:25.974 00:42:58 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 2823191 00:05:25.974 00:42:58 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:05:25.974 00:42:58 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:25.974 00:42:58 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2823191 00:05:25.974 00:42:58 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:25.974 00:42:58 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:25.974 00:42:58 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2823191' 00:05:25.974 killing process with pid 2823191 00:05:25.974 00:42:58 alias_rpc -- common/autotest_common.sh@973 -- # kill 2823191 00:05:25.975 00:42:58 alias_rpc -- common/autotest_common.sh@978 -- # wait 2823191 00:05:28.504 00:05:28.504 real 0m4.222s 00:05:28.504 user 0m4.323s 00:05:28.504 sys 0m0.696s 00:05:28.504 00:43:00 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:28.504 00:43:00 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:28.504 ************************************ 00:05:28.504 END TEST alias_rpc 00:05:28.504 ************************************ 00:05:28.504 00:43:00 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:05:28.504 00:43:00 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:28.504 00:43:00 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:28.504 00:43:00 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:28.504 00:43:00 -- common/autotest_common.sh@10 -- # set +x 00:05:28.504 ************************************ 00:05:28.504 START TEST spdkcli_tcp 00:05:28.504 ************************************ 00:05:28.504 00:43:00 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:28.504 * Looking for test storage... 
00:05:28.504 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:05:28.504 00:43:00 spdkcli_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:28.504 00:43:00 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:05:28.504 00:43:00 spdkcli_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:28.504 00:43:01 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:28.504 00:43:01 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:28.504 00:43:01 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:28.504 00:43:01 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:28.504 00:43:01 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:05:28.504 00:43:01 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:05:28.504 00:43:01 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:05:28.504 00:43:01 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:05:28.504 00:43:01 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:05:28.504 00:43:01 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:05:28.504 00:43:01 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:05:28.504 00:43:01 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:28.504 00:43:01 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:05:28.504 00:43:01 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:05:28.504 00:43:01 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:28.504 00:43:01 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:28.504 00:43:01 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:05:28.504 00:43:01 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:05:28.504 00:43:01 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:28.504 00:43:01 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:05:28.504 00:43:01 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:05:28.504 00:43:01 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:05:28.504 00:43:01 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:05:28.504 00:43:01 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:28.504 00:43:01 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:05:28.505 00:43:01 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:05:28.505 00:43:01 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:28.505 00:43:01 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:28.505 00:43:01 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:05:28.505 00:43:01 spdkcli_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:28.505 00:43:01 spdkcli_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:28.505 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:28.505 --rc genhtml_branch_coverage=1 00:05:28.505 --rc genhtml_function_coverage=1 00:05:28.505 --rc genhtml_legend=1 00:05:28.505 --rc geninfo_all_blocks=1 00:05:28.505 --rc geninfo_unexecuted_blocks=1 00:05:28.505 00:05:28.505 ' 00:05:28.505 00:43:01 spdkcli_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:28.505 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:28.505 --rc genhtml_branch_coverage=1 00:05:28.505 --rc genhtml_function_coverage=1 00:05:28.505 --rc genhtml_legend=1 00:05:28.505 --rc geninfo_all_blocks=1 00:05:28.505 --rc 
geninfo_unexecuted_blocks=1 00:05:28.505 00:05:28.505 ' 00:05:28.505 00:43:01 spdkcli_tcp -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:28.505 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:28.505 --rc genhtml_branch_coverage=1 00:05:28.505 --rc genhtml_function_coverage=1 00:05:28.505 --rc genhtml_legend=1 00:05:28.505 --rc geninfo_all_blocks=1 00:05:28.505 --rc geninfo_unexecuted_blocks=1 00:05:28.505 00:05:28.505 ' 00:05:28.505 00:43:01 spdkcli_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:28.505 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:28.505 --rc genhtml_branch_coverage=1 00:05:28.505 --rc genhtml_function_coverage=1 00:05:28.505 --rc genhtml_legend=1 00:05:28.505 --rc geninfo_all_blocks=1 00:05:28.505 --rc geninfo_unexecuted_blocks=1 00:05:28.505 00:05:28.505 ' 00:05:28.505 00:43:01 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:05:28.505 00:43:01 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:05:28.505 00:43:01 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:05:28.505 00:43:01 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:28.505 00:43:01 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:28.505 00:43:01 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:28.505 00:43:01 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:28.505 00:43:01 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:28.505 00:43:01 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:28.505 00:43:01 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=2823665 00:05:28.505 00:43:01 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:28.505 00:43:01 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 2823665 00:05:28.505 00:43:01 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 2823665 ']' 00:05:28.505 00:43:01 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:28.505 00:43:01 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:28.505 00:43:01 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:28.505 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:28.505 00:43:01 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:28.505 00:43:01 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:28.505 [2024-12-06 00:43:01.173409] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 24.03.0 initialization... 
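The listing that follows is the heart of the spdkcli_tcp check: spdk_tgt is started with -m 0x3 (two reactors), socat bridges TCP port 9998 to the target's UNIX RPC socket, and rpc.py is pointed at the TCP side to fetch the method list. Condensed from the commands traced below (a sketch; socat runs in the background and is cleaned up by the test):

    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 &
    socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods

A successful rpc_get_methods over TCP (the long JSON array below) confirms the bridge works before both processes are torn down.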
00:05:28.505 [2024-12-06 00:43:01.173541] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2823665 ] 00:05:28.762 [2024-12-06 00:43:01.324648] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:28.762 [2024-12-06 00:43:01.466706] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:28.762 [2024-12-06 00:43:01.466709] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:30.136 00:43:02 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:30.137 00:43:02 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:05:30.137 00:43:02 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=2823929 00:05:30.137 00:43:02 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:30.137 00:43:02 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:30.137 [ 00:05:30.137 "bdev_malloc_delete", 00:05:30.137 "bdev_malloc_create", 00:05:30.137 "bdev_null_resize", 00:05:30.137 "bdev_null_delete", 00:05:30.137 "bdev_null_create", 00:05:30.137 "bdev_nvme_cuse_unregister", 00:05:30.137 "bdev_nvme_cuse_register", 00:05:30.137 "bdev_opal_new_user", 00:05:30.137 "bdev_opal_set_lock_state", 00:05:30.137 "bdev_opal_delete", 00:05:30.137 "bdev_opal_get_info", 00:05:30.137 "bdev_opal_create", 00:05:30.137 "bdev_nvme_opal_revert", 00:05:30.137 "bdev_nvme_opal_init", 00:05:30.137 "bdev_nvme_send_cmd", 00:05:30.137 "bdev_nvme_set_keys", 00:05:30.137 "bdev_nvme_get_path_iostat", 00:05:30.137 "bdev_nvme_get_mdns_discovery_info", 00:05:30.137 "bdev_nvme_stop_mdns_discovery", 00:05:30.137 "bdev_nvme_start_mdns_discovery", 00:05:30.137 "bdev_nvme_set_multipath_policy", 00:05:30.137 "bdev_nvme_set_preferred_path", 00:05:30.137 "bdev_nvme_get_io_paths", 00:05:30.137 "bdev_nvme_remove_error_injection", 00:05:30.137 "bdev_nvme_add_error_injection", 00:05:30.137 "bdev_nvme_get_discovery_info", 00:05:30.137 "bdev_nvme_stop_discovery", 00:05:30.137 "bdev_nvme_start_discovery", 00:05:30.137 "bdev_nvme_get_controller_health_info", 00:05:30.137 "bdev_nvme_disable_controller", 00:05:30.137 "bdev_nvme_enable_controller", 00:05:30.137 "bdev_nvme_reset_controller", 00:05:30.137 "bdev_nvme_get_transport_statistics", 00:05:30.137 "bdev_nvme_apply_firmware", 00:05:30.137 "bdev_nvme_detach_controller", 00:05:30.137 "bdev_nvme_get_controllers", 00:05:30.137 "bdev_nvme_attach_controller", 00:05:30.137 "bdev_nvme_set_hotplug", 00:05:30.137 "bdev_nvme_set_options", 00:05:30.137 "bdev_passthru_delete", 00:05:30.137 "bdev_passthru_create", 00:05:30.137 "bdev_lvol_set_parent_bdev", 00:05:30.137 "bdev_lvol_set_parent", 00:05:30.137 "bdev_lvol_check_shallow_copy", 00:05:30.137 "bdev_lvol_start_shallow_copy", 00:05:30.137 "bdev_lvol_grow_lvstore", 00:05:30.137 "bdev_lvol_get_lvols", 00:05:30.137 "bdev_lvol_get_lvstores", 00:05:30.137 "bdev_lvol_delete", 00:05:30.137 "bdev_lvol_set_read_only", 00:05:30.137 "bdev_lvol_resize", 00:05:30.137 "bdev_lvol_decouple_parent", 00:05:30.137 "bdev_lvol_inflate", 00:05:30.137 "bdev_lvol_rename", 00:05:30.137 "bdev_lvol_clone_bdev", 00:05:30.137 "bdev_lvol_clone", 00:05:30.137 "bdev_lvol_snapshot", 00:05:30.137 "bdev_lvol_create", 00:05:30.137 "bdev_lvol_delete_lvstore", 00:05:30.137 "bdev_lvol_rename_lvstore", 
00:05:30.137 "bdev_lvol_create_lvstore", 00:05:30.137 "bdev_raid_set_options", 00:05:30.137 "bdev_raid_remove_base_bdev", 00:05:30.137 "bdev_raid_add_base_bdev", 00:05:30.137 "bdev_raid_delete", 00:05:30.137 "bdev_raid_create", 00:05:30.137 "bdev_raid_get_bdevs", 00:05:30.137 "bdev_error_inject_error", 00:05:30.137 "bdev_error_delete", 00:05:30.137 "bdev_error_create", 00:05:30.137 "bdev_split_delete", 00:05:30.137 "bdev_split_create", 00:05:30.137 "bdev_delay_delete", 00:05:30.137 "bdev_delay_create", 00:05:30.137 "bdev_delay_update_latency", 00:05:30.137 "bdev_zone_block_delete", 00:05:30.137 "bdev_zone_block_create", 00:05:30.137 "blobfs_create", 00:05:30.137 "blobfs_detect", 00:05:30.137 "blobfs_set_cache_size", 00:05:30.137 "bdev_aio_delete", 00:05:30.137 "bdev_aio_rescan", 00:05:30.137 "bdev_aio_create", 00:05:30.137 "bdev_ftl_set_property", 00:05:30.137 "bdev_ftl_get_properties", 00:05:30.137 "bdev_ftl_get_stats", 00:05:30.137 "bdev_ftl_unmap", 00:05:30.137 "bdev_ftl_unload", 00:05:30.137 "bdev_ftl_delete", 00:05:30.137 "bdev_ftl_load", 00:05:30.137 "bdev_ftl_create", 00:05:30.137 "bdev_virtio_attach_controller", 00:05:30.137 "bdev_virtio_scsi_get_devices", 00:05:30.137 "bdev_virtio_detach_controller", 00:05:30.137 "bdev_virtio_blk_set_hotplug", 00:05:30.137 "bdev_iscsi_delete", 00:05:30.137 "bdev_iscsi_create", 00:05:30.137 "bdev_iscsi_set_options", 00:05:30.137 "accel_error_inject_error", 00:05:30.137 "ioat_scan_accel_module", 00:05:30.137 "dsa_scan_accel_module", 00:05:30.137 "iaa_scan_accel_module", 00:05:30.137 "keyring_file_remove_key", 00:05:30.137 "keyring_file_add_key", 00:05:30.137 "keyring_linux_set_options", 00:05:30.137 "fsdev_aio_delete", 00:05:30.137 "fsdev_aio_create", 00:05:30.137 "iscsi_get_histogram", 00:05:30.137 "iscsi_enable_histogram", 00:05:30.137 "iscsi_set_options", 00:05:30.137 "iscsi_get_auth_groups", 00:05:30.137 "iscsi_auth_group_remove_secret", 00:05:30.137 "iscsi_auth_group_add_secret", 00:05:30.137 "iscsi_delete_auth_group", 00:05:30.137 "iscsi_create_auth_group", 00:05:30.137 "iscsi_set_discovery_auth", 00:05:30.137 "iscsi_get_options", 00:05:30.137 "iscsi_target_node_request_logout", 00:05:30.137 "iscsi_target_node_set_redirect", 00:05:30.137 "iscsi_target_node_set_auth", 00:05:30.137 "iscsi_target_node_add_lun", 00:05:30.137 "iscsi_get_stats", 00:05:30.137 "iscsi_get_connections", 00:05:30.137 "iscsi_portal_group_set_auth", 00:05:30.137 "iscsi_start_portal_group", 00:05:30.137 "iscsi_delete_portal_group", 00:05:30.137 "iscsi_create_portal_group", 00:05:30.137 "iscsi_get_portal_groups", 00:05:30.137 "iscsi_delete_target_node", 00:05:30.137 "iscsi_target_node_remove_pg_ig_maps", 00:05:30.137 "iscsi_target_node_add_pg_ig_maps", 00:05:30.137 "iscsi_create_target_node", 00:05:30.137 "iscsi_get_target_nodes", 00:05:30.137 "iscsi_delete_initiator_group", 00:05:30.137 "iscsi_initiator_group_remove_initiators", 00:05:30.137 "iscsi_initiator_group_add_initiators", 00:05:30.137 "iscsi_create_initiator_group", 00:05:30.137 "iscsi_get_initiator_groups", 00:05:30.137 "nvmf_set_crdt", 00:05:30.137 "nvmf_set_config", 00:05:30.137 "nvmf_set_max_subsystems", 00:05:30.137 "nvmf_stop_mdns_prr", 00:05:30.137 "nvmf_publish_mdns_prr", 00:05:30.137 "nvmf_subsystem_get_listeners", 00:05:30.137 "nvmf_subsystem_get_qpairs", 00:05:30.137 "nvmf_subsystem_get_controllers", 00:05:30.137 "nvmf_get_stats", 00:05:30.137 "nvmf_get_transports", 00:05:30.137 "nvmf_create_transport", 00:05:30.137 "nvmf_get_targets", 00:05:30.137 "nvmf_delete_target", 00:05:30.137 "nvmf_create_target", 
00:05:30.137 "nvmf_subsystem_allow_any_host", 00:05:30.137 "nvmf_subsystem_set_keys", 00:05:30.137 "nvmf_subsystem_remove_host", 00:05:30.137 "nvmf_subsystem_add_host", 00:05:30.137 "nvmf_ns_remove_host", 00:05:30.137 "nvmf_ns_add_host", 00:05:30.137 "nvmf_subsystem_remove_ns", 00:05:30.137 "nvmf_subsystem_set_ns_ana_group", 00:05:30.137 "nvmf_subsystem_add_ns", 00:05:30.137 "nvmf_subsystem_listener_set_ana_state", 00:05:30.137 "nvmf_discovery_get_referrals", 00:05:30.137 "nvmf_discovery_remove_referral", 00:05:30.137 "nvmf_discovery_add_referral", 00:05:30.137 "nvmf_subsystem_remove_listener", 00:05:30.137 "nvmf_subsystem_add_listener", 00:05:30.137 "nvmf_delete_subsystem", 00:05:30.137 "nvmf_create_subsystem", 00:05:30.137 "nvmf_get_subsystems", 00:05:30.137 "env_dpdk_get_mem_stats", 00:05:30.137 "nbd_get_disks", 00:05:30.137 "nbd_stop_disk", 00:05:30.137 "nbd_start_disk", 00:05:30.137 "ublk_recover_disk", 00:05:30.137 "ublk_get_disks", 00:05:30.137 "ublk_stop_disk", 00:05:30.137 "ublk_start_disk", 00:05:30.137 "ublk_destroy_target", 00:05:30.138 "ublk_create_target", 00:05:30.138 "virtio_blk_create_transport", 00:05:30.138 "virtio_blk_get_transports", 00:05:30.138 "vhost_controller_set_coalescing", 00:05:30.138 "vhost_get_controllers", 00:05:30.138 "vhost_delete_controller", 00:05:30.138 "vhost_create_blk_controller", 00:05:30.138 "vhost_scsi_controller_remove_target", 00:05:30.138 "vhost_scsi_controller_add_target", 00:05:30.138 "vhost_start_scsi_controller", 00:05:30.138 "vhost_create_scsi_controller", 00:05:30.138 "thread_set_cpumask", 00:05:30.138 "scheduler_set_options", 00:05:30.138 "framework_get_governor", 00:05:30.138 "framework_get_scheduler", 00:05:30.138 "framework_set_scheduler", 00:05:30.138 "framework_get_reactors", 00:05:30.138 "thread_get_io_channels", 00:05:30.138 "thread_get_pollers", 00:05:30.138 "thread_get_stats", 00:05:30.138 "framework_monitor_context_switch", 00:05:30.138 "spdk_kill_instance", 00:05:30.138 "log_enable_timestamps", 00:05:30.138 "log_get_flags", 00:05:30.138 "log_clear_flag", 00:05:30.138 "log_set_flag", 00:05:30.138 "log_get_level", 00:05:30.138 "log_set_level", 00:05:30.138 "log_get_print_level", 00:05:30.138 "log_set_print_level", 00:05:30.138 "framework_enable_cpumask_locks", 00:05:30.138 "framework_disable_cpumask_locks", 00:05:30.138 "framework_wait_init", 00:05:30.138 "framework_start_init", 00:05:30.138 "scsi_get_devices", 00:05:30.138 "bdev_get_histogram", 00:05:30.138 "bdev_enable_histogram", 00:05:30.138 "bdev_set_qos_limit", 00:05:30.138 "bdev_set_qd_sampling_period", 00:05:30.138 "bdev_get_bdevs", 00:05:30.138 "bdev_reset_iostat", 00:05:30.138 "bdev_get_iostat", 00:05:30.138 "bdev_examine", 00:05:30.138 "bdev_wait_for_examine", 00:05:30.138 "bdev_set_options", 00:05:30.138 "accel_get_stats", 00:05:30.138 "accel_set_options", 00:05:30.138 "accel_set_driver", 00:05:30.138 "accel_crypto_key_destroy", 00:05:30.138 "accel_crypto_keys_get", 00:05:30.138 "accel_crypto_key_create", 00:05:30.138 "accel_assign_opc", 00:05:30.138 "accel_get_module_info", 00:05:30.138 "accel_get_opc_assignments", 00:05:30.138 "vmd_rescan", 00:05:30.138 "vmd_remove_device", 00:05:30.138 "vmd_enable", 00:05:30.138 "sock_get_default_impl", 00:05:30.138 "sock_set_default_impl", 00:05:30.138 "sock_impl_set_options", 00:05:30.138 "sock_impl_get_options", 00:05:30.138 "iobuf_get_stats", 00:05:30.138 "iobuf_set_options", 00:05:30.138 "keyring_get_keys", 00:05:30.138 "framework_get_pci_devices", 00:05:30.138 "framework_get_config", 00:05:30.138 "framework_get_subsystems", 
00:05:30.138 "fsdev_set_opts", 00:05:30.138 "fsdev_get_opts", 00:05:30.138 "trace_get_info", 00:05:30.138 "trace_get_tpoint_group_mask", 00:05:30.138 "trace_disable_tpoint_group", 00:05:30.138 "trace_enable_tpoint_group", 00:05:30.138 "trace_clear_tpoint_mask", 00:05:30.138 "trace_set_tpoint_mask", 00:05:30.138 "notify_get_notifications", 00:05:30.138 "notify_get_types", 00:05:30.138 "spdk_get_version", 00:05:30.138 "rpc_get_methods" 00:05:30.138 ] 00:05:30.138 00:43:02 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:30.138 00:43:02 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:30.138 00:43:02 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:30.138 00:43:02 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:30.138 00:43:02 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 2823665 00:05:30.138 00:43:02 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 2823665 ']' 00:05:30.138 00:43:02 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 2823665 00:05:30.138 00:43:02 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:05:30.138 00:43:02 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:30.138 00:43:02 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2823665 00:05:30.138 00:43:02 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:30.138 00:43:02 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:30.138 00:43:02 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2823665' 00:05:30.138 killing process with pid 2823665 00:05:30.138 00:43:02 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 2823665 00:05:30.138 00:43:02 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 2823665 00:05:32.668 00:05:32.668 real 0m4.233s 00:05:32.668 user 0m7.749s 00:05:32.668 sys 0m0.732s 00:05:32.668 00:43:05 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:32.668 00:43:05 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:32.668 ************************************ 00:05:32.668 END TEST spdkcli_tcp 00:05:32.668 ************************************ 00:05:32.668 00:43:05 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:32.668 00:43:05 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:32.668 00:43:05 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:32.668 00:43:05 -- common/autotest_common.sh@10 -- # set +x 00:05:32.668 ************************************ 00:05:32.668 START TEST dpdk_mem_utility 00:05:32.668 ************************************ 00:05:32.668 00:43:05 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:32.668 * Looking for test storage... 
00:05:32.668 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:05:32.668 00:43:05 dpdk_mem_utility -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:32.668 00:43:05 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lcov --version 00:05:32.668 00:43:05 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:32.668 00:43:05 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:32.668 00:43:05 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:32.668 00:43:05 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:32.668 00:43:05 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:32.668 00:43:05 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:05:32.668 00:43:05 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:05:32.668 00:43:05 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:05:32.668 00:43:05 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:05:32.668 00:43:05 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:05:32.668 00:43:05 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:05:32.668 00:43:05 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:05:32.668 00:43:05 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:32.668 00:43:05 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:05:32.668 00:43:05 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:05:32.668 00:43:05 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:32.668 00:43:05 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:32.668 00:43:05 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:05:32.668 00:43:05 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:05:32.668 00:43:05 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:32.668 00:43:05 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:05:32.668 00:43:05 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:05:32.668 00:43:05 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:05:32.668 00:43:05 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:05:32.668 00:43:05 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:32.668 00:43:05 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:05:32.668 00:43:05 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:05:32.669 00:43:05 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:32.669 00:43:05 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:32.669 00:43:05 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:05:32.669 00:43:05 dpdk_mem_utility -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:32.669 00:43:05 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:32.669 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:32.669 --rc genhtml_branch_coverage=1 00:05:32.669 --rc genhtml_function_coverage=1 00:05:32.669 --rc genhtml_legend=1 00:05:32.669 --rc geninfo_all_blocks=1 00:05:32.669 --rc geninfo_unexecuted_blocks=1 00:05:32.669 00:05:32.669 ' 00:05:32.669 00:43:05 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:32.669 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:32.669 --rc 
genhtml_branch_coverage=1 00:05:32.669 --rc genhtml_function_coverage=1 00:05:32.669 --rc genhtml_legend=1 00:05:32.669 --rc geninfo_all_blocks=1 00:05:32.669 --rc geninfo_unexecuted_blocks=1 00:05:32.669 00:05:32.669 ' 00:05:32.669 00:43:05 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:32.669 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:32.669 --rc genhtml_branch_coverage=1 00:05:32.669 --rc genhtml_function_coverage=1 00:05:32.669 --rc genhtml_legend=1 00:05:32.669 --rc geninfo_all_blocks=1 00:05:32.669 --rc geninfo_unexecuted_blocks=1 00:05:32.669 00:05:32.669 ' 00:05:32.669 00:43:05 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:32.669 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:32.669 --rc genhtml_branch_coverage=1 00:05:32.669 --rc genhtml_function_coverage=1 00:05:32.669 --rc genhtml_legend=1 00:05:32.669 --rc geninfo_all_blocks=1 00:05:32.669 --rc geninfo_unexecuted_blocks=1 00:05:32.669 00:05:32.669 ' 00:05:32.669 00:43:05 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:32.669 00:43:05 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=2824273 00:05:32.669 00:43:05 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:32.669 00:43:05 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 2824273 00:05:32.669 00:43:05 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 2824273 ']' 00:05:32.669 00:43:05 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:32.669 00:43:05 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:32.669 00:43:05 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:32.669 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:32.669 00:43:05 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:32.669 00:43:05 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:32.926 [2024-12-06 00:43:05.456432] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 24.03.0 initialization... 
00:05:32.926 [2024-12-06 00:43:05.456567] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2824273 ] 00:05:32.926 [2024-12-06 00:43:05.598899] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:33.184 [2024-12-06 00:43:05.737064] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:34.118 00:43:06 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:34.118 00:43:06 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:05:34.118 00:43:06 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:34.118 00:43:06 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:34.118 00:43:06 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:34.118 00:43:06 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:34.118 { 00:05:34.118 "filename": "/tmp/spdk_mem_dump.txt" 00:05:34.118 } 00:05:34.118 00:43:06 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:34.118 00:43:06 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:34.118 DPDK memory size 824.000000 MiB in 1 heap(s) 00:05:34.118 1 heaps totaling size 824.000000 MiB 00:05:34.118 size: 824.000000 MiB heap id: 0 00:05:34.118 end heaps---------- 00:05:34.118 9 mempools totaling size 603.782043 MiB 00:05:34.118 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:34.118 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:34.118 size: 100.555481 MiB name: bdev_io_2824273 00:05:34.118 size: 50.003479 MiB name: msgpool_2824273 00:05:34.118 size: 36.509338 MiB name: fsdev_io_2824273 00:05:34.118 size: 21.763794 MiB name: PDU_Pool 00:05:34.118 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:34.118 size: 4.133484 MiB name: evtpool_2824273 00:05:34.118 size: 0.026123 MiB name: Session_Pool 00:05:34.118 end mempools------- 00:05:34.118 6 memzones totaling size 4.142822 MiB 00:05:34.118 size: 1.000366 MiB name: RG_ring_0_2824273 00:05:34.118 size: 1.000366 MiB name: RG_ring_1_2824273 00:05:34.118 size: 1.000366 MiB name: RG_ring_4_2824273 00:05:34.118 size: 1.000366 MiB name: RG_ring_5_2824273 00:05:34.118 size: 0.125366 MiB name: RG_ring_2_2824273 00:05:34.118 size: 0.015991 MiB name: RG_ring_3_2824273 00:05:34.118 end memzones------- 00:05:34.118 00:43:06 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:05:34.118 heap id: 0 total size: 824.000000 MiB number of busy elements: 44 number of free elements: 19 00:05:34.118 list of free elements. 
size: 16.847595 MiB 00:05:34.118 element at address: 0x200006400000 with size: 1.995972 MiB 00:05:34.118 element at address: 0x20000a600000 with size: 1.995972 MiB 00:05:34.118 element at address: 0x200003e00000 with size: 1.991028 MiB 00:05:34.118 element at address: 0x200019500040 with size: 0.999939 MiB 00:05:34.118 element at address: 0x200019900040 with size: 0.999939 MiB 00:05:34.118 element at address: 0x200019a00000 with size: 0.999329 MiB 00:05:34.118 element at address: 0x200000400000 with size: 0.998108 MiB 00:05:34.118 element at address: 0x200032600000 with size: 0.994324 MiB 00:05:34.118 element at address: 0x200019200000 with size: 0.959900 MiB 00:05:34.118 element at address: 0x200019d00040 with size: 0.937256 MiB 00:05:34.118 element at address: 0x200000200000 with size: 0.716980 MiB 00:05:34.118 element at address: 0x20001b400000 with size: 0.583191 MiB 00:05:34.118 element at address: 0x200000c00000 with size: 0.495300 MiB 00:05:34.118 element at address: 0x200019600000 with size: 0.491150 MiB 00:05:34.118 element at address: 0x200019e00000 with size: 0.485657 MiB 00:05:34.118 element at address: 0x200012c00000 with size: 0.436157 MiB 00:05:34.118 element at address: 0x200028800000 with size: 0.411072 MiB 00:05:34.118 element at address: 0x200000800000 with size: 0.355286 MiB 00:05:34.118 element at address: 0x20000a5ff040 with size: 0.001038 MiB 00:05:34.118 list of standard malloc elements. size: 199.221497 MiB 00:05:34.118 element at address: 0x20000a7fef80 with size: 132.000183 MiB 00:05:34.118 element at address: 0x2000065fef80 with size: 64.000183 MiB 00:05:34.118 element at address: 0x2000193fff80 with size: 1.000183 MiB 00:05:34.118 element at address: 0x2000197fff80 with size: 1.000183 MiB 00:05:34.118 element at address: 0x200019bfff80 with size: 1.000183 MiB 00:05:34.118 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:05:34.118 element at address: 0x200019deff40 with size: 0.062683 MiB 00:05:34.118 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:05:34.118 element at address: 0x200012bff040 with size: 0.000427 MiB 00:05:34.118 element at address: 0x200012bffa00 with size: 0.000366 MiB 00:05:34.118 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:05:34.118 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:05:34.118 element at address: 0x2000004ff840 with size: 0.000244 MiB 00:05:34.118 element at address: 0x2000004ff940 with size: 0.000244 MiB 00:05:34.118 element at address: 0x2000004ffa40 with size: 0.000244 MiB 00:05:34.118 element at address: 0x2000004ffcc0 with size: 0.000244 MiB 00:05:34.118 element at address: 0x2000004ffdc0 with size: 0.000244 MiB 00:05:34.118 element at address: 0x20000087f3c0 with size: 0.000244 MiB 00:05:34.118 element at address: 0x20000087f4c0 with size: 0.000244 MiB 00:05:34.118 element at address: 0x2000008ff800 with size: 0.000244 MiB 00:05:34.118 element at address: 0x2000008ffa80 with size: 0.000244 MiB 00:05:34.118 element at address: 0x200000cfef00 with size: 0.000244 MiB 00:05:34.118 element at address: 0x200000cff000 with size: 0.000244 MiB 00:05:34.118 element at address: 0x20000a5ff480 with size: 0.000244 MiB 00:05:34.118 element at address: 0x20000a5ff580 with size: 0.000244 MiB 00:05:34.118 element at address: 0x20000a5ff680 with size: 0.000244 MiB 00:05:34.118 element at address: 0x20000a5ff780 with size: 0.000244 MiB 00:05:34.118 element at address: 0x20000a5ff880 with size: 0.000244 MiB 00:05:34.118 element at address: 0x20000a5ff980 with size: 0.000244 MiB 
00:05:34.118 element at address: 0x20000a5ffc00 with size: 0.000244 MiB 00:05:34.118 element at address: 0x20000a5ffd00 with size: 0.000244 MiB 00:05:34.118 element at address: 0x20000a5ffe00 with size: 0.000244 MiB 00:05:34.118 element at address: 0x20000a5fff00 with size: 0.000244 MiB 00:05:34.118 element at address: 0x200012bff200 with size: 0.000244 MiB 00:05:34.118 element at address: 0x200012bff300 with size: 0.000244 MiB 00:05:34.118 element at address: 0x200012bff400 with size: 0.000244 MiB 00:05:34.118 element at address: 0x200012bff500 with size: 0.000244 MiB 00:05:34.118 element at address: 0x200012bff600 with size: 0.000244 MiB 00:05:34.118 element at address: 0x200012bff700 with size: 0.000244 MiB 00:05:34.118 element at address: 0x200012bff800 with size: 0.000244 MiB 00:05:34.118 element at address: 0x200012bff900 with size: 0.000244 MiB 00:05:34.118 element at address: 0x200012bffb80 with size: 0.000244 MiB 00:05:34.118 element at address: 0x200012bffc80 with size: 0.000244 MiB 00:05:34.118 element at address: 0x200012bfff00 with size: 0.000244 MiB 00:05:34.118 list of memzone associated elements. size: 607.930908 MiB 00:05:34.118 element at address: 0x20001b4954c0 with size: 211.416809 MiB 00:05:34.118 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:34.118 element at address: 0x20002886ff80 with size: 157.562622 MiB 00:05:34.118 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:34.118 element at address: 0x200012df1e40 with size: 100.055115 MiB 00:05:34.118 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_2824273_0 00:05:34.118 element at address: 0x200000dff340 with size: 48.003113 MiB 00:05:34.118 associated memzone info: size: 48.002930 MiB name: MP_msgpool_2824273_0 00:05:34.118 element at address: 0x200003ffdb40 with size: 36.008972 MiB 00:05:34.118 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_2824273_0 00:05:34.118 element at address: 0x200019fbe900 with size: 20.255615 MiB 00:05:34.118 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:34.118 element at address: 0x2000327feb00 with size: 18.005127 MiB 00:05:34.118 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:34.118 element at address: 0x2000004ffec0 with size: 3.000305 MiB 00:05:34.118 associated memzone info: size: 3.000122 MiB name: MP_evtpool_2824273_0 00:05:34.118 element at address: 0x2000009ffdc0 with size: 2.000549 MiB 00:05:34.119 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_2824273 00:05:34.119 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:05:34.119 associated memzone info: size: 1.007996 MiB name: MP_evtpool_2824273 00:05:34.119 element at address: 0x2000196fde00 with size: 1.008179 MiB 00:05:34.119 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:34.119 element at address: 0x200019ebc780 with size: 1.008179 MiB 00:05:34.119 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:34.119 element at address: 0x2000192fde00 with size: 1.008179 MiB 00:05:34.119 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:34.119 element at address: 0x200012cefcc0 with size: 1.008179 MiB 00:05:34.119 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:34.119 element at address: 0x200000cff100 with size: 1.000549 MiB 00:05:34.119 associated memzone info: size: 1.000366 MiB name: RG_ring_0_2824273 00:05:34.119 element at address: 0x2000008ffb80 
with size: 1.000549 MiB 00:05:34.119 associated memzone info: size: 1.000366 MiB name: RG_ring_1_2824273 00:05:34.119 element at address: 0x200019affd40 with size: 1.000549 MiB 00:05:34.119 associated memzone info: size: 1.000366 MiB name: RG_ring_4_2824273 00:05:34.119 element at address: 0x2000326fe8c0 with size: 1.000549 MiB 00:05:34.119 associated memzone info: size: 1.000366 MiB name: RG_ring_5_2824273 00:05:34.119 element at address: 0x20000087f5c0 with size: 0.500549 MiB 00:05:34.119 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_2824273 00:05:34.119 element at address: 0x200000c7ecc0 with size: 0.500549 MiB 00:05:34.119 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_2824273 00:05:34.119 element at address: 0x20001967dbc0 with size: 0.500549 MiB 00:05:34.119 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:34.119 element at address: 0x200012c6fa80 with size: 0.500549 MiB 00:05:34.119 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:34.119 element at address: 0x200019e7c540 with size: 0.250549 MiB 00:05:34.119 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:34.119 element at address: 0x2000002b78c0 with size: 0.125549 MiB 00:05:34.119 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_2824273 00:05:34.119 element at address: 0x20000085f180 with size: 0.125549 MiB 00:05:34.119 associated memzone info: size: 0.125366 MiB name: RG_ring_2_2824273 00:05:34.119 element at address: 0x2000192f5bc0 with size: 0.031799 MiB 00:05:34.119 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:34.119 element at address: 0x2000288693c0 with size: 0.023804 MiB 00:05:34.119 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:34.119 element at address: 0x20000085af40 with size: 0.016174 MiB 00:05:34.119 associated memzone info: size: 0.015991 MiB name: RG_ring_3_2824273 00:05:34.119 element at address: 0x20002886f540 with size: 0.002502 MiB 00:05:34.119 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:34.119 element at address: 0x2000004ffb40 with size: 0.000366 MiB 00:05:34.119 associated memzone info: size: 0.000183 MiB name: MP_msgpool_2824273 00:05:34.119 element at address: 0x2000008ff900 with size: 0.000366 MiB 00:05:34.119 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_2824273 00:05:34.119 element at address: 0x200012bffd80 with size: 0.000366 MiB 00:05:34.119 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_2824273 00:05:34.119 element at address: 0x20000a5ffa80 with size: 0.000366 MiB 00:05:34.119 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:34.119 00:43:06 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:34.119 00:43:06 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 2824273 00:05:34.119 00:43:06 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 2824273 ']' 00:05:34.119 00:43:06 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 2824273 00:05:34.119 00:43:06 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:05:34.119 00:43:06 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:34.119 00:43:06 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2824273 00:05:34.378 00:43:06 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 
00:05:34.378 00:43:06 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:34.378 00:43:06 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2824273' 00:05:34.378 killing process with pid 2824273 00:05:34.378 00:43:06 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 2824273 00:05:34.378 00:43:06 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 2824273 00:05:36.909 00:05:36.909 real 0m4.108s 00:05:36.909 user 0m4.076s 00:05:36.909 sys 0m0.682s 00:05:36.909 00:43:09 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:36.909 00:43:09 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:36.909 ************************************ 00:05:36.909 END TEST dpdk_mem_utility 00:05:36.909 ************************************ 00:05:36.909 00:43:09 -- spdk/autotest.sh@168 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:36.909 00:43:09 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:36.909 00:43:09 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:36.909 00:43:09 -- common/autotest_common.sh@10 -- # set +x 00:05:36.909 ************************************ 00:05:36.909 START TEST event 00:05:36.909 ************************************ 00:05:36.909 00:43:09 event -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:36.909 * Looking for test storage... 00:05:36.909 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:05:36.909 00:43:09 event -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:36.909 00:43:09 event -- common/autotest_common.sh@1711 -- # lcov --version 00:05:36.909 00:43:09 event -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:36.909 00:43:09 event -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:36.909 00:43:09 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:36.909 00:43:09 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:36.909 00:43:09 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:36.909 00:43:09 event -- scripts/common.sh@336 -- # IFS=.-: 00:05:36.909 00:43:09 event -- scripts/common.sh@336 -- # read -ra ver1 00:05:36.909 00:43:09 event -- scripts/common.sh@337 -- # IFS=.-: 00:05:36.909 00:43:09 event -- scripts/common.sh@337 -- # read -ra ver2 00:05:36.909 00:43:09 event -- scripts/common.sh@338 -- # local 'op=<' 00:05:36.909 00:43:09 event -- scripts/common.sh@340 -- # ver1_l=2 00:05:36.909 00:43:09 event -- scripts/common.sh@341 -- # ver2_l=1 00:05:36.909 00:43:09 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:36.909 00:43:09 event -- scripts/common.sh@344 -- # case "$op" in 00:05:36.909 00:43:09 event -- scripts/common.sh@345 -- # : 1 00:05:36.909 00:43:09 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:36.909 00:43:09 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:36.909 00:43:09 event -- scripts/common.sh@365 -- # decimal 1 00:05:36.909 00:43:09 event -- scripts/common.sh@353 -- # local d=1 00:05:36.909 00:43:09 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:36.909 00:43:09 event -- scripts/common.sh@355 -- # echo 1 00:05:36.909 00:43:09 event -- scripts/common.sh@365 -- # ver1[v]=1 00:05:36.909 00:43:09 event -- scripts/common.sh@366 -- # decimal 2 00:05:36.909 00:43:09 event -- scripts/common.sh@353 -- # local d=2 00:05:36.909 00:43:09 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:36.909 00:43:09 event -- scripts/common.sh@355 -- # echo 2 00:05:36.909 00:43:09 event -- scripts/common.sh@366 -- # ver2[v]=2 00:05:36.909 00:43:09 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:36.909 00:43:09 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:36.909 00:43:09 event -- scripts/common.sh@368 -- # return 0 00:05:36.909 00:43:09 event -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:36.909 00:43:09 event -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:36.909 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:36.909 --rc genhtml_branch_coverage=1 00:05:36.909 --rc genhtml_function_coverage=1 00:05:36.909 --rc genhtml_legend=1 00:05:36.909 --rc geninfo_all_blocks=1 00:05:36.909 --rc geninfo_unexecuted_blocks=1 00:05:36.909 00:05:36.909 ' 00:05:36.909 00:43:09 event -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:36.909 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:36.909 --rc genhtml_branch_coverage=1 00:05:36.909 --rc genhtml_function_coverage=1 00:05:36.909 --rc genhtml_legend=1 00:05:36.909 --rc geninfo_all_blocks=1 00:05:36.909 --rc geninfo_unexecuted_blocks=1 00:05:36.909 00:05:36.909 ' 00:05:36.909 00:43:09 event -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:36.909 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:36.909 --rc genhtml_branch_coverage=1 00:05:36.909 --rc genhtml_function_coverage=1 00:05:36.909 --rc genhtml_legend=1 00:05:36.909 --rc geninfo_all_blocks=1 00:05:36.909 --rc geninfo_unexecuted_blocks=1 00:05:36.909 00:05:36.909 ' 00:05:36.909 00:43:09 event -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:36.909 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:36.909 --rc genhtml_branch_coverage=1 00:05:36.909 --rc genhtml_function_coverage=1 00:05:36.909 --rc genhtml_legend=1 00:05:36.909 --rc geninfo_all_blocks=1 00:05:36.909 --rc geninfo_unexecuted_blocks=1 00:05:36.909 00:05:36.909 ' 00:05:36.909 00:43:09 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:05:36.909 00:43:09 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:36.909 00:43:09 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:36.909 00:43:09 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:05:36.909 00:43:09 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:36.909 00:43:09 event -- common/autotest_common.sh@10 -- # set +x 00:05:36.909 ************************************ 00:05:36.909 START TEST event_perf 00:05:36.909 ************************************ 00:05:36.909 00:43:09 event.event_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF 
-t 1 00:05:36.909 Running I/O for 1 seconds...[2024-12-06 00:43:09.577781] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 24.03.0 initialization... 00:05:36.909 [2024-12-06 00:43:09.577915] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2824869 ] 00:05:37.167 [2024-12-06 00:43:09.718907] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:37.167 [2024-12-06 00:43:09.864748] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:37.167 [2024-12-06 00:43:09.864824] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:37.167 [2024-12-06 00:43:09.864909] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:37.168 [2024-12-06 00:43:09.864918] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:38.540 Running I/O for 1 seconds... 00:05:38.540 lcore 0: 221109 00:05:38.540 lcore 1: 221109 00:05:38.540 lcore 2: 221110 00:05:38.540 lcore 3: 221109 00:05:38.540 done. 00:05:38.540 00:05:38.540 real 0m1.590s 00:05:38.540 user 0m4.420s 00:05:38.540 sys 0m0.155s 00:05:38.540 00:43:11 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:38.540 00:43:11 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:38.540 ************************************ 00:05:38.540 END TEST event_perf 00:05:38.540 ************************************ 00:05:38.540 00:43:11 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:38.540 00:43:11 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:05:38.540 00:43:11 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:38.540 00:43:11 event -- common/autotest_common.sh@10 -- # set +x 00:05:38.540 ************************************ 00:05:38.540 START TEST event_reactor 00:05:38.540 ************************************ 00:05:38.540 00:43:11 event.event_reactor -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:38.540 [2024-12-06 00:43:11.217521] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 24.03.0 initialization... 
00:05:38.540 [2024-12-06 00:43:11.217655] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2825027 ] 00:05:38.798 [2024-12-06 00:43:11.368810] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:39.054 [2024-12-06 00:43:11.507240] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:40.427 test_start 00:05:40.427 oneshot 00:05:40.427 tick 100 00:05:40.427 tick 100 00:05:40.427 tick 250 00:05:40.427 tick 100 00:05:40.427 tick 100 00:05:40.427 tick 100 00:05:40.427 tick 250 00:05:40.427 tick 500 00:05:40.427 tick 100 00:05:40.427 tick 100 00:05:40.427 tick 250 00:05:40.427 tick 100 00:05:40.427 tick 100 00:05:40.427 test_end 00:05:40.427 00:05:40.427 real 0m1.581s 00:05:40.427 user 0m1.430s 00:05:40.427 sys 0m0.142s 00:05:40.427 00:43:12 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:40.427 00:43:12 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:40.427 ************************************ 00:05:40.427 END TEST event_reactor 00:05:40.427 ************************************ 00:05:40.427 00:43:12 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:40.427 00:43:12 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:05:40.427 00:43:12 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:40.427 00:43:12 event -- common/autotest_common.sh@10 -- # set +x 00:05:40.427 ************************************ 00:05:40.427 START TEST event_reactor_perf 00:05:40.427 ************************************ 00:05:40.427 00:43:12 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:40.427 [2024-12-06 00:43:12.844781] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 24.03.0 initialization... 
00:05:40.428 [2024-12-06 00:43:12.844934] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2825311 ] 00:05:40.428 [2024-12-06 00:43:13.001922] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:40.685 [2024-12-06 00:43:13.140702] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:42.057 test_start 00:05:42.057 test_end 00:05:42.057 Performance: 268067 events per second 00:05:42.057 00:05:42.057 real 0m1.585s 00:05:42.057 user 0m1.424s 00:05:42.057 sys 0m0.152s 00:05:42.057 00:43:14 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:42.057 00:43:14 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:42.057 ************************************ 00:05:42.057 END TEST event_reactor_perf 00:05:42.057 ************************************ 00:05:42.057 00:43:14 event -- event/event.sh@49 -- # uname -s 00:05:42.057 00:43:14 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:42.057 00:43:14 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:42.057 00:43:14 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:42.057 00:43:14 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:42.057 00:43:14 event -- common/autotest_common.sh@10 -- # set +x 00:05:42.057 ************************************ 00:05:42.057 START TEST event_scheduler 00:05:42.057 ************************************ 00:05:42.057 00:43:14 event.event_scheduler -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:42.057 * Looking for test storage... 
00:05:42.057 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:05:42.057 00:43:14 event.event_scheduler -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:42.057 00:43:14 event.event_scheduler -- common/autotest_common.sh@1711 -- # lcov --version 00:05:42.057 00:43:14 event.event_scheduler -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:42.057 00:43:14 event.event_scheduler -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:42.057 00:43:14 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:42.057 00:43:14 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:42.057 00:43:14 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:42.057 00:43:14 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:05:42.057 00:43:14 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:05:42.057 00:43:14 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:05:42.057 00:43:14 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:05:42.057 00:43:14 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:05:42.057 00:43:14 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:05:42.057 00:43:14 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:05:42.057 00:43:14 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:42.057 00:43:14 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:05:42.057 00:43:14 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:05:42.057 00:43:14 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:42.057 00:43:14 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:42.057 00:43:14 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:05:42.057 00:43:14 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:05:42.057 00:43:14 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:42.057 00:43:14 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:05:42.057 00:43:14 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:05:42.057 00:43:14 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:05:42.057 00:43:14 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:05:42.057 00:43:14 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:42.057 00:43:14 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:05:42.057 00:43:14 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:05:42.057 00:43:14 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:42.057 00:43:14 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:42.057 00:43:14 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:05:42.057 00:43:14 event.event_scheduler -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:42.057 00:43:14 event.event_scheduler -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:42.057 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:42.057 --rc genhtml_branch_coverage=1 00:05:42.057 --rc genhtml_function_coverage=1 00:05:42.057 --rc genhtml_legend=1 00:05:42.057 --rc geninfo_all_blocks=1 00:05:42.057 --rc geninfo_unexecuted_blocks=1 00:05:42.057 00:05:42.057 ' 00:05:42.058 00:43:14 event.event_scheduler -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:42.058 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:42.058 --rc genhtml_branch_coverage=1 00:05:42.058 --rc genhtml_function_coverage=1 00:05:42.058 --rc genhtml_legend=1 00:05:42.058 --rc geninfo_all_blocks=1 00:05:42.058 --rc geninfo_unexecuted_blocks=1 00:05:42.058 00:05:42.058 ' 00:05:42.058 00:43:14 event.event_scheduler -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:42.058 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:42.058 --rc genhtml_branch_coverage=1 00:05:42.058 --rc genhtml_function_coverage=1 00:05:42.058 --rc genhtml_legend=1 00:05:42.058 --rc geninfo_all_blocks=1 00:05:42.058 --rc geninfo_unexecuted_blocks=1 00:05:42.058 00:05:42.058 ' 00:05:42.058 00:43:14 event.event_scheduler -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:42.058 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:42.058 --rc genhtml_branch_coverage=1 00:05:42.058 --rc genhtml_function_coverage=1 00:05:42.058 --rc genhtml_legend=1 00:05:42.058 --rc geninfo_all_blocks=1 00:05:42.058 --rc geninfo_unexecuted_blocks=1 00:05:42.058 00:05:42.058 ' 00:05:42.058 00:43:14 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:42.058 00:43:14 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=2825511 00:05:42.058 00:43:14 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:42.058 00:43:14 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:42.058 00:43:14 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 
2825511 00:05:42.058 00:43:14 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 2825511 ']' 00:05:42.058 00:43:14 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:42.058 00:43:14 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:42.058 00:43:14 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:42.058 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:42.058 00:43:14 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:42.058 00:43:14 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:42.058 [2024-12-06 00:43:14.667256] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 24.03.0 initialization... 00:05:42.058 [2024-12-06 00:43:14.667408] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2825511 ] 00:05:42.316 [2024-12-06 00:43:14.803507] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:42.316 [2024-12-06 00:43:14.923963] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:42.316 [2024-12-06 00:43:14.924027] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:42.316 [2024-12-06 00:43:14.924066] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:42.316 [2024-12-06 00:43:14.924076] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:43.248 00:43:15 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:43.248 00:43:15 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:05:43.248 00:43:15 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:43.248 00:43:15 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:43.248 00:43:15 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:43.248 [2024-12-06 00:43:15.627308] dpdk_governor.c: 178:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:05:43.248 [2024-12-06 00:43:15.627354] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:05:43.248 [2024-12-06 00:43:15.627401] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:43.248 [2024-12-06 00:43:15.627432] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:43.248 [2024-12-06 00:43:15.627463] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:43.248 00:43:15 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:43.248 00:43:15 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:43.248 00:43:15 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:43.248 00:43:15 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:43.506 [2024-12-06 00:43:15.960035] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
00:05:43.506 00:43:15 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:43.506 00:43:15 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:43.506 00:43:15 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:43.506 00:43:15 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:43.506 00:43:15 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:43.506 ************************************ 00:05:43.506 START TEST scheduler_create_thread 00:05:43.506 ************************************ 00:05:43.506 00:43:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:05:43.506 00:43:15 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:43.506 00:43:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:43.506 00:43:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:43.506 2 00:05:43.506 00:43:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:43.506 00:43:16 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:43.506 00:43:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:43.506 00:43:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:43.506 3 00:05:43.506 00:43:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:43.506 00:43:16 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:43.506 00:43:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:43.506 00:43:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:43.506 4 00:05:43.506 00:43:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:43.506 00:43:16 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:43.506 00:43:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:43.506 00:43:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:43.506 5 00:05:43.506 00:43:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:43.506 00:43:16 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:43.506 00:43:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:43.506 00:43:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:43.506 6 00:05:43.506 00:43:16 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:43.506 00:43:16 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:43.506 00:43:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:43.506 00:43:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:43.506 7 00:05:43.506 00:43:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:43.506 00:43:16 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:43.506 00:43:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:43.506 00:43:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:43.506 8 00:05:43.506 00:43:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:43.506 00:43:16 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:43.506 00:43:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:43.506 00:43:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:43.506 9 00:05:43.506 00:43:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:43.507 00:43:16 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:43.507 00:43:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:43.507 00:43:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:43.507 10 00:05:43.507 00:43:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:43.507 00:43:16 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:43.507 00:43:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:43.507 00:43:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:43.507 00:43:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:43.507 00:43:16 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:43.507 00:43:16 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:43.507 00:43:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:43.507 00:43:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:43.507 00:43:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:43.507 00:43:16 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:43.507 00:43:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:43.507 00:43:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:43.507 00:43:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:43.507 00:43:16 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:43.507 00:43:16 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:43.507 00:43:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:43.507 00:43:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:43.507 00:43:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:43.507 00:05:43.507 real 0m0.109s 00:05:43.507 user 0m0.010s 00:05:43.507 sys 0m0.003s 00:05:43.507 00:43:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:43.507 00:43:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:43.507 ************************************ 00:05:43.507 END TEST scheduler_create_thread 00:05:43.507 ************************************ 00:05:43.507 00:43:16 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:43.507 00:43:16 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 2825511 00:05:43.507 00:43:16 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 2825511 ']' 00:05:43.507 00:43:16 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 2825511 00:05:43.507 00:43:16 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:05:43.507 00:43:16 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:43.507 00:43:16 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2825511 00:05:43.507 00:43:16 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:05:43.507 00:43:16 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:05:43.507 00:43:16 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2825511' 00:05:43.507 killing process with pid 2825511 00:05:43.507 00:43:16 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 2825511 00:05:43.507 00:43:16 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 2825511 00:05:44.073 [2024-12-06 00:43:16.582911] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
00:05:45.010 00:05:45.010 real 0m3.132s 00:05:45.010 user 0m5.395s 00:05:45.010 sys 0m0.533s 00:05:45.011 00:43:17 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:45.011 00:43:17 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:45.011 ************************************ 00:05:45.011 END TEST event_scheduler 00:05:45.011 ************************************ 00:05:45.011 00:43:17 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:45.011 00:43:17 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:45.011 00:43:17 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:45.011 00:43:17 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:45.011 00:43:17 event -- common/autotest_common.sh@10 -- # set +x 00:05:45.011 ************************************ 00:05:45.011 START TEST app_repeat 00:05:45.011 ************************************ 00:05:45.011 00:43:17 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:05:45.011 00:43:17 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:45.011 00:43:17 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:45.011 00:43:17 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:45.011 00:43:17 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:45.011 00:43:17 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:45.011 00:43:17 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:45.011 00:43:17 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:45.011 00:43:17 event.app_repeat -- event/event.sh@19 -- # repeat_pid=2825954 00:05:45.011 00:43:17 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:45.011 00:43:17 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:45.011 00:43:17 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 2825954' 00:05:45.011 Process app_repeat pid: 2825954 00:05:45.011 00:43:17 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:45.011 00:43:17 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:45.011 spdk_app_start Round 0 00:05:45.011 00:43:17 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2825954 /var/tmp/spdk-nbd.sock 00:05:45.011 00:43:17 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 2825954 ']' 00:05:45.011 00:43:17 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:45.011 00:43:17 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:45.011 00:43:17 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:45.011 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:45.011 00:43:17 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:45.011 00:43:17 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:45.011 [2024-12-06 00:43:17.679082] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 24.03.0 initialization... 
00:05:45.011 [2024-12-06 00:43:17.679221] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2825954 ] 00:05:45.270 [2024-12-06 00:43:17.821048] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:45.270 [2024-12-06 00:43:17.962719] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:45.270 [2024-12-06 00:43:17.962726] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:46.204 00:43:18 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:46.204 00:43:18 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:46.204 00:43:18 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:46.462 Malloc0 00:05:46.462 00:43:18 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:46.721 Malloc1 00:05:46.721 00:43:19 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:46.721 00:43:19 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:46.721 00:43:19 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:46.721 00:43:19 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:46.721 00:43:19 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:46.721 00:43:19 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:46.721 00:43:19 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:46.721 00:43:19 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:46.721 00:43:19 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:46.721 00:43:19 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:46.721 00:43:19 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:46.721 00:43:19 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:46.721 00:43:19 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:46.721 00:43:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:46.721 00:43:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:46.721 00:43:19 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:46.980 /dev/nbd0 00:05:46.980 00:43:19 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:46.980 00:43:19 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:46.980 00:43:19 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:46.980 00:43:19 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:46.980 00:43:19 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:46.980 00:43:19 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:46.980 00:43:19 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 
/proc/partitions 00:05:46.980 00:43:19 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:46.980 00:43:19 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:46.980 00:43:19 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:46.980 00:43:19 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:46.980 1+0 records in 00:05:46.980 1+0 records out 00:05:46.980 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000192655 s, 21.3 MB/s 00:05:47.238 00:43:19 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:47.238 00:43:19 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:47.238 00:43:19 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:47.238 00:43:19 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:47.238 00:43:19 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:47.238 00:43:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:47.238 00:43:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:47.238 00:43:19 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:47.495 /dev/nbd1 00:05:47.495 00:43:19 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:47.495 00:43:19 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:47.495 00:43:20 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:47.495 00:43:20 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:47.495 00:43:20 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:47.495 00:43:20 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:47.495 00:43:20 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:47.495 00:43:20 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:47.495 00:43:20 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:47.495 00:43:20 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:47.495 00:43:20 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:47.495 1+0 records in 00:05:47.495 1+0 records out 00:05:47.495 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000259511 s, 15.8 MB/s 00:05:47.495 00:43:20 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:47.495 00:43:20 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:47.495 00:43:20 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:47.495 00:43:20 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:47.496 00:43:20 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:47.496 00:43:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:47.496 00:43:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:47.496 
00:43:20 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:47.496 00:43:20 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:47.496 00:43:20 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:47.754 00:43:20 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:47.754 { 00:05:47.754 "nbd_device": "/dev/nbd0", 00:05:47.754 "bdev_name": "Malloc0" 00:05:47.754 }, 00:05:47.754 { 00:05:47.754 "nbd_device": "/dev/nbd1", 00:05:47.754 "bdev_name": "Malloc1" 00:05:47.754 } 00:05:47.754 ]' 00:05:47.754 00:43:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:47.754 { 00:05:47.754 "nbd_device": "/dev/nbd0", 00:05:47.754 "bdev_name": "Malloc0" 00:05:47.754 }, 00:05:47.754 { 00:05:47.754 "nbd_device": "/dev/nbd1", 00:05:47.754 "bdev_name": "Malloc1" 00:05:47.754 } 00:05:47.754 ]' 00:05:47.754 00:43:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:47.754 00:43:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:47.754 /dev/nbd1' 00:05:47.754 00:43:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:47.754 /dev/nbd1' 00:05:47.754 00:43:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:47.754 00:43:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:47.754 00:43:20 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:47.754 00:43:20 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:47.754 00:43:20 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:47.754 00:43:20 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:47.754 00:43:20 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:47.754 00:43:20 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:47.754 00:43:20 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:47.754 00:43:20 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:47.754 00:43:20 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:47.754 00:43:20 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:47.754 256+0 records in 00:05:47.754 256+0 records out 00:05:47.754 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0051003 s, 206 MB/s 00:05:47.754 00:43:20 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:47.754 00:43:20 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:47.754 256+0 records in 00:05:47.754 256+0 records out 00:05:47.754 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0278715 s, 37.6 MB/s 00:05:47.754 00:43:20 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:47.754 00:43:20 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:47.754 256+0 records in 00:05:47.754 256+0 records out 00:05:47.754 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.030681 s, 34.2 MB/s 00:05:47.754 00:43:20 
event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:47.754 00:43:20 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:47.754 00:43:20 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:47.754 00:43:20 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:47.754 00:43:20 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:47.754 00:43:20 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:47.754 00:43:20 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:47.754 00:43:20 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:47.754 00:43:20 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:47.754 00:43:20 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:47.755 00:43:20 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:47.755 00:43:20 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:47.755 00:43:20 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:47.755 00:43:20 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:47.755 00:43:20 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:47.755 00:43:20 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:47.755 00:43:20 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:47.755 00:43:20 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:47.755 00:43:20 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:48.326 00:43:20 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:48.326 00:43:20 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:48.326 00:43:20 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:48.326 00:43:20 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:48.326 00:43:20 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:48.326 00:43:20 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:48.326 00:43:20 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:48.326 00:43:20 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:48.326 00:43:20 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:48.326 00:43:20 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:48.326 00:43:21 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:48.326 00:43:21 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:48.326 00:43:21 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:48.326 00:43:21 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:48.326 00:43:21 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i 
<= 20 )) 00:05:48.326 00:43:21 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:48.326 00:43:21 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:48.326 00:43:21 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:48.326 00:43:21 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:48.326 00:43:21 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:48.326 00:43:21 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:48.893 00:43:21 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:48.893 00:43:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:48.893 00:43:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:48.893 00:43:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:48.893 00:43:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:48.893 00:43:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:48.893 00:43:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:48.893 00:43:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:48.893 00:43:21 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:48.893 00:43:21 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:48.893 00:43:21 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:48.893 00:43:21 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:48.893 00:43:21 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:49.151 00:43:21 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:50.529 [2024-12-06 00:43:23.030003] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:50.529 [2024-12-06 00:43:23.164723] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:50.529 [2024-12-06 00:43:23.164727] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:50.824 [2024-12-06 00:43:23.380581] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:50.824 [2024-12-06 00:43:23.380675] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:52.222 00:43:24 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:52.222 00:43:24 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:52.222 spdk_app_start Round 1 00:05:52.222 00:43:24 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2825954 /var/tmp/spdk-nbd.sock 00:05:52.222 00:43:24 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 2825954 ']' 00:05:52.222 00:43:24 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:52.222 00:43:24 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:52.222 00:43:24 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:52.222 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:05:52.222 00:43:24 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:52.222 00:43:24 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:52.480 00:43:25 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:52.480 00:43:25 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:52.480 00:43:25 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:52.737 Malloc0 00:05:52.737 00:43:25 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:53.301 Malloc1 00:05:53.301 00:43:25 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:53.301 00:43:25 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:53.301 00:43:25 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:53.301 00:43:25 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:53.301 00:43:25 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:53.301 00:43:25 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:53.301 00:43:25 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:53.301 00:43:25 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:53.301 00:43:25 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:53.301 00:43:25 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:53.301 00:43:25 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:53.301 00:43:25 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:53.301 00:43:25 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:53.301 00:43:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:53.301 00:43:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:53.301 00:43:25 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:53.558 /dev/nbd0 00:05:53.559 00:43:26 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:53.559 00:43:26 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:53.559 00:43:26 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:53.559 00:43:26 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:53.559 00:43:26 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:53.559 00:43:26 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:53.559 00:43:26 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:53.559 00:43:26 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:53.559 00:43:26 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:53.559 00:43:26 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:53.559 00:43:26 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:05:53.559 1+0 records in 00:05:53.559 1+0 records out 00:05:53.559 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00017284 s, 23.7 MB/s 00:05:53.559 00:43:26 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:53.559 00:43:26 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:53.559 00:43:26 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:53.559 00:43:26 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:53.559 00:43:26 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:53.559 00:43:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:53.559 00:43:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:53.559 00:43:26 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:53.816 /dev/nbd1 00:05:53.816 00:43:26 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:53.816 00:43:26 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:53.816 00:43:26 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:53.816 00:43:26 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:53.816 00:43:26 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:53.816 00:43:26 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:53.816 00:43:26 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:53.816 00:43:26 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:53.816 00:43:26 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:53.816 00:43:26 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:53.816 00:43:26 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:53.816 1+0 records in 00:05:53.816 1+0 records out 00:05:53.816 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000230342 s, 17.8 MB/s 00:05:53.816 00:43:26 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:53.816 00:43:26 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:53.816 00:43:26 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:53.816 00:43:26 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:53.816 00:43:26 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:53.816 00:43:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:53.816 00:43:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:53.816 00:43:26 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:53.816 00:43:26 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:53.816 00:43:26 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:54.075 00:43:26 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:05:54.075 { 00:05:54.075 "nbd_device": "/dev/nbd0", 00:05:54.075 "bdev_name": "Malloc0" 00:05:54.075 }, 00:05:54.075 { 00:05:54.075 "nbd_device": "/dev/nbd1", 00:05:54.075 "bdev_name": "Malloc1" 00:05:54.075 } 00:05:54.075 ]' 00:05:54.075 00:43:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:54.075 { 00:05:54.075 "nbd_device": "/dev/nbd0", 00:05:54.075 "bdev_name": "Malloc0" 00:05:54.075 }, 00:05:54.075 { 00:05:54.075 "nbd_device": "/dev/nbd1", 00:05:54.075 "bdev_name": "Malloc1" 00:05:54.075 } 00:05:54.075 ]' 00:05:54.075 00:43:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:54.075 00:43:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:54.075 /dev/nbd1' 00:05:54.075 00:43:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:54.075 /dev/nbd1' 00:05:54.075 00:43:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:54.075 00:43:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:54.075 00:43:26 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:54.075 00:43:26 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:54.075 00:43:26 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:54.075 00:43:26 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:54.075 00:43:26 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:54.075 00:43:26 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:54.075 00:43:26 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:54.075 00:43:26 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:54.075 00:43:26 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:54.075 00:43:26 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:54.075 256+0 records in 00:05:54.075 256+0 records out 00:05:54.075 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00507869 s, 206 MB/s 00:05:54.075 00:43:26 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:54.075 00:43:26 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:54.075 256+0 records in 00:05:54.075 256+0 records out 00:05:54.075 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0255346 s, 41.1 MB/s 00:05:54.075 00:43:26 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:54.075 00:43:26 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:54.075 256+0 records in 00:05:54.075 256+0 records out 00:05:54.075 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0299844 s, 35.0 MB/s 00:05:54.075 00:43:26 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:54.075 00:43:26 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:54.075 00:43:26 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:54.075 00:43:26 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:54.075 00:43:26 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:54.075 00:43:26 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:54.075 00:43:26 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:54.075 00:43:26 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:54.075 00:43:26 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:54.075 00:43:26 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:54.075 00:43:26 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:54.332 00:43:26 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:54.333 00:43:26 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:54.333 00:43:26 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:54.333 00:43:26 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:54.333 00:43:26 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:54.333 00:43:26 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:54.333 00:43:26 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:54.333 00:43:26 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:54.590 00:43:27 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:54.590 00:43:27 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:54.590 00:43:27 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:54.590 00:43:27 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:54.590 00:43:27 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:54.590 00:43:27 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:54.590 00:43:27 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:54.590 00:43:27 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:54.590 00:43:27 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:54.590 00:43:27 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:54.847 00:43:27 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:54.847 00:43:27 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:54.847 00:43:27 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:54.847 00:43:27 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:54.847 00:43:27 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:54.847 00:43:27 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:54.847 00:43:27 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:54.847 00:43:27 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:54.847 00:43:27 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:54.847 00:43:27 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:05:54.847 00:43:27 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:55.105 00:43:27 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:55.105 00:43:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:55.105 00:43:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:55.105 00:43:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:55.105 00:43:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:55.105 00:43:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:55.105 00:43:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:55.105 00:43:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:55.105 00:43:27 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:55.105 00:43:27 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:55.105 00:43:27 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:55.105 00:43:27 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:55.105 00:43:27 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:55.672 00:43:28 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:57.045 [2024-12-06 00:43:29.374065] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:57.045 [2024-12-06 00:43:29.509109] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:57.045 [2024-12-06 00:43:29.509109] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:57.045 [2024-12-06 00:43:29.723413] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:57.045 [2024-12-06 00:43:29.723527] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:58.942 00:43:31 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:58.942 00:43:31 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:58.942 spdk_app_start Round 2 00:05:58.942 00:43:31 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2825954 /var/tmp/spdk-nbd.sock 00:05:58.942 00:43:31 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 2825954 ']' 00:05:58.942 00:43:31 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:58.943 00:43:31 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:58.943 00:43:31 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:58.943 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:05:58.943 00:43:31 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:58.943 00:43:31 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:58.943 00:43:31 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:58.943 00:43:31 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:58.943 00:43:31 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:59.200 Malloc0 00:05:59.200 00:43:31 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:59.458 Malloc1 00:05:59.458 00:43:32 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:59.458 00:43:32 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:59.458 00:43:32 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:59.458 00:43:32 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:59.458 00:43:32 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:59.458 00:43:32 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:59.458 00:43:32 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:59.458 00:43:32 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:59.458 00:43:32 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:59.458 00:43:32 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:59.458 00:43:32 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:59.458 00:43:32 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:59.458 00:43:32 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:59.458 00:43:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:59.458 00:43:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:59.458 00:43:32 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:59.715 /dev/nbd0 00:05:59.715 00:43:32 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:59.715 00:43:32 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:59.715 00:43:32 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:59.715 00:43:32 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:59.715 00:43:32 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:59.715 00:43:32 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:59.715 00:43:32 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:59.715 00:43:32 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:59.715 00:43:32 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:59.715 00:43:32 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:59.715 00:43:32 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:05:59.715 1+0 records in 00:05:59.715 1+0 records out 00:05:59.715 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000282598 s, 14.5 MB/s 00:05:59.972 00:43:32 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:59.972 00:43:32 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:59.972 00:43:32 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:59.972 00:43:32 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:59.972 00:43:32 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:59.972 00:43:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:59.972 00:43:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:59.972 00:43:32 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:00.229 /dev/nbd1 00:06:00.229 00:43:32 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:00.229 00:43:32 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:00.229 00:43:32 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:00.229 00:43:32 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:00.229 00:43:32 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:00.229 00:43:32 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:00.229 00:43:32 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:00.229 00:43:32 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:00.229 00:43:32 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:00.229 00:43:32 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:00.229 00:43:32 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:00.229 1+0 records in 00:06:00.229 1+0 records out 00:06:00.229 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000240672 s, 17.0 MB/s 00:06:00.229 00:43:32 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:00.229 00:43:32 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:00.229 00:43:32 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:00.229 00:43:32 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:00.229 00:43:32 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:00.229 00:43:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:00.229 00:43:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:00.229 00:43:32 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:00.229 00:43:32 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:00.229 00:43:32 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:00.486 00:43:33 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:06:00.486 { 00:06:00.486 "nbd_device": "/dev/nbd0", 00:06:00.486 "bdev_name": "Malloc0" 00:06:00.486 }, 00:06:00.486 { 00:06:00.486 "nbd_device": "/dev/nbd1", 00:06:00.486 "bdev_name": "Malloc1" 00:06:00.486 } 00:06:00.486 ]' 00:06:00.486 00:43:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:00.486 { 00:06:00.486 "nbd_device": "/dev/nbd0", 00:06:00.486 "bdev_name": "Malloc0" 00:06:00.486 }, 00:06:00.486 { 00:06:00.486 "nbd_device": "/dev/nbd1", 00:06:00.486 "bdev_name": "Malloc1" 00:06:00.486 } 00:06:00.486 ]' 00:06:00.486 00:43:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:00.486 00:43:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:00.486 /dev/nbd1' 00:06:00.486 00:43:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:00.486 /dev/nbd1' 00:06:00.486 00:43:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:00.486 00:43:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:00.486 00:43:33 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:00.486 00:43:33 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:00.486 00:43:33 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:00.486 00:43:33 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:00.486 00:43:33 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:00.486 00:43:33 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:00.486 00:43:33 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:00.486 00:43:33 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:00.486 00:43:33 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:00.486 00:43:33 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:00.486 256+0 records in 00:06:00.486 256+0 records out 00:06:00.486 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00508595 s, 206 MB/s 00:06:00.486 00:43:33 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:00.486 00:43:33 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:00.486 256+0 records in 00:06:00.486 256+0 records out 00:06:00.486 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0263938 s, 39.7 MB/s 00:06:00.486 00:43:33 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:00.486 00:43:33 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:00.486 256+0 records in 00:06:00.486 256+0 records out 00:06:00.486 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0303252 s, 34.6 MB/s 00:06:00.486 00:43:33 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:00.486 00:43:33 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:00.486 00:43:33 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:00.486 00:43:33 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:00.486 00:43:33 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:00.486 00:43:33 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:00.486 00:43:33 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:00.486 00:43:33 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:00.486 00:43:33 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:00.486 00:43:33 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:00.486 00:43:33 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:00.486 00:43:33 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:00.486 00:43:33 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:00.486 00:43:33 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:00.486 00:43:33 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:00.486 00:43:33 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:00.486 00:43:33 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:00.486 00:43:33 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:00.486 00:43:33 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:01.050 00:43:33 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:01.050 00:43:33 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:01.050 00:43:33 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:01.050 00:43:33 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:01.050 00:43:33 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:01.050 00:43:33 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:01.050 00:43:33 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:01.050 00:43:33 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:01.050 00:43:33 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:01.050 00:43:33 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:01.308 00:43:33 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:01.308 00:43:33 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:01.308 00:43:33 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:01.308 00:43:33 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:01.308 00:43:33 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:01.308 00:43:33 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:01.308 00:43:33 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:01.308 00:43:33 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:01.308 00:43:33 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:01.308 00:43:33 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:06:01.308 00:43:33 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:01.565 00:43:34 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:01.565 00:43:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:01.565 00:43:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:01.565 00:43:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:01.565 00:43:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:01.565 00:43:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:01.565 00:43:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:01.565 00:43:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:01.565 00:43:34 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:01.565 00:43:34 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:01.565 00:43:34 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:01.565 00:43:34 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:01.565 00:43:34 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:02.129 00:43:34 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:03.062 [2024-12-06 00:43:35.769765] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:03.320 [2024-12-06 00:43:35.904014] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:03.320 [2024-12-06 00:43:35.904018] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:03.578 [2024-12-06 00:43:36.115649] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:03.578 [2024-12-06 00:43:36.115739] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:04.952 00:43:37 event.app_repeat -- event/event.sh@38 -- # waitforlisten 2825954 /var/tmp/spdk-nbd.sock 00:06:04.952 00:43:37 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 2825954 ']' 00:06:04.952 00:43:37 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:04.952 00:43:37 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:04.952 00:43:37 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:04.952 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:06:04.952 00:43:37 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:04.952 00:43:37 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:05.210 00:43:37 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:05.210 00:43:37 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:05.210 00:43:37 event.app_repeat -- event/event.sh@39 -- # killprocess 2825954 00:06:05.210 00:43:37 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 2825954 ']' 00:06:05.210 00:43:37 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 2825954 00:06:05.210 00:43:37 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:06:05.210 00:43:37 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:05.210 00:43:37 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2825954 00:06:05.210 00:43:37 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:05.210 00:43:37 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:05.210 00:43:37 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2825954' 00:06:05.210 killing process with pid 2825954 00:06:05.210 00:43:37 event.app_repeat -- common/autotest_common.sh@973 -- # kill 2825954 00:06:05.210 00:43:37 event.app_repeat -- common/autotest_common.sh@978 -- # wait 2825954 00:06:06.584 spdk_app_start is called in Round 0. 00:06:06.584 Shutdown signal received, stop current app iteration 00:06:06.584 Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 24.03.0 reinitialization... 00:06:06.584 spdk_app_start is called in Round 1. 00:06:06.584 Shutdown signal received, stop current app iteration 00:06:06.584 Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 24.03.0 reinitialization... 00:06:06.584 spdk_app_start is called in Round 2. 00:06:06.584 Shutdown signal received, stop current app iteration 00:06:06.584 Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 24.03.0 reinitialization... 00:06:06.584 spdk_app_start is called in Round 3. 
00:06:06.584 Shutdown signal received, stop current app iteration 00:06:06.584 00:43:38 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:06.584 00:43:38 event.app_repeat -- event/event.sh@42 -- # return 0 00:06:06.584 00:06:06.584 real 0m21.296s 00:06:06.584 user 0m45.280s 00:06:06.584 sys 0m3.457s 00:06:06.584 00:43:38 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:06.584 00:43:38 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:06.584 ************************************ 00:06:06.584 END TEST app_repeat 00:06:06.584 ************************************ 00:06:06.584 00:43:38 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:06.584 00:43:38 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:06.584 00:43:38 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:06.584 00:43:38 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:06.584 00:43:38 event -- common/autotest_common.sh@10 -- # set +x 00:06:06.584 ************************************ 00:06:06.584 START TEST cpu_locks 00:06:06.584 ************************************ 00:06:06.584 00:43:38 event.cpu_locks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:06.584 * Looking for test storage... 00:06:06.584 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:06:06.584 00:43:39 event.cpu_locks -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:06.584 00:43:39 event.cpu_locks -- common/autotest_common.sh@1711 -- # lcov --version 00:06:06.584 00:43:39 event.cpu_locks -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:06.584 00:43:39 event.cpu_locks -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:06.584 00:43:39 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:06.584 00:43:39 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:06.584 00:43:39 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:06.584 00:43:39 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:06:06.584 00:43:39 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:06:06.584 00:43:39 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:06:06.584 00:43:39 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:06:06.584 00:43:39 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:06:06.584 00:43:39 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:06:06.584 00:43:39 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:06:06.584 00:43:39 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:06.584 00:43:39 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:06:06.584 00:43:39 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:06:06.584 00:43:39 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:06.584 00:43:39 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:06.584 00:43:39 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:06:06.584 00:43:39 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:06:06.584 00:43:39 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:06.584 00:43:39 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:06:06.584 00:43:39 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:06:06.584 00:43:39 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:06:06.584 00:43:39 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:06:06.584 00:43:39 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:06.584 00:43:39 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:06:06.584 00:43:39 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:06:06.584 00:43:39 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:06.584 00:43:39 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:06.584 00:43:39 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:06:06.584 00:43:39 event.cpu_locks -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:06.584 00:43:39 event.cpu_locks -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:06.584 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:06.584 --rc genhtml_branch_coverage=1 00:06:06.584 --rc genhtml_function_coverage=1 00:06:06.584 --rc genhtml_legend=1 00:06:06.584 --rc geninfo_all_blocks=1 00:06:06.584 --rc geninfo_unexecuted_blocks=1 00:06:06.584 00:06:06.584 ' 00:06:06.584 00:43:39 event.cpu_locks -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:06.584 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:06.584 --rc genhtml_branch_coverage=1 00:06:06.584 --rc genhtml_function_coverage=1 00:06:06.584 --rc genhtml_legend=1 00:06:06.584 --rc geninfo_all_blocks=1 00:06:06.584 --rc geninfo_unexecuted_blocks=1 00:06:06.584 00:06:06.584 ' 00:06:06.584 00:43:39 event.cpu_locks -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:06.584 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:06.584 --rc genhtml_branch_coverage=1 00:06:06.584 --rc genhtml_function_coverage=1 00:06:06.584 --rc genhtml_legend=1 00:06:06.584 --rc geninfo_all_blocks=1 00:06:06.584 --rc geninfo_unexecuted_blocks=1 00:06:06.584 00:06:06.584 ' 00:06:06.584 00:43:39 event.cpu_locks -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:06.584 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:06.584 --rc genhtml_branch_coverage=1 00:06:06.584 --rc genhtml_function_coverage=1 00:06:06.584 --rc genhtml_legend=1 00:06:06.584 --rc geninfo_all_blocks=1 00:06:06.585 --rc geninfo_unexecuted_blocks=1 00:06:06.585 00:06:06.585 ' 00:06:06.585 00:43:39 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:06.585 00:43:39 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:06.585 00:43:39 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:06.585 00:43:39 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:06.585 00:43:39 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:06.585 00:43:39 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:06.585 00:43:39 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:06.585 ************************************ 
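The xtrace above walks through the lcov version gate in scripts/common.sh: the reported lcov version (1.15) is compared field by field against 2, and only because it is older are the --rc lcov_branch_coverage options selected. A condensed, hedged sketch of that comparison logic (helper names follow the trace; this is an illustration, not the upstream source):

    lt() { cmp_versions "$1" '<' "$2"; }

    cmp_versions() {
      local -a ver1 ver2
      local op=$2 v
      IFS=.-: read -ra ver1 <<< "$1"
      IFS=.-: read -ra ver2 <<< "$3"
      local ver1_l=${#ver1[@]} ver2_l=${#ver2[@]}
      for (( v = 0; v < (ver1_l > ver2_l ? ver1_l : ver2_l); v++ )); do
        local a=${ver1[v]:-0} b=${ver2[v]:-0}
        (( a > b )) && { [[ $op == '>' ]]; return; }   # first differing field decides
        (( a < b )) && { [[ $op == '<' ]]; return; }
      done
      [[ $op == *'='* ]]                               # all fields equal
    }

    # as in the trace: lcov 1.15 is older than 2, so the old-style options are used
    if lt "$(lcov --version | awk '{print $NF}')" 2; then
      lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
    fi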
00:06:06.585 START TEST default_locks 00:06:06.585 ************************************ 00:06:06.585 00:43:39 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:06:06.585 00:43:39 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=2828715 00:06:06.585 00:43:39 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:06.585 00:43:39 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 2828715 00:06:06.585 00:43:39 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 2828715 ']' 00:06:06.585 00:43:39 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:06.585 00:43:39 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:06.585 00:43:39 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:06.585 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:06.585 00:43:39 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:06.585 00:43:39 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:06.585 [2024-12-06 00:43:39.247064] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 24.03.0 initialization... 00:06:06.585 [2024-12-06 00:43:39.247228] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2828715 ] 00:06:06.843 [2024-12-06 00:43:39.389666] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:06.843 [2024-12-06 00:43:39.521764] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:08.218 00:43:40 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:08.218 00:43:40 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:06:08.218 00:43:40 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 2828715 00:06:08.218 00:43:40 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 2828715 00:06:08.218 00:43:40 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:08.482 lslocks: write error 00:06:08.482 00:43:40 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 2828715 00:06:08.482 00:43:40 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 2828715 ']' 00:06:08.482 00:43:40 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 2828715 00:06:08.482 00:43:40 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:06:08.482 00:43:40 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:08.482 00:43:40 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2828715 00:06:08.482 00:43:41 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:08.482 00:43:41 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:08.482 00:43:41 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # echo 'killing process with 
pid 2828715' 00:06:08.482 killing process with pid 2828715 00:06:08.482 00:43:41 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 2828715 00:06:08.482 00:43:41 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 2828715 00:06:11.015 00:43:43 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 2828715 00:06:11.015 00:43:43 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:06:11.015 00:43:43 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 2828715 00:06:11.015 00:43:43 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:11.015 00:43:43 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:11.015 00:43:43 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:11.015 00:43:43 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:11.015 00:43:43 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 2828715 00:06:11.015 00:43:43 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 2828715 ']' 00:06:11.015 00:43:43 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:11.015 00:43:43 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:11.015 00:43:43 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:11.015 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:11.015 00:43:43 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:11.015 00:43:43 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:11.015 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (2828715) - No such process 00:06:11.015 ERROR: process (pid: 2828715) is no longer running 00:06:11.015 00:43:43 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:11.015 00:43:43 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:06:11.015 00:43:43 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:06:11.015 00:43:43 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:11.015 00:43:43 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:11.015 00:43:43 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:11.015 00:43:43 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:06:11.015 00:43:43 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:11.015 00:43:43 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:06:11.015 00:43:43 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:11.015 00:06:11.015 real 0m4.331s 00:06:11.015 user 0m4.319s 00:06:11.015 sys 0m0.772s 00:06:11.015 00:43:43 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:11.015 00:43:43 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:11.015 ************************************ 00:06:11.015 END TEST default_locks 00:06:11.015 ************************************ 00:06:11.015 00:43:43 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:11.015 00:43:43 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:11.015 00:43:43 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:11.015 00:43:43 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:11.015 ************************************ 00:06:11.015 START TEST default_locks_via_rpc 00:06:11.015 ************************************ 00:06:11.015 00:43:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:06:11.015 00:43:43 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=2829277 00:06:11.015 00:43:43 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:11.015 00:43:43 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 2829277 00:06:11.015 00:43:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 2829277 ']' 00:06:11.015 00:43:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:11.015 00:43:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:11.015 00:43:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:11.015 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
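default_locks above boils down to one check: while the single-core target is alive, lslocks must list an spdk_cpu_lock entry for its pid, and once killprocess has reaped it, waitforlisten on that pid must fail. A minimal sketch of that flow (binary path shortened and pid handling simplified relative to the logged helpers):

    locks_exist() {
      lslocks -p "$1" | grep -q spdk_cpu_lock   # per-core lock file held by the target
    }

    spdk_tgt -m 0x1 &                           # single-core target claims core 0
    tgt_pid=$!
    waitforlisten "$tgt_pid"                    # wait for /var/tmp/spdk.sock to come up
    locks_exist "$tgt_pid"                      # succeeds while the target runs
    kill "$tgt_pid"; wait "$tgt_pid"            # waitforlisten on the dead pid now fails, which the NOT wrapper asserts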
00:06:11.015 00:43:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:11.015 00:43:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:11.015 [2024-12-06 00:43:43.633410] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 24.03.0 initialization... 00:06:11.015 [2024-12-06 00:43:43.633588] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2829277 ] 00:06:11.274 [2024-12-06 00:43:43.778905] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:11.274 [2024-12-06 00:43:43.915983] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.209 00:43:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:12.209 00:43:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:12.209 00:43:44 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:12.209 00:43:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:12.209 00:43:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:12.209 00:43:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:12.209 00:43:44 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:06:12.209 00:43:44 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:12.209 00:43:44 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:06:12.209 00:43:44 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:12.209 00:43:44 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:12.209 00:43:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:12.209 00:43:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:12.209 00:43:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:12.209 00:43:44 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 2829277 00:06:12.209 00:43:44 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 2829277 00:06:12.209 00:43:44 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:12.467 00:43:45 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 2829277 00:06:12.467 00:43:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 2829277 ']' 00:06:12.467 00:43:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 2829277 00:06:12.467 00:43:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:06:12.467 00:43:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:12.467 00:43:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2829277 00:06:12.467 00:43:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:12.467 
00:43:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:12.467 00:43:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2829277' 00:06:12.467 killing process with pid 2829277 00:06:12.467 00:43:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 2829277 00:06:12.467 00:43:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 2829277 00:06:14.993 00:06:14.993 real 0m4.021s 00:06:14.993 user 0m4.021s 00:06:14.993 sys 0m0.764s 00:06:14.993 00:43:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:14.993 00:43:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:14.993 ************************************ 00:06:14.993 END TEST default_locks_via_rpc 00:06:14.993 ************************************ 00:06:14.993 00:43:47 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:14.993 00:43:47 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:14.993 00:43:47 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:14.993 00:43:47 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:14.993 ************************************ 00:06:14.993 START TEST non_locking_app_on_locked_coremask 00:06:14.993 ************************************ 00:06:14.993 00:43:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:06:14.993 00:43:47 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=2829830 00:06:14.993 00:43:47 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:14.993 00:43:47 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 2829830 /var/tmp/spdk.sock 00:06:14.993 00:43:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 2829830 ']' 00:06:14.993 00:43:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:14.993 00:43:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:14.993 00:43:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:14.993 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:14.993 00:43:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:14.993 00:43:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:15.251 [2024-12-06 00:43:47.705019] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 24.03.0 initialization... 
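default_locks_via_rpc above exercises the same lock but toggles it at runtime over the RPC socket instead of restarting the target: framework_disable_cpumask_locks releases the per-core lock files and framework_enable_cpumask_locks re-acquires them. A sketch of the two calls plus the no_locks check; the compgen-based empty-glob test is an assumption, since the trace only shows the resulting empty lock_files array:

    no_locks() {
      local lock_files
      lock_files=$(compgen -G '/var/tmp/spdk_cpu_lock_*' || true)
      [[ -z $lock_files ]]                          # no per-core lock files may remain
    }

    rpc_cmd framework_disable_cpumask_locks         # lock files released
    no_locks                                        # passes: nothing matches /var/tmp/spdk_cpu_lock_*
    rpc_cmd framework_enable_cpumask_locks          # lock files re-acquired
    lslocks -p "$tgt_pid" | grep -q spdk_cpu_lock   # passes again (tgt_pid: the running spdk_tgt)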
00:06:15.251 [2024-12-06 00:43:47.705200] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2829830 ] 00:06:15.251 [2024-12-06 00:43:47.842082] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:15.509 [2024-12-06 00:43:47.975876] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.444 00:43:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:16.444 00:43:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:16.444 00:43:48 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=2829974 00:06:16.444 00:43:48 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:16.444 00:43:48 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 2829974 /var/tmp/spdk2.sock 00:06:16.444 00:43:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 2829974 ']' 00:06:16.444 00:43:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:16.445 00:43:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:16.445 00:43:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:16.445 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:16.445 00:43:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:16.445 00:43:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:16.445 [2024-12-06 00:43:49.029630] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 24.03.0 initialization... 00:06:16.445 [2024-12-06 00:43:49.029784] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2829974 ] 00:06:16.702 [2024-12-06 00:43:49.237534] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:16.702 [2024-12-06 00:43:49.237618] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:16.961 [2024-12-06 00:43:49.517338] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:19.493 00:43:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:19.493 00:43:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:19.493 00:43:51 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 2829830 00:06:19.493 00:43:51 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2829830 00:06:19.493 00:43:51 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:19.493 lslocks: write error 00:06:19.493 00:43:52 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 2829830 00:06:19.493 00:43:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 2829830 ']' 00:06:19.493 00:43:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 2829830 00:06:19.493 00:43:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:19.493 00:43:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:19.493 00:43:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2829830 00:06:19.493 00:43:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:19.493 00:43:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:19.493 00:43:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2829830' 00:06:19.493 killing process with pid 2829830 00:06:19.493 00:43:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 2829830 00:06:19.493 00:43:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 2829830 00:06:24.754 00:43:56 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 2829974 00:06:24.754 00:43:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 2829974 ']' 00:06:24.754 00:43:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 2829974 00:06:24.754 00:43:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:24.754 00:43:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:24.754 00:43:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2829974 00:06:24.754 00:43:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:24.754 00:43:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:24.754 00:43:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2829974' 00:06:24.754 
killing process with pid 2829974 00:06:24.754 00:43:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 2829974 00:06:24.754 00:43:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 2829974 00:06:27.284 00:06:27.284 real 0m11.847s 00:06:27.284 user 0m12.173s 00:06:27.284 sys 0m1.471s 00:06:27.284 00:43:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:27.284 00:43:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:27.284 ************************************ 00:06:27.284 END TEST non_locking_app_on_locked_coremask 00:06:27.284 ************************************ 00:06:27.284 00:43:59 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:27.284 00:43:59 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:27.284 00:43:59 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:27.284 00:43:59 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:27.284 ************************************ 00:06:27.284 START TEST locking_app_on_unlocked_coremask 00:06:27.284 ************************************ 00:06:27.284 00:43:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:06:27.284 00:43:59 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=2831210 00:06:27.284 00:43:59 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:27.284 00:43:59 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 2831210 /var/tmp/spdk.sock 00:06:27.284 00:43:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 2831210 ']' 00:06:27.284 00:43:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:27.284 00:43:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:27.284 00:43:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:27.284 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:27.284 00:43:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:27.284 00:43:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:27.284 [2024-12-06 00:43:59.603435] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 24.03.0 initialization... 00:06:27.284 [2024-12-06 00:43:59.603612] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2831210 ] 00:06:27.284 [2024-12-06 00:43:59.738826] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
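non_locking_app_on_locked_coremask above shows the opt-out path: the second spdk_tgt is started with --disable-cpumask-locks and its own RPC socket, so it logs "CPU core locks deactivated." and coexists with the lock-holding first instance on core 0. Condensed sketch (paths shortened):

    spdk_tgt -m 0x1 &                                                 # first instance claims core 0
    pid1=$!
    waitforlisten "$pid1"
    spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &  # second instance skips the lock
    pid2=$!
    waitforlisten "$pid2" /var/tmp/spdk2.sock
    locks_exist "$pid1"                                               # only the first pid owns the core 0 lock file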
00:06:27.284 [2024-12-06 00:43:59.738911] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:27.284 [2024-12-06 00:43:59.868306] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.250 00:44:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:28.250 00:44:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:28.250 00:44:00 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=2831457 00:06:28.250 00:44:00 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:28.250 00:44:00 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 2831457 /var/tmp/spdk2.sock 00:06:28.250 00:44:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 2831457 ']' 00:06:28.250 00:44:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:28.250 00:44:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:28.250 00:44:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:28.250 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:28.250 00:44:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:28.250 00:44:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:28.250 [2024-12-06 00:44:00.902028] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 24.03.0 initialization... 
00:06:28.250 [2024-12-06 00:44:00.902198] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2831457 ] 00:06:28.557 [2024-12-06 00:44:01.112114] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:28.844 [2024-12-06 00:44:01.390871] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.374 00:44:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:31.374 00:44:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:31.374 00:44:03 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 2831457 00:06:31.374 00:44:03 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2831457 00:06:31.374 00:44:03 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:31.374 lslocks: write error 00:06:31.374 00:44:03 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 2831210 00:06:31.374 00:44:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 2831210 ']' 00:06:31.374 00:44:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 2831210 00:06:31.374 00:44:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:31.374 00:44:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:31.374 00:44:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2831210 00:06:31.374 00:44:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:31.374 00:44:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:31.374 00:44:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2831210' 00:06:31.374 killing process with pid 2831210 00:06:31.374 00:44:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 2831210 00:06:31.374 00:44:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 2831210 00:06:36.643 00:44:08 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 2831457 00:06:36.643 00:44:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 2831457 ']' 00:06:36.643 00:44:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 2831457 00:06:36.643 00:44:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:36.643 00:44:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:36.643 00:44:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2831457 00:06:36.643 00:44:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:36.644 00:44:08 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:36.644 00:44:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2831457' 00:06:36.644 killing process with pid 2831457 00:06:36.644 00:44:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 2831457 00:06:36.644 00:44:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 2831457 00:06:39.183 00:06:39.183 real 0m11.880s 00:06:39.183 user 0m12.234s 00:06:39.183 sys 0m1.447s 00:06:39.183 00:44:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:39.183 00:44:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:39.183 ************************************ 00:06:39.183 END TEST locking_app_on_unlocked_coremask 00:06:39.183 ************************************ 00:06:39.183 00:44:11 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:39.183 00:44:11 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:39.183 00:44:11 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:39.183 00:44:11 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:39.183 ************************************ 00:06:39.183 START TEST locking_app_on_locked_coremask 00:06:39.183 ************************************ 00:06:39.183 00:44:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:06:39.183 00:44:11 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=2832711 00:06:39.183 00:44:11 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:39.183 00:44:11 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 2832711 /var/tmp/spdk.sock 00:06:39.183 00:44:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 2832711 ']' 00:06:39.183 00:44:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:39.183 00:44:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:39.183 00:44:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:39.183 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:39.183 00:44:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:39.183 00:44:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:39.183 [2024-12-06 00:44:11.533983] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 24.03.0 initialization... 
00:06:39.183 [2024-12-06 00:44:11.534120] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2832711 ] 00:06:39.183 [2024-12-06 00:44:11.678347] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:39.183 [2024-12-06 00:44:11.818224] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.120 00:44:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:40.120 00:44:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:40.120 00:44:12 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=2832849 00:06:40.120 00:44:12 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:40.120 00:44:12 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 2832849 /var/tmp/spdk2.sock 00:06:40.120 00:44:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:06:40.120 00:44:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 2832849 /var/tmp/spdk2.sock 00:06:40.120 00:44:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:40.120 00:44:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:40.120 00:44:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:40.120 00:44:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:40.120 00:44:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 2832849 /var/tmp/spdk2.sock 00:06:40.120 00:44:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 2832849 ']' 00:06:40.120 00:44:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:40.120 00:44:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:40.120 00:44:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:40.120 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:40.120 00:44:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:40.120 00:44:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:40.377 [2024-12-06 00:44:12.884234] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 24.03.0 initialization... 
00:06:40.377 [2024-12-06 00:44:12.884368] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2832849 ] 00:06:40.637 [2024-12-06 00:44:13.097477] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 2832711 has claimed it. 00:06:40.637 [2024-12-06 00:44:13.097566] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:40.896 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (2832849) - No such process 00:06:40.896 ERROR: process (pid: 2832849) is no longer running 00:06:40.896 00:44:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:40.896 00:44:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:06:40.896 00:44:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:06:40.896 00:44:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:40.896 00:44:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:40.896 00:44:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:40.896 00:44:13 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 2832711 00:06:40.896 00:44:13 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2832711 00:06:40.896 00:44:13 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:41.464 lslocks: write error 00:06:41.464 00:44:13 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 2832711 00:06:41.464 00:44:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 2832711 ']' 00:06:41.464 00:44:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 2832711 00:06:41.464 00:44:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:41.464 00:44:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:41.464 00:44:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2832711 00:06:41.464 00:44:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:41.464 00:44:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:41.464 00:44:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2832711' 00:06:41.464 killing process with pid 2832711 00:06:41.465 00:44:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 2832711 00:06:41.465 00:44:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 2832711 00:06:44.000 00:06:44.000 real 0m5.096s 00:06:44.000 user 0m5.308s 00:06:44.000 sys 0m0.972s 00:06:44.000 00:44:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 
00:06:44.000 00:44:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:44.000 ************************************ 00:06:44.000 END TEST locking_app_on_locked_coremask 00:06:44.000 ************************************ 00:06:44.000 00:44:16 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:44.000 00:44:16 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:44.000 00:44:16 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:44.000 00:44:16 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:44.000 ************************************ 00:06:44.000 START TEST locking_overlapped_coremask 00:06:44.000 ************************************ 00:06:44.000 00:44:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:06:44.000 00:44:16 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=2833312 00:06:44.000 00:44:16 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:06:44.000 00:44:16 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 2833312 /var/tmp/spdk.sock 00:06:44.000 00:44:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 2833312 ']' 00:06:44.000 00:44:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:44.000 00:44:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:44.000 00:44:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:44.000 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:44.000 00:44:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:44.000 00:44:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:44.000 [2024-12-06 00:44:16.682000] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 24.03.0 initialization... 
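locking_app_on_locked_coremask above is the negative counterpart: with locking left on for both instances, the second spdk_tgt on the same mask aborts with "Cannot create lock on core 0, probably process 2832711 has claimed it", so its waitforlisten is asserted through NOT. Sketch of the expected-failure leg (paths shortened):

    spdk_tgt -m 0x1 &                               # holds the core 0 lock
    pid1=$!
    waitforlisten "$pid1"
    spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock &        # exits: core 0 already claimed
    pid2=$!
    NOT waitforlisten "$pid2" /var/tmp/spdk2.sock   # passes only because the second target never listens
    locks_exist "$pid1"                             # the first instance still owns the lock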
00:06:44.000 [2024-12-06 00:44:16.682177] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2833312 ] 00:06:44.260 [2024-12-06 00:44:16.818679] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:44.260 [2024-12-06 00:44:16.954056] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:44.260 [2024-12-06 00:44:16.954119] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.260 [2024-12-06 00:44:16.954124] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:45.194 00:44:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:45.194 00:44:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:45.194 00:44:17 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=2833542 00:06:45.194 00:44:17 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 2833542 /var/tmp/spdk2.sock 00:06:45.194 00:44:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:06:45.194 00:44:17 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:45.194 00:44:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 2833542 /var/tmp/spdk2.sock 00:06:45.194 00:44:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:45.194 00:44:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:45.194 00:44:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:45.194 00:44:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:45.194 00:44:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 2833542 /var/tmp/spdk2.sock 00:06:45.194 00:44:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 2833542 ']' 00:06:45.194 00:44:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:45.194 00:44:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:45.194 00:44:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:45.194 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:45.194 00:44:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:45.194 00:44:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:45.454 [2024-12-06 00:44:17.913663] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 24.03.0 initialization... 
00:06:45.454 [2024-12-06 00:44:17.913794] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2833542 ] 00:06:45.454 [2024-12-06 00:44:18.109233] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 2833312 has claimed it. 00:06:45.454 [2024-12-06 00:44:18.109344] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:46.023 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (2833542) - No such process 00:06:46.023 ERROR: process (pid: 2833542) is no longer running 00:06:46.023 00:44:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:46.023 00:44:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:06:46.023 00:44:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:06:46.023 00:44:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:46.023 00:44:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:46.023 00:44:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:46.023 00:44:18 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:46.023 00:44:18 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:46.024 00:44:18 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:46.024 00:44:18 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:46.024 00:44:18 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 2833312 00:06:46.024 00:44:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 2833312 ']' 00:06:46.024 00:44:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 2833312 00:06:46.024 00:44:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:06:46.024 00:44:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:46.024 00:44:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2833312 00:06:46.024 00:44:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:46.024 00:44:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:46.024 00:44:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2833312' 00:06:46.024 killing process with pid 2833312 00:06:46.024 00:44:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 2833312 00:06:46.024 00:44:18 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 2833312 00:06:48.565 00:06:48.565 real 0m4.256s 00:06:48.565 user 0m11.601s 00:06:48.565 sys 0m0.773s 00:06:48.565 00:44:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:48.565 00:44:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:48.565 ************************************ 00:06:48.565 END TEST locking_overlapped_coremask 00:06:48.565 ************************************ 00:06:48.565 00:44:20 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:48.565 00:44:20 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:48.565 00:44:20 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:48.565 00:44:20 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:48.565 ************************************ 00:06:48.565 START TEST locking_overlapped_coremask_via_rpc 00:06:48.565 ************************************ 00:06:48.565 00:44:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:06:48.565 00:44:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=2833850 00:06:48.565 00:44:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:48.565 00:44:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 2833850 /var/tmp/spdk.sock 00:06:48.565 00:44:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 2833850 ']' 00:06:48.565 00:44:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:48.565 00:44:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:48.565 00:44:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:48.565 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:48.565 00:44:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:48.565 00:44:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:48.565 [2024-12-06 00:44:20.991493] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 24.03.0 initialization... 00:06:48.565 [2024-12-06 00:44:20.991630] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2833850 ] 00:06:48.565 [2024-12-06 00:44:21.138861] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
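For the overlapped-mask case above, the trace verifies not only that the -m 0x1c target is rejected ("Cannot create lock on core 2, probably process 2833312 has claimed it") but also that exactly the lock files for the -m 0x7 target's cores 0-2 remain. The comparison in check_remaining_locks, as shown in the trace, reduces to:

    check_remaining_locks() {
      local locks=(/var/tmp/spdk_cpu_lock_*)                       # whatever lock files exist now
      local locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})     # cores 0-2 from -m 0x7
      [[ "${locks[*]}" == "${locks_expected[*]}" ]]                # must match exactly, no extra or missing cores
    }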
00:06:48.565 [2024-12-06 00:44:21.138922] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:48.825 [2024-12-06 00:44:21.285921] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:48.825 [2024-12-06 00:44:21.285970] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.825 [2024-12-06 00:44:21.285979] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:49.765 00:44:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:49.765 00:44:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:49.765 00:44:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=2834074 00:06:49.765 00:44:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 2834074 /var/tmp/spdk2.sock 00:06:49.765 00:44:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:49.765 00:44:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 2834074 ']' 00:06:49.765 00:44:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:49.765 00:44:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:49.765 00:44:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:49.765 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:49.765 00:44:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:49.765 00:44:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:49.765 [2024-12-06 00:44:22.359547] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 24.03.0 initialization... 00:06:49.765 [2024-12-06 00:44:22.359675] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2834074 ] 00:06:50.026 [2024-12-06 00:44:22.573477] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:50.026 [2024-12-06 00:44:22.573550] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:50.286 [2024-12-06 00:44:22.863977] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:50.286 [2024-12-06 00:44:22.867880] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:50.286 [2024-12-06 00:44:22.867891] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:06:52.826 00:44:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:52.826 00:44:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:52.826 00:44:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:52.826 00:44:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:52.826 00:44:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:52.826 00:44:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:52.826 00:44:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:52.826 00:44:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:06:52.826 00:44:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:52.826 00:44:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:06:52.826 00:44:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:52.826 00:44:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:06:52.826 00:44:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:52.826 00:44:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:52.826 00:44:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:52.826 00:44:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:52.826 [2024-12-06 00:44:25.077989] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 2833850 has claimed it. 
00:06:52.826 request: 00:06:52.826 { 00:06:52.826 "method": "framework_enable_cpumask_locks", 00:06:52.826 "req_id": 1 00:06:52.826 } 00:06:52.826 Got JSON-RPC error response 00:06:52.826 response: 00:06:52.826 { 00:06:52.826 "code": -32603, 00:06:52.826 "message": "Failed to claim CPU core: 2" 00:06:52.826 } 00:06:52.826 00:44:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:06:52.826 00:44:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:06:52.826 00:44:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:52.826 00:44:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:52.826 00:44:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:52.826 00:44:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 2833850 /var/tmp/spdk.sock 00:06:52.826 00:44:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 2833850 ']' 00:06:52.826 00:44:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:52.826 00:44:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:52.826 00:44:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:52.826 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:52.826 00:44:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:52.826 00:44:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:52.826 00:44:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:52.826 00:44:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:52.826 00:44:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 2834074 /var/tmp/spdk2.sock 00:06:52.826 00:44:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 2834074 ']' 00:06:52.826 00:44:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:52.826 00:44:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:52.826 00:44:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:52.826 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
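(Aside, not part of the captured output: the failure above is the expected outcome of this test case. The second spdk_tgt was started with -m 0x1c while the first instance holds -m 0x7, so the two masks overlap on core 2; when the framework_enable_cpumask_locks RPC asks the second instance to claim its per-core lock files, /var/tmp/spdk_cpu_lock_002 is already held by pid 2833850, and the call returns the -32603 "Failed to claim CPU core: 2" response shown above. A minimal sketch of issuing the same RPC by hand, assuming it is run from the SPDK repo root against the second instance's socket while the first target is still running:

  # mirrors the rpc_cmd invocation traced above; expected to fail with "Failed to claim CPU core: 2"
  ./scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks \
      || echo 'expected: core lock already claimed by the other target'
)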
00:06:52.826 00:44:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:52.826 00:44:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:53.084 00:44:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:53.084 00:44:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:53.084 00:44:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:53.084 00:44:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:53.084 00:44:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:53.084 00:44:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:53.084 00:06:53.084 real 0m4.755s 00:06:53.084 user 0m1.607s 00:06:53.084 sys 0m0.247s 00:06:53.084 00:44:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:53.084 00:44:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:53.084 ************************************ 00:06:53.084 END TEST locking_overlapped_coremask_via_rpc 00:06:53.084 ************************************ 00:06:53.084 00:44:25 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:53.084 00:44:25 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 2833850 ]] 00:06:53.084 00:44:25 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 2833850 00:06:53.084 00:44:25 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 2833850 ']' 00:06:53.084 00:44:25 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 2833850 00:06:53.084 00:44:25 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:06:53.084 00:44:25 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:53.084 00:44:25 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2833850 00:06:53.084 00:44:25 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:53.084 00:44:25 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:53.084 00:44:25 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2833850' 00:06:53.084 killing process with pid 2833850 00:06:53.084 00:44:25 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 2833850 00:06:53.084 00:44:25 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 2833850 00:06:55.620 00:44:27 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 2834074 ]] 00:06:55.620 00:44:27 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 2834074 00:06:55.620 00:44:27 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 2834074 ']' 00:06:55.620 00:44:27 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 2834074 00:06:55.620 00:44:27 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:06:55.620 00:44:27 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' 
Linux = Linux ']' 00:06:55.620 00:44:27 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2834074 00:06:55.620 00:44:28 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:06:55.620 00:44:28 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:06:55.620 00:44:28 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2834074' 00:06:55.620 killing process with pid 2834074 00:06:55.620 00:44:28 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 2834074 00:06:55.620 00:44:28 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 2834074 00:06:57.530 00:44:30 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:57.530 00:44:30 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:57.530 00:44:30 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 2833850 ]] 00:06:57.530 00:44:30 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 2833850 00:06:57.530 00:44:30 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 2833850 ']' 00:06:57.530 00:44:30 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 2833850 00:06:57.530 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (2833850) - No such process 00:06:57.530 00:44:30 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 2833850 is not found' 00:06:57.530 Process with pid 2833850 is not found 00:06:57.530 00:44:30 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 2834074 ]] 00:06:57.530 00:44:30 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 2834074 00:06:57.530 00:44:30 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 2834074 ']' 00:06:57.530 00:44:30 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 2834074 00:06:57.530 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (2834074) - No such process 00:06:57.530 00:44:30 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 2834074 is not found' 00:06:57.530 Process with pid 2834074 is not found 00:06:57.530 00:44:30 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:57.530 00:06:57.530 real 0m51.198s 00:06:57.530 user 1m26.996s 00:06:57.530 sys 0m7.726s 00:06:57.530 00:44:30 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:57.530 00:44:30 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:57.530 ************************************ 00:06:57.530 END TEST cpu_locks 00:06:57.530 ************************************ 00:06:57.530 00:06:57.530 real 1m20.821s 00:06:57.530 user 2m25.159s 00:06:57.530 sys 0m12.416s 00:06:57.530 00:44:30 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:57.530 00:44:30 event -- common/autotest_common.sh@10 -- # set +x 00:06:57.530 ************************************ 00:06:57.530 END TEST event 00:06:57.530 ************************************ 00:06:57.530 00:44:30 -- spdk/autotest.sh@169 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:57.530 00:44:30 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:57.530 00:44:30 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:57.530 00:44:30 -- common/autotest_common.sh@10 -- # set +x 00:06:57.788 ************************************ 00:06:57.788 START TEST thread 00:06:57.788 ************************************ 00:06:57.788 00:44:30 thread -- 
common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:57.788 * Looking for test storage... 00:06:57.788 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:06:57.788 00:44:30 thread -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:57.788 00:44:30 thread -- common/autotest_common.sh@1711 -- # lcov --version 00:06:57.788 00:44:30 thread -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:57.788 00:44:30 thread -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:57.788 00:44:30 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:57.788 00:44:30 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:57.788 00:44:30 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:57.788 00:44:30 thread -- scripts/common.sh@336 -- # IFS=.-: 00:06:57.788 00:44:30 thread -- scripts/common.sh@336 -- # read -ra ver1 00:06:57.788 00:44:30 thread -- scripts/common.sh@337 -- # IFS=.-: 00:06:57.788 00:44:30 thread -- scripts/common.sh@337 -- # read -ra ver2 00:06:57.788 00:44:30 thread -- scripts/common.sh@338 -- # local 'op=<' 00:06:57.788 00:44:30 thread -- scripts/common.sh@340 -- # ver1_l=2 00:06:57.788 00:44:30 thread -- scripts/common.sh@341 -- # ver2_l=1 00:06:57.788 00:44:30 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:57.788 00:44:30 thread -- scripts/common.sh@344 -- # case "$op" in 00:06:57.788 00:44:30 thread -- scripts/common.sh@345 -- # : 1 00:06:57.788 00:44:30 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:57.788 00:44:30 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:57.788 00:44:30 thread -- scripts/common.sh@365 -- # decimal 1 00:06:57.788 00:44:30 thread -- scripts/common.sh@353 -- # local d=1 00:06:57.788 00:44:30 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:57.788 00:44:30 thread -- scripts/common.sh@355 -- # echo 1 00:06:57.788 00:44:30 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:06:57.788 00:44:30 thread -- scripts/common.sh@366 -- # decimal 2 00:06:57.788 00:44:30 thread -- scripts/common.sh@353 -- # local d=2 00:06:57.788 00:44:30 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:57.788 00:44:30 thread -- scripts/common.sh@355 -- # echo 2 00:06:57.788 00:44:30 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:06:57.788 00:44:30 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:57.788 00:44:30 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:57.788 00:44:30 thread -- scripts/common.sh@368 -- # return 0 00:06:57.788 00:44:30 thread -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:57.788 00:44:30 thread -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:57.788 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:57.788 --rc genhtml_branch_coverage=1 00:06:57.788 --rc genhtml_function_coverage=1 00:06:57.788 --rc genhtml_legend=1 00:06:57.788 --rc geninfo_all_blocks=1 00:06:57.788 --rc geninfo_unexecuted_blocks=1 00:06:57.788 00:06:57.788 ' 00:06:57.788 00:44:30 thread -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:57.788 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:57.788 --rc genhtml_branch_coverage=1 00:06:57.788 --rc genhtml_function_coverage=1 00:06:57.788 --rc genhtml_legend=1 00:06:57.788 --rc geninfo_all_blocks=1 00:06:57.788 --rc geninfo_unexecuted_blocks=1 00:06:57.788 
00:06:57.788 ' 00:06:57.788 00:44:30 thread -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:57.788 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:57.788 --rc genhtml_branch_coverage=1 00:06:57.788 --rc genhtml_function_coverage=1 00:06:57.788 --rc genhtml_legend=1 00:06:57.788 --rc geninfo_all_blocks=1 00:06:57.788 --rc geninfo_unexecuted_blocks=1 00:06:57.788 00:06:57.788 ' 00:06:57.788 00:44:30 thread -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:57.788 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:57.788 --rc genhtml_branch_coverage=1 00:06:57.788 --rc genhtml_function_coverage=1 00:06:57.788 --rc genhtml_legend=1 00:06:57.788 --rc geninfo_all_blocks=1 00:06:57.788 --rc geninfo_unexecuted_blocks=1 00:06:57.788 00:06:57.788 ' 00:06:57.788 00:44:30 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:57.788 00:44:30 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:06:57.788 00:44:30 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:57.788 00:44:30 thread -- common/autotest_common.sh@10 -- # set +x 00:06:57.788 ************************************ 00:06:57.788 START TEST thread_poller_perf 00:06:57.788 ************************************ 00:06:57.788 00:44:30 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:57.788 [2024-12-06 00:44:30.435137] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 24.03.0 initialization... 00:06:57.788 [2024-12-06 00:44:30.435264] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2835155 ] 00:06:58.048 [2024-12-06 00:44:30.581724] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:58.048 [2024-12-06 00:44:30.719320] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.048 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:06:59.427 [2024-12-05T23:44:32.136Z] ====================================== 00:06:59.427 [2024-12-05T23:44:32.136Z] busy:2717038823 (cyc) 00:06:59.427 [2024-12-05T23:44:32.136Z] total_run_count: 286000 00:06:59.427 [2024-12-05T23:44:32.136Z] tsc_hz: 2700000000 (cyc) 00:06:59.427 [2024-12-05T23:44:32.136Z] ====================================== 00:06:59.427 [2024-12-05T23:44:32.136Z] poller_cost: 9500 (cyc), 3518 (nsec) 00:06:59.427 00:06:59.427 real 0m1.582s 00:06:59.427 user 0m1.430s 00:06:59.427 sys 0m0.144s 00:06:59.427 00:44:31 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:59.427 00:44:31 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:59.427 ************************************ 00:06:59.427 END TEST thread_poller_perf 00:06:59.427 ************************************ 00:06:59.427 00:44:32 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:59.427 00:44:32 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:06:59.427 00:44:32 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:59.427 00:44:32 thread -- common/autotest_common.sh@10 -- # set +x 00:06:59.427 ************************************ 00:06:59.427 START TEST thread_poller_perf 00:06:59.427 ************************************ 00:06:59.427 00:44:32 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:59.427 [2024-12-06 00:44:32.074165] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 24.03.0 initialization... 00:06:59.427 [2024-12-06 00:44:32.074280] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2835315 ] 00:06:59.686 [2024-12-06 00:44:32.215904] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:59.686 [2024-12-06 00:44:32.353931] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:59.686 Running 1000 pollers for 1 seconds with 0 microseconds period. 
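(Aside, not part of the captured output: the poller_cost figures in these summaries are consistent with dividing the reported busy TSC cycles by total_run_count and converting through tsc_hz. A minimal sketch of the arithmetic for the 1-microsecond-period run above, using only the values it printed:

  # reported counters from the summary above
  busy_cyc=2717038823; runs=286000; tsc_hz=2700000000
  cost_cyc=$(( busy_cyc / runs ))                      # 9500 cycles per poller invocation
  cost_nsec=$(( cost_cyc * 1000000000 / tsc_hz ))      # 3518 ns at 2.7 GHz
  echo "poller_cost: ${cost_cyc} (cyc), ${cost_nsec} (nsec)"

The 0-microsecond-period run reported next follows the same arithmetic: 2705246136 / 3363000 gives roughly 804 cycles, about 297 ns.)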
00:07:01.067 [2024-12-05T23:44:33.776Z] ====================================== 00:07:01.067 [2024-12-05T23:44:33.776Z] busy:2705246136 (cyc) 00:07:01.067 [2024-12-05T23:44:33.776Z] total_run_count: 3363000 00:07:01.067 [2024-12-05T23:44:33.776Z] tsc_hz: 2700000000 (cyc) 00:07:01.067 [2024-12-05T23:44:33.776Z] ====================================== 00:07:01.067 [2024-12-05T23:44:33.776Z] poller_cost: 804 (cyc), 297 (nsec) 00:07:01.067 00:07:01.067 real 0m1.577s 00:07:01.067 user 0m1.425s 00:07:01.067 sys 0m0.143s 00:07:01.067 00:44:33 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:01.067 00:44:33 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:01.067 ************************************ 00:07:01.067 END TEST thread_poller_perf 00:07:01.067 ************************************ 00:07:01.067 00:44:33 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:07:01.067 00:07:01.067 real 0m3.399s 00:07:01.067 user 0m2.988s 00:07:01.067 sys 0m0.408s 00:07:01.067 00:44:33 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:01.067 00:44:33 thread -- common/autotest_common.sh@10 -- # set +x 00:07:01.067 ************************************ 00:07:01.067 END TEST thread 00:07:01.067 ************************************ 00:07:01.067 00:44:33 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:07:01.067 00:44:33 -- spdk/autotest.sh@176 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:07:01.067 00:44:33 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:01.067 00:44:33 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:01.067 00:44:33 -- common/autotest_common.sh@10 -- # set +x 00:07:01.067 ************************************ 00:07:01.067 START TEST app_cmdline 00:07:01.067 ************************************ 00:07:01.067 00:44:33 app_cmdline -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:07:01.067 * Looking for test storage... 
00:07:01.067 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:01.067 00:44:33 app_cmdline -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:01.067 00:44:33 app_cmdline -- common/autotest_common.sh@1711 -- # lcov --version 00:07:01.067 00:44:33 app_cmdline -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:01.328 00:44:33 app_cmdline -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:01.328 00:44:33 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:01.328 00:44:33 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:01.328 00:44:33 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:01.328 00:44:33 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:07:01.328 00:44:33 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:07:01.328 00:44:33 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:07:01.328 00:44:33 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:07:01.328 00:44:33 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:07:01.328 00:44:33 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:07:01.328 00:44:33 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:07:01.328 00:44:33 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:01.328 00:44:33 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:07:01.328 00:44:33 app_cmdline -- scripts/common.sh@345 -- # : 1 00:07:01.328 00:44:33 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:01.328 00:44:33 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:01.328 00:44:33 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:07:01.328 00:44:33 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:07:01.328 00:44:33 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:01.328 00:44:33 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:07:01.328 00:44:33 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:07:01.328 00:44:33 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:07:01.328 00:44:33 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:07:01.328 00:44:33 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:01.328 00:44:33 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:07:01.328 00:44:33 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:07:01.328 00:44:33 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:01.328 00:44:33 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:01.328 00:44:33 app_cmdline -- scripts/common.sh@368 -- # return 0 00:07:01.328 00:44:33 app_cmdline -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:01.328 00:44:33 app_cmdline -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:01.328 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:01.328 --rc genhtml_branch_coverage=1 00:07:01.328 --rc genhtml_function_coverage=1 00:07:01.328 --rc genhtml_legend=1 00:07:01.328 --rc geninfo_all_blocks=1 00:07:01.328 --rc geninfo_unexecuted_blocks=1 00:07:01.328 00:07:01.328 ' 00:07:01.328 00:44:33 app_cmdline -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:01.328 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:01.328 --rc genhtml_branch_coverage=1 00:07:01.328 --rc genhtml_function_coverage=1 00:07:01.328 --rc genhtml_legend=1 00:07:01.328 --rc geninfo_all_blocks=1 00:07:01.328 --rc geninfo_unexecuted_blocks=1 
00:07:01.328 00:07:01.328 ' 00:07:01.328 00:44:33 app_cmdline -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:01.328 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:01.328 --rc genhtml_branch_coverage=1 00:07:01.328 --rc genhtml_function_coverage=1 00:07:01.328 --rc genhtml_legend=1 00:07:01.328 --rc geninfo_all_blocks=1 00:07:01.328 --rc geninfo_unexecuted_blocks=1 00:07:01.328 00:07:01.328 ' 00:07:01.328 00:44:33 app_cmdline -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:01.328 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:01.328 --rc genhtml_branch_coverage=1 00:07:01.328 --rc genhtml_function_coverage=1 00:07:01.328 --rc genhtml_legend=1 00:07:01.328 --rc geninfo_all_blocks=1 00:07:01.328 --rc geninfo_unexecuted_blocks=1 00:07:01.328 00:07:01.328 ' 00:07:01.328 00:44:33 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:01.328 00:44:33 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=2835643 00:07:01.328 00:44:33 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:01.328 00:44:33 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 2835643 00:07:01.328 00:44:33 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 2835643 ']' 00:07:01.328 00:44:33 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:01.328 00:44:33 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:01.328 00:44:33 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:01.328 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:01.328 00:44:33 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:01.328 00:44:33 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:01.328 [2024-12-06 00:44:33.927389] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 24.03.0 initialization... 
00:07:01.328 [2024-12-06 00:44:33.927547] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2835643 ] 00:07:01.589 [2024-12-06 00:44:34.066726] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:01.589 [2024-12-06 00:44:34.199974] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.526 00:44:35 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:02.526 00:44:35 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:07:02.526 00:44:35 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:07:02.783 { 00:07:02.783 "version": "SPDK v25.01-pre git sha1 a5e6ecf28", 00:07:02.783 "fields": { 00:07:02.783 "major": 25, 00:07:02.783 "minor": 1, 00:07:02.783 "patch": 0, 00:07:02.783 "suffix": "-pre", 00:07:02.783 "commit": "a5e6ecf28" 00:07:02.783 } 00:07:02.783 } 00:07:02.783 00:44:35 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:07:02.783 00:44:35 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:02.783 00:44:35 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:07:02.783 00:44:35 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:02.783 00:44:35 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:02.783 00:44:35 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:02.783 00:44:35 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:02.783 00:44:35 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:02.783 00:44:35 app_cmdline -- app/cmdline.sh@26 -- # sort 00:07:02.783 00:44:35 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:03.041 00:44:35 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:03.041 00:44:35 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:03.041 00:44:35 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:03.041 00:44:35 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:07:03.041 00:44:35 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:03.041 00:44:35 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:03.041 00:44:35 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:03.041 00:44:35 app_cmdline -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:03.041 00:44:35 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:03.041 00:44:35 app_cmdline -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:03.041 00:44:35 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:03.041 00:44:35 app_cmdline -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:03.041 00:44:35 app_cmdline -- common/autotest_common.sh@646 -- 
# [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:07:03.041 00:44:35 app_cmdline -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:03.299 request: 00:07:03.299 { 00:07:03.299 "method": "env_dpdk_get_mem_stats", 00:07:03.299 "req_id": 1 00:07:03.299 } 00:07:03.299 Got JSON-RPC error response 00:07:03.299 response: 00:07:03.299 { 00:07:03.299 "code": -32601, 00:07:03.299 "message": "Method not found" 00:07:03.299 } 00:07:03.299 00:44:35 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:07:03.299 00:44:35 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:03.299 00:44:35 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:03.299 00:44:35 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:03.299 00:44:35 app_cmdline -- app/cmdline.sh@1 -- # killprocess 2835643 00:07:03.299 00:44:35 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 2835643 ']' 00:07:03.299 00:44:35 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 2835643 00:07:03.299 00:44:35 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:07:03.299 00:44:35 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:03.299 00:44:35 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2835643 00:07:03.299 00:44:35 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:03.299 00:44:35 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:03.299 00:44:35 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2835643' 00:07:03.299 killing process with pid 2835643 00:07:03.299 00:44:35 app_cmdline -- common/autotest_common.sh@973 -- # kill 2835643 00:07:03.299 00:44:35 app_cmdline -- common/autotest_common.sh@978 -- # wait 2835643 00:07:05.833 00:07:05.833 real 0m4.593s 00:07:05.833 user 0m5.104s 00:07:05.833 sys 0m0.734s 00:07:05.833 00:44:38 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:05.833 00:44:38 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:05.833 ************************************ 00:07:05.833 END TEST app_cmdline 00:07:05.833 ************************************ 00:07:05.833 00:44:38 -- spdk/autotest.sh@177 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:07:05.833 00:44:38 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:05.833 00:44:38 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:05.833 00:44:38 -- common/autotest_common.sh@10 -- # set +x 00:07:05.833 ************************************ 00:07:05.833 START TEST version 00:07:05.833 ************************************ 00:07:05.833 00:44:38 version -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:07:05.833 * Looking for test storage... 
00:07:05.833 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:05.833 00:44:38 version -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:05.833 00:44:38 version -- common/autotest_common.sh@1711 -- # lcov --version 00:07:05.833 00:44:38 version -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:05.833 00:44:38 version -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:05.833 00:44:38 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:05.833 00:44:38 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:05.833 00:44:38 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:05.833 00:44:38 version -- scripts/common.sh@336 -- # IFS=.-: 00:07:05.833 00:44:38 version -- scripts/common.sh@336 -- # read -ra ver1 00:07:05.833 00:44:38 version -- scripts/common.sh@337 -- # IFS=.-: 00:07:05.833 00:44:38 version -- scripts/common.sh@337 -- # read -ra ver2 00:07:05.833 00:44:38 version -- scripts/common.sh@338 -- # local 'op=<' 00:07:05.833 00:44:38 version -- scripts/common.sh@340 -- # ver1_l=2 00:07:05.833 00:44:38 version -- scripts/common.sh@341 -- # ver2_l=1 00:07:05.833 00:44:38 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:05.833 00:44:38 version -- scripts/common.sh@344 -- # case "$op" in 00:07:05.833 00:44:38 version -- scripts/common.sh@345 -- # : 1 00:07:05.833 00:44:38 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:05.833 00:44:38 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:05.833 00:44:38 version -- scripts/common.sh@365 -- # decimal 1 00:07:05.833 00:44:38 version -- scripts/common.sh@353 -- # local d=1 00:07:05.833 00:44:38 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:05.833 00:44:38 version -- scripts/common.sh@355 -- # echo 1 00:07:05.833 00:44:38 version -- scripts/common.sh@365 -- # ver1[v]=1 00:07:05.833 00:44:38 version -- scripts/common.sh@366 -- # decimal 2 00:07:05.833 00:44:38 version -- scripts/common.sh@353 -- # local d=2 00:07:05.833 00:44:38 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:05.833 00:44:38 version -- scripts/common.sh@355 -- # echo 2 00:07:05.833 00:44:38 version -- scripts/common.sh@366 -- # ver2[v]=2 00:07:05.833 00:44:38 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:05.833 00:44:38 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:05.833 00:44:38 version -- scripts/common.sh@368 -- # return 0 00:07:05.833 00:44:38 version -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:05.833 00:44:38 version -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:05.833 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:05.833 --rc genhtml_branch_coverage=1 00:07:05.833 --rc genhtml_function_coverage=1 00:07:05.833 --rc genhtml_legend=1 00:07:05.833 --rc geninfo_all_blocks=1 00:07:05.833 --rc geninfo_unexecuted_blocks=1 00:07:05.833 00:07:05.833 ' 00:07:05.833 00:44:38 version -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:05.833 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:05.833 --rc genhtml_branch_coverage=1 00:07:05.833 --rc genhtml_function_coverage=1 00:07:05.833 --rc genhtml_legend=1 00:07:05.833 --rc geninfo_all_blocks=1 00:07:05.833 --rc geninfo_unexecuted_blocks=1 00:07:05.833 00:07:05.833 ' 00:07:05.833 00:44:38 version -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:05.833 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:05.833 --rc genhtml_branch_coverage=1 00:07:05.833 --rc genhtml_function_coverage=1 00:07:05.833 --rc genhtml_legend=1 00:07:05.833 --rc geninfo_all_blocks=1 00:07:05.833 --rc geninfo_unexecuted_blocks=1 00:07:05.833 00:07:05.833 ' 00:07:05.833 00:44:38 version -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:05.833 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:05.833 --rc genhtml_branch_coverage=1 00:07:05.833 --rc genhtml_function_coverage=1 00:07:05.833 --rc genhtml_legend=1 00:07:05.833 --rc geninfo_all_blocks=1 00:07:05.833 --rc geninfo_unexecuted_blocks=1 00:07:05.833 00:07:05.833 ' 00:07:05.833 00:44:38 version -- app/version.sh@17 -- # get_header_version major 00:07:05.833 00:44:38 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:05.833 00:44:38 version -- app/version.sh@14 -- # cut -f2 00:07:05.833 00:44:38 version -- app/version.sh@14 -- # tr -d '"' 00:07:05.833 00:44:38 version -- app/version.sh@17 -- # major=25 00:07:05.833 00:44:38 version -- app/version.sh@18 -- # get_header_version minor 00:07:05.833 00:44:38 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:05.833 00:44:38 version -- app/version.sh@14 -- # cut -f2 00:07:05.833 00:44:38 version -- app/version.sh@14 -- # tr -d '"' 00:07:05.833 00:44:38 version -- app/version.sh@18 -- # minor=1 00:07:05.833 00:44:38 version -- app/version.sh@19 -- # get_header_version patch 00:07:05.833 00:44:38 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:05.833 00:44:38 version -- app/version.sh@14 -- # cut -f2 00:07:05.833 00:44:38 version -- app/version.sh@14 -- # tr -d '"' 00:07:05.833 00:44:38 version -- app/version.sh@19 -- # patch=0 00:07:05.833 00:44:38 version -- app/version.sh@20 -- # get_header_version suffix 00:07:05.833 00:44:38 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:05.833 00:44:38 version -- app/version.sh@14 -- # cut -f2 00:07:05.833 00:44:38 version -- app/version.sh@14 -- # tr -d '"' 00:07:05.833 00:44:38 version -- app/version.sh@20 -- # suffix=-pre 00:07:05.833 00:44:38 version -- app/version.sh@22 -- # version=25.1 00:07:05.833 00:44:38 version -- app/version.sh@25 -- # (( patch != 0 )) 00:07:05.833 00:44:38 version -- app/version.sh@28 -- # version=25.1rc0 00:07:05.833 00:44:38 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:05.833 00:44:38 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:07:05.833 00:44:38 version -- app/version.sh@30 -- # py_version=25.1rc0 00:07:06.092 00:44:38 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:07:06.092 00:07:06.092 real 0m0.211s 00:07:06.092 user 0m0.155s 00:07:06.092 sys 0m0.083s 00:07:06.092 00:44:38 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:06.092 
00:44:38 version -- common/autotest_common.sh@10 -- # set +x 00:07:06.092 ************************************ 00:07:06.092 END TEST version 00:07:06.092 ************************************ 00:07:06.092 00:44:38 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:07:06.092 00:44:38 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:07:06.092 00:44:38 -- spdk/autotest.sh@194 -- # uname -s 00:07:06.092 00:44:38 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:07:06.092 00:44:38 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:07:06.092 00:44:38 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:07:06.092 00:44:38 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:07:06.092 00:44:38 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:07:06.092 00:44:38 -- spdk/autotest.sh@260 -- # timing_exit lib 00:07:06.092 00:44:38 -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:06.092 00:44:38 -- common/autotest_common.sh@10 -- # set +x 00:07:06.092 00:44:38 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:07:06.092 00:44:38 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:07:06.092 00:44:38 -- spdk/autotest.sh@276 -- # '[' 1 -eq 1 ']' 00:07:06.092 00:44:38 -- spdk/autotest.sh@277 -- # export NET_TYPE 00:07:06.092 00:44:38 -- spdk/autotest.sh@280 -- # '[' tcp = rdma ']' 00:07:06.092 00:44:38 -- spdk/autotest.sh@283 -- # '[' tcp = tcp ']' 00:07:06.092 00:44:38 -- spdk/autotest.sh@284 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:06.092 00:44:38 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:06.092 00:44:38 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:06.092 00:44:38 -- common/autotest_common.sh@10 -- # set +x 00:07:06.092 ************************************ 00:07:06.092 START TEST nvmf_tcp 00:07:06.092 ************************************ 00:07:06.092 00:44:38 nvmf_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:06.092 * Looking for test storage... 
00:07:06.092 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:07:06.092 00:44:38 nvmf_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:06.092 00:44:38 nvmf_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:07:06.092 00:44:38 nvmf_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:06.092 00:44:38 nvmf_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:06.092 00:44:38 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:06.092 00:44:38 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:06.092 00:44:38 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:06.092 00:44:38 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:07:06.092 00:44:38 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:07:06.092 00:44:38 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:07:06.092 00:44:38 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:07:06.092 00:44:38 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:07:06.092 00:44:38 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:07:06.092 00:44:38 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:07:06.092 00:44:38 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:06.092 00:44:38 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:07:06.092 00:44:38 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:07:06.092 00:44:38 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:06.092 00:44:38 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:06.092 00:44:38 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:07:06.092 00:44:38 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:07:06.092 00:44:38 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:06.092 00:44:38 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:07:06.092 00:44:38 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:07:06.092 00:44:38 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:07:06.092 00:44:38 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:07:06.092 00:44:38 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:06.092 00:44:38 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:07:06.092 00:44:38 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:07:06.092 00:44:38 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:06.092 00:44:38 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:06.092 00:44:38 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:07:06.092 00:44:38 nvmf_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:06.092 00:44:38 nvmf_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:06.092 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:06.092 --rc genhtml_branch_coverage=1 00:07:06.092 --rc genhtml_function_coverage=1 00:07:06.092 --rc genhtml_legend=1 00:07:06.092 --rc geninfo_all_blocks=1 00:07:06.092 --rc geninfo_unexecuted_blocks=1 00:07:06.092 00:07:06.092 ' 00:07:06.092 00:44:38 nvmf_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:06.093 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:06.093 --rc genhtml_branch_coverage=1 00:07:06.093 --rc genhtml_function_coverage=1 00:07:06.093 --rc genhtml_legend=1 00:07:06.093 --rc geninfo_all_blocks=1 00:07:06.093 --rc geninfo_unexecuted_blocks=1 00:07:06.093 00:07:06.093 ' 00:07:06.093 00:44:38 nvmf_tcp -- common/autotest_common.sh@1725 -- # export 
'LCOV=lcov 00:07:06.093 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:06.093 --rc genhtml_branch_coverage=1 00:07:06.093 --rc genhtml_function_coverage=1 00:07:06.093 --rc genhtml_legend=1 00:07:06.093 --rc geninfo_all_blocks=1 00:07:06.093 --rc geninfo_unexecuted_blocks=1 00:07:06.093 00:07:06.093 ' 00:07:06.093 00:44:38 nvmf_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:06.093 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:06.093 --rc genhtml_branch_coverage=1 00:07:06.093 --rc genhtml_function_coverage=1 00:07:06.093 --rc genhtml_legend=1 00:07:06.093 --rc geninfo_all_blocks=1 00:07:06.093 --rc geninfo_unexecuted_blocks=1 00:07:06.093 00:07:06.093 ' 00:07:06.093 00:44:38 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:07:06.093 00:44:38 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:07:06.093 00:44:38 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:07:06.093 00:44:38 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:06.093 00:44:38 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:06.093 00:44:38 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:06.093 ************************************ 00:07:06.093 START TEST nvmf_target_core 00:07:06.093 ************************************ 00:07:06.093 00:44:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:07:06.351 * Looking for test storage... 00:07:06.351 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:07:06.351 00:44:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:06.351 00:44:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1711 -- # lcov --version 00:07:06.351 00:44:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:06.351 00:44:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:06.351 00:44:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:06.351 00:44:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:06.351 00:44:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:06.351 00:44:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:07:06.351 00:44:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:07:06.351 00:44:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:07:06.351 00:44:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:07:06.351 00:44:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:07:06.351 00:44:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:07:06.352 00:44:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:07:06.352 00:44:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:06.352 00:44:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:07:06.352 00:44:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:07:06.352 00:44:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:06.352 00:44:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:06.352 00:44:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:07:06.352 00:44:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:07:06.352 00:44:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:06.352 00:44:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:07:06.352 00:44:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:07:06.352 00:44:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:07:06.352 00:44:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:07:06.352 00:44:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:06.352 00:44:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:07:06.352 00:44:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:07:06.352 00:44:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:06.352 00:44:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:06.352 00:44:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:07:06.352 00:44:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:06.352 00:44:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:06.352 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:06.352 --rc genhtml_branch_coverage=1 00:07:06.352 --rc genhtml_function_coverage=1 00:07:06.352 --rc genhtml_legend=1 00:07:06.352 --rc geninfo_all_blocks=1 00:07:06.352 --rc geninfo_unexecuted_blocks=1 00:07:06.352 00:07:06.352 ' 00:07:06.352 00:44:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:06.352 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:06.352 --rc genhtml_branch_coverage=1 00:07:06.352 --rc genhtml_function_coverage=1 00:07:06.352 --rc genhtml_legend=1 00:07:06.352 --rc geninfo_all_blocks=1 00:07:06.352 --rc geninfo_unexecuted_blocks=1 00:07:06.352 00:07:06.352 ' 00:07:06.352 00:44:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:06.352 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:06.352 --rc genhtml_branch_coverage=1 00:07:06.352 --rc genhtml_function_coverage=1 00:07:06.352 --rc genhtml_legend=1 00:07:06.352 --rc geninfo_all_blocks=1 00:07:06.352 --rc geninfo_unexecuted_blocks=1 00:07:06.352 00:07:06.352 ' 00:07:06.352 00:44:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:06.352 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:06.352 --rc genhtml_branch_coverage=1 00:07:06.352 --rc genhtml_function_coverage=1 00:07:06.352 --rc genhtml_legend=1 00:07:06.352 --rc geninfo_all_blocks=1 00:07:06.352 --rc geninfo_unexecuted_blocks=1 00:07:06.352 00:07:06.352 ' 00:07:06.352 00:44:38 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:07:06.352 00:44:38 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:07:06.352 00:44:38 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:06.352 00:44:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:07:06.352 00:44:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:06.352 00:44:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:06.352 00:44:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:06.352 00:44:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:06.352 00:44:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:06.352 00:44:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:06.352 00:44:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:06.352 00:44:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:06.352 00:44:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:06.352 00:44:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:06.352 00:44:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:06.352 00:44:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:06.352 00:44:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:06.352 00:44:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:06.352 00:44:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:06.352 00:44:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:06.352 00:44:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:06.352 00:44:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:07:06.352 00:44:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:06.352 00:44:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:06.352 00:44:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:06.352 00:44:38 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:06.352 00:44:38 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:06.352 00:44:38 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:06.352 00:44:38 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:07:06.352 00:44:38 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:06.352 00:44:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:07:06.352 00:44:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:06.352 00:44:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:06.352 00:44:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:06.352 00:44:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:06.352 00:44:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:06.352 00:44:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:06.352 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:06.353 00:44:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:06.353 00:44:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:06.353 00:44:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:06.353 00:44:38 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:07:06.353 00:44:38 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:07:06.353 00:44:38 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:07:06.353 00:44:38 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:07:06.353 00:44:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:06.353 00:44:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:06.353 00:44:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:06.353 
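A minimal sketch of the dispatch pattern visible in the trace above, reconstructed from the run_test call rather than copied from nvmf_target_core.sh itself:

  # abort the whole suite on any signal, then dispatch each target-core sub-test
  trap 'exit 1' SIGINT SIGTERM EXIT
  TEST_ARGS=("$@")                         # this run passes --transport=tcp
  run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh "${TEST_ARGS[@]}"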
************************************ 00:07:06.353 START TEST nvmf_abort 00:07:06.353 ************************************ 00:07:06.353 00:44:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:07:06.353 * Looking for test storage... 00:07:06.353 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:06.353 00:44:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:06.353 00:44:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1711 -- # lcov --version 00:07:06.353 00:44:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:06.612 00:44:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:06.612 00:44:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:06.612 00:44:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:06.612 00:44:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:06.612 00:44:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:07:06.613 00:44:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:07:06.613 00:44:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:07:06.613 00:44:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:07:06.613 00:44:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:07:06.613 00:44:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:07:06.613 00:44:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:07:06.613 00:44:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:06.613 00:44:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:07:06.613 00:44:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:07:06.613 00:44:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:06.613 00:44:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:06.613 00:44:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:07:06.613 00:44:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:07:06.613 00:44:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:06.613 00:44:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:07:06.613 00:44:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:07:06.613 00:44:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:07:06.613 00:44:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:07:06.613 00:44:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:06.613 00:44:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:07:06.613 00:44:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:07:06.613 00:44:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:06.613 00:44:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:06.613 00:44:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:07:06.613 00:44:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:06.613 00:44:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:06.613 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:06.613 --rc genhtml_branch_coverage=1 00:07:06.613 --rc genhtml_function_coverage=1 00:07:06.613 --rc genhtml_legend=1 00:07:06.613 --rc geninfo_all_blocks=1 00:07:06.613 --rc geninfo_unexecuted_blocks=1 00:07:06.613 00:07:06.613 ' 00:07:06.613 00:44:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:06.613 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:06.613 --rc genhtml_branch_coverage=1 00:07:06.613 --rc genhtml_function_coverage=1 00:07:06.613 --rc genhtml_legend=1 00:07:06.613 --rc geninfo_all_blocks=1 00:07:06.613 --rc geninfo_unexecuted_blocks=1 00:07:06.613 00:07:06.613 ' 00:07:06.613 00:44:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:06.613 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:06.613 --rc genhtml_branch_coverage=1 00:07:06.613 --rc genhtml_function_coverage=1 00:07:06.613 --rc genhtml_legend=1 00:07:06.613 --rc geninfo_all_blocks=1 00:07:06.613 --rc geninfo_unexecuted_blocks=1 00:07:06.613 00:07:06.613 ' 00:07:06.613 00:44:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:06.613 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:06.613 --rc genhtml_branch_coverage=1 00:07:06.613 --rc genhtml_function_coverage=1 00:07:06.613 --rc genhtml_legend=1 00:07:06.613 --rc geninfo_all_blocks=1 00:07:06.613 --rc geninfo_unexecuted_blocks=1 00:07:06.613 00:07:06.613 ' 00:07:06.613 00:44:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:06.613 00:44:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:07:06.613 00:44:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:07:06.613 00:44:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:06.613 00:44:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:06.613 00:44:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:06.613 00:44:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:06.613 00:44:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:06.613 00:44:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:06.613 00:44:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:06.613 00:44:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:06.613 00:44:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:06.613 00:44:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:06.613 00:44:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:06.613 00:44:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:06.613 00:44:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:06.613 00:44:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:06.613 00:44:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:06.613 00:44:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:06.613 00:44:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:07:06.613 00:44:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:06.613 00:44:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:06.613 00:44:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:06.613 00:44:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:06.613 00:44:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:06.613 00:44:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:06.613 00:44:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:07:06.613 00:44:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:06.613 00:44:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:07:06.614 00:44:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:06.614 00:44:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:06.614 00:44:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:06.614 00:44:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:06.614 00:44:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:06.614 00:44:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:06.614 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:06.614 00:44:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:06.614 00:44:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:06.614 00:44:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:06.614 00:44:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:06.614 00:44:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:07:06.614 00:44:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 
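The nvmftestinit call above builds the physical-NIC TCP topology that the following trace walks through. A condensed sketch, using the interface names and addresses this particular run selected (cvl_0_0/cvl_0_1 on the two e810 ports; other machines will pick different names):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target-side port moves into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator IP stays on the host
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                     # host -> namespaced target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1       # namespaced target -> host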
00:07:06.614 00:44:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:06.614 00:44:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:06.614 00:44:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:06.614 00:44:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:06.614 00:44:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:06.614 00:44:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:06.614 00:44:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:06.614 00:44:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:06.614 00:44:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:06.614 00:44:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:06.614 00:44:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:07:06.614 00:44:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:08.569 00:44:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:08.569 00:44:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:07:08.569 00:44:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:08.569 00:44:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:08.569 00:44:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:08.569 00:44:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:08.569 00:44:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:08.569 00:44:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:07:08.569 00:44:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:08.569 00:44:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:07:08.569 00:44:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:07:08.569 00:44:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:07:08.569 00:44:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:07:08.569 00:44:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:07:08.569 00:44:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:07:08.569 00:44:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:08.569 00:44:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:08.569 00:44:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:08.569 00:44:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:08.569 00:44:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:08.569 00:44:41 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:08.569 00:44:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:08.569 00:44:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:08.569 00:44:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:08.569 00:44:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:08.569 00:44:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:08.569 00:44:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:08.569 00:44:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:08.569 00:44:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:08.569 00:44:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:08.569 00:44:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:08.569 00:44:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:08.569 00:44:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:08.569 00:44:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:08.569 00:44:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:07:08.569 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:07:08.569 00:44:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:08.569 00:44:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:08.569 00:44:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:08.569 00:44:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:08.569 00:44:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:08.569 00:44:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:08.569 00:44:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:07:08.569 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:07:08.569 00:44:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:08.570 00:44:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:08.570 00:44:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:08.570 00:44:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:08.570 00:44:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:08.570 00:44:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:08.570 00:44:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:08.570 00:44:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:08.570 00:44:41 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:08.570 00:44:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:08.570 00:44:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:08.570 00:44:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:08.570 00:44:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:08.570 00:44:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:08.570 00:44:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:08.570 00:44:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:07:08.570 Found net devices under 0000:0a:00.0: cvl_0_0 00:07:08.570 00:44:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:08.570 00:44:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:08.570 00:44:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:08.570 00:44:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:08.570 00:44:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:08.570 00:44:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:08.570 00:44:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:08.570 00:44:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:08.570 00:44:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:07:08.570 Found net devices under 0000:0a:00.1: cvl_0_1 00:07:08.570 00:44:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:08.570 00:44:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:08.570 00:44:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:07:08.570 00:44:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:08.570 00:44:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:08.570 00:44:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:08.570 00:44:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:08.570 00:44:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:08.570 00:44:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:08.570 00:44:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:08.570 00:44:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:08.570 00:44:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:08.570 00:44:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:08.570 00:44:41 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:08.570 00:44:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:08.570 00:44:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:08.570 00:44:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:08.570 00:44:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:08.570 00:44:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:08.570 00:44:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:08.570 00:44:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:08.829 00:44:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:08.829 00:44:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:08.829 00:44:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:08.829 00:44:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:08.829 00:44:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:08.829 00:44:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:08.829 00:44:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:08.829 00:44:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:08.829 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:08.829 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.219 ms 00:07:08.829 00:07:08.829 --- 10.0.0.2 ping statistics --- 00:07:08.829 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:08.829 rtt min/avg/max/mdev = 0.219/0.219/0.219/0.000 ms 00:07:08.829 00:44:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:08.829 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:08.829 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.101 ms 00:07:08.829 00:07:08.829 --- 10.0.0.1 ping statistics --- 00:07:08.829 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:08.829 rtt min/avg/max/mdev = 0.101/0.101/0.101/0.000 ms 00:07:08.829 00:44:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:08.829 00:44:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:07:08.829 00:44:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:08.829 00:44:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:08.829 00:44:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:08.829 00:44:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:08.829 00:44:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:08.829 00:44:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:08.829 00:44:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:08.829 00:44:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:07:08.829 00:44:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:08.829 00:44:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:08.829 00:44:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:08.829 00:44:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@509 -- # nvmfpid=2838010 00:07:08.830 00:44:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:07:08.830 00:44:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 2838010 00:07:08.830 00:44:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 2838010 ']' 00:07:08.830 00:44:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:08.830 00:44:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:08.830 00:44:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:08.830 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:08.830 00:44:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:08.830 00:44:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:08.830 [2024-12-06 00:44:41.500245] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 24.03.0 initialization... 
00:07:08.830 [2024-12-06 00:44:41.500375] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:09.100 [2024-12-06 00:44:41.658706] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:09.100 [2024-12-06 00:44:41.803042] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:09.100 [2024-12-06 00:44:41.803144] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:09.100 [2024-12-06 00:44:41.803170] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:09.100 [2024-12-06 00:44:41.803195] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:09.100 [2024-12-06 00:44:41.803215] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:09.100 [2024-12-06 00:44:41.805941] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:09.100 [2024-12-06 00:44:41.805994] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:09.100 [2024-12-06 00:44:41.806001] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:10.037 00:44:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:10.037 00:44:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:07:10.037 00:44:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:10.037 00:44:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:10.037 00:44:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:10.037 00:44:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:10.037 00:44:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:07:10.037 00:44:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:10.037 00:44:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:10.037 [2024-12-06 00:44:42.479770] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:10.037 00:44:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:10.037 00:44:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:07:10.037 00:44:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:10.037 00:44:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:10.037 Malloc0 00:07:10.037 00:44:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:10.037 00:44:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:07:10.037 00:44:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:10.037 00:44:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:10.037 Delay0 
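The target-side configuration for the abort test is issued over the RPC socket. Consolidated from the rpc_cmd calls in this trace (the last few appear as the trace continues below), with the sizes used in this run:

  rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256
  rpc_cmd bdev_malloc_create 64 4096 -b Malloc0          # 64 MB malloc bdev, 4096-byte blocks
  rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  # then the abort example is aimed at that listener:
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128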
00:07:10.037 00:44:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:10.037 00:44:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:10.037 00:44:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:10.037 00:44:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:10.037 00:44:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:10.037 00:44:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:07:10.037 00:44:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:10.037 00:44:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:10.037 00:44:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:10.037 00:44:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:10.037 00:44:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:10.037 00:44:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:10.037 [2024-12-06 00:44:42.604452] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:10.037 00:44:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:10.037 00:44:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:10.037 00:44:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:10.037 00:44:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:10.037 00:44:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:10.037 00:44:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:07:10.296 [2024-12-06 00:44:42.772431] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:07:12.826 Initializing NVMe Controllers 00:07:12.826 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:07:12.826 controller IO queue size 128 less than required 00:07:12.826 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:07:12.826 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:07:12.826 Initialization complete. Launching workers. 
00:07:12.826 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 22023 00:07:12.826 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 22080, failed to submit 66 00:07:12.826 success 22023, unsuccessful 57, failed 0 00:07:12.826 00:44:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:12.826 00:44:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:12.826 00:44:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:12.826 00:44:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:12.826 00:44:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:07:12.826 00:44:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:07:12.826 00:44:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:12.826 00:44:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:07:12.826 00:44:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:12.826 00:44:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:07:12.826 00:44:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:12.826 00:44:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:12.826 rmmod nvme_tcp 00:07:12.826 rmmod nvme_fabrics 00:07:12.826 rmmod nvme_keyring 00:07:12.826 00:44:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:12.826 00:44:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:07:12.826 00:44:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:07:12.826 00:44:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 2838010 ']' 00:07:12.827 00:44:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 2838010 00:07:12.827 00:44:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 2838010 ']' 00:07:12.827 00:44:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 2838010 00:07:12.827 00:44:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:07:12.827 00:44:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:12.827 00:44:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2838010 00:07:12.827 00:44:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:07:12.827 00:44:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:07:12.827 00:44:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2838010' 00:07:12.827 killing process with pid 2838010 00:07:12.827 00:44:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@973 -- # kill 2838010 00:07:12.827 00:44:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@978 -- # wait 2838010 00:07:13.760 00:44:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:13.760 00:44:46 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:13.760 00:44:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:13.760 00:44:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:07:13.760 00:44:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:07:13.760 00:44:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:13.760 00:44:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:07:13.760 00:44:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:13.760 00:44:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:13.760 00:44:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:13.760 00:44:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:13.760 00:44:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:15.664 00:44:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:15.664 00:07:15.664 real 0m9.422s 00:07:15.664 user 0m15.641s 00:07:15.664 sys 0m2.849s 00:07:15.664 00:44:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:15.664 00:44:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:15.664 ************************************ 00:07:15.664 END TEST nvmf_abort 00:07:15.664 ************************************ 00:07:15.922 00:44:48 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:07:15.922 00:44:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:15.922 00:44:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:15.922 00:44:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:15.922 ************************************ 00:07:15.922 START TEST nvmf_ns_hotplug_stress 00:07:15.922 ************************************ 00:07:15.922 00:44:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:07:15.922 * Looking for test storage... 
00:07:15.922 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:15.922 00:44:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:15.922 00:44:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lcov --version 00:07:15.922 00:44:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:15.922 00:44:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:15.922 00:44:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:15.922 00:44:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:15.922 00:44:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:15.922 00:44:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:07:15.922 00:44:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:07:15.922 00:44:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:07:15.922 00:44:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:07:15.922 00:44:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:07:15.922 00:44:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:07:15.922 00:44:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:07:15.922 00:44:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:15.922 00:44:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:07:15.922 00:44:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:07:15.922 00:44:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:15.922 00:44:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:15.922 00:44:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:07:15.922 00:44:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:07:15.923 00:44:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:15.923 00:44:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:07:15.923 00:44:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:07:15.923 00:44:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:07:15.923 00:44:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:07:15.923 00:44:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:15.923 00:44:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:07:15.923 00:44:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:07:15.923 00:44:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:15.923 00:44:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:15.923 00:44:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:07:15.923 00:44:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:15.923 00:44:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:15.923 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:15.923 --rc genhtml_branch_coverage=1 00:07:15.923 --rc genhtml_function_coverage=1 00:07:15.923 --rc genhtml_legend=1 00:07:15.923 --rc geninfo_all_blocks=1 00:07:15.923 --rc geninfo_unexecuted_blocks=1 00:07:15.923 00:07:15.923 ' 00:07:15.923 00:44:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:15.923 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:15.923 --rc genhtml_branch_coverage=1 00:07:15.923 --rc genhtml_function_coverage=1 00:07:15.923 --rc genhtml_legend=1 00:07:15.923 --rc geninfo_all_blocks=1 00:07:15.923 --rc geninfo_unexecuted_blocks=1 00:07:15.923 00:07:15.923 ' 00:07:15.923 00:44:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:15.923 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:15.923 --rc genhtml_branch_coverage=1 00:07:15.923 --rc genhtml_function_coverage=1 00:07:15.923 --rc genhtml_legend=1 00:07:15.923 --rc geninfo_all_blocks=1 00:07:15.923 --rc geninfo_unexecuted_blocks=1 00:07:15.923 00:07:15.923 ' 00:07:15.923 00:44:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:15.923 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:15.923 --rc genhtml_branch_coverage=1 00:07:15.923 --rc genhtml_function_coverage=1 00:07:15.923 --rc genhtml_legend=1 00:07:15.923 --rc geninfo_all_blocks=1 00:07:15.923 --rc geninfo_unexecuted_blocks=1 00:07:15.923 00:07:15.923 ' 00:07:15.923 00:44:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:15.923 00:44:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:07:15.923 00:44:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:15.923 00:44:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:15.923 00:44:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:15.923 00:44:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:15.923 00:44:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:15.923 00:44:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:15.923 00:44:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:15.923 00:44:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:15.923 00:44:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:15.923 00:44:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:15.923 00:44:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:15.923 00:44:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:15.923 00:44:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:15.923 00:44:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:15.923 00:44:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:15.923 00:44:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:15.923 00:44:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:15.923 00:44:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:07:15.923 00:44:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:15.923 00:44:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:15.923 00:44:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:15.923 00:44:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:15.923 00:44:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:15.923 00:44:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:15.923 00:44:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:07:15.923 00:44:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:15.923 00:44:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:07:15.923 00:44:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:15.923 00:44:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:15.923 00:44:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:15.923 00:44:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:15.923 00:44:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:15.923 00:44:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:15.923 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:15.923 00:44:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:15.924 00:44:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:15.924 00:44:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:15.924 00:44:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:15.924 00:44:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:07:15.924 00:44:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:15.924 00:44:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:15.924 00:44:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:15.924 00:44:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:15.924 00:44:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:15.924 00:44:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:15.924 00:44:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:15.924 00:44:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:15.924 00:44:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:15.924 00:44:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:15.924 00:44:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:07:15.924 00:44:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:18.454 00:44:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:18.454 00:44:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:07:18.454 00:44:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:18.454 00:44:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:18.454 00:44:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:18.454 00:44:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:18.454 00:44:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:18.454 00:44:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:07:18.454 00:44:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:18.454 00:44:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:07:18.454 00:44:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # 
local -ga e810 00:07:18.454 00:44:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:07:18.454 00:44:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:07:18.454 00:44:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:07:18.454 00:44:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:07:18.454 00:44:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:18.454 00:44:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:18.454 00:44:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:18.454 00:44:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:18.454 00:44:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:18.454 00:44:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:18.454 00:44:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:18.454 00:44:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:18.454 00:44:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:18.454 00:44:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:18.454 00:44:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:18.454 00:44:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:18.454 00:44:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:18.454 00:44:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:18.454 00:44:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:18.454 00:44:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:18.454 00:44:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:18.454 00:44:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:18.454 00:44:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:18.454 00:44:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:07:18.454 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:07:18.454 00:44:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:18.454 00:44:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:18.454 00:44:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:18.455 
00:44:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:18.455 00:44:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:18.455 00:44:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:18.455 00:44:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:07:18.455 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:07:18.455 00:44:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:18.455 00:44:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:18.455 00:44:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:18.455 00:44:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:18.455 00:44:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:18.455 00:44:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:18.455 00:44:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:18.455 00:44:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:18.455 00:44:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:18.455 00:44:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:18.455 00:44:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:18.455 00:44:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:18.455 00:44:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:18.455 00:44:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:18.455 00:44:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:18.455 00:44:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:07:18.455 Found net devices under 0000:0a:00.0: cvl_0_0 00:07:18.455 00:44:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:18.455 00:44:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:18.455 00:44:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:18.455 00:44:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:18.455 00:44:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:18.455 00:44:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:18.455 00:44:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:18.455 00:44:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:18.455 00:44:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:07:18.455 Found net devices under 0000:0a:00.1: cvl_0_1 00:07:18.455 00:44:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:18.455 00:44:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:18.455 00:44:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:07:18.455 00:44:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:18.455 00:44:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:18.455 00:44:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:18.455 00:44:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:18.455 00:44:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:18.455 00:44:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:18.455 00:44:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:18.455 00:44:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:18.455 00:44:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:18.455 00:44:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:18.455 00:44:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:18.455 00:44:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:18.455 00:44:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:18.455 00:44:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:18.455 00:44:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:18.455 00:44:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:18.455 00:44:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:18.455 00:44:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:18.455 00:44:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:18.455 00:44:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:18.455 00:44:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:18.455 00:44:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:18.455 00:44:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:18.455 00:44:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:18.455 00:44:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:18.455 00:44:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:18.455 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:18.455 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.234 ms 00:07:18.455 00:07:18.455 --- 10.0.0.2 ping statistics --- 00:07:18.455 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:18.455 rtt min/avg/max/mdev = 0.234/0.234/0.234/0.000 ms 00:07:18.455 00:44:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:18.455 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:18.455 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.046 ms 00:07:18.455 00:07:18.455 --- 10.0.0.1 ping statistics --- 00:07:18.455 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:18.455 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:07:18.455 00:44:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:18.455 00:44:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:07:18.455 00:44:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:18.455 00:44:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:18.455 00:44:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:18.455 00:44:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:18.455 00:44:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:18.455 00:44:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:18.455 00:44:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:18.455 00:44:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:07:18.455 00:44:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:18.455 00:44:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:18.455 00:44:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:18.455 00:44:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=2840637 00:07:18.455 00:44:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:07:18.455 00:44:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 2840637 00:07:18.455 00:44:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 
2840637 ']' 00:07:18.455 00:44:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:18.455 00:44:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:18.455 00:44:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:18.455 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:18.455 00:44:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:18.455 00:44:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:18.455 [2024-12-06 00:44:50.943301] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 24.03.0 initialization... 00:07:18.455 [2024-12-06 00:44:50.943434] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:18.455 [2024-12-06 00:44:51.099808] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:18.714 [2024-12-06 00:44:51.244409] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:18.714 [2024-12-06 00:44:51.244540] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:18.714 [2024-12-06 00:44:51.244568] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:18.714 [2024-12-06 00:44:51.244596] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:18.714 [2024-12-06 00:44:51.244616] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
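For reference, the nvmf_tcp_init bring-up traced just above condenses to the following shell steps (a summary of what this particular rig's trace shows, not the authoritative common.sh; the cvl_0_0/cvl_0_1 interface names, the cvl_0_0_ns_spdk namespace and the 10.0.0.x addresses are taken directly from the log):

    # Move the target-side port into its own network namespace and address both ends
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # Open the NVMe/TCP port and verify reachability in both directions
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

Only after both pings succeed does the script launch nvmf_tgt inside the namespace (ip netns exec cvl_0_0_ns_spdk nvmf_tgt -i 0 -e 0xFFFF -m 0xE), which is the pid 2840637 waited on above.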
00:07:18.714 [2024-12-06 00:44:51.247380] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:18.714 [2024-12-06 00:44:51.247436] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:18.714 [2024-12-06 00:44:51.247441] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:19.280 00:44:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:19.280 00:44:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:07:19.280 00:44:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:19.280 00:44:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:19.280 00:44:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:19.280 00:44:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:19.280 00:44:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:07:19.280 00:44:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:19.538 [2024-12-06 00:44:52.218574] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:19.538 00:44:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:07:20.103 00:44:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:20.104 [2024-12-06 00:44:52.752665] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:20.104 00:44:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:20.362 00:44:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:07:20.928 Malloc0 00:07:20.928 00:44:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:07:20.928 Delay0 00:07:21.185 00:44:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:21.443 00:44:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:07:21.701 NULL1 00:07:21.701 00:44:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:07:21.959 00:44:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=2841068 00:07:21.959 00:44:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:07:21.959 00:44:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2841068 00:07:21.959 00:44:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:22.217 00:44:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:22.476 00:44:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:07:22.476 00:44:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:07:22.733 true 00:07:22.733 00:44:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2841068 00:07:22.733 00:44:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:22.990 00:44:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:23.248 00:44:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:07:23.248 00:44:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:07:23.505 true 00:07:23.505 00:44:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2841068 00:07:23.505 00:44:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:24.438 Read completed with error (sct=0, sc=11) 00:07:24.438 00:44:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:24.438 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:24.438 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:24.696 00:44:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:07:24.696 00:44:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:07:24.954 true 
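The same five-step pattern now repeats for the rest of the 30-second run, so it is easier to read with the xtrace noise stripped. Roughly, the traced flow of ns_hotplug_stress.sh lines 27-50 is the following (a paraphrase reconstructed from the trace, with subsystem, bdev and perf arguments copied from the log; rpc.py abbreviates the full scripts/rpc.py path, and the while-loop shape is inferred from the repeated kill -0 checks rather than quoted from the script):

    # Setup: TCP transport, one subsystem with Delay0 + NULL1 namespaces, then a
    # 30 s randread perf job against 10.0.0.2:4420 running in the background
    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc.py bdev_malloc_create 32 512 -b Malloc0
    rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
    rpc.py bdev_null_create NULL1 1000 512
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1
    spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
                   -t 30 -q 128 -w randread -o 512 -Q 1000 &
    PERF_PID=$!

    # Hotplug loop: while perf is still alive, yank namespace 1, re-add Delay0,
    # and grow NULL1 by one block each pass (null_size 1001, 1002, ... in the trace)
    null_size=1000
    while kill -0 "$PERF_PID"; do
        rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
        rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
        null_size=$((null_size + 1))
        rpc.py bdev_null_resize NULL1 "$null_size"
    done

The "Message suppressed 999 times: Read completed with error (sct=0, sc=11)" lines interleaved below are the expected initiator-side fallout: reads racing the namespace removal complete with an invalid-namespace status, capped by the perf tool's error suppression.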
00:07:24.954 00:44:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2841068 00:07:24.954 00:44:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:25.211 00:44:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:25.469 00:44:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:07:25.469 00:44:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:07:25.726 true 00:07:25.726 00:44:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2841068 00:07:25.726 00:44:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:25.983 00:44:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:26.240 00:44:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:07:26.240 00:44:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:07:26.496 true 00:07:26.496 00:44:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2841068 00:07:26.496 00:44:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:27.866 00:45:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:27.866 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:27.866 00:45:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:07:27.866 00:45:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:07:28.124 true 00:07:28.124 00:45:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2841068 00:07:28.124 00:45:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:28.381 00:45:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:28.638 00:45:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:07:28.638 00:45:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:07:28.895 true 00:07:28.895 00:45:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2841068 00:07:28.895 00:45:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:29.829 00:45:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:29.829 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:30.087 00:45:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:07:30.087 00:45:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:07:30.344 true 00:07:30.344 00:45:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2841068 00:07:30.344 00:45:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:30.602 00:45:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:30.859 00:45:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:07:30.859 00:45:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:07:31.116 true 00:07:31.116 00:45:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2841068 00:07:31.116 00:45:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:31.374 00:45:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:31.631 00:45:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:07:31.631 00:45:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:07:31.889 true 00:07:31.889 00:45:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2841068 00:07:31.889 00:45:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:32.820 00:45:05 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:32.820 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:33.078 00:45:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:07:33.078 00:45:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:07:33.335 true 00:07:33.335 00:45:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2841068 00:07:33.335 00:45:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:33.592 00:45:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:33.849 00:45:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:07:33.849 00:45:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:07:34.107 true 00:07:34.364 00:45:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2841068 00:07:34.364 00:45:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:34.928 00:45:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:35.184 00:45:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:07:35.184 00:45:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:07:35.748 true 00:07:35.748 00:45:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2841068 00:07:35.748 00:45:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:36.005 00:45:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:36.263 00:45:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:07:36.263 00:45:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:07:36.521 true 00:07:36.521 00:45:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 
2841068 00:07:36.521 00:45:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:36.778 00:45:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:37.036 00:45:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:07:37.036 00:45:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:07:37.293 true 00:07:37.293 00:45:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2841068 00:07:37.293 00:45:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:38.225 00:45:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:38.225 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:38.225 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:38.482 00:45:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:07:38.482 00:45:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:07:38.739 true 00:07:38.739 00:45:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2841068 00:07:38.739 00:45:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:38.997 00:45:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:39.275 00:45:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:07:39.275 00:45:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:07:39.568 true 00:07:39.568 00:45:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2841068 00:07:39.568 00:45:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:40.500 00:45:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:40.500 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:40.500 00:45:13 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:07:40.500 00:45:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:07:40.758 true 00:07:40.758 00:45:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2841068 00:07:40.758 00:45:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:41.016 00:45:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:41.283 00:45:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:07:41.283 00:45:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:07:41.544 true 00:07:41.801 00:45:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2841068 00:07:41.801 00:45:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:42.365 00:45:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:42.365 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:42.629 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:42.629 00:45:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:07:42.629 00:45:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:07:42.889 true 00:07:42.889 00:45:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2841068 00:07:42.889 00:45:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:43.146 00:45:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:43.404 00:45:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:07:43.404 00:45:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:07:43.968 true 00:07:43.968 00:45:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2841068 00:07:43.968 00:45:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:44.534 00:45:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:44.534 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:44.792 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:45.050 00:45:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:07:45.050 00:45:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:07:45.308 true 00:07:45.308 00:45:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2841068 00:07:45.308 00:45:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:45.565 00:45:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:45.822 00:45:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:07:45.822 00:45:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:07:46.080 true 00:07:46.080 00:45:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2841068 00:07:46.080 00:45:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:47.013 00:45:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:47.013 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:47.270 00:45:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:07:47.270 00:45:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:07:47.528 true 00:07:47.528 00:45:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2841068 00:07:47.528 00:45:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:47.786 00:45:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:48.043 00:45:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:07:48.043 00:45:20 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:07:48.300 true 00:07:48.300 00:45:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2841068 00:07:48.300 00:45:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:48.564 00:45:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:48.821 00:45:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:07:48.821 00:45:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:07:49.078 true 00:07:49.078 00:45:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2841068 00:07:49.078 00:45:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:50.009 00:45:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:50.267 00:45:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:07:50.267 00:45:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:07:50.525 true 00:07:50.525 00:45:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2841068 00:07:50.525 00:45:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:50.782 00:45:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:51.039 00:45:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:07:51.039 00:45:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:07:51.298 true 00:07:51.298 00:45:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2841068 00:07:51.298 00:45:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:51.556 00:45:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:07:51.814 00:45:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:07:51.814 00:45:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:07:52.071 true 00:07:52.071 00:45:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2841068 00:07:52.071 00:45:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:53.440 Initializing NVMe Controllers 00:07:53.440 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:07:53.440 Controller IO queue size 128, less than required. 00:07:53.440 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:07:53.440 Controller IO queue size 128, less than required. 00:07:53.440 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:07:53.440 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:07:53.440 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:07:53.440 Initialization complete. Launching workers. 00:07:53.440 ======================================================== 00:07:53.440 Latency(us) 00:07:53.440 Device Information : IOPS MiB/s Average min max 00:07:53.440 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 489.00 0.24 116777.66 3285.51 1015081.24 00:07:53.440 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 6931.53 3.38 18408.05 4179.55 485376.76 00:07:53.440 ======================================================== 00:07:53.440 Total : 7420.53 3.62 24890.44 3285.51 1015081.24 00:07:53.440 00:07:53.440 00:45:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:53.440 00:45:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:07:53.440 00:45:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:07:53.698 true 00:07:53.698 00:45:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2841068 00:07:53.698 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (2841068) - No such process 00:07:53.698 00:45:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 2841068 00:07:53.698 00:45:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:54.263 00:45:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:54.263 00:45:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:07:54.263 00:45:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:07:54.263 00:45:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:07:54.263 00:45:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:54.263 00:45:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:07:54.521 null0 00:07:54.521 00:45:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:54.521 00:45:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:54.521 00:45:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:07:54.777 null1 00:07:54.777 00:45:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:54.777 00:45:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:54.777 00:45:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:07:55.033 null2 00:07:55.291 00:45:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:55.291 00:45:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:55.291 00:45:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:07:55.548 null3 00:07:55.548 00:45:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:55.548 00:45:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:55.548 00:45:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:07:55.805 null4 00:07:55.805 00:45:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:55.805 00:45:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:55.805 00:45:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:07:56.063 null5 00:07:56.063 00:45:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:56.063 00:45:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:56.063 00:45:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:07:56.320 null6 00:07:56.320 00:45:28 
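
For the multi-worker phase (@58-@60 above, with null7 following just below), the test provisions one null bdev per worker: bdev_null_create takes the bdev name, the total size in MB, and the block size in bytes, so null0 through null7 are each 100 MB with 4096-byte blocks. A minimal sketch of that setup loop, reusing the rpc_py shorthand from the note above:

    nthreads=8
    for ((i = 0; i < nthreads; i++)); do
        $rpc_py bdev_null_create "null$i" 100 4096   # name, size in MB, block size in bytes
    done
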
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:56.320 00:45:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:56.320 00:45:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:07:56.579 null7 00:07:56.579 00:45:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:56.579 00:45:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:56.579 00:45:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:07:56.579 00:45:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:56.579 00:45:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:56.579 00:45:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:07:56.579 00:45:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:56.579 00:45:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:07:56.579 00:45:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:56.579 00:45:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:56.579 00:45:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:56.579 00:45:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:56.579 00:45:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:56.579 00:45:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:07:56.579 00:45:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:56.579 00:45:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:07:56.579 00:45:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:56.579 00:45:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:56.579 00:45:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:56.579 00:45:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:56.579 00:45:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:07:56.579 00:45:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:07:56.579 00:45:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:56.579 00:45:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:07:56.579 00:45:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:56.579 00:45:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:56.579 00:45:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:56.579 00:45:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:56.579 00:45:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:56.579 00:45:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:07:56.579 00:45:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:56.579 00:45:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:07:56.579 00:45:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:56.579 00:45:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:56.579 00:45:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:56.579 00:45:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:56.579 00:45:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:56.579 00:45:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:07:56.579 00:45:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:56.579 00:45:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:07:56.579 00:45:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:56.579 00:45:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:56.579 00:45:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:56.579 00:45:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:56.579 00:45:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:07:56.579 00:45:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:07:56.579 00:45:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:56.579 00:45:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:07:56.579 00:45:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:56.579 00:45:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:56.579 00:45:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:56.579 00:45:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:56.579 00:45:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:56.579 00:45:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:07:56.579 00:45:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:56.579 00:45:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:07:56.579 00:45:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:56.579 00:45:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:56.579 00:45:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:56.579 00:45:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:56.579 00:45:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:07:56.579 00:45:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:07:56.579 00:45:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:56.579 00:45:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:07:56.579 00:45:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:56.579 00:45:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:56.579 00:45:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 2845902 2845903 2845905 2845907 2845909 2845911 2845913 2845915 00:07:56.579 00:45:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:56.579 00:45:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:56.838 00:45:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:56.838 00:45:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:56.838 00:45:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:56.838 00:45:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:56.838 00:45:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:56.838 00:45:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:56.838 00:45:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:56.838 00:45:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:57.096 00:45:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:57.096 00:45:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:57.096 00:45:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:57.096 00:45:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
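
Each of the eight add_remove workers traced above runs in the background against its own namespace ID; the parent collects the PIDs and blocks on them (the "wait 2845902 ... 2845915" entry at @66). Per the @14-@18 trace, a worker loops ten times adding its null bdev as a namespace of cnode1 and removing it again, so the interleaved (( ++i )) / add_ns / remove_ns entries that follow are eight concurrent copies of the same small loop. A sketch reconstructed from the trace (the exact function body and launcher layout are approximations, not copied from the script):

    add_remove() {                                                             # @14-@18
        local nsid=$1 bdev=$2
        for ((i = 0; i < 10; i++)); do
            $rpc_py nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"
            $rpc_py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"
        done
    }

    pids=()
    for ((i = 0; i < nthreads; i++)); do                                       # @62-@64
        add_remove $((i + 1)) "null$i" &   # nsid 1..8 against null0..null7, all concurrent
        pids+=($!)
    done
    wait "${pids[@]}"                                                          # @66
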
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:57.096 00:45:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:57.096 00:45:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:57.096 00:45:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:57.096 00:45:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:57.096 00:45:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:57.096 00:45:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:57.096 00:45:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:57.096 00:45:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:57.096 00:45:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:57.096 00:45:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:57.096 00:45:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:57.096 00:45:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:57.096 00:45:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:57.096 00:45:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:57.096 00:45:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:57.096 00:45:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:57.096 00:45:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:57.096 00:45:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:57.096 00:45:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:57.096 00:45:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:57.354 00:45:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:57.354 00:45:29 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:57.354 00:45:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:57.354 00:45:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:57.354 00:45:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:57.354 00:45:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:57.354 00:45:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:57.354 00:45:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:57.612 00:45:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:57.612 00:45:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:57.612 00:45:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:57.612 00:45:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:57.612 00:45:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:57.612 00:45:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:57.612 00:45:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:57.612 00:45:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:57.612 00:45:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:57.612 00:45:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:57.612 00:45:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:57.612 00:45:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:57.612 00:45:30 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:57.612 00:45:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:57.612 00:45:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:57.612 00:45:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:57.612 00:45:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:57.612 00:45:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:57.612 00:45:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:57.612 00:45:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:57.612 00:45:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:57.870 00:45:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:57.870 00:45:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:57.870 00:45:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:58.128 00:45:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:58.128 00:45:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:58.128 00:45:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:58.128 00:45:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:58.128 00:45:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:58.128 00:45:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:58.128 00:45:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:58.128 00:45:30 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:58.385 00:45:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:58.385 00:45:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:58.385 00:45:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:58.386 00:45:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:58.386 00:45:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:58.386 00:45:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:58.386 00:45:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:58.386 00:45:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:58.386 00:45:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:58.386 00:45:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:58.386 00:45:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:58.386 00:45:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:58.386 00:45:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:58.386 00:45:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:58.386 00:45:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:58.386 00:45:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:58.386 00:45:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:58.386 00:45:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:58.386 00:45:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:58.386 00:45:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:58.386 00:45:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 
00:07:58.386 00:45:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:58.386 00:45:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:58.386 00:45:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:58.643 00:45:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:58.643 00:45:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:58.643 00:45:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:58.643 00:45:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:58.643 00:45:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:58.644 00:45:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:58.644 00:45:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:58.644 00:45:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:58.901 00:45:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:58.901 00:45:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:58.901 00:45:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:58.901 00:45:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:58.901 00:45:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:58.901 00:45:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:58.901 00:45:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:58.901 00:45:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:58.901 00:45:31 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:58.901 00:45:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:58.901 00:45:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:58.901 00:45:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:58.901 00:45:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:58.901 00:45:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:58.901 00:45:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:58.901 00:45:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:58.901 00:45:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:58.901 00:45:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:58.901 00:45:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:58.901 00:45:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:58.901 00:45:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:58.901 00:45:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:58.901 00:45:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:58.901 00:45:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:59.158 00:45:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:59.158 00:45:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:59.158 00:45:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:59.159 00:45:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:59.159 00:45:31 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:59.159 00:45:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:59.159 00:45:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:59.159 00:45:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:59.416 00:45:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:59.416 00:45:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:59.416 00:45:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:59.416 00:45:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:59.416 00:45:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:59.416 00:45:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:59.416 00:45:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:59.416 00:45:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:59.416 00:45:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:59.416 00:45:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:59.416 00:45:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:59.416 00:45:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:59.416 00:45:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:59.416 00:45:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:59.416 00:45:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:59.674 00:45:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:59.674 00:45:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:59.674 00:45:32 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:59.674 00:45:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:59.674 00:45:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:59.674 00:45:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:59.674 00:45:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:59.674 00:45:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:59.674 00:45:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:59.932 00:45:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:59.932 00:45:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:59.932 00:45:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:59.932 00:45:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:59.932 00:45:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:59.932 00:45:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:59.932 00:45:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:59.932 00:45:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:00.190 00:45:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:00.190 00:45:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:00.190 00:45:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:00.190 00:45:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:00.190 00:45:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:00.190 00:45:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:00.190 00:45:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:00.190 00:45:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:00.190 00:45:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:00.190 00:45:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:00.190 00:45:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:00.190 00:45:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:00.190 00:45:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:00.190 00:45:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:00.190 00:45:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:00.190 00:45:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:00.190 00:45:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:00.190 00:45:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:00.190 00:45:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:00.190 00:45:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:00.190 00:45:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:00.190 00:45:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:00.190 00:45:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:00.190 00:45:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:00.448 00:45:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:00.448 00:45:32 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:00.448 00:45:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:00.448 00:45:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:00.448 00:45:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:00.448 00:45:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:00.448 00:45:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:00.448 00:45:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:00.706 00:45:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:00.706 00:45:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:00.706 00:45:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:00.706 00:45:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:00.706 00:45:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:00.706 00:45:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:00.706 00:45:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:00.706 00:45:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:00.706 00:45:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:00.706 00:45:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:00.706 00:45:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:00.706 00:45:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:00.706 00:45:33 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:00.706 00:45:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:00.706 00:45:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:00.706 00:45:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:00.706 00:45:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:00.706 00:45:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:00.706 00:45:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:00.706 00:45:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:00.706 00:45:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:00.706 00:45:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:00.706 00:45:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:00.706 00:45:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:00.966 00:45:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:00.966 00:45:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:00.966 00:45:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:00.966 00:45:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:00.966 00:45:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:00.966 00:45:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:00.966 00:45:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:00.966 00:45:33 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:01.255 00:45:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:01.255 00:45:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:01.255 00:45:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:01.255 00:45:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:01.255 00:45:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:01.255 00:45:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:01.255 00:45:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:01.255 00:45:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:01.255 00:45:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:01.255 00:45:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:01.255 00:45:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:01.255 00:45:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:01.255 00:45:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:01.255 00:45:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:01.255 00:45:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:01.255 00:45:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:01.255 00:45:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:01.255 00:45:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:01.514 00:45:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:01.514 00:45:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:01.514 00:45:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:01.514 00:45:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:01.514 00:45:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:01.514 00:45:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:01.514 00:45:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:01.514 00:45:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:01.514 00:45:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:01.514 00:45:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:01.514 00:45:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:01.514 00:45:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:01.772 00:45:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:01.772 00:45:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:02.030 00:45:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:02.030 00:45:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:02.030 00:45:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:02.030 00:45:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:02.030 00:45:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:02.030 00:45:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:02.030 00:45:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:02.030 00:45:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:02.030 00:45:34 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:02.030 00:45:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:02.030 00:45:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:02.031 00:45:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:02.031 00:45:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:02.031 00:45:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:02.031 00:45:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:02.031 00:45:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:02.031 00:45:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:02.031 00:45:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:02.031 00:45:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:02.031 00:45:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:02.031 00:45:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:02.031 00:45:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:02.031 00:45:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:02.031 00:45:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:02.288 00:45:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:02.288 00:45:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:02.288 00:45:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:02.288 00:45:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:02.288 00:45:34 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:02.288 00:45:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:02.288 00:45:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:02.288 00:45:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:02.546 00:45:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:02.546 00:45:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:02.546 00:45:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:02.546 00:45:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:02.546 00:45:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:02.546 00:45:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:02.546 00:45:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:02.546 00:45:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:02.546 00:45:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:02.546 00:45:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:02.546 00:45:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:02.546 00:45:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:02.546 00:45:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:02.546 00:45:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:02.546 00:45:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:02.546 00:45:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:02.546 00:45:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:08:02.546 00:45:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:08:02.546 00:45:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:02.546 00:45:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:08:02.546 00:45:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:02.546 00:45:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:08:02.546 00:45:35 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:02.546 00:45:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:02.546 rmmod nvme_tcp 00:08:02.546 rmmod nvme_fabrics 00:08:02.546 rmmod nvme_keyring 00:08:02.546 00:45:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:02.546 00:45:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:08:02.546 00:45:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:08:02.546 00:45:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 2840637 ']' 00:08:02.546 00:45:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 2840637 00:08:02.546 00:45:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 2840637 ']' 00:08:02.546 00:45:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 2840637 00:08:02.546 00:45:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname 00:08:02.546 00:45:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:02.546 00:45:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2840637 00:08:02.546 00:45:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:08:02.546 00:45:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:08:02.546 00:45:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2840637' 00:08:02.546 killing process with pid 2840637 00:08:02.546 00:45:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 2840637 00:08:02.546 00:45:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 2840637 00:08:03.917 00:45:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:03.917 00:45:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:03.917 00:45:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:03.917 00:45:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:08:03.918 00:45:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:08:03.918 00:45:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:03.918 00:45:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore 00:08:03.918 00:45:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:03.918 00:45:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:03.918 00:45:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:03.918 00:45:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 
15> /dev/null' 00:08:03.918 00:45:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:05.817 00:45:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:05.817 00:08:05.817 real 0m50.029s 00:08:05.817 user 3m49.690s 00:08:05.817 sys 0m16.054s 00:08:05.817 00:45:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:05.818 00:45:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:08:05.818 ************************************ 00:08:05.818 END TEST nvmf_ns_hotplug_stress 00:08:05.818 ************************************ 00:08:05.818 00:45:38 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:08:05.818 00:45:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:05.818 00:45:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:05.818 00:45:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:05.818 ************************************ 00:08:05.818 START TEST nvmf_delete_subsystem 00:08:05.818 ************************************ 00:08:05.818 00:45:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:08:06.076 * Looking for test storage... 00:08:06.076 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:06.076 00:45:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:06.076 00:45:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lcov --version 00:08:06.076 00:45:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:06.076 00:45:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:06.076 00:45:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:06.076 00:45:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:06.076 00:45:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:06.076 00:45:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:08:06.076 00:45:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:08:06.076 00:45:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:08:06.076 00:45:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:08:06.076 00:45:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:08:06.076 00:45:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:08:06.076 00:45:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:08:06.076 00:45:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:06.076 00:45:38 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:08:06.076 00:45:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:08:06.076 00:45:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:06.076 00:45:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:06.076 00:45:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:08:06.076 00:45:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:08:06.076 00:45:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:06.076 00:45:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:08:06.076 00:45:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:08:06.076 00:45:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:08:06.076 00:45:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:08:06.076 00:45:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:06.076 00:45:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:08:06.076 00:45:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:08:06.076 00:45:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:06.076 00:45:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:06.076 00:45:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:08:06.076 00:45:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:06.076 00:45:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:06.076 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:06.076 --rc genhtml_branch_coverage=1 00:08:06.076 --rc genhtml_function_coverage=1 00:08:06.076 --rc genhtml_legend=1 00:08:06.076 --rc geninfo_all_blocks=1 00:08:06.076 --rc geninfo_unexecuted_blocks=1 00:08:06.076 00:08:06.076 ' 00:08:06.076 00:45:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:06.076 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:06.076 --rc genhtml_branch_coverage=1 00:08:06.076 --rc genhtml_function_coverage=1 00:08:06.076 --rc genhtml_legend=1 00:08:06.076 --rc geninfo_all_blocks=1 00:08:06.076 --rc geninfo_unexecuted_blocks=1 00:08:06.076 00:08:06.076 ' 00:08:06.076 00:45:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:06.076 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:06.076 --rc genhtml_branch_coverage=1 00:08:06.076 --rc genhtml_function_coverage=1 00:08:06.076 --rc genhtml_legend=1 00:08:06.076 --rc geninfo_all_blocks=1 00:08:06.076 --rc geninfo_unexecuted_blocks=1 00:08:06.076 00:08:06.076 ' 00:08:06.076 00:45:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:06.076 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:06.076 --rc genhtml_branch_coverage=1 00:08:06.076 --rc genhtml_function_coverage=1 00:08:06.076 --rc genhtml_legend=1 00:08:06.076 --rc geninfo_all_blocks=1 00:08:06.076 --rc geninfo_unexecuted_blocks=1 00:08:06.076 00:08:06.076 ' 00:08:06.076 00:45:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:06.076 00:45:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:08:06.076 00:45:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:06.076 00:45:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:06.076 00:45:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:06.076 00:45:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:06.076 00:45:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:06.076 00:45:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:06.076 00:45:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:06.076 00:45:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:06.076 00:45:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:06.076 00:45:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:06.076 00:45:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:06.076 00:45:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:06.076 00:45:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:06.076 00:45:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:06.076 00:45:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:06.077 00:45:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:06.077 00:45:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:06.077 00:45:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:08:06.077 00:45:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:06.077 00:45:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:06.077 00:45:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:06.077 00:45:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:06.077 00:45:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:06.077 00:45:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:06.077 00:45:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:08:06.077 00:45:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:06.077 00:45:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:08:06.077 00:45:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:06.077 00:45:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:06.077 00:45:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:06.077 00:45:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:06.077 00:45:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:06.077 00:45:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:06.077 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:06.077 00:45:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:06.077 00:45:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:06.077 00:45:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:06.077 00:45:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:08:06.077 00:45:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:06.077 00:45:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:06.077 00:45:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:06.077 00:45:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:06.077 00:45:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:06.077 00:45:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:06.077 00:45:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:06.077 00:45:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:06.077 00:45:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:06.077 00:45:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:06.077 00:45:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:08:06.077 00:45:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:07.977 00:45:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:07.977 00:45:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:08:07.977 00:45:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:07.977 00:45:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:07.977 00:45:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:07.977 00:45:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:07.977 00:45:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:07.977 00:45:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:08:07.977 00:45:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:07.977 00:45:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:08:07.977 00:45:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:08:07.977 00:45:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:08:07.977 00:45:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # 
local -ga x722 00:08:07.977 00:45:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:08:07.977 00:45:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:08:07.977 00:45:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:07.977 00:45:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:07.977 00:45:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:07.977 00:45:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:07.977 00:45:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:07.977 00:45:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:07.977 00:45:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:07.977 00:45:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:07.977 00:45:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:07.977 00:45:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:07.977 00:45:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:07.977 00:45:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:07.977 00:45:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:07.977 00:45:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:07.977 00:45:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:07.977 00:45:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:07.977 00:45:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:07.977 00:45:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:07.977 00:45:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:07.977 00:45:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:07.977 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:07.977 00:45:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:07.977 00:45:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:07.977 00:45:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:07.977 00:45:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:07.977 00:45:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:07.977 
00:45:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:07.977 00:45:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:07.977 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:07.977 00:45:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:07.977 00:45:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:07.977 00:45:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:07.977 00:45:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:07.977 00:45:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:07.977 00:45:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:07.977 00:45:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:07.977 00:45:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:07.977 00:45:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:07.977 00:45:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:07.977 00:45:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:07.977 00:45:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:07.977 00:45:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:07.977 00:45:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:07.977 00:45:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:07.977 00:45:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:07.977 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:07.977 00:45:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:07.977 00:45:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:07.977 00:45:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:07.977 00:45:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:07.977 00:45:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:07.977 00:45:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:07.977 00:45:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:07.977 00:45:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:07.977 00:45:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:07.977 Found net devices under 0000:0a:00.1: cvl_0_1 
00:08:07.977 00:45:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:07.977 00:45:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:07.977 00:45:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:08:07.977 00:45:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:07.977 00:45:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:07.977 00:45:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:07.977 00:45:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:07.977 00:45:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:07.977 00:45:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:07.977 00:45:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:07.977 00:45:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:07.977 00:45:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:07.977 00:45:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:07.977 00:45:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:07.977 00:45:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:07.977 00:45:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:07.977 00:45:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:07.977 00:45:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:07.977 00:45:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:07.977 00:45:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:07.977 00:45:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:08.236 00:45:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:08.236 00:45:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:08.236 00:45:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:08.236 00:45:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:08.236 00:45:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:08.236 00:45:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:08.236 00:45:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:08.236 00:45:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:08.236 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:08.236 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.230 ms 00:08:08.236 00:08:08.236 --- 10.0.0.2 ping statistics --- 00:08:08.236 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:08.236 rtt min/avg/max/mdev = 0.230/0.230/0.230/0.000 ms 00:08:08.236 00:45:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:08.236 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:08.236 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.066 ms 00:08:08.236 00:08:08.236 --- 10.0.0.1 ping statistics --- 00:08:08.236 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:08.236 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:08:08.236 00:45:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:08.236 00:45:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:08:08.236 00:45:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:08.236 00:45:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:08.236 00:45:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:08.236 00:45:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:08.236 00:45:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:08.236 00:45:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:08.236 00:45:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:08.236 00:45:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:08:08.236 00:45:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:08.236 00:45:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:08.236 00:45:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:08.236 00:45:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=2848833 00:08:08.236 00:45:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 2848833 00:08:08.236 00:45:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:08:08.236 00:45:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 2848833 ']' 00:08:08.236 00:45:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:08.236 00:45:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:08.236 00:45:40 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:08.236 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:08.236 00:45:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:08.236 00:45:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:08.236 [2024-12-06 00:45:40.930838] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 24.03.0 initialization... 00:08:08.236 [2024-12-06 00:45:40.930978] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:08.493 [2024-12-06 00:45:41.070915] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:08.752 [2024-12-06 00:45:41.205565] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:08.752 [2024-12-06 00:45:41.205648] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:08.752 [2024-12-06 00:45:41.205674] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:08.752 [2024-12-06 00:45:41.205698] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:08.752 [2024-12-06 00:45:41.205718] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:08.752 [2024-12-06 00:45:41.208259] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:08.752 [2024-12-06 00:45:41.208260] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:09.318 00:45:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:09.318 00:45:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:08:09.318 00:45:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:09.318 00:45:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:09.318 00:45:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:09.318 00:45:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:09.318 00:45:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:09.318 00:45:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:09.318 00:45:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:09.318 [2024-12-06 00:45:41.908621] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:09.318 00:45:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:09.318 00:45:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:09.318 00:45:41 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:09.318 00:45:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:09.318 00:45:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:09.318 00:45:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:09.318 00:45:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:09.318 00:45:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:09.318 [2024-12-06 00:45:41.926463] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:09.318 00:45:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:09.318 00:45:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:08:09.318 00:45:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:09.318 00:45:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:09.318 NULL1 00:08:09.318 00:45:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:09.318 00:45:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:08:09.318 00:45:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:09.318 00:45:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:09.318 Delay0 00:08:09.318 00:45:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:09.318 00:45:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:09.318 00:45:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:09.318 00:45:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:09.318 00:45:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:09.318 00:45:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=2848986 00:08:09.318 00:45:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:08:09.318 00:45:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:08:09.576 [2024-12-06 00:45:42.070888] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
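The commands traced above and just below set up and exercise the delete_subsystem case end to end; condensed into a plain shell sketch (paths, NQN and flags are copied from the surrounding trace, not taken from the test script itself, so treat this as illustrative only):

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  PERF=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf
  NQN=nqn.2016-06.io.spdk:cnode1

  $RPC nvmf_create_transport -t tcp -o -u 8192
  $RPC nvmf_create_subsystem "$NQN" -a -s SPDK00000000000001 -m 10
  $RPC nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420
  $RPC bdev_null_create NULL1 1000 512                      # 1000 MB null bdev, 512-byte blocks
  $RPC bdev_delay_create -b NULL1 -d Delay0 \
       -r 1000000 -t 1000000 -w 1000000 -n 1000000          # high-latency wrapper keeps I/O pending
  $RPC nvmf_subsystem_add_ns "$NQN" Delay0

  # Drive queued I/O at the target, then delete the subsystem while commands
  # are still in flight; the "completed with error (sct=0, sc=8)" lines that
  # follow in the log are those outstanding commands being failed back.
  $PERF -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
  sleep 2
  $RPC nvmf_delete_subsystem "$NQN"
  wait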
00:08:11.471 00:45:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:11.471 00:45:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:11.471 00:45:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:11.471 Read completed with error (sct=0, sc=8) 00:08:11.471 Read completed with error (sct=0, sc=8) 00:08:11.471 starting I/O failed: -6 00:08:11.471 Write completed with error (sct=0, sc=8) 00:08:11.471 Read completed with error (sct=0, sc=8) 00:08:11.471 Read completed with error (sct=0, sc=8) 00:08:11.471 Read completed with error (sct=0, sc=8) 00:08:11.471 starting I/O failed: -6 00:08:11.471 Write completed with error (sct=0, sc=8) 00:08:11.471 Read completed with error (sct=0, sc=8) 00:08:11.471 Write completed with error (sct=0, sc=8) 00:08:11.471 Read completed with error (sct=0, sc=8) 00:08:11.471 starting I/O failed: -6 00:08:11.471 Read completed with error (sct=0, sc=8) 00:08:11.471 Read completed with error (sct=0, sc=8) 00:08:11.471 Read completed with error (sct=0, sc=8) 00:08:11.471 Read completed with error (sct=0, sc=8) 00:08:11.471 starting I/O failed: -6 00:08:11.471 Read completed with error (sct=0, sc=8) 00:08:11.471 Write completed with error (sct=0, sc=8) 00:08:11.471 Read completed with error (sct=0, sc=8) 00:08:11.471 Read completed with error (sct=0, sc=8) 00:08:11.471 starting I/O failed: -6 00:08:11.471 Read completed with error (sct=0, sc=8) 00:08:11.471 Read completed with error (sct=0, sc=8) 00:08:11.471 Read completed with error (sct=0, sc=8) 00:08:11.471 Write completed with error (sct=0, sc=8) 00:08:11.471 starting I/O failed: -6 00:08:11.471 Write completed with error (sct=0, sc=8) 00:08:11.471 Read completed with error (sct=0, sc=8) 00:08:11.471 Write completed with error (sct=0, sc=8) 00:08:11.471 Read completed with error (sct=0, sc=8) 00:08:11.471 starting I/O failed: -6 00:08:11.471 Read completed with error (sct=0, sc=8) 00:08:11.471 Read completed with error (sct=0, sc=8) 00:08:11.471 Write completed with error (sct=0, sc=8) 00:08:11.471 Read completed with error (sct=0, sc=8) 00:08:11.471 starting I/O failed: -6 00:08:11.471 Read completed with error (sct=0, sc=8) 00:08:11.471 Read completed with error (sct=0, sc=8) 00:08:11.471 Read completed with error (sct=0, sc=8) 00:08:11.471 Read completed with error (sct=0, sc=8) 00:08:11.471 starting I/O failed: -6 00:08:11.471 Write completed with error (sct=0, sc=8) 00:08:11.471 Read completed with error (sct=0, sc=8) 00:08:11.471 Write completed with error (sct=0, sc=8) 00:08:11.471 Write completed with error (sct=0, sc=8) 00:08:11.471 starting I/O failed: -6 00:08:11.471 Read completed with error (sct=0, sc=8) 00:08:11.471 Read completed with error (sct=0, sc=8) 00:08:11.471 Read completed with error (sct=0, sc=8) 00:08:11.471 Write completed with error (sct=0, sc=8) 00:08:11.471 starting I/O failed: -6 00:08:11.471 Read completed with error (sct=0, sc=8) 00:08:11.471 Read completed with error (sct=0, sc=8) 00:08:11.471 Read completed with error (sct=0, sc=8) 00:08:11.471 Read completed with error (sct=0, sc=8) 00:08:11.471 starting I/O failed: -6 00:08:11.471 [2024-12-06 00:45:44.128914] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000016600 is same with the state(6) to be set 00:08:11.471 Read completed with error (sct=0, sc=8) 00:08:11.471 Read completed 
with error (sct=0, sc=8) 00:08:11.471 Read completed with error (sct=0, sc=8) 00:08:11.471 Read completed with error (sct=0, sc=8) 00:08:11.471 Read completed with error (sct=0, sc=8) 00:08:11.471 Write completed with error (sct=0, sc=8) 00:08:11.471 Read completed with error (sct=0, sc=8) 00:08:11.471 Write completed with error (sct=0, sc=8) 00:08:11.471 Read completed with error (sct=0, sc=8) 00:08:11.471 Read completed with error (sct=0, sc=8) 00:08:11.471 Read completed with error (sct=0, sc=8) 00:08:11.471 Write completed with error (sct=0, sc=8) 00:08:11.471 Read completed with error (sct=0, sc=8) 00:08:11.471 Read completed with error (sct=0, sc=8) 00:08:11.471 Read completed with error (sct=0, sc=8) 00:08:11.471 Write completed with error (sct=0, sc=8) 00:08:11.471 Write completed with error (sct=0, sc=8) 00:08:11.471 Read completed with error (sct=0, sc=8) 00:08:11.471 Read completed with error (sct=0, sc=8) 00:08:11.471 Read completed with error (sct=0, sc=8) 00:08:11.471 Write completed with error (sct=0, sc=8) 00:08:11.471 Read completed with error (sct=0, sc=8) 00:08:11.471 Write completed with error (sct=0, sc=8) 00:08:11.471 Read completed with error (sct=0, sc=8) 00:08:11.471 Read completed with error (sct=0, sc=8) 00:08:11.471 Read completed with error (sct=0, sc=8) 00:08:11.471 Write completed with error (sct=0, sc=8) 00:08:11.471 Read completed with error (sct=0, sc=8) 00:08:11.471 Write completed with error (sct=0, sc=8) 00:08:11.471 Read completed with error (sct=0, sc=8) 00:08:11.471 Read completed with error (sct=0, sc=8) 00:08:11.471 Read completed with error (sct=0, sc=8) 00:08:11.471 Read completed with error (sct=0, sc=8) 00:08:11.471 Write completed with error (sct=0, sc=8) 00:08:11.471 Read completed with error (sct=0, sc=8) 00:08:11.471 Read completed with error (sct=0, sc=8) 00:08:11.471 Write completed with error (sct=0, sc=8) 00:08:11.471 Read completed with error (sct=0, sc=8) 00:08:11.471 Read completed with error (sct=0, sc=8) 00:08:11.471 Write completed with error (sct=0, sc=8) 00:08:11.471 Read completed with error (sct=0, sc=8) 00:08:11.471 Read completed with error (sct=0, sc=8) 00:08:11.471 Read completed with error (sct=0, sc=8) 00:08:11.471 starting I/O failed: -6 00:08:11.471 Read completed with error (sct=0, sc=8) 00:08:11.471 Read completed with error (sct=0, sc=8) 00:08:11.471 Read completed with error (sct=0, sc=8) 00:08:11.471 Write completed with error (sct=0, sc=8) 00:08:11.471 Read completed with error (sct=0, sc=8) 00:08:11.471 Read completed with error (sct=0, sc=8) 00:08:11.471 Read completed with error (sct=0, sc=8) 00:08:11.471 Write completed with error (sct=0, sc=8) 00:08:11.471 Read completed with error (sct=0, sc=8) 00:08:11.471 Read completed with error (sct=0, sc=8) 00:08:11.471 starting I/O failed: -6 00:08:11.471 Read completed with error (sct=0, sc=8) 00:08:11.471 Write completed with error (sct=0, sc=8) 00:08:11.471 Read completed with error (sct=0, sc=8) 00:08:11.471 Read completed with error (sct=0, sc=8) 00:08:11.471 Write completed with error (sct=0, sc=8) 00:08:11.471 Read completed with error (sct=0, sc=8) 00:08:11.471 Write completed with error (sct=0, sc=8) 00:08:11.471 Write completed with error (sct=0, sc=8) 00:08:11.471 Read completed with error (sct=0, sc=8) 00:08:11.471 Read completed with error (sct=0, sc=8) 00:08:11.471 Read completed with error (sct=0, sc=8) 00:08:11.471 Read completed with error (sct=0, sc=8) 00:08:11.471 starting I/O failed: -6 00:08:11.471 Write completed with error (sct=0, sc=8) 
00:08:11.471 Read completed with error (sct=0, sc=8) 00:08:11.471 Read completed with error (sct=0, sc=8) 00:08:11.471 Read completed with error (sct=0, sc=8) 00:08:11.471 Write completed with error (sct=0, sc=8) 00:08:11.471 Read completed with error (sct=0, sc=8) 00:08:11.471 starting I/O failed: -6 00:08:11.471 Read completed with error (sct=0, sc=8) 00:08:11.471 Write completed with error (sct=0, sc=8) 00:08:11.471 Read completed with error (sct=0, sc=8) 00:08:11.471 Read completed with error (sct=0, sc=8) 00:08:11.471 starting I/O failed: -6 00:08:11.471 Write completed with error (sct=0, sc=8) 00:08:11.471 Write completed with error (sct=0, sc=8) 00:08:11.471 Read completed with error (sct=0, sc=8) 00:08:11.471 Read completed with error (sct=0, sc=8) 00:08:11.471 starting I/O failed: -6 00:08:11.471 Read completed with error (sct=0, sc=8) 00:08:11.471 Write completed with error (sct=0, sc=8) 00:08:11.471 Read completed with error (sct=0, sc=8) 00:08:11.471 Read completed with error (sct=0, sc=8) 00:08:11.471 starting I/O failed: -6 00:08:11.471 Read completed with error (sct=0, sc=8) 00:08:11.471 Write completed with error (sct=0, sc=8) 00:08:11.471 Read completed with error (sct=0, sc=8) 00:08:11.471 Write completed with error (sct=0, sc=8) 00:08:11.471 starting I/O failed: -6 00:08:11.471 Read completed with error (sct=0, sc=8) 00:08:11.471 Read completed with error (sct=0, sc=8) 00:08:11.472 Write completed with error (sct=0, sc=8) 00:08:11.472 Write completed with error (sct=0, sc=8) 00:08:11.472 starting I/O failed: -6 00:08:11.472 Read completed with error (sct=0, sc=8) 00:08:11.472 Read completed with error (sct=0, sc=8) 00:08:11.472 Read completed with error (sct=0, sc=8) 00:08:11.472 Write completed with error (sct=0, sc=8) 00:08:11.472 starting I/O failed: -6 00:08:11.472 Read completed with error (sct=0, sc=8) 00:08:11.472 Write completed with error (sct=0, sc=8) 00:08:11.472 Write completed with error (sct=0, sc=8) 00:08:11.472 Read completed with error (sct=0, sc=8) 00:08:11.472 starting I/O failed: -6 00:08:11.472 Read completed with error (sct=0, sc=8) 00:08:11.472 Read completed with error (sct=0, sc=8) 00:08:11.472 Write completed with error (sct=0, sc=8) 00:08:11.472 [2024-12-06 00:45:44.130783] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000020380 is same with the state(6) to be set 00:08:11.472 Write completed with error (sct=0, sc=8) 00:08:11.472 Write completed with error (sct=0, sc=8) 00:08:11.472 Read completed with error (sct=0, sc=8) 00:08:11.472 Read completed with error (sct=0, sc=8) 00:08:11.472 Read completed with error (sct=0, sc=8) 00:08:11.472 Read completed with error (sct=0, sc=8) 00:08:11.472 Write completed with error (sct=0, sc=8) 00:08:11.472 Read completed with error (sct=0, sc=8) 00:08:11.472 Read completed with error (sct=0, sc=8) 00:08:11.472 Read completed with error (sct=0, sc=8) 00:08:11.472 Read completed with error (sct=0, sc=8) 00:08:11.472 Read completed with error (sct=0, sc=8) 00:08:11.472 Write completed with error (sct=0, sc=8) 00:08:11.472 Read completed with error (sct=0, sc=8) 00:08:11.472 Read completed with error (sct=0, sc=8) 00:08:11.472 Write completed with error (sct=0, sc=8) 00:08:11.472 Read completed with error (sct=0, sc=8) 00:08:11.472 Write completed with error (sct=0, sc=8) 00:08:11.472 Read completed with error (sct=0, sc=8) 00:08:11.472 Read completed with error (sct=0, sc=8) 00:08:11.472 Read completed with error (sct=0, sc=8) 00:08:11.472 Write completed with error 
(sct=0, sc=8) 00:08:11.472 Read completed with error (sct=0, sc=8) 00:08:11.472 Read completed with error (sct=0, sc=8) 00:08:11.472 Write completed with error (sct=0, sc=8) 00:08:11.472 Read completed with error (sct=0, sc=8) 00:08:11.472 Write completed with error (sct=0, sc=8) 00:08:11.472 Read completed with error (sct=0, sc=8) 00:08:11.472 Read completed with error (sct=0, sc=8) 00:08:11.472 Read completed with error (sct=0, sc=8) 00:08:11.472 Read completed with error (sct=0, sc=8) 00:08:11.472 Read completed with error (sct=0, sc=8) 00:08:11.472 Read completed with error (sct=0, sc=8) 00:08:11.472 Read completed with error (sct=0, sc=8) 00:08:11.472 Read completed with error (sct=0, sc=8) 00:08:11.472 Read completed with error (sct=0, sc=8) 00:08:11.472 Read completed with error (sct=0, sc=8) 00:08:11.472 Read completed with error (sct=0, sc=8) 00:08:11.472 Read completed with error (sct=0, sc=8) 00:08:11.472 Read completed with error (sct=0, sc=8) 00:08:11.472 Read completed with error (sct=0, sc=8) 00:08:11.472 Read completed with error (sct=0, sc=8) 00:08:11.472 Read completed with error (sct=0, sc=8) 00:08:11.472 Read completed with error (sct=0, sc=8) 00:08:11.472 Read completed with error (sct=0, sc=8) 00:08:11.472 Write completed with error (sct=0, sc=8) 00:08:11.472 Read completed with error (sct=0, sc=8) 00:08:11.472 Write completed with error (sct=0, sc=8) 00:08:11.472 Read completed with error (sct=0, sc=8) 00:08:11.472 Read completed with error (sct=0, sc=8) 00:08:11.472 Write completed with error (sct=0, sc=8) 00:08:11.472 Read completed with error (sct=0, sc=8) 00:08:11.472 Write completed with error (sct=0, sc=8) 00:08:11.472 Write completed with error (sct=0, sc=8) 00:08:11.472 Write completed with error (sct=0, sc=8) 00:08:11.472 Read completed with error (sct=0, sc=8) 00:08:11.472 Read completed with error (sct=0, sc=8) 00:08:12.402 [2024-12-06 00:45:45.089104] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000015c00 is same with the state(6) to be set 00:08:12.659 Read completed with error (sct=0, sc=8) 00:08:12.659 Read completed with error (sct=0, sc=8) 00:08:12.659 Read completed with error (sct=0, sc=8) 00:08:12.659 Read completed with error (sct=0, sc=8) 00:08:12.659 Read completed with error (sct=0, sc=8) 00:08:12.659 Read completed with error (sct=0, sc=8) 00:08:12.659 Read completed with error (sct=0, sc=8) 00:08:12.659 Read completed with error (sct=0, sc=8) 00:08:12.659 Read completed with error (sct=0, sc=8) 00:08:12.659 Write completed with error (sct=0, sc=8) 00:08:12.659 Read completed with error (sct=0, sc=8) 00:08:12.659 Read completed with error (sct=0, sc=8) 00:08:12.659 Write completed with error (sct=0, sc=8) 00:08:12.659 Write completed with error (sct=0, sc=8) 00:08:12.659 Read completed with error (sct=0, sc=8) 00:08:12.659 Read completed with error (sct=0, sc=8) 00:08:12.659 Write completed with error (sct=0, sc=8) 00:08:12.659 Read completed with error (sct=0, sc=8) 00:08:12.659 Read completed with error (sct=0, sc=8) 00:08:12.659 Read completed with error (sct=0, sc=8) 00:08:12.659 Read completed with error (sct=0, sc=8) 00:08:12.659 Write completed with error (sct=0, sc=8) 00:08:12.659 Write completed with error (sct=0, sc=8) 00:08:12.659 Read completed with error (sct=0, sc=8) 00:08:12.659 Read completed with error (sct=0, sc=8) 00:08:12.659 [2024-12-06 00:45:45.132003] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000016880 is same with the state(6) to be 
set 00:08:12.659 Read completed with error (sct=0, sc=8) 00:08:12.659 Write completed with error (sct=0, sc=8) 00:08:12.659 Write completed with error (sct=0, sc=8) 00:08:12.659 Read completed with error (sct=0, sc=8) 00:08:12.659 Write completed with error (sct=0, sc=8) 00:08:12.659 Write completed with error (sct=0, sc=8) 00:08:12.659 Read completed with error (sct=0, sc=8) 00:08:12.659 Read completed with error (sct=0, sc=8) 00:08:12.659 Write completed with error (sct=0, sc=8) 00:08:12.659 Read completed with error (sct=0, sc=8) 00:08:12.659 Read completed with error (sct=0, sc=8) 00:08:12.659 Write completed with error (sct=0, sc=8) 00:08:12.659 Write completed with error (sct=0, sc=8) 00:08:12.659 Read completed with error (sct=0, sc=8) 00:08:12.659 Read completed with error (sct=0, sc=8) 00:08:12.659 Read completed with error (sct=0, sc=8) 00:08:12.659 Write completed with error (sct=0, sc=8) 00:08:12.659 Read completed with error (sct=0, sc=8) 00:08:12.659 Read completed with error (sct=0, sc=8) 00:08:12.659 Read completed with error (sct=0, sc=8) 00:08:12.659 Read completed with error (sct=0, sc=8) 00:08:12.659 Read completed with error (sct=0, sc=8) 00:08:12.659 Write completed with error (sct=0, sc=8) 00:08:12.659 Write completed with error (sct=0, sc=8) 00:08:12.659 Read completed with error (sct=0, sc=8) 00:08:12.659 [2024-12-06 00:45:45.132974] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000020100 is same with the state(6) to be set 00:08:12.659 Read completed with error (sct=0, sc=8) 00:08:12.659 Read completed with error (sct=0, sc=8) 00:08:12.659 Write completed with error (sct=0, sc=8) 00:08:12.659 Read completed with error (sct=0, sc=8) 00:08:12.659 Read completed with error (sct=0, sc=8) 00:08:12.659 Read completed with error (sct=0, sc=8) 00:08:12.659 Read completed with error (sct=0, sc=8) 00:08:12.660 Write completed with error (sct=0, sc=8) 00:08:12.660 Write completed with error (sct=0, sc=8) 00:08:12.660 Read completed with error (sct=0, sc=8) 00:08:12.660 Write completed with error (sct=0, sc=8) 00:08:12.660 Write completed with error (sct=0, sc=8) 00:08:12.660 Read completed with error (sct=0, sc=8) 00:08:12.660 Write completed with error (sct=0, sc=8) 00:08:12.660 Read completed with error (sct=0, sc=8) 00:08:12.660 Read completed with error (sct=0, sc=8) 00:08:12.660 Read completed with error (sct=0, sc=8) 00:08:12.660 Read completed with error (sct=0, sc=8) 00:08:12.660 Read completed with error (sct=0, sc=8) 00:08:12.660 Read completed with error (sct=0, sc=8) 00:08:12.660 Read completed with error (sct=0, sc=8) 00:08:12.660 Read completed with error (sct=0, sc=8) 00:08:12.660 Read completed with error (sct=0, sc=8) 00:08:12.660 Read completed with error (sct=0, sc=8) 00:08:12.660 [2024-12-06 00:45:45.133639] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000020600 is same with the state(6) to be set 00:08:12.660 Write completed with error (sct=0, sc=8) 00:08:12.660 Read completed with error (sct=0, sc=8) 00:08:12.660 Read completed with error (sct=0, sc=8) 00:08:12.660 Read completed with error (sct=0, sc=8) 00:08:12.660 Read completed with error (sct=0, sc=8) 00:08:12.660 Read completed with error (sct=0, sc=8) 00:08:12.660 Read completed with error (sct=0, sc=8) 00:08:12.660 Read completed with error (sct=0, sc=8) 00:08:12.660 Read completed with error (sct=0, sc=8) 00:08:12.660 Read completed with error (sct=0, sc=8) 00:08:12.660 Read completed with error (sct=0, sc=8) 
00:08:12.660 Read completed with error (sct=0, sc=8) 00:08:12.660 Write completed with error (sct=0, sc=8) 00:08:12.660 Read completed with error (sct=0, sc=8) 00:08:12.660 Read completed with error (sct=0, sc=8) 00:08:12.660 Read completed with error (sct=0, sc=8) 00:08:12.660 Read completed with error (sct=0, sc=8) 00:08:12.660 Read completed with error (sct=0, sc=8) 00:08:12.660 Write completed with error (sct=0, sc=8) 00:08:12.660 Read completed with error (sct=0, sc=8) 00:08:12.660 Write completed with error (sct=0, sc=8) 00:08:12.660 Read completed with error (sct=0, sc=8) 00:08:12.660 Write completed with error (sct=0, sc=8) 00:08:12.660 Write completed with error (sct=0, sc=8) 00:08:12.660 Write completed with error (sct=0, sc=8) 00:08:12.660 Write completed with error (sct=0, sc=8) 00:08:12.660 [2024-12-06 00:45:45.134580] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000016380 is same with the state(6) to be set 00:08:12.660 00:45:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:12.660 00:45:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:08:12.660 00:45:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2848986 00:08:12.660 00:45:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:08:12.660 Initializing NVMe Controllers 00:08:12.660 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:08:12.660 Controller IO queue size 128, less than required. 00:08:12.660 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:08:12.660 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:08:12.660 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:08:12.660 Initialization complete. Launching workers. 
00:08:12.660 ======================================================== 00:08:12.660 Latency(us) 00:08:12.660 Device Information : IOPS MiB/s Average min max 00:08:12.660 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 171.42 0.08 894444.48 1253.96 1015405.54 00:08:12.660 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 169.94 0.08 896799.69 693.25 1016147.40 00:08:12.660 ======================================================== 00:08:12.660 Total : 341.36 0.17 895616.96 693.25 1016147.40 00:08:12.660 00:08:12.660 [2024-12-06 00:45:45.140321] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000015c00 (9): Bad file descriptor 00:08:12.660 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:08:13.225 00:45:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:08:13.225 00:45:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2848986 00:08:13.225 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (2848986) - No such process 00:08:13.225 00:45:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 2848986 00:08:13.225 00:45:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0 00:08:13.225 00:45:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 2848986 00:08:13.225 00:45:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait 00:08:13.225 00:45:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:13.225 00:45:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait 00:08:13.225 00:45:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:13.225 00:45:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 2848986 00:08:13.225 00:45:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1 00:08:13.225 00:45:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:13.225 00:45:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:13.225 00:45:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:13.225 00:45:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:13.225 00:45:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.225 00:45:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:13.225 00:45:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.225 00:45:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:13.225 00:45:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.225 00:45:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:13.225 [2024-12-06 00:45:45.657824] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:13.225 00:45:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.225 00:45:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:13.225 00:45:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.225 00:45:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:13.225 00:45:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.225 00:45:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=2849497 00:08:13.225 00:45:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:08:13.225 00:45:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:08:13.225 00:45:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2849497 00:08:13.225 00:45:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:13.225 [2024-12-06 00:45:45.778248] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
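At this point delete_subsystem.sh has recreated the subsystem over the RPC socket, re-added the 10.0.0.2:4420 listener and the Delay0 namespace, and launched spdk_nvme_perf in the background as pid 2849497; the repeated kill -0 / sleep 0.5 lines that follow are the script polling that pid until perf exits. A minimal bash sketch of the pattern, reconstructed from the visible xtrace (the loop structure, the error handling, and the $rootdir variable are assumptions; rpc_cmd is the harness wrapper shown in the trace):

    # Recreate the subsystem, listener, and namespace that were just deleted.
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0

    # Run perf against the subsystem in the background (flags copied from the trace).
    "$rootdir/build/bin/spdk_nvme_perf" -c 0xC \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 &
    perf_pid=$!

    # Poll the pid roughly every half second; bail out after ~20 iterations.
    delay=0
    while kill -0 "$perf_pid" 2>/dev/null; do
        if (( delay++ > 20 )); then
            echo "spdk_nvme_perf (pid $perf_pid) did not exit in time" >&2
            exit 1
        fi
        sleep 0.5
    done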
00:08:13.484 00:45:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:13.484 00:45:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2849497 00:08:13.484 00:45:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:14.048 00:45:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:14.048 00:45:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2849497 00:08:14.048 00:45:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:14.641 00:45:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:14.641 00:45:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2849497 00:08:14.641 00:45:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:15.209 00:45:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:15.209 00:45:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2849497 00:08:15.209 00:45:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:15.786 00:45:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:15.786 00:45:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2849497 00:08:15.786 00:45:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:16.044 00:45:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:16.044 00:45:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2849497 00:08:16.044 00:45:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:16.302 Initializing NVMe Controllers 00:08:16.302 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:08:16.302 Controller IO queue size 128, less than required. 00:08:16.302 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:08:16.302 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:08:16.302 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:08:16.302 Initialization complete. Launching workers. 
00:08:16.302 ======================================================== 00:08:16.302 Latency(us) 00:08:16.302 Device Information : IOPS MiB/s Average min max 00:08:16.302 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1005851.63 1000264.10 1017293.41 00:08:16.302 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1007135.65 1000246.82 1047286.79 00:08:16.302 ======================================================== 00:08:16.302 Total : 256.00 0.12 1006493.64 1000246.82 1047286.79 00:08:16.302 00:08:16.560 00:45:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:16.560 00:45:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2849497 00:08:16.560 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (2849497) - No such process 00:08:16.560 00:45:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 2849497 00:08:16.560 00:45:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:08:16.560 00:45:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:08:16.560 00:45:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:16.560 00:45:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:08:16.560 00:45:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:16.560 00:45:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:08:16.560 00:45:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:16.560 00:45:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:16.560 rmmod nvme_tcp 00:08:16.560 rmmod nvme_fabrics 00:08:16.560 rmmod nvme_keyring 00:08:16.560 00:45:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:16.560 00:45:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:08:16.560 00:45:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:08:16.560 00:45:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 2848833 ']' 00:08:16.560 00:45:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 2848833 00:08:16.560 00:45:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 2848833 ']' 00:08:16.560 00:45:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 2848833 00:08:16.560 00:45:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname 00:08:16.560 00:45:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:16.560 00:45:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2848833 00:08:16.818 00:45:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:16.818 00:45:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' 
reactor_0 = sudo ']' 00:08:16.818 00:45:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2848833' 00:08:16.818 killing process with pid 2848833 00:08:16.818 00:45:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 2848833 00:08:16.818 00:45:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 2848833 00:08:18.194 00:45:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:18.194 00:45:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:18.194 00:45:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:18.194 00:45:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:08:18.194 00:45:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:08:18.194 00:45:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:18.194 00:45:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:08:18.194 00:45:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:18.194 00:45:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:18.194 00:45:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:18.194 00:45:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:18.194 00:45:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:20.096 00:45:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:20.096 00:08:20.096 real 0m14.028s 00:08:20.096 user 0m30.775s 00:08:20.096 sys 0m3.182s 00:08:20.096 00:45:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:20.096 00:45:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:20.096 ************************************ 00:08:20.096 END TEST nvmf_delete_subsystem 00:08:20.096 ************************************ 00:08:20.096 00:45:52 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:08:20.096 00:45:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:20.096 00:45:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:20.096 00:45:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:20.096 ************************************ 00:08:20.096 START TEST nvmf_host_management 00:08:20.096 ************************************ 00:08:20.096 00:45:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:08:20.096 * Looking for test storage... 
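Just before the nvmf_host_management suite starts below, the previous test's nvmftestfini tore the environment back down: the kernel NVMe/TCP initiator modules were unloaded, the nvmf_tgt process started for that test (pid 2848833) was killed, the iptables rules tagged with an SPDK_NVMF comment were stripped, and the target-side namespace was removed. A rough sketch of that cleanup order as it appears in the trace (killprocess and remove_spdk_ns are the harness helpers shown above; their internals are not reproduced here):

    # Unload the kernel initiator modules pulled in for the test;
    # nvme_fabrics and nvme_keyring are removed along with nvme_tcp.
    modprobe -v -r nvme-tcp
    modprobe -v -r nvme-fabrics

    # Stop the nvmf_tgt reactor process that served the subsystems.
    [[ -n "$nvmfpid" ]] && killprocess "$nvmfpid"

    # Keep only firewall rules that were not tagged by the SPDK tests.
    iptables-save | grep -v SPDK_NVMF | iptables-restore

    # Remove the target-side network namespace and flush the initiator interface.
    remove_spdk_ns
    ip -4 addr flush cvl_0_1

The nvmf_host_management run that follows rebuilds the same topology from scratch.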
00:08:20.096 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:20.096 00:45:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:20.096 00:45:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # lcov --version 00:08:20.096 00:45:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:20.096 00:45:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:20.096 00:45:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:20.096 00:45:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:20.096 00:45:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:20.096 00:45:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:08:20.096 00:45:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:08:20.096 00:45:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:08:20.096 00:45:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:08:20.096 00:45:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:08:20.096 00:45:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:08:20.096 00:45:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:08:20.096 00:45:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:20.096 00:45:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:08:20.096 00:45:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:08:20.096 00:45:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:20.097 00:45:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:20.097 00:45:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:08:20.097 00:45:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:08:20.097 00:45:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:20.097 00:45:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:08:20.097 00:45:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:08:20.097 00:45:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:08:20.097 00:45:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:08:20.097 00:45:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:20.097 00:45:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:08:20.097 00:45:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:08:20.097 00:45:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:20.097 00:45:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:20.097 00:45:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:08:20.097 00:45:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:20.097 00:45:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:20.097 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:20.097 --rc genhtml_branch_coverage=1 00:08:20.097 --rc genhtml_function_coverage=1 00:08:20.097 --rc genhtml_legend=1 00:08:20.097 --rc geninfo_all_blocks=1 00:08:20.097 --rc geninfo_unexecuted_blocks=1 00:08:20.097 00:08:20.097 ' 00:08:20.097 00:45:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:20.097 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:20.097 --rc genhtml_branch_coverage=1 00:08:20.097 --rc genhtml_function_coverage=1 00:08:20.097 --rc genhtml_legend=1 00:08:20.097 --rc geninfo_all_blocks=1 00:08:20.097 --rc geninfo_unexecuted_blocks=1 00:08:20.097 00:08:20.097 ' 00:08:20.097 00:45:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:20.097 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:20.097 --rc genhtml_branch_coverage=1 00:08:20.097 --rc genhtml_function_coverage=1 00:08:20.097 --rc genhtml_legend=1 00:08:20.097 --rc geninfo_all_blocks=1 00:08:20.097 --rc geninfo_unexecuted_blocks=1 00:08:20.097 00:08:20.097 ' 00:08:20.097 00:45:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:20.097 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:20.097 --rc genhtml_branch_coverage=1 00:08:20.097 --rc genhtml_function_coverage=1 00:08:20.097 --rc genhtml_legend=1 00:08:20.097 --rc geninfo_all_blocks=1 00:08:20.097 --rc geninfo_unexecuted_blocks=1 00:08:20.097 00:08:20.097 ' 00:08:20.097 00:45:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:20.097 00:45:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:08:20.097 00:45:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:20.097 00:45:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:20.097 00:45:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:20.097 00:45:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:20.097 00:45:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:20.097 00:45:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:20.097 00:45:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:20.097 00:45:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:20.097 00:45:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:20.097 00:45:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:20.097 00:45:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:20.097 00:45:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:20.097 00:45:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:20.097 00:45:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:20.097 00:45:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:20.097 00:45:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:20.097 00:45:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:20.097 00:45:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:08:20.097 00:45:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:20.097 00:45:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:20.097 00:45:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:20.097 00:45:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:20.097 00:45:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:20.097 00:45:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:20.097 00:45:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:08:20.097 00:45:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:20.097 00:45:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:08:20.097 00:45:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:20.097 00:45:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:20.097 00:45:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:20.097 00:45:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:20.097 00:45:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:20.097 00:45:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:08:20.097 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:20.097 00:45:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:20.097 00:45:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:20.097 00:45:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:20.097 00:45:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:20.097 00:45:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:20.097 00:45:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:08:20.097 00:45:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:20.097 00:45:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:20.097 00:45:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:20.097 00:45:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:20.097 00:45:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:20.097 00:45:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:20.097 00:45:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:20.097 00:45:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:20.097 00:45:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:20.098 00:45:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:20.098 00:45:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:08:20.098 00:45:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:22.628 00:45:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:22.628 00:45:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:08:22.628 00:45:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:22.628 00:45:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:22.628 00:45:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:22.628 00:45:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:22.628 00:45:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:22.628 00:45:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:08:22.628 00:45:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:22.628 00:45:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:08:22.628 00:45:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # local 
-ga e810 00:08:22.628 00:45:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:08:22.628 00:45:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:08:22.628 00:45:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:08:22.628 00:45:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:08:22.628 00:45:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:22.628 00:45:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:22.628 00:45:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:22.628 00:45:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:22.628 00:45:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:22.628 00:45:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:22.628 00:45:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:22.628 00:45:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:22.628 00:45:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:22.628 00:45:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:22.628 00:45:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:22.628 00:45:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:22.628 00:45:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:22.628 00:45:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:22.628 00:45:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:22.628 00:45:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:22.628 00:45:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:22.628 00:45:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:22.628 00:45:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:22.628 00:45:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:22.628 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:22.628 00:45:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:22.628 00:45:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:22.628 00:45:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:22.628 00:45:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:22.628 00:45:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:22.628 00:45:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:22.628 00:45:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:22.628 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:22.628 00:45:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:22.628 00:45:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:22.628 00:45:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:22.628 00:45:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:22.628 00:45:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:22.628 00:45:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:22.628 00:45:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:22.628 00:45:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:22.628 00:45:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:22.628 00:45:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:22.628 00:45:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:22.628 00:45:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:22.628 00:45:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:22.628 00:45:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:22.628 00:45:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:22.628 00:45:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:22.628 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:22.628 00:45:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:22.628 00:45:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:22.628 00:45:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:22.628 00:45:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:22.628 00:45:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:22.628 00:45:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:22.628 00:45:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:22.628 00:45:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:22.628 00:45:54 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:22.628 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:22.628 00:45:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:22.628 00:45:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:22.628 00:45:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:08:22.628 00:45:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:22.628 00:45:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:22.628 00:45:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:22.628 00:45:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:22.628 00:45:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:22.628 00:45:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:22.628 00:45:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:22.628 00:45:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:22.628 00:45:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:22.628 00:45:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:22.628 00:45:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:22.628 00:45:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:22.628 00:45:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:22.628 00:45:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:22.628 00:45:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:22.628 00:45:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:22.628 00:45:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:22.628 00:45:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:22.628 00:45:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:22.628 00:45:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:22.628 00:45:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:22.628 00:45:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:22.628 00:45:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:22.628 00:45:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:22.628 00:45:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:22.628 00:45:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:22.628 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:22.628 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.276 ms 00:08:22.628 00:08:22.628 --- 10.0.0.2 ping statistics --- 00:08:22.628 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:22.628 rtt min/avg/max/mdev = 0.276/0.276/0.276/0.000 ms 00:08:22.628 00:45:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:22.628 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:22.628 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.131 ms 00:08:22.628 00:08:22.628 --- 10.0.0.1 ping statistics --- 00:08:22.628 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:22.628 rtt min/avg/max/mdev = 0.131/0.131/0.131/0.000 ms 00:08:22.628 00:45:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:22.628 00:45:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:08:22.628 00:45:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:22.628 00:45:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:22.628 00:45:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:22.628 00:45:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:22.628 00:45:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:22.628 00:45:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:22.628 00:45:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:22.628 00:45:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:08:22.628 00:45:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:08:22.628 00:45:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:08:22.628 00:45:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:22.628 00:45:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:22.628 00:45:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:22.628 00:45:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=2851983 00:08:22.628 00:45:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:08:22.628 00:45:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 2851983 00:08:22.628 00:45:54 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 2851983 ']' 00:08:22.628 00:45:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:22.628 00:45:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:22.628 00:45:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:22.628 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:22.628 00:45:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:22.628 00:45:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:22.628 [2024-12-06 00:45:55.067895] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 24.03.0 initialization... 00:08:22.629 [2024-12-06 00:45:55.068053] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:22.629 [2024-12-06 00:45:55.214354] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:22.886 [2024-12-06 00:45:55.352811] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:22.886 [2024-12-06 00:45:55.352903] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:22.886 [2024-12-06 00:45:55.352931] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:22.886 [2024-12-06 00:45:55.352955] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:22.886 [2024-12-06 00:45:55.352974] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
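For reference, the nvmf_tcp_init sequence traced above reduces to the following standalone commands (a minimal sketch reusing the interface, namespace, and address values from this log, and assuming the two E810 ports are already exposed as cvl_0_0 and cvl_0_1):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                              # target-side port lives in the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                                    # initiator IP, default namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0      # target IP, inside the namespace
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2                                                     # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                       # target -> initiator

Because nvmf_tgt is then launched through "ip netns exec cvl_0_0_ns_spdk", every later target-side command in this log carries that prefix, while bdevperf (the initiator) runs in the default namespace.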
00:08:22.886 [2024-12-06 00:45:55.355857] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:22.886 [2024-12-06 00:45:55.355965] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:22.886 [2024-12-06 00:45:55.356041] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:22.886 [2024-12-06 00:45:55.356046] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:08:23.452 00:45:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:23.452 00:45:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:08:23.452 00:45:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:23.452 00:45:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:23.452 00:45:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:23.452 00:45:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:23.452 00:45:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:23.452 00:45:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.452 00:45:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:23.452 [2024-12-06 00:45:56.049923] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:23.452 00:45:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:23.452 00:45:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:08:23.452 00:45:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:23.452 00:45:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:23.452 00:45:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:08:23.452 00:45:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:08:23.452 00:45:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:08:23.452 00:45:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.452 00:45:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:23.710 Malloc0 00:08:23.710 [2024-12-06 00:45:56.186939] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:23.710 00:45:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:23.710 00:45:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:08:23.710 00:45:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:23.710 00:45:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:23.710 00:45:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
target/host_management.sh@73 -- # perfpid=2852162 00:08:23.710 00:45:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 2852162 /var/tmp/bdevperf.sock 00:08:23.710 00:45:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 2852162 ']' 00:08:23.710 00:45:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:23.710 00:45:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:08:23.710 00:45:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:08:23.710 00:45:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:23.710 00:45:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:23.710 00:45:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:08:23.710 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:23.710 00:45:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:23.710 00:45:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:08:23.710 00:45:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:23.710 00:45:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:23.710 00:45:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:23.710 { 00:08:23.710 "params": { 00:08:23.710 "name": "Nvme$subsystem", 00:08:23.710 "trtype": "$TEST_TRANSPORT", 00:08:23.710 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:23.710 "adrfam": "ipv4", 00:08:23.710 "trsvcid": "$NVMF_PORT", 00:08:23.710 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:23.710 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:23.710 "hdgst": ${hdgst:-false}, 00:08:23.710 "ddgst": ${ddgst:-false} 00:08:23.710 }, 00:08:23.710 "method": "bdev_nvme_attach_controller" 00:08:23.710 } 00:08:23.710 EOF 00:08:23.710 )") 00:08:23.710 00:45:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:08:23.710 00:45:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:08:23.710 00:45:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:08:23.710 00:45:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:23.710 "params": { 00:08:23.710 "name": "Nvme0", 00:08:23.710 "trtype": "tcp", 00:08:23.710 "traddr": "10.0.0.2", 00:08:23.710 "adrfam": "ipv4", 00:08:23.710 "trsvcid": "4420", 00:08:23.710 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:23.710 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:08:23.710 "hdgst": false, 00:08:23.710 "ddgst": false 00:08:23.710 }, 00:08:23.710 "method": "bdev_nvme_attach_controller" 00:08:23.710 }' 00:08:23.710 [2024-12-06 00:45:56.303558] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 24.03.0 initialization... 
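The gen_nvmf_target_json helper above renders one bdev_nvme_attach_controller entry per requested subsystem and hands it to bdevperf over /dev/fd/63. Copied from the printf output in this log, the entry for subsystem 0 is:

    {
      "params": {
        "name": "Nvme0",
        "trtype": "tcp",
        "traddr": "10.0.0.2",
        "adrfam": "ipv4",
        "trsvcid": "4420",
        "subnqn": "nqn.2016-06.io.spdk:cnode0",
        "hostnqn": "nqn.2016-06.io.spdk:host0",
        "hdgst": false,
        "ddgst": false
      },
      "method": "bdev_nvme_attach_controller"
    }

jq then folds these entries into the full bdev-subsystem config that bdevperf parses; the surrounding wrapper is not shown in this excerpt. The run itself uses -q 64 -o 65536 -w verify -t 10, i.e. queue depth 64, 64 KiB I/Os, a verify workload, for 10 seconds.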
00:08:23.710 [2024-12-06 00:45:56.303680] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2852162 ] 00:08:23.968 [2024-12-06 00:45:56.440853] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:23.968 [2024-12-06 00:45:56.569377] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:24.534 Running I/O for 10 seconds... 00:08:24.793 00:45:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:24.793 00:45:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:08:24.793 00:45:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:08:24.793 00:45:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:24.793 00:45:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:24.793 00:45:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:24.793 00:45:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:24.793 00:45:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:08:24.793 00:45:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:08:24.793 00:45:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:08:24.793 00:45:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:08:24.793 00:45:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:08:24.794 00:45:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:08:24.794 00:45:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:08:24.794 00:45:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:08:24.794 00:45:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:08:24.794 00:45:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:24.794 00:45:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:24.794 00:45:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:24.794 00:45:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=259 00:08:24.794 00:45:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 259 -ge 100 ']' 00:08:24.794 00:45:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:08:24.794 00:45:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:08:24.794 00:45:57 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:08:24.794 00:45:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:08:24.794 00:45:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:24.794 00:45:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:24.794 [2024-12-06 00:45:57.307008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:43008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:24.794 [2024-12-06 00:45:57.307105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:24.794 [2024-12-06 00:45:57.307152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:43136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:24.794 [2024-12-06 00:45:57.307188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:24.794 [2024-12-06 00:45:57.307227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:43264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:24.794 [2024-12-06 00:45:57.307251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:24.794 [2024-12-06 00:45:57.307276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:43392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:24.794 [2024-12-06 00:45:57.307299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:24.794 [2024-12-06 00:45:57.307324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:43520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:24.794 [2024-12-06 00:45:57.307346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:24.794 [2024-12-06 00:45:57.307371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:43648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:24.794 [2024-12-06 00:45:57.307393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:24.794 [2024-12-06 00:45:57.307417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:43776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:24.794 [2024-12-06 00:45:57.307456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:24.794 [2024-12-06 00:45:57.307483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:43904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:24.794 [2024-12-06 00:45:57.307505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:24.794 [2024-12-06 00:45:57.307530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:44032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:24.794 
[2024-12-06 00:45:57.307552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:24.794 [2024-12-06 00:45:57.307576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:44160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:24.794 [2024-12-06 00:45:57.307599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:24.794 [2024-12-06 00:45:57.307623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:44288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:24.794 [2024-12-06 00:45:57.307644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:24.794 [2024-12-06 00:45:57.307669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:44416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:24.794 [2024-12-06 00:45:57.307690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:24.794 [2024-12-06 00:45:57.307715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:44544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:24.794 [2024-12-06 00:45:57.307737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:24.794 [2024-12-06 00:45:57.307761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:44672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:24.794 [2024-12-06 00:45:57.307783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:24.794 [2024-12-06 00:45:57.307808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:44800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:24.794 [2024-12-06 00:45:57.307844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:24.794 [2024-12-06 00:45:57.307883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:44928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:24.794 [2024-12-06 00:45:57.307905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:24.794 [2024-12-06 00:45:57.307930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:45056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:24.794 [2024-12-06 00:45:57.307952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:24.794 [2024-12-06 00:45:57.307977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:45184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:24.794 [2024-12-06 00:45:57.307999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:24.794 [2024-12-06 00:45:57.308024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:45312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:24.794 [2024-12-06 
00:45:57.308045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:24.794 [2024-12-06 00:45:57.308069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:45440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:24.794 [2024-12-06 00:45:57.308101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:24.794 [2024-12-06 00:45:57.308126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:45568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:24.794 [2024-12-06 00:45:57.308147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:24.794 [2024-12-06 00:45:57.308181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:45696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:24.794 [2024-12-06 00:45:57.308202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:24.794 [2024-12-06 00:45:57.308227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:45824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:24.794 [2024-12-06 00:45:57.308249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:24.794 [2024-12-06 00:45:57.308274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:45952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:24.794 [2024-12-06 00:45:57.308296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:24.794 [2024-12-06 00:45:57.308321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:46080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:24.794 [2024-12-06 00:45:57.308343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:24.794 [2024-12-06 00:45:57.308368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:46208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:24.794 [2024-12-06 00:45:57.308389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:24.794 [2024-12-06 00:45:57.308414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:46336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:24.794 [2024-12-06 00:45:57.308435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:24.794 [2024-12-06 00:45:57.308464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:46464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:24.794 [2024-12-06 00:45:57.308486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:24.794 [2024-12-06 00:45:57.308510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:46592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:24.794 [2024-12-06 
00:45:57.308532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:24.794 [2024-12-06 00:45:57.308556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:46720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:24.794 [2024-12-06 00:45:57.308578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:24.794 [2024-12-06 00:45:57.308602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:46848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:24.794 [2024-12-06 00:45:57.308625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:24.794 [2024-12-06 00:45:57.308649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:46976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:24.794 [2024-12-06 00:45:57.308671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:24.794 [2024-12-06 00:45:57.308696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:47104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:24.794 [2024-12-06 00:45:57.308717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:24.795 [2024-12-06 00:45:57.308742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:47232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:24.795 [2024-12-06 00:45:57.308764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:24.795 [2024-12-06 00:45:57.308788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:47360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:24.795 [2024-12-06 00:45:57.308810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:24.795 [2024-12-06 00:45:57.308842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:47488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:24.795 [2024-12-06 00:45:57.308870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:24.795 [2024-12-06 00:45:57.308894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:47616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:24.795 [2024-12-06 00:45:57.308915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:24.795 [2024-12-06 00:45:57.308939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:47744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:24.795 [2024-12-06 00:45:57.308961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:24.795 [2024-12-06 00:45:57.308985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:47872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:24.795 [2024-12-06 
00:45:57.309008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:24.795 [2024-12-06 00:45:57.309033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:48000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:24.795 [2024-12-06 00:45:57.309058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:24.795 [2024-12-06 00:45:57.309084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:48128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:24.795 [2024-12-06 00:45:57.309113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:24.795 [2024-12-06 00:45:57.309138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:48256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:24.795 [2024-12-06 00:45:57.309160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:24.795 [2024-12-06 00:45:57.309193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:48384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:24.795 [2024-12-06 00:45:57.309215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:24.795 [2024-12-06 00:45:57.309239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:48512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:24.795 [2024-12-06 00:45:57.309261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:24.795 [2024-12-06 00:45:57.309285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:48640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:24.795 [2024-12-06 00:45:57.309306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:24.795 [2024-12-06 00:45:57.309331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:48768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:24.795 [2024-12-06 00:45:57.309369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:24.795 [2024-12-06 00:45:57.309394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:48896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:24.795 [2024-12-06 00:45:57.309415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:24.795 [2024-12-06 00:45:57.309439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:49024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:24.795 [2024-12-06 00:45:57.309459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:24.795 [2024-12-06 00:45:57.309482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:40960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:24.795 [2024-12-06 00:45:57.309504] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:24.795 [2024-12-06 00:45:57.309528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:41088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:24.795 [2024-12-06 00:45:57.309549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:24.795 [2024-12-06 00:45:57.309572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:41216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:24.795 [2024-12-06 00:45:57.309596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:24.795 [2024-12-06 00:45:57.309620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:41344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:24.795 [2024-12-06 00:45:57.309642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:24.795 [2024-12-06 00:45:57.309670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:41472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:24.795 [2024-12-06 00:45:57.309693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:24.795 [2024-12-06 00:45:57.309717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:41600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:24.795 [2024-12-06 00:45:57.309739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:24.795 [2024-12-06 00:45:57.309763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:41728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:24.795 [2024-12-06 00:45:57.309791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:24.795 [2024-12-06 00:45:57.309841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:41856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:24.795 [2024-12-06 00:45:57.309873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:24.795 [2024-12-06 00:45:57.309898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:41984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:24.795 [2024-12-06 00:45:57.309921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:24.795 [2024-12-06 00:45:57.309945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:42112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:24.795 [2024-12-06 00:45:57.309967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:24.795 [2024-12-06 00:45:57.309992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:42240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:24.795 [2024-12-06 00:45:57.310014] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:24.795 [2024-12-06 00:45:57.310040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:42368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:24.795 [2024-12-06 00:45:57.310062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:24.795 [2024-12-06 00:45:57.310087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:42496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:24.795 [2024-12-06 00:45:57.310117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:24.795 [2024-12-06 00:45:57.310157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:42624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:24.795 [2024-12-06 00:45:57.310186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:24.795 [2024-12-06 00:45:57.310210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:42752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:24.795 [2024-12-06 00:45:57.310231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:24.795 [2024-12-06 00:45:57.310255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:42880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:24.795 [2024-12-06 00:45:57.310277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:24.795 00:45:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:24.795 00:45:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:08:24.795 00:45:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:24.795 00:45:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:24.795 [2024-12-06 00:45:57.311828] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:08:24.795 task offset: 43008 on job bdev=Nvme0n1 fails 00:08:24.795 00:08:24.795 Latency(us) 00:08:24.795 [2024-12-05T23:45:57.504Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:24.795 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:08:24.795 Job: Nvme0n1 ended in about 0.26 seconds with error 00:08:24.795 Verification LBA range: start 0x0 length 0x400 00:08:24.795 Nvme0n1 : 0.26 1229.27 76.83 245.85 0.00 41497.09 4247.70 40583.77 00:08:24.795 [2024-12-05T23:45:57.504Z] =================================================================================================================== 00:08:24.795 [2024-12-05T23:45:57.504Z] Total : 1229.27 76.83 245.85 0.00 41497.09 4247.70 40583.77 00:08:24.795 [2024-12-06 00:45:57.316929] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:24.795 [2024-12-06 00:45:57.316983] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush 
tqpair=0x6150001f2500 (9): Bad file descriptor 00:08:24.795 00:45:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:24.795 00:45:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:08:24.795 [2024-12-06 00:45:57.325043] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 00:08:25.731 00:45:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 2852162 00:08:25.731 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (2852162) - No such process 00:08:25.731 00:45:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:08:25.731 00:45:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:08:25.731 00:45:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:08:25.731 00:45:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:08:25.731 00:45:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:08:25.731 00:45:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:08:25.731 00:45:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:25.731 00:45:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:25.731 { 00:08:25.731 "params": { 00:08:25.731 "name": "Nvme$subsystem", 00:08:25.731 "trtype": "$TEST_TRANSPORT", 00:08:25.731 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:25.731 "adrfam": "ipv4", 00:08:25.731 "trsvcid": "$NVMF_PORT", 00:08:25.731 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:25.731 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:25.731 "hdgst": ${hdgst:-false}, 00:08:25.731 "ddgst": ${ddgst:-false} 00:08:25.731 }, 00:08:25.731 "method": "bdev_nvme_attach_controller" 00:08:25.731 } 00:08:25.731 EOF 00:08:25.731 )") 00:08:25.731 00:45:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:08:25.731 00:45:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:08:25.731 00:45:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:08:25.731 00:45:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:25.731 "params": { 00:08:25.731 "name": "Nvme0", 00:08:25.731 "trtype": "tcp", 00:08:25.731 "traddr": "10.0.0.2", 00:08:25.731 "adrfam": "ipv4", 00:08:25.731 "trsvcid": "4420", 00:08:25.731 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:25.731 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:08:25.731 "hdgst": false, 00:08:25.731 "ddgst": false 00:08:25.731 }, 00:08:25.731 "method": "bdev_nvme_attach_controller" 00:08:25.731 }' 00:08:25.731 [2024-12-06 00:45:58.406728] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 24.03.0 initialization... 
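Stripped of the xtrace noise, the host-management exercise traced above is roughly: start verify I/O against Nvme0n1, remove the host from the subsystem mid-run (which aborts the in-flight commands with the SQ DELETION completions dumped above and fails the bdev job), re-add the host so the controller reset can complete, then run a short second bdevperf pass to confirm the path recovers. Reduced to the bare RPCs seen in this log (a sketch; the harness issues them through rpc_cmd against /var/tmp/spdk.sock):

    rpc.py nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0   # in-flight I/O aborted
    rpc.py nvmf_subsystem_add_host    nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0   # reconnect allowed; reset succeeds
    # a 1-second bdevperf run (-t 1) then verifies that I/O completes again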
00:08:25.731 [2024-12-06 00:45:58.406867] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2852439 ] 00:08:25.990 [2024-12-06 00:45:58.544885] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:25.990 [2024-12-06 00:45:58.674491] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:26.567 Running I/O for 1 seconds... 00:08:27.941 1344.00 IOPS, 84.00 MiB/s 00:08:27.941 Latency(us) 00:08:27.941 [2024-12-05T23:46:00.650Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:27.941 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:08:27.941 Verification LBA range: start 0x0 length 0x400 00:08:27.941 Nvme0n1 : 1.02 1383.98 86.50 0.00 0.00 45445.48 6990.51 40389.59 00:08:27.941 [2024-12-05T23:46:00.650Z] =================================================================================================================== 00:08:27.941 [2024-12-05T23:46:00.650Z] Total : 1383.98 86.50 0.00 0.00 45445.48 6990.51 40389.59 00:08:28.507 00:46:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:08:28.507 00:46:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:08:28.507 00:46:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:08:28.507 00:46:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:08:28.507 00:46:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:08:28.507 00:46:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:28.507 00:46:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:08:28.507 00:46:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:28.507 00:46:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:08:28.507 00:46:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:28.507 00:46:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:28.507 rmmod nvme_tcp 00:08:28.507 rmmod nvme_fabrics 00:08:28.507 rmmod nvme_keyring 00:08:28.507 00:46:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:28.507 00:46:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:08:28.507 00:46:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:08:28.507 00:46:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 2851983 ']' 00:08:28.507 00:46:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 2851983 00:08:28.507 00:46:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 2851983 ']' 00:08:28.507 00:46:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 2851983 00:08:28.507 00:46:01 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:08:28.508 00:46:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:28.508 00:46:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2851983 00:08:28.765 00:46:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:08:28.765 00:46:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:08:28.766 00:46:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2851983' 00:08:28.766 killing process with pid 2851983 00:08:28.766 00:46:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 2851983 00:08:28.766 00:46:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 2851983 00:08:29.701 [2024-12-06 00:46:02.401879] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:08:29.978 00:46:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:29.978 00:46:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:29.978 00:46:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:29.978 00:46:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:08:29.978 00:46:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:08:29.978 00:46:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:29.978 00:46:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:08:29.978 00:46:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:29.978 00:46:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:29.978 00:46:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:29.978 00:46:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:29.978 00:46:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:31.881 00:46:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:31.881 00:46:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:08:31.881 00:08:31.881 real 0m11.967s 00:08:31.881 user 0m32.387s 00:08:31.881 sys 0m3.202s 00:08:31.881 00:46:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:31.881 00:46:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:31.881 ************************************ 00:08:31.881 END TEST nvmf_host_management 00:08:31.881 ************************************ 00:08:31.881 00:46:04 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 
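The nvmf_host_management teardown traced just above, before the nvmf_lvol test starts, mirrors the setup: unload the host-side NVMe modules, kill the target, strip only the SPDK-tagged firewall rules, and undo the namespace networking. A minimal sketch of the same steps, using values from this log (the body of _remove_spdk_ns is not shown here, so the final namespace deletion is an assumption):

    modprobe -r nvme-tcp nvme-fabrics                       # the rmmod nvme_tcp/nvme_fabrics/nvme_keyring lines above
    kill 2851983                                            # the nvmf_tgt started earlier; killprocess also waits for it
    iptables-save | grep -v SPDK_NVMF | iptables-restore    # drop only rules carrying the SPDK_NVMF comment
    ip -4 addr flush cvl_0_1
    ip netns delete cvl_0_0_ns_spdk                         # assumed equivalent of _remove_spdk_ns

Since every rule the harness adds is tagged with an SPDK_NVMF comment (see the ipts/iptables call earlier in this log), the iptables-save | grep -v | iptables-restore pipeline removes exactly those rules and leaves the rest of the firewall intact.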
00:08:31.881 00:46:04 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:31.881 00:46:04 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:31.881 00:46:04 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:32.139 ************************************ 00:08:32.139 START TEST nvmf_lvol 00:08:32.139 ************************************ 00:08:32.139 00:46:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:08:32.139 * Looking for test storage... 00:08:32.139 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:32.139 00:46:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:32.139 00:46:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # lcov --version 00:08:32.139 00:46:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:32.139 00:46:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:32.139 00:46:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:32.139 00:46:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:32.139 00:46:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:32.139 00:46:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:08:32.139 00:46:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:08:32.139 00:46:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:08:32.139 00:46:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:08:32.139 00:46:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:08:32.139 00:46:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:08:32.139 00:46:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:08:32.139 00:46:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:32.139 00:46:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:08:32.139 00:46:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:08:32.139 00:46:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:32.139 00:46:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:32.139 00:46:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:08:32.139 00:46:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:08:32.139 00:46:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:32.139 00:46:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:08:32.139 00:46:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:08:32.139 00:46:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:08:32.139 00:46:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:08:32.139 00:46:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:32.139 00:46:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:08:32.139 00:46:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:08:32.139 00:46:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:32.139 00:46:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:32.139 00:46:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:08:32.139 00:46:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:32.139 00:46:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:32.139 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:32.139 --rc genhtml_branch_coverage=1 00:08:32.139 --rc genhtml_function_coverage=1 00:08:32.139 --rc genhtml_legend=1 00:08:32.139 --rc geninfo_all_blocks=1 00:08:32.139 --rc geninfo_unexecuted_blocks=1 00:08:32.139 00:08:32.139 ' 00:08:32.139 00:46:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:32.139 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:32.139 --rc genhtml_branch_coverage=1 00:08:32.139 --rc genhtml_function_coverage=1 00:08:32.139 --rc genhtml_legend=1 00:08:32.139 --rc geninfo_all_blocks=1 00:08:32.139 --rc geninfo_unexecuted_blocks=1 00:08:32.139 00:08:32.139 ' 00:08:32.139 00:46:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:32.139 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:32.139 --rc genhtml_branch_coverage=1 00:08:32.139 --rc genhtml_function_coverage=1 00:08:32.139 --rc genhtml_legend=1 00:08:32.139 --rc geninfo_all_blocks=1 00:08:32.139 --rc geninfo_unexecuted_blocks=1 00:08:32.139 00:08:32.139 ' 00:08:32.139 00:46:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:32.139 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:32.139 --rc genhtml_branch_coverage=1 00:08:32.139 --rc genhtml_function_coverage=1 00:08:32.139 --rc genhtml_legend=1 00:08:32.139 --rc geninfo_all_blocks=1 00:08:32.139 --rc geninfo_unexecuted_blocks=1 00:08:32.139 00:08:32.139 ' 00:08:32.139 00:46:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:32.139 00:46:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:08:32.139 00:46:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
00:08:32.139 00:46:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:32.139 00:46:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:32.139 00:46:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:32.139 00:46:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:32.139 00:46:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:32.139 00:46:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:32.139 00:46:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:32.139 00:46:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:32.139 00:46:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:32.139 00:46:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:32.139 00:46:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:32.139 00:46:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:32.139 00:46:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:32.139 00:46:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:32.139 00:46:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:32.139 00:46:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:32.139 00:46:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:08:32.139 00:46:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:32.139 00:46:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:32.139 00:46:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:32.139 00:46:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:32.139 00:46:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:32.139 00:46:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:32.139 00:46:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:08:32.139 00:46:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:32.139 00:46:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:08:32.139 00:46:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:32.139 00:46:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:32.139 00:46:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:32.139 00:46:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:32.139 00:46:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:32.139 00:46:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:32.139 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:32.139 00:46:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:32.139 00:46:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:32.139 00:46:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:32.139 00:46:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:32.139 00:46:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:32.139 00:46:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # 
LVOL_BDEV_INIT_SIZE=20 00:08:32.139 00:46:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:08:32.139 00:46:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:32.139 00:46:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:08:32.139 00:46:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:32.139 00:46:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:32.139 00:46:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:32.139 00:46:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:32.139 00:46:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:32.139 00:46:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:32.139 00:46:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:32.139 00:46:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:32.139 00:46:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:32.139 00:46:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:32.139 00:46:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:08:32.139 00:46:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:34.088 00:46:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:34.088 00:46:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:08:34.088 00:46:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:34.088 00:46:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:34.088 00:46:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:34.088 00:46:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:34.088 00:46:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:34.088 00:46:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:08:34.088 00:46:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:34.088 00:46:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:08:34.088 00:46:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:08:34.088 00:46:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:08:34.088 00:46:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:08:34.088 00:46:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:08:34.088 00:46:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:08:34.088 00:46:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:34.088 00:46:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:34.088 00:46:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:34.088 00:46:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:34.088 00:46:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:34.088 00:46:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:34.088 00:46:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:34.088 00:46:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:34.088 00:46:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:34.088 00:46:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:34.088 00:46:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:34.088 00:46:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:34.088 00:46:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:34.088 00:46:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:34.088 00:46:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:34.088 00:46:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:34.088 00:46:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:34.088 00:46:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:34.088 00:46:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:34.088 00:46:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:34.088 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:34.088 00:46:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:34.088 00:46:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:34.088 00:46:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:34.088 00:46:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:34.088 00:46:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:34.088 00:46:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:34.088 00:46:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:34.088 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:34.088 00:46:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:34.088 00:46:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:34.088 00:46:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:34.088 00:46:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:34.088 00:46:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:34.088 00:46:06 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:34.088 00:46:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:34.088 00:46:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:34.088 00:46:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:34.088 00:46:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:34.088 00:46:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:34.088 00:46:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:34.088 00:46:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:34.089 00:46:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:34.089 00:46:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:34.089 00:46:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:34.089 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:34.089 00:46:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:34.089 00:46:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:34.089 00:46:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:34.089 00:46:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:34.089 00:46:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:34.089 00:46:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:34.089 00:46:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:34.089 00:46:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:34.089 00:46:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:34.089 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:34.089 00:46:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:34.089 00:46:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:34.089 00:46:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:08:34.089 00:46:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:34.089 00:46:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:34.089 00:46:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:34.089 00:46:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:34.089 00:46:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:34.089 00:46:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:34.089 00:46:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:34.089 00:46:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 
> 1 )) 00:08:34.089 00:46:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:34.089 00:46:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:34.089 00:46:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:34.089 00:46:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:34.089 00:46:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:34.089 00:46:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:34.089 00:46:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:34.089 00:46:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:34.089 00:46:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:34.089 00:46:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:34.089 00:46:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:34.089 00:46:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:34.089 00:46:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:34.089 00:46:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:34.089 00:46:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:34.347 00:46:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:34.347 00:46:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:34.347 00:46:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:34.347 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:34.347 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.295 ms 00:08:34.347 00:08:34.347 --- 10.0.0.2 ping statistics --- 00:08:34.347 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:34.347 rtt min/avg/max/mdev = 0.295/0.295/0.295/0.000 ms 00:08:34.347 00:46:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:34.347 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:34.347 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.071 ms 00:08:34.347 00:08:34.347 --- 10.0.0.1 ping statistics --- 00:08:34.347 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:34.347 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:08:34.347 00:46:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:34.347 00:46:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:08:34.347 00:46:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:34.347 00:46:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:34.347 00:46:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:34.347 00:46:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:34.347 00:46:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:34.347 00:46:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:34.347 00:46:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:34.347 00:46:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:08:34.347 00:46:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:34.347 00:46:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:34.347 00:46:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:34.347 00:46:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=2854806 00:08:34.347 00:46:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:08:34.347 00:46:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 2854806 00:08:34.347 00:46:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 2854806 ']' 00:08:34.347 00:46:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:34.347 00:46:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:34.347 00:46:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:34.347 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:34.347 00:46:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:34.347 00:46:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:34.347 [2024-12-06 00:46:06.932750] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 24.03.0 initialization... 
00:08:34.347 [2024-12-06 00:46:06.932914] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:34.605 [2024-12-06 00:46:07.077950] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:34.605 [2024-12-06 00:46:07.213586] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:34.605 [2024-12-06 00:46:07.213679] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:34.605 [2024-12-06 00:46:07.213705] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:34.605 [2024-12-06 00:46:07.213730] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:34.605 [2024-12-06 00:46:07.213751] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:34.605 [2024-12-06 00:46:07.216481] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:34.605 [2024-12-06 00:46:07.216545] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:34.605 [2024-12-06 00:46:07.216551] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:35.535 00:46:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:35.535 00:46:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:08:35.535 00:46:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:35.535 00:46:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:35.535 00:46:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:35.535 00:46:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:35.535 00:46:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:35.535 [2024-12-06 00:46:08.185755] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:35.535 00:46:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:36.099 00:46:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:08:36.099 00:46:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:36.357 00:46:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:08:36.357 00:46:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:08:36.615 00:46:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:08:36.873 00:46:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=54c68824-fc74-485e-a75a-73a51381aba1 00:08:36.873 00:46:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 54c68824-fc74-485e-a75a-73a51381aba1 lvol 20 00:08:37.132 00:46:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=4a002337-04c4-40f1-aec2-f4e9bf8e0efb 00:08:37.132 00:46:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:37.390 00:46:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 4a002337-04c4-40f1-aec2-f4e9bf8e0efb 00:08:37.648 00:46:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:37.906 [2024-12-06 00:46:10.557905] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:37.906 00:46:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:38.163 00:46:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=2855359 00:08:38.163 00:46:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:08:38.163 00:46:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:08:39.571 00:46:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 4a002337-04c4-40f1-aec2-f4e9bf8e0efb MY_SNAPSHOT 00:08:39.571 00:46:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=ff21152a-09ca-4af5-8e43-4168593f5447 00:08:39.571 00:46:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 4a002337-04c4-40f1-aec2-f4e9bf8e0efb 30 00:08:40.138 00:46:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone ff21152a-09ca-4af5-8e43-4168593f5447 MY_CLONE 00:08:40.397 00:46:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=a8225206-0b8b-4a0a-81a1-66add8439a09 00:08:40.397 00:46:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate a8225206-0b8b-4a0a-81a1-66add8439a09 00:08:41.331 00:46:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 2855359 00:08:49.454 Initializing NVMe Controllers 00:08:49.454 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:08:49.454 Controller IO queue size 128, less than required. 00:08:49.454 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:08:49.454 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:08:49.454 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:08:49.454 Initialization complete. Launching workers. 00:08:49.454 ======================================================== 00:08:49.454 Latency(us) 00:08:49.454 Device Information : IOPS MiB/s Average min max 00:08:49.454 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 7972.70 31.14 16062.63 356.28 183227.32 00:08:49.454 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 7974.70 31.15 16063.85 3173.40 141301.22 00:08:49.454 ======================================================== 00:08:49.454 Total : 15947.40 62.29 16063.24 356.28 183227.32 00:08:49.454 00:08:49.454 00:46:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:49.454 00:46:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 4a002337-04c4-40f1-aec2-f4e9bf8e0efb 00:08:49.454 00:46:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 54c68824-fc74-485e-a75a-73a51381aba1 00:08:49.713 00:46:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:08:49.713 00:46:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:08:49.713 00:46:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:08:49.713 00:46:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:49.713 00:46:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:08:49.713 00:46:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:49.713 00:46:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:08:49.713 00:46:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:49.713 00:46:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:49.713 rmmod nvme_tcp 00:08:49.713 rmmod nvme_fabrics 00:08:49.713 rmmod nvme_keyring 00:08:49.713 00:46:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:49.713 00:46:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:08:49.713 00:46:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:08:49.713 00:46:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 2854806 ']' 00:08:49.713 00:46:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 2854806 00:08:49.713 00:46:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 2854806 ']' 00:08:49.713 00:46:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 2854806 00:08:49.713 00:46:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:08:49.713 00:46:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:49.713 00:46:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2854806 00:08:49.713 00:46:22 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:49.713 00:46:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:49.713 00:46:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2854806' 00:08:49.713 killing process with pid 2854806 00:08:49.713 00:46:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 2854806 00:08:49.713 00:46:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 2854806 00:08:51.089 00:46:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:51.089 00:46:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:51.089 00:46:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:51.089 00:46:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:08:51.089 00:46:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:08:51.089 00:46:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:51.089 00:46:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:08:51.089 00:46:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:51.089 00:46:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:51.089 00:46:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:51.089 00:46:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:51.089 00:46:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:53.623 00:46:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:53.623 00:08:53.623 real 0m21.110s 00:08:53.623 user 1m10.921s 00:08:53.623 sys 0m5.479s 00:08:53.623 00:46:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:53.623 00:46:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:53.623 ************************************ 00:08:53.623 END TEST nvmf_lvol 00:08:53.623 ************************************ 00:08:53.623 00:46:25 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:08:53.623 00:46:25 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:53.623 00:46:25 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:53.623 00:46:25 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:53.623 ************************************ 00:08:53.623 START TEST nvmf_lvs_grow 00:08:53.623 ************************************ 00:08:53.623 00:46:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:08:53.623 * Looking for test storage... 
00:08:53.623 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:53.623 00:46:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:53.623 00:46:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lcov --version 00:08:53.623 00:46:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:53.623 00:46:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:53.623 00:46:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:53.623 00:46:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:53.623 00:46:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:53.623 00:46:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:08:53.623 00:46:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:08:53.623 00:46:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:08:53.623 00:46:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:08:53.623 00:46:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:08:53.623 00:46:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:08:53.623 00:46:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:08:53.623 00:46:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:53.623 00:46:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:08:53.623 00:46:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:08:53.623 00:46:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:53.623 00:46:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:53.623 00:46:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:08:53.623 00:46:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:08:53.623 00:46:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:53.623 00:46:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:08:53.623 00:46:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:08:53.623 00:46:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:08:53.623 00:46:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:08:53.623 00:46:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:53.623 00:46:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:08:53.623 00:46:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:08:53.623 00:46:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:53.623 00:46:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:53.623 00:46:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:08:53.623 00:46:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:53.623 00:46:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:53.623 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:53.623 --rc genhtml_branch_coverage=1 00:08:53.623 --rc genhtml_function_coverage=1 00:08:53.623 --rc genhtml_legend=1 00:08:53.623 --rc geninfo_all_blocks=1 00:08:53.623 --rc geninfo_unexecuted_blocks=1 00:08:53.623 00:08:53.623 ' 00:08:53.623 00:46:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:53.623 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:53.623 --rc genhtml_branch_coverage=1 00:08:53.623 --rc genhtml_function_coverage=1 00:08:53.623 --rc genhtml_legend=1 00:08:53.623 --rc geninfo_all_blocks=1 00:08:53.623 --rc geninfo_unexecuted_blocks=1 00:08:53.623 00:08:53.623 ' 00:08:53.623 00:46:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:53.623 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:53.623 --rc genhtml_branch_coverage=1 00:08:53.623 --rc genhtml_function_coverage=1 00:08:53.623 --rc genhtml_legend=1 00:08:53.623 --rc geninfo_all_blocks=1 00:08:53.623 --rc geninfo_unexecuted_blocks=1 00:08:53.623 00:08:53.623 ' 00:08:53.623 00:46:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:53.623 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:53.623 --rc genhtml_branch_coverage=1 00:08:53.623 --rc genhtml_function_coverage=1 00:08:53.623 --rc genhtml_legend=1 00:08:53.623 --rc geninfo_all_blocks=1 00:08:53.623 --rc geninfo_unexecuted_blocks=1 00:08:53.623 00:08:53.623 ' 00:08:53.623 00:46:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:53.623 00:46:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:08:53.623 00:46:25 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:53.623 00:46:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:53.623 00:46:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:53.623 00:46:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:53.623 00:46:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:53.623 00:46:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:53.623 00:46:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:53.624 00:46:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:53.624 00:46:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:53.624 00:46:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:53.624 00:46:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:53.624 00:46:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:53.624 00:46:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:53.624 00:46:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:53.624 00:46:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:53.624 00:46:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:53.624 00:46:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:53.624 00:46:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:08:53.624 00:46:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:53.624 00:46:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:53.624 00:46:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:53.624 00:46:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:53.624 00:46:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:53.624 00:46:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:53.624 00:46:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:08:53.624 00:46:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:53.624 00:46:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:08:53.624 00:46:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:53.624 00:46:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:53.624 00:46:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:53.624 00:46:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:53.624 00:46:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:53.624 00:46:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:53.624 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:53.624 00:46:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:53.624 00:46:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:53.624 00:46:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:53.624 00:46:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:53.624 00:46:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:08:53.624 00:46:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:08:53.624 00:46:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:53.624 00:46:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:53.624 00:46:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:53.624 00:46:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:53.624 00:46:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:53.624 00:46:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:53.624 00:46:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:53.624 00:46:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:53.624 00:46:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:53.624 00:46:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:53.624 00:46:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:08:53.624 00:46:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:55.524 00:46:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:55.524 00:46:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:08:55.524 00:46:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:55.524 00:46:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:55.524 00:46:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:55.524 00:46:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:55.524 00:46:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:55.524 00:46:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:08:55.524 00:46:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:55.524 00:46:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:08:55.524 00:46:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:08:55.524 00:46:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:08:55.524 00:46:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:08:55.524 00:46:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:08:55.524 00:46:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:08:55.524 00:46:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:55.524 00:46:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:55.524 00:46:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:55.524 00:46:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow 
-- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:55.524 00:46:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:55.524 00:46:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:55.524 00:46:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:55.524 00:46:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:55.524 00:46:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:55.524 00:46:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:55.524 00:46:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:55.524 00:46:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:55.524 00:46:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:55.524 00:46:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:55.524 00:46:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:55.524 00:46:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:55.524 00:46:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:55.524 00:46:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:55.524 00:46:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:55.524 00:46:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:55.524 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:55.524 00:46:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:55.524 00:46:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:55.524 00:46:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:55.524 00:46:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:55.524 00:46:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:55.524 00:46:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:55.524 00:46:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:55.524 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:55.524 00:46:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:55.524 00:46:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:55.524 00:46:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:55.524 00:46:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:55.524 00:46:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:55.524 00:46:27 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:55.524 00:46:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:55.524 00:46:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:55.524 00:46:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:55.524 00:46:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:55.524 00:46:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:55.524 00:46:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:55.524 00:46:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:55.524 00:46:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:55.524 00:46:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:55.524 00:46:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:55.524 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:55.524 00:46:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:55.524 00:46:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:55.524 00:46:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:55.524 00:46:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:55.524 00:46:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:55.524 00:46:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:55.524 00:46:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:55.524 00:46:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:55.524 00:46:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:55.524 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:55.524 00:46:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:55.524 00:46:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:55.524 00:46:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:08:55.524 00:46:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:55.524 00:46:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:55.524 00:46:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:55.524 00:46:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:55.524 00:46:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:55.524 00:46:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:55.524 00:46:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@253 -- # 
TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:55.524 00:46:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:55.524 00:46:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:55.524 00:46:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:55.524 00:46:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:55.524 00:46:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:55.524 00:46:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:55.524 00:46:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:55.524 00:46:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:55.524 00:46:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:55.524 00:46:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:55.524 00:46:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:55.524 00:46:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:55.524 00:46:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:55.524 00:46:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:55.524 00:46:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:55.524 00:46:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:55.524 00:46:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:55.524 00:46:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:55.524 00:46:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:55.524 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:55.524 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.298 ms 00:08:55.524 00:08:55.524 --- 10.0.0.2 ping statistics --- 00:08:55.524 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:55.524 rtt min/avg/max/mdev = 0.298/0.298/0.298/0.000 ms 00:08:55.524 00:46:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:55.524 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:55.524 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.126 ms 00:08:55.524 00:08:55.524 --- 10.0.0.1 ping statistics --- 00:08:55.524 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:55.524 rtt min/avg/max/mdev = 0.126/0.126/0.126/0.000 ms 00:08:55.524 00:46:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:55.524 00:46:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:08:55.524 00:46:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:55.525 00:46:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:55.525 00:46:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:55.525 00:46:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:55.525 00:46:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:55.525 00:46:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:55.525 00:46:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:55.525 00:46:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:08:55.525 00:46:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:55.525 00:46:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:55.525 00:46:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:55.525 00:46:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=2858774 00:08:55.525 00:46:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:08:55.525 00:46:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 2858774 00:08:55.525 00:46:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 2858774 ']' 00:08:55.525 00:46:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:55.525 00:46:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:55.525 00:46:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:55.525 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:55.525 00:46:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:55.525 00:46:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:55.525 [2024-12-06 00:46:28.137557] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 24.03.0 initialization... 
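The nvmf/common.sh trace up to this point is building the two-sided TCP topology used for the rest of the run: the first E810 port (cvl_0_0, PCI 0000:0a:00.0) is moved into a private network namespace and becomes the target side on 10.0.0.2, while its sibling cvl_0_1 stays in the root namespace as the initiator side on 10.0.0.1, and both directions are verified with ping before the target application starts inside the namespace. Collected into one place, that setup amounts to the sketch below; this is a condensed recap, not the script itself: the cvl_* names and 10.0.0.x addresses are the ones detected on this particular host, the address-flush steps and the iptables comment used by the script are omitted, and the nvmf_tgt path is shortened.

```bash
#!/usr/bin/env bash
# Minimal sketch of the namespace/interface plumbing traced above.
# Assumes root and a host whose two E810 ports came up as cvl_0_0 / cvl_0_1;
# on other machines the interface names and addresses will differ.
set -e

TARGET_NS=cvl_0_0_ns_spdk            # namespace that owns the target-side port

ip netns add "$TARGET_NS"
ip link set cvl_0_0 netns "$TARGET_NS"            # target port moves into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1               # initiator side stays in the root namespace
ip netns exec "$TARGET_NS" ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec "$TARGET_NS" ip link set cvl_0_0 up
ip netns exec "$TARGET_NS" ip link set lo up

# Open the NVMe/TCP port on the initiator-facing interface, then check both directions
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                # root namespace -> target namespace
ip netns exec "$TARGET_NS" ping -c 1 10.0.0.1     # target namespace -> root namespace

# The nvmf target then runs inside the namespace, as in the trace (path shortened)
ip netns exec "$TARGET_NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1
```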
00:08:55.525 [2024-12-06 00:46:28.137725] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:55.782 [2024-12-06 00:46:28.292435] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:55.782 [2024-12-06 00:46:28.432486] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:55.782 [2024-12-06 00:46:28.432561] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:55.782 [2024-12-06 00:46:28.432587] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:55.782 [2024-12-06 00:46:28.432621] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:55.782 [2024-12-06 00:46:28.432641] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:55.782 [2024-12-06 00:46:28.434350] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:56.716 00:46:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:56.716 00:46:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:08:56.716 00:46:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:56.716 00:46:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:56.716 00:46:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:56.716 00:46:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:56.716 00:46:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:56.973 [2024-12-06 00:46:29.467843] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:56.974 00:46:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:08:56.974 00:46:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:56.974 00:46:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:56.974 00:46:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:56.974 ************************************ 00:08:56.974 START TEST lvs_grow_clean 00:08:56.974 ************************************ 00:08:56.974 00:46:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:08:56.974 00:46:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:08:56.974 00:46:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:08:56.974 00:46:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:08:56.974 00:46:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:08:56.974 00:46:29 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:08:56.974 00:46:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:08:56.974 00:46:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:56.974 00:46:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:56.974 00:46:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:57.232 00:46:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:08:57.232 00:46:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:08:57.489 00:46:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=e10ab9a0-4fc3-4923-912b-8ebf2c5b26fe 00:08:57.489 00:46:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e10ab9a0-4fc3-4923-912b-8ebf2c5b26fe 00:08:57.489 00:46:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:08:57.747 00:46:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:08:57.747 00:46:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:08:57.747 00:46:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u e10ab9a0-4fc3-4923-912b-8ebf2c5b26fe lvol 150 00:08:58.312 00:46:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=c33d776f-8c46-4883-b259-0e7c5cc9e5ed 00:08:58.312 00:46:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:58.312 00:46:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:08:58.312 [2024-12-06 00:46:30.975753] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:08:58.312 [2024-12-06 00:46:30.975920] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:08:58.312 true 00:08:58.312 00:46:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 
e10ab9a0-4fc3-4923-912b-8ebf2c5b26fe 00:08:58.312 00:46:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:08:58.570 00:46:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:08:58.570 00:46:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:59.136 00:46:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 c33d776f-8c46-4883-b259-0e7c5cc9e5ed 00:08:59.136 00:46:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:59.395 [2024-12-06 00:46:32.071428] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:59.395 00:46:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:59.962 00:46:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2859345 00:08:59.962 00:46:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:59.962 00:46:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2859345 /var/tmp/bdevperf.sock 00:08:59.962 00:46:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 2859345 ']' 00:08:59.962 00:46:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:08:59.962 00:46:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:59.962 00:46:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:59.962 00:46:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:59.962 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:59.962 00:46:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:59.962 00:46:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:08:59.962 [2024-12-06 00:46:32.451487] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 24.03.0 initialization... 
00:08:59.962 [2024-12-06 00:46:32.451633] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2859345 ] 00:08:59.962 [2024-12-06 00:46:32.591937] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:00.220 [2024-12-06 00:46:32.727528] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:00.786 00:46:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:00.786 00:46:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:09:00.786 00:46:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:09:01.351 Nvme0n1 00:09:01.351 00:46:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:09:01.351 [ 00:09:01.351 { 00:09:01.351 "name": "Nvme0n1", 00:09:01.351 "aliases": [ 00:09:01.351 "c33d776f-8c46-4883-b259-0e7c5cc9e5ed" 00:09:01.351 ], 00:09:01.351 "product_name": "NVMe disk", 00:09:01.351 "block_size": 4096, 00:09:01.351 "num_blocks": 38912, 00:09:01.351 "uuid": "c33d776f-8c46-4883-b259-0e7c5cc9e5ed", 00:09:01.351 "numa_id": 0, 00:09:01.351 "assigned_rate_limits": { 00:09:01.351 "rw_ios_per_sec": 0, 00:09:01.351 "rw_mbytes_per_sec": 0, 00:09:01.351 "r_mbytes_per_sec": 0, 00:09:01.351 "w_mbytes_per_sec": 0 00:09:01.351 }, 00:09:01.351 "claimed": false, 00:09:01.351 "zoned": false, 00:09:01.351 "supported_io_types": { 00:09:01.351 "read": true, 00:09:01.351 "write": true, 00:09:01.351 "unmap": true, 00:09:01.351 "flush": true, 00:09:01.351 "reset": true, 00:09:01.351 "nvme_admin": true, 00:09:01.351 "nvme_io": true, 00:09:01.351 "nvme_io_md": false, 00:09:01.351 "write_zeroes": true, 00:09:01.351 "zcopy": false, 00:09:01.351 "get_zone_info": false, 00:09:01.351 "zone_management": false, 00:09:01.351 "zone_append": false, 00:09:01.351 "compare": true, 00:09:01.351 "compare_and_write": true, 00:09:01.351 "abort": true, 00:09:01.351 "seek_hole": false, 00:09:01.351 "seek_data": false, 00:09:01.351 "copy": true, 00:09:01.351 "nvme_iov_md": false 00:09:01.351 }, 00:09:01.351 "memory_domains": [ 00:09:01.351 { 00:09:01.351 "dma_device_id": "system", 00:09:01.352 "dma_device_type": 1 00:09:01.352 } 00:09:01.352 ], 00:09:01.352 "driver_specific": { 00:09:01.352 "nvme": [ 00:09:01.352 { 00:09:01.352 "trid": { 00:09:01.352 "trtype": "TCP", 00:09:01.352 "adrfam": "IPv4", 00:09:01.352 "traddr": "10.0.0.2", 00:09:01.352 "trsvcid": "4420", 00:09:01.352 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:09:01.352 }, 00:09:01.352 "ctrlr_data": { 00:09:01.352 "cntlid": 1, 00:09:01.352 "vendor_id": "0x8086", 00:09:01.352 "model_number": "SPDK bdev Controller", 00:09:01.352 "serial_number": "SPDK0", 00:09:01.352 "firmware_revision": "25.01", 00:09:01.352 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:01.352 "oacs": { 00:09:01.352 "security": 0, 00:09:01.352 "format": 0, 00:09:01.352 "firmware": 0, 00:09:01.352 "ns_manage": 0 00:09:01.352 }, 00:09:01.352 "multi_ctrlr": true, 00:09:01.352 
"ana_reporting": false 00:09:01.352 }, 00:09:01.352 "vs": { 00:09:01.352 "nvme_version": "1.3" 00:09:01.352 }, 00:09:01.352 "ns_data": { 00:09:01.352 "id": 1, 00:09:01.352 "can_share": true 00:09:01.352 } 00:09:01.352 } 00:09:01.352 ], 00:09:01.352 "mp_policy": "active_passive" 00:09:01.352 } 00:09:01.352 } 00:09:01.352 ] 00:09:01.352 00:46:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2859507 00:09:01.352 00:46:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:01.352 00:46:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:09:01.610 Running I/O for 10 seconds... 00:09:02.581 Latency(us) 00:09:02.581 [2024-12-05T23:46:35.290Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:02.581 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:02.581 Nvme0n1 : 1.00 10860.00 42.42 0.00 0.00 0.00 0.00 0.00 00:09:02.581 [2024-12-05T23:46:35.290Z] =================================================================================================================== 00:09:02.581 [2024-12-05T23:46:35.290Z] Total : 10860.00 42.42 0.00 0.00 0.00 0.00 0.00 00:09:02.581 00:09:03.515 00:46:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u e10ab9a0-4fc3-4923-912b-8ebf2c5b26fe 00:09:03.515 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:03.515 Nvme0n1 : 2.00 10986.00 42.91 0.00 0.00 0.00 0.00 0.00 00:09:03.515 [2024-12-05T23:46:36.224Z] =================================================================================================================== 00:09:03.515 [2024-12-05T23:46:36.224Z] Total : 10986.00 42.91 0.00 0.00 0.00 0.00 0.00 00:09:03.515 00:09:03.773 true 00:09:03.773 00:46:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e10ab9a0-4fc3-4923-912b-8ebf2c5b26fe 00:09:03.773 00:46:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:09:04.032 00:46:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:09:04.032 00:46:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:09:04.032 00:46:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 2859507 00:09:04.600 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:04.600 Nvme0n1 : 3.00 11007.00 43.00 0.00 0.00 0.00 0.00 0.00 00:09:04.600 [2024-12-05T23:46:37.309Z] =================================================================================================================== 00:09:04.600 [2024-12-05T23:46:37.309Z] Total : 11007.00 43.00 0.00 0.00 0.00 0.00 0.00 00:09:04.600 00:09:05.536 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:05.536 Nvme0n1 : 4.00 11057.75 43.19 0.00 0.00 0.00 0.00 0.00 00:09:05.536 [2024-12-05T23:46:38.245Z] 
=================================================================================================================== 00:09:05.536 [2024-12-05T23:46:38.245Z] Total : 11057.75 43.19 0.00 0.00 0.00 0.00 0.00 00:09:05.536 00:09:06.914 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:06.914 Nvme0n1 : 5.00 11081.40 43.29 0.00 0.00 0.00 0.00 0.00 00:09:06.914 [2024-12-05T23:46:39.623Z] =================================================================================================================== 00:09:06.914 [2024-12-05T23:46:39.623Z] Total : 11081.40 43.29 0.00 0.00 0.00 0.00 0.00 00:09:06.914 00:09:07.482 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:07.482 Nvme0n1 : 6.00 11097.17 43.35 0.00 0.00 0.00 0.00 0.00 00:09:07.482 [2024-12-05T23:46:40.191Z] =================================================================================================================== 00:09:07.482 [2024-12-05T23:46:40.191Z] Total : 11097.17 43.35 0.00 0.00 0.00 0.00 0.00 00:09:07.482 00:09:08.857 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:08.857 Nvme0n1 : 7.00 11126.57 43.46 0.00 0.00 0.00 0.00 0.00 00:09:08.857 [2024-12-05T23:46:41.566Z] =================================================================================================================== 00:09:08.857 [2024-12-05T23:46:41.566Z] Total : 11126.57 43.46 0.00 0.00 0.00 0.00 0.00 00:09:08.857 00:09:09.791 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:09.791 Nvme0n1 : 8.00 11148.62 43.55 0.00 0.00 0.00 0.00 0.00 00:09:09.791 [2024-12-05T23:46:42.500Z] =================================================================================================================== 00:09:09.791 [2024-12-05T23:46:42.500Z] Total : 11148.62 43.55 0.00 0.00 0.00 0.00 0.00 00:09:09.791 00:09:10.729 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:10.729 Nvme0n1 : 9.00 11179.89 43.67 0.00 0.00 0.00 0.00 0.00 00:09:10.729 [2024-12-05T23:46:43.438Z] =================================================================================================================== 00:09:10.729 [2024-12-05T23:46:43.438Z] Total : 11179.89 43.67 0.00 0.00 0.00 0.00 0.00 00:09:10.729 00:09:11.666 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:11.666 Nvme0n1 : 10.00 11204.90 43.77 0.00 0.00 0.00 0.00 0.00 00:09:11.666 [2024-12-05T23:46:44.375Z] =================================================================================================================== 00:09:11.666 [2024-12-05T23:46:44.375Z] Total : 11204.90 43.77 0.00 0.00 0.00 0.00 0.00 00:09:11.666 00:09:11.666 00:09:11.666 Latency(us) 00:09:11.666 [2024-12-05T23:46:44.375Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:11.666 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:11.666 Nvme0n1 : 10.01 11211.79 43.80 0.00 0.00 11409.70 5752.60 29709.65 00:09:11.666 [2024-12-05T23:46:44.375Z] =================================================================================================================== 00:09:11.666 [2024-12-05T23:46:44.375Z] Total : 11211.79 43.80 0.00 0.00 11409.70 5752.60 29709.65 00:09:11.666 { 00:09:11.666 "results": [ 00:09:11.666 { 00:09:11.666 "job": "Nvme0n1", 00:09:11.666 "core_mask": "0x2", 00:09:11.666 "workload": "randwrite", 00:09:11.666 "status": "finished", 00:09:11.666 "queue_depth": 128, 00:09:11.666 "io_size": 4096, 00:09:11.666 
"runtime": 10.005267, 00:09:11.666 "iops": 11211.794747706383, 00:09:11.666 "mibps": 43.79607323322806, 00:09:11.666 "io_failed": 0, 00:09:11.666 "io_timeout": 0, 00:09:11.666 "avg_latency_us": 11409.69547863347, 00:09:11.666 "min_latency_us": 5752.604444444444, 00:09:11.666 "max_latency_us": 29709.653333333332 00:09:11.666 } 00:09:11.666 ], 00:09:11.666 "core_count": 1 00:09:11.666 } 00:09:11.666 00:46:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2859345 00:09:11.666 00:46:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 2859345 ']' 00:09:11.666 00:46:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 2859345 00:09:11.666 00:46:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:09:11.666 00:46:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:11.666 00:46:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2859345 00:09:11.666 00:46:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:09:11.666 00:46:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:09:11.666 00:46:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2859345' 00:09:11.666 killing process with pid 2859345 00:09:11.666 00:46:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 2859345 00:09:11.666 Received shutdown signal, test time was about 10.000000 seconds 00:09:11.666 00:09:11.666 Latency(us) 00:09:11.666 [2024-12-05T23:46:44.375Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:11.666 [2024-12-05T23:46:44.375Z] =================================================================================================================== 00:09:11.666 [2024-12-05T23:46:44.375Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:11.666 00:46:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 2859345 00:09:12.601 00:46:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:12.860 00:46:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:13.117 00:46:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e10ab9a0-4fc3-4923-912b-8ebf2c5b26fe 00:09:13.117 00:46:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:09:13.374 00:46:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:09:13.374 00:46:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:09:13.374 00:46:46 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:13.632 [2024-12-06 00:46:46.307003] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:09:13.890 00:46:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e10ab9a0-4fc3-4923-912b-8ebf2c5b26fe 00:09:13.890 00:46:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:09:13.890 00:46:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e10ab9a0-4fc3-4923-912b-8ebf2c5b26fe 00:09:13.890 00:46:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:13.890 00:46:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:13.890 00:46:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:13.890 00:46:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:13.890 00:46:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:13.890 00:46:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:13.890 00:46:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:13.890 00:46:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:09:13.890 00:46:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e10ab9a0-4fc3-4923-912b-8ebf2c5b26fe 00:09:14.148 request: 00:09:14.148 { 00:09:14.148 "uuid": "e10ab9a0-4fc3-4923-912b-8ebf2c5b26fe", 00:09:14.148 "method": "bdev_lvol_get_lvstores", 00:09:14.148 "req_id": 1 00:09:14.148 } 00:09:14.148 Got JSON-RPC error response 00:09:14.148 response: 00:09:14.148 { 00:09:14.148 "code": -19, 00:09:14.148 "message": "No such device" 00:09:14.148 } 00:09:14.148 00:46:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:09:14.148 00:46:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:14.148 00:46:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:14.148 00:46:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:14.148 00:46:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:14.406 aio_bdev 00:09:14.406 00:46:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev c33d776f-8c46-4883-b259-0e7c5cc9e5ed 00:09:14.406 00:46:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=c33d776f-8c46-4883-b259-0e7c5cc9e5ed 00:09:14.406 00:46:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:14.406 00:46:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:09:14.406 00:46:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:14.406 00:46:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:14.406 00:46:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:14.664 00:46:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b c33d776f-8c46-4883-b259-0e7c5cc9e5ed -t 2000 00:09:14.923 [ 00:09:14.923 { 00:09:14.923 "name": "c33d776f-8c46-4883-b259-0e7c5cc9e5ed", 00:09:14.923 "aliases": [ 00:09:14.923 "lvs/lvol" 00:09:14.923 ], 00:09:14.923 "product_name": "Logical Volume", 00:09:14.923 "block_size": 4096, 00:09:14.923 "num_blocks": 38912, 00:09:14.923 "uuid": "c33d776f-8c46-4883-b259-0e7c5cc9e5ed", 00:09:14.923 "assigned_rate_limits": { 00:09:14.923 "rw_ios_per_sec": 0, 00:09:14.923 "rw_mbytes_per_sec": 0, 00:09:14.923 "r_mbytes_per_sec": 0, 00:09:14.923 "w_mbytes_per_sec": 0 00:09:14.923 }, 00:09:14.923 "claimed": false, 00:09:14.923 "zoned": false, 00:09:14.923 "supported_io_types": { 00:09:14.923 "read": true, 00:09:14.923 "write": true, 00:09:14.923 "unmap": true, 00:09:14.923 "flush": false, 00:09:14.923 "reset": true, 00:09:14.923 "nvme_admin": false, 00:09:14.923 "nvme_io": false, 00:09:14.923 "nvme_io_md": false, 00:09:14.923 "write_zeroes": true, 00:09:14.923 "zcopy": false, 00:09:14.923 "get_zone_info": false, 00:09:14.923 "zone_management": false, 00:09:14.923 "zone_append": false, 00:09:14.923 "compare": false, 00:09:14.923 "compare_and_write": false, 00:09:14.923 "abort": false, 00:09:14.923 "seek_hole": true, 00:09:14.923 "seek_data": true, 00:09:14.923 "copy": false, 00:09:14.923 "nvme_iov_md": false 00:09:14.923 }, 00:09:14.923 "driver_specific": { 00:09:14.923 "lvol": { 00:09:14.923 "lvol_store_uuid": "e10ab9a0-4fc3-4923-912b-8ebf2c5b26fe", 00:09:14.923 "base_bdev": "aio_bdev", 00:09:14.923 "thin_provision": false, 00:09:14.923 "num_allocated_clusters": 38, 00:09:14.923 "snapshot": false, 00:09:14.923 "clone": false, 00:09:14.923 "esnap_clone": false 00:09:14.923 } 00:09:14.923 } 00:09:14.923 } 00:09:14.923 ] 00:09:14.923 00:46:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:09:14.923 00:46:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e10ab9a0-4fc3-4923-912b-8ebf2c5b26fe 00:09:14.923 
00:46:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:09:15.182 00:46:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:09:15.182 00:46:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e10ab9a0-4fc3-4923-912b-8ebf2c5b26fe 00:09:15.182 00:46:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:09:15.440 00:46:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:09:15.440 00:46:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete c33d776f-8c46-4883-b259-0e7c5cc9e5ed 00:09:16.007 00:46:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u e10ab9a0-4fc3-4923-912b-8ebf2c5b26fe 00:09:16.265 00:46:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:16.523 00:46:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:16.523 00:09:16.523 real 0m19.534s 00:09:16.523 user 0m19.308s 00:09:16.523 sys 0m1.928s 00:09:16.523 00:46:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:16.523 00:46:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:09:16.523 ************************************ 00:09:16.523 END TEST lvs_grow_clean 00:09:16.523 ************************************ 00:09:16.523 00:46:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:09:16.523 00:46:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:16.523 00:46:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:16.523 00:46:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:16.523 ************************************ 00:09:16.523 START TEST lvs_grow_dirty 00:09:16.523 ************************************ 00:09:16.523 00:46:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:09:16.523 00:46:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:09:16.523 00:46:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:09:16.523 00:46:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:09:16.523 00:46:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:09:16.523 00:46:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:09:16.523 00:46:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:09:16.523 00:46:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:16.523 00:46:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:16.523 00:46:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:16.780 00:46:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:09:16.780 00:46:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:09:17.038 00:46:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=df7fc966-0d77-4581-bb3a-025214db45cc 00:09:17.038 00:46:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u df7fc966-0d77-4581-bb3a-025214db45cc 00:09:17.038 00:46:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:09:17.296 00:46:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:09:17.296 00:46:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:09:17.554 00:46:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u df7fc966-0d77-4581-bb3a-025214db45cc lvol 150 00:09:17.812 00:46:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=ef5e7ae4-1812-464b-84bb-6981c05a6c0b 00:09:17.813 00:46:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:17.813 00:46:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:09:18.070 [2024-12-06 00:46:50.552912] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:09:18.070 [2024-12-06 00:46:50.553042] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:09:18.070 true 00:09:18.070 00:46:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u df7fc966-0d77-4581-bb3a-025214db45cc 00:09:18.070 00:46:50 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:09:18.327 00:46:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:09:18.327 00:46:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:09:18.586 00:46:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 ef5e7ae4-1812-464b-84bb-6981c05a6c0b 00:09:18.843 00:46:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:09:19.102 [2024-12-06 00:46:51.728738] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:19.102 00:46:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:19.361 00:46:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2861672 00:09:19.361 00:46:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:09:19.361 00:46:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:19.361 00:46:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2861672 /var/tmp/bdevperf.sock 00:09:19.361 00:46:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 2861672 ']' 00:09:19.361 00:46:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:19.361 00:46:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:19.361 00:46:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:19.361 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:19.361 00:46:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:19.361 00:46:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:19.619 [2024-12-06 00:46:52.103548] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 24.03.0 initialization... 
00:09:19.619 [2024-12-06 00:46:52.103706] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2861672 ] 00:09:19.619 [2024-12-06 00:46:52.246977] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:19.877 [2024-12-06 00:46:52.384677] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:20.810 00:46:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:20.810 00:46:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:09:20.810 00:46:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:09:21.068 Nvme0n1 00:09:21.068 00:46:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:09:21.365 [ 00:09:21.365 { 00:09:21.365 "name": "Nvme0n1", 00:09:21.365 "aliases": [ 00:09:21.365 "ef5e7ae4-1812-464b-84bb-6981c05a6c0b" 00:09:21.365 ], 00:09:21.365 "product_name": "NVMe disk", 00:09:21.365 "block_size": 4096, 00:09:21.365 "num_blocks": 38912, 00:09:21.365 "uuid": "ef5e7ae4-1812-464b-84bb-6981c05a6c0b", 00:09:21.365 "numa_id": 0, 00:09:21.365 "assigned_rate_limits": { 00:09:21.365 "rw_ios_per_sec": 0, 00:09:21.365 "rw_mbytes_per_sec": 0, 00:09:21.365 "r_mbytes_per_sec": 0, 00:09:21.365 "w_mbytes_per_sec": 0 00:09:21.365 }, 00:09:21.365 "claimed": false, 00:09:21.365 "zoned": false, 00:09:21.365 "supported_io_types": { 00:09:21.365 "read": true, 00:09:21.365 "write": true, 00:09:21.365 "unmap": true, 00:09:21.365 "flush": true, 00:09:21.365 "reset": true, 00:09:21.365 "nvme_admin": true, 00:09:21.365 "nvme_io": true, 00:09:21.365 "nvme_io_md": false, 00:09:21.365 "write_zeroes": true, 00:09:21.365 "zcopy": false, 00:09:21.365 "get_zone_info": false, 00:09:21.365 "zone_management": false, 00:09:21.365 "zone_append": false, 00:09:21.365 "compare": true, 00:09:21.365 "compare_and_write": true, 00:09:21.365 "abort": true, 00:09:21.365 "seek_hole": false, 00:09:21.365 "seek_data": false, 00:09:21.365 "copy": true, 00:09:21.365 "nvme_iov_md": false 00:09:21.365 }, 00:09:21.365 "memory_domains": [ 00:09:21.365 { 00:09:21.365 "dma_device_id": "system", 00:09:21.365 "dma_device_type": 1 00:09:21.365 } 00:09:21.365 ], 00:09:21.365 "driver_specific": { 00:09:21.365 "nvme": [ 00:09:21.365 { 00:09:21.365 "trid": { 00:09:21.365 "trtype": "TCP", 00:09:21.365 "adrfam": "IPv4", 00:09:21.365 "traddr": "10.0.0.2", 00:09:21.365 "trsvcid": "4420", 00:09:21.365 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:09:21.365 }, 00:09:21.365 "ctrlr_data": { 00:09:21.365 "cntlid": 1, 00:09:21.365 "vendor_id": "0x8086", 00:09:21.365 "model_number": "SPDK bdev Controller", 00:09:21.365 "serial_number": "SPDK0", 00:09:21.365 "firmware_revision": "25.01", 00:09:21.365 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:21.365 "oacs": { 00:09:21.365 "security": 0, 00:09:21.365 "format": 0, 00:09:21.365 "firmware": 0, 00:09:21.365 "ns_manage": 0 00:09:21.365 }, 00:09:21.365 "multi_ctrlr": true, 00:09:21.365 
"ana_reporting": false 00:09:21.365 }, 00:09:21.365 "vs": { 00:09:21.365 "nvme_version": "1.3" 00:09:21.365 }, 00:09:21.365 "ns_data": { 00:09:21.365 "id": 1, 00:09:21.365 "can_share": true 00:09:21.365 } 00:09:21.365 } 00:09:21.365 ], 00:09:21.365 "mp_policy": "active_passive" 00:09:21.365 } 00:09:21.365 } 00:09:21.365 ] 00:09:21.365 00:46:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2861940 00:09:21.365 00:46:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:21.365 00:46:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:09:21.365 Running I/O for 10 seconds... 00:09:22.299 Latency(us) 00:09:22.299 [2024-12-05T23:46:55.008Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:22.299 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:22.299 Nvme0n1 : 1.00 10669.00 41.68 0.00 0.00 0.00 0.00 0.00 00:09:22.299 [2024-12-05T23:46:55.008Z] =================================================================================================================== 00:09:22.299 [2024-12-05T23:46:55.008Z] Total : 10669.00 41.68 0.00 0.00 0.00 0.00 0.00 00:09:22.299 00:09:23.230 00:46:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u df7fc966-0d77-4581-bb3a-025214db45cc 00:09:23.230 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:23.230 Nvme0n1 : 2.00 10924.00 42.67 0.00 0.00 0.00 0.00 0.00 00:09:23.230 [2024-12-05T23:46:55.939Z] =================================================================================================================== 00:09:23.230 [2024-12-05T23:46:55.939Z] Total : 10924.00 42.67 0.00 0.00 0.00 0.00 0.00 00:09:23.230 00:09:23.488 true 00:09:23.488 00:46:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u df7fc966-0d77-4581-bb3a-025214db45cc 00:09:23.488 00:46:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:09:23.746 00:46:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:09:23.746 00:46:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:09:23.746 00:46:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 2861940 00:09:24.312 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:24.312 Nvme0n1 : 3.00 11009.00 43.00 0.00 0.00 0.00 0.00 0.00 00:09:24.312 [2024-12-05T23:46:57.021Z] =================================================================================================================== 00:09:24.312 [2024-12-05T23:46:57.021Z] Total : 11009.00 43.00 0.00 0.00 0.00 0.00 0.00 00:09:24.312 00:09:25.249 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:25.249 Nvme0n1 : 4.00 11059.25 43.20 0.00 0.00 0.00 0.00 0.00 00:09:25.249 [2024-12-05T23:46:57.958Z] 
=================================================================================================================== 00:09:25.249 [2024-12-05T23:46:57.958Z] Total : 11059.25 43.20 0.00 0.00 0.00 0.00 0.00 00:09:25.249 00:09:26.625 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:26.625 Nvme0n1 : 5.00 11108.00 43.39 0.00 0.00 0.00 0.00 0.00 00:09:26.625 [2024-12-05T23:46:59.334Z] =================================================================================================================== 00:09:26.625 [2024-12-05T23:46:59.334Z] Total : 11108.00 43.39 0.00 0.00 0.00 0.00 0.00 00:09:26.625 00:09:27.560 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:27.560 Nvme0n1 : 6.00 11140.50 43.52 0.00 0.00 0.00 0.00 0.00 00:09:27.560 [2024-12-05T23:47:00.269Z] =================================================================================================================== 00:09:27.560 [2024-12-05T23:47:00.269Z] Total : 11140.50 43.52 0.00 0.00 0.00 0.00 0.00 00:09:27.560 00:09:28.524 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:28.524 Nvme0n1 : 7.00 11145.57 43.54 0.00 0.00 0.00 0.00 0.00 00:09:28.524 [2024-12-05T23:47:01.233Z] =================================================================================================================== 00:09:28.524 [2024-12-05T23:47:01.233Z] Total : 11145.57 43.54 0.00 0.00 0.00 0.00 0.00 00:09:28.524 00:09:29.498 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:29.498 Nvme0n1 : 8.00 11173.38 43.65 0.00 0.00 0.00 0.00 0.00 00:09:29.498 [2024-12-05T23:47:02.207Z] =================================================================================================================== 00:09:29.498 [2024-12-05T23:47:02.207Z] Total : 11173.38 43.65 0.00 0.00 0.00 0.00 0.00 00:09:29.498 00:09:30.434 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:30.434 Nvme0n1 : 9.00 11173.67 43.65 0.00 0.00 0.00 0.00 0.00 00:09:30.434 [2024-12-05T23:47:03.143Z] =================================================================================================================== 00:09:30.434 [2024-12-05T23:47:03.143Z] Total : 11173.67 43.65 0.00 0.00 0.00 0.00 0.00 00:09:30.434 00:09:31.370 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:31.370 Nvme0n1 : 10.00 11180.40 43.67 0.00 0.00 0.00 0.00 0.00 00:09:31.370 [2024-12-05T23:47:04.079Z] =================================================================================================================== 00:09:31.370 [2024-12-05T23:47:04.080Z] Total : 11180.40 43.67 0.00 0.00 0.00 0.00 0.00 00:09:31.371 00:09:31.371 00:09:31.371 Latency(us) 00:09:31.371 [2024-12-05T23:47:04.080Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:31.371 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:31.371 Nvme0n1 : 10.01 11183.79 43.69 0.00 0.00 11438.21 5825.42 21748.24 00:09:31.371 [2024-12-05T23:47:04.080Z] =================================================================================================================== 00:09:31.371 [2024-12-05T23:47:04.080Z] Total : 11183.79 43.69 0.00 0.00 11438.21 5825.42 21748.24 00:09:31.371 { 00:09:31.371 "results": [ 00:09:31.371 { 00:09:31.371 "job": "Nvme0n1", 00:09:31.371 "core_mask": "0x2", 00:09:31.371 "workload": "randwrite", 00:09:31.371 "status": "finished", 00:09:31.371 "queue_depth": 128, 00:09:31.371 "io_size": 4096, 00:09:31.371 
"runtime": 10.008413, 00:09:31.371 "iops": 11183.791076567284, 00:09:31.371 "mibps": 43.68668389284095, 00:09:31.371 "io_failed": 0, 00:09:31.371 "io_timeout": 0, 00:09:31.371 "avg_latency_us": 11438.213892085274, 00:09:31.371 "min_latency_us": 5825.422222222222, 00:09:31.371 "max_latency_us": 21748.242962962962 00:09:31.371 } 00:09:31.371 ], 00:09:31.371 "core_count": 1 00:09:31.371 } 00:09:31.371 00:47:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2861672 00:09:31.371 00:47:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 2861672 ']' 00:09:31.371 00:47:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 2861672 00:09:31.371 00:47:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:09:31.371 00:47:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:31.371 00:47:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2861672 00:09:31.371 00:47:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:09:31.371 00:47:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:09:31.371 00:47:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2861672' 00:09:31.371 killing process with pid 2861672 00:09:31.371 00:47:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 2861672 00:09:31.371 Received shutdown signal, test time was about 10.000000 seconds 00:09:31.371 00:09:31.371 Latency(us) 00:09:31.371 [2024-12-05T23:47:04.080Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:31.371 [2024-12-05T23:47:04.080Z] =================================================================================================================== 00:09:31.371 [2024-12-05T23:47:04.080Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:31.371 00:47:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 2861672 00:09:32.334 00:47:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:32.592 00:47:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:32.849 00:47:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u df7fc966-0d77-4581-bb3a-025214db45cc 00:09:32.849 00:47:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:09:33.106 00:47:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:09:33.106 00:47:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:09:33.106 00:47:05 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 2858774 00:09:33.106 00:47:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 2858774 00:09:33.364 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 2858774 Killed "${NVMF_APP[@]}" "$@" 00:09:33.364 00:47:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:09:33.364 00:47:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:09:33.364 00:47:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:33.364 00:47:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:33.364 00:47:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:33.364 00:47:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=2863369 00:09:33.364 00:47:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:09:33.364 00:47:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 2863369 00:09:33.364 00:47:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 2863369 ']' 00:09:33.364 00:47:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:33.364 00:47:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:33.364 00:47:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:33.364 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:33.364 00:47:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:33.364 00:47:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:33.364 [2024-12-06 00:47:05.911680] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 24.03.0 initialization... 00:09:33.364 [2024-12-06 00:47:05.911856] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:33.364 [2024-12-06 00:47:06.062374] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:33.622 [2024-12-06 00:47:06.195798] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:33.622 [2024-12-06 00:47:06.195898] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:33.622 [2024-12-06 00:47:06.195925] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:33.622 [2024-12-06 00:47:06.195955] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
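The run above kills the nvmf target (pid 2858774) while the lvstore is still dirty, then restarts nvmf_tgt so the AIO bdev can be re-created and the blobstore recovered on examine. A minimal sketch of that recovery check, restating only the rpc.py calls already visible in this log; the paths and the lvstore UUID are the ones from this particular run and should be treated as placeholders elsewhere:

  # Sketch (assumes the default /var/tmp/spdk.sock RPC socket used by rpc.py)
  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  # Re-create the AIO bdev backing the dirty lvstore; examine triggers blobstore recovery
  $SPDK/scripts/rpc.py bdev_aio_create $SPDK/test/nvmf/target/aio_bdev aio_bdev 4096
  # Confirm the recovered lvstore reports the expected cluster counts
  $SPDK/scripts/rpc.py bdev_lvol_get_lvstores -u df7fc966-0d77-4581-bb3a-025214db45cc | jq -r '.[0].free_clusters'
  $SPDK/scripts/rpc.py bdev_lvol_get_lvstores -u df7fc966-0d77-4581-bb3a-025214db45cc | jq -r '.[0].total_data_clusters'
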
00:09:33.622 [2024-12-06 00:47:06.195977] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:33.622 [2024-12-06 00:47:06.197629] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:34.188 00:47:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:34.188 00:47:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:09:34.188 00:47:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:34.188 00:47:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:34.188 00:47:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:34.188 00:47:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:34.188 00:47:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:34.754 [2024-12-06 00:47:07.158304] blobstore.c:4899:bs_recover: *NOTICE*: Performing recovery on blobstore 00:09:34.754 [2024-12-06 00:47:07.158547] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:09:34.754 [2024-12-06 00:47:07.158630] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:09:34.754 00:47:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:09:34.754 00:47:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev ef5e7ae4-1812-464b-84bb-6981c05a6c0b 00:09:34.754 00:47:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=ef5e7ae4-1812-464b-84bb-6981c05a6c0b 00:09:34.754 00:47:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:34.754 00:47:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:09:34.754 00:47:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:34.754 00:47:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:34.754 00:47:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:34.754 00:47:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b ef5e7ae4-1812-464b-84bb-6981c05a6c0b -t 2000 00:09:35.320 [ 00:09:35.320 { 00:09:35.320 "name": "ef5e7ae4-1812-464b-84bb-6981c05a6c0b", 00:09:35.320 "aliases": [ 00:09:35.320 "lvs/lvol" 00:09:35.320 ], 00:09:35.320 "product_name": "Logical Volume", 00:09:35.320 "block_size": 4096, 00:09:35.320 "num_blocks": 38912, 00:09:35.320 "uuid": "ef5e7ae4-1812-464b-84bb-6981c05a6c0b", 00:09:35.320 "assigned_rate_limits": { 00:09:35.320 "rw_ios_per_sec": 0, 00:09:35.320 "rw_mbytes_per_sec": 0, 
00:09:35.320 "r_mbytes_per_sec": 0, 00:09:35.320 "w_mbytes_per_sec": 0 00:09:35.320 }, 00:09:35.320 "claimed": false, 00:09:35.320 "zoned": false, 00:09:35.320 "supported_io_types": { 00:09:35.320 "read": true, 00:09:35.320 "write": true, 00:09:35.320 "unmap": true, 00:09:35.320 "flush": false, 00:09:35.320 "reset": true, 00:09:35.320 "nvme_admin": false, 00:09:35.320 "nvme_io": false, 00:09:35.320 "nvme_io_md": false, 00:09:35.320 "write_zeroes": true, 00:09:35.320 "zcopy": false, 00:09:35.320 "get_zone_info": false, 00:09:35.320 "zone_management": false, 00:09:35.320 "zone_append": false, 00:09:35.320 "compare": false, 00:09:35.320 "compare_and_write": false, 00:09:35.320 "abort": false, 00:09:35.320 "seek_hole": true, 00:09:35.320 "seek_data": true, 00:09:35.320 "copy": false, 00:09:35.320 "nvme_iov_md": false 00:09:35.320 }, 00:09:35.320 "driver_specific": { 00:09:35.320 "lvol": { 00:09:35.320 "lvol_store_uuid": "df7fc966-0d77-4581-bb3a-025214db45cc", 00:09:35.320 "base_bdev": "aio_bdev", 00:09:35.320 "thin_provision": false, 00:09:35.320 "num_allocated_clusters": 38, 00:09:35.320 "snapshot": false, 00:09:35.320 "clone": false, 00:09:35.320 "esnap_clone": false 00:09:35.320 } 00:09:35.320 } 00:09:35.320 } 00:09:35.320 ] 00:09:35.320 00:47:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:09:35.320 00:47:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u df7fc966-0d77-4581-bb3a-025214db45cc 00:09:35.320 00:47:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:09:35.579 00:47:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:09:35.579 00:47:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u df7fc966-0d77-4581-bb3a-025214db45cc 00:09:35.579 00:47:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:09:35.837 00:47:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:09:35.837 00:47:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:36.096 [2024-12-06 00:47:08.587221] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:09:36.096 00:47:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u df7fc966-0d77-4581-bb3a-025214db45cc 00:09:36.096 00:47:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:09:36.096 00:47:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u df7fc966-0d77-4581-bb3a-025214db45cc 00:09:36.096 00:47:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local 
arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:36.096 00:47:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:36.096 00:47:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:36.096 00:47:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:36.096 00:47:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:36.096 00:47:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:36.096 00:47:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:36.096 00:47:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:09:36.096 00:47:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u df7fc966-0d77-4581-bb3a-025214db45cc 00:09:36.354 request: 00:09:36.354 { 00:09:36.354 "uuid": "df7fc966-0d77-4581-bb3a-025214db45cc", 00:09:36.354 "method": "bdev_lvol_get_lvstores", 00:09:36.354 "req_id": 1 00:09:36.354 } 00:09:36.354 Got JSON-RPC error response 00:09:36.354 response: 00:09:36.354 { 00:09:36.354 "code": -19, 00:09:36.354 "message": "No such device" 00:09:36.354 } 00:09:36.354 00:47:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:09:36.354 00:47:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:36.354 00:47:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:36.354 00:47:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:36.354 00:47:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:36.612 aio_bdev 00:09:36.612 00:47:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev ef5e7ae4-1812-464b-84bb-6981c05a6c0b 00:09:36.612 00:47:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=ef5e7ae4-1812-464b-84bb-6981c05a6c0b 00:09:36.612 00:47:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:36.612 00:47:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:09:36.612 00:47:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:36.612 00:47:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:36.612 00:47:09 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:36.869 00:47:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b ef5e7ae4-1812-464b-84bb-6981c05a6c0b -t 2000 00:09:37.126 [ 00:09:37.126 { 00:09:37.126 "name": "ef5e7ae4-1812-464b-84bb-6981c05a6c0b", 00:09:37.126 "aliases": [ 00:09:37.126 "lvs/lvol" 00:09:37.126 ], 00:09:37.126 "product_name": "Logical Volume", 00:09:37.126 "block_size": 4096, 00:09:37.126 "num_blocks": 38912, 00:09:37.126 "uuid": "ef5e7ae4-1812-464b-84bb-6981c05a6c0b", 00:09:37.126 "assigned_rate_limits": { 00:09:37.126 "rw_ios_per_sec": 0, 00:09:37.126 "rw_mbytes_per_sec": 0, 00:09:37.126 "r_mbytes_per_sec": 0, 00:09:37.126 "w_mbytes_per_sec": 0 00:09:37.126 }, 00:09:37.126 "claimed": false, 00:09:37.126 "zoned": false, 00:09:37.126 "supported_io_types": { 00:09:37.126 "read": true, 00:09:37.126 "write": true, 00:09:37.126 "unmap": true, 00:09:37.126 "flush": false, 00:09:37.126 "reset": true, 00:09:37.126 "nvme_admin": false, 00:09:37.126 "nvme_io": false, 00:09:37.126 "nvme_io_md": false, 00:09:37.126 "write_zeroes": true, 00:09:37.126 "zcopy": false, 00:09:37.126 "get_zone_info": false, 00:09:37.126 "zone_management": false, 00:09:37.126 "zone_append": false, 00:09:37.126 "compare": false, 00:09:37.126 "compare_and_write": false, 00:09:37.126 "abort": false, 00:09:37.126 "seek_hole": true, 00:09:37.126 "seek_data": true, 00:09:37.126 "copy": false, 00:09:37.126 "nvme_iov_md": false 00:09:37.126 }, 00:09:37.126 "driver_specific": { 00:09:37.126 "lvol": { 00:09:37.126 "lvol_store_uuid": "df7fc966-0d77-4581-bb3a-025214db45cc", 00:09:37.126 "base_bdev": "aio_bdev", 00:09:37.126 "thin_provision": false, 00:09:37.126 "num_allocated_clusters": 38, 00:09:37.126 "snapshot": false, 00:09:37.126 "clone": false, 00:09:37.126 "esnap_clone": false 00:09:37.126 } 00:09:37.126 } 00:09:37.126 } 00:09:37.126 ] 00:09:37.126 00:47:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:09:37.126 00:47:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u df7fc966-0d77-4581-bb3a-025214db45cc 00:09:37.126 00:47:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:09:37.384 00:47:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:09:37.384 00:47:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u df7fc966-0d77-4581-bb3a-025214db45cc 00:09:37.384 00:47:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:09:37.950 00:47:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:09:37.950 00:47:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete ef5e7ae4-1812-464b-84bb-6981c05a6c0b 00:09:38.208 00:47:10 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u df7fc966-0d77-4581-bb3a-025214db45cc 00:09:38.467 00:47:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:38.725 00:47:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:38.725 00:09:38.725 real 0m22.145s 00:09:38.725 user 0m56.128s 00:09:38.725 sys 0m4.588s 00:09:38.725 00:47:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:38.725 00:47:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:38.725 ************************************ 00:09:38.725 END TEST lvs_grow_dirty 00:09:38.725 ************************************ 00:09:38.725 00:47:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:09:38.725 00:47:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:09:38.725 00:47:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:09:38.725 00:47:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:09:38.725 00:47:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:09:38.725 00:47:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:09:38.726 00:47:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:09:38.726 00:47:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files 00:09:38.726 00:47:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:09:38.726 nvmf_trace.0 00:09:38.726 00:47:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:09:38.726 00:47:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:09:38.726 00:47:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:38.726 00:47:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:09:38.726 00:47:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:38.726 00:47:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:09:38.726 00:47:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:38.726 00:47:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:38.726 rmmod nvme_tcp 00:09:38.726 rmmod nvme_fabrics 00:09:38.726 rmmod nvme_keyring 00:09:38.726 00:47:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:38.726 00:47:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:09:38.726 00:47:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:09:38.726 
00:47:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 2863369 ']' 00:09:38.726 00:47:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 2863369 00:09:38.726 00:47:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 2863369 ']' 00:09:38.726 00:47:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 2863369 00:09:38.726 00:47:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:09:38.726 00:47:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:38.726 00:47:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2863369 00:09:38.726 00:47:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:38.726 00:47:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:38.726 00:47:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2863369' 00:09:38.726 killing process with pid 2863369 00:09:38.726 00:47:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 2863369 00:09:38.726 00:47:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 2863369 00:09:40.100 00:47:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:40.101 00:47:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:40.101 00:47:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:40.101 00:47:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:09:40.101 00:47:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:09:40.101 00:47:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:40.101 00:47:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:09:40.101 00:47:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:40.101 00:47:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:40.101 00:47:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:40.101 00:47:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:40.101 00:47:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:42.002 00:47:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:42.002 00:09:42.002 real 0m48.864s 00:09:42.002 user 1m23.499s 00:09:42.002 sys 0m8.655s 00:09:42.002 00:47:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:42.002 00:47:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:42.002 ************************************ 00:09:42.002 END TEST nvmf_lvs_grow 00:09:42.002 ************************************ 00:09:42.002 00:47:14 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:09:42.002 00:47:14 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:42.002 00:47:14 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:42.002 00:47:14 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:42.002 ************************************ 00:09:42.002 START TEST nvmf_bdev_io_wait 00:09:42.002 ************************************ 00:09:42.002 00:47:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:09:42.262 * Looking for test storage... 00:09:42.262 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:42.262 00:47:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:42.262 00:47:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lcov --version 00:09:42.262 00:47:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:42.262 00:47:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:42.262 00:47:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:42.262 00:47:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:42.262 00:47:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:42.262 00:47:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:09:42.262 00:47:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:09:42.262 00:47:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:09:42.262 00:47:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:09:42.262 00:47:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:09:42.262 00:47:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:09:42.262 00:47:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:09:42.262 00:47:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:42.262 00:47:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:09:42.262 00:47:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:09:42.262 00:47:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:42.262 00:47:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:42.262 00:47:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:09:42.262 00:47:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:09:42.262 00:47:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:42.262 00:47:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:09:42.262 00:47:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:09:42.262 00:47:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:09:42.262 00:47:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:09:42.262 00:47:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:42.262 00:47:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:09:42.262 00:47:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:09:42.262 00:47:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:42.262 00:47:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:42.262 00:47:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:09:42.262 00:47:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:42.262 00:47:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:42.262 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:42.262 --rc genhtml_branch_coverage=1 00:09:42.262 --rc genhtml_function_coverage=1 00:09:42.262 --rc genhtml_legend=1 00:09:42.262 --rc geninfo_all_blocks=1 00:09:42.262 --rc geninfo_unexecuted_blocks=1 00:09:42.262 00:09:42.262 ' 00:09:42.263 00:47:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:42.263 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:42.263 --rc genhtml_branch_coverage=1 00:09:42.263 --rc genhtml_function_coverage=1 00:09:42.263 --rc genhtml_legend=1 00:09:42.263 --rc geninfo_all_blocks=1 00:09:42.263 --rc geninfo_unexecuted_blocks=1 00:09:42.263 00:09:42.263 ' 00:09:42.263 00:47:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:42.263 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:42.263 --rc genhtml_branch_coverage=1 00:09:42.263 --rc genhtml_function_coverage=1 00:09:42.263 --rc genhtml_legend=1 00:09:42.263 --rc geninfo_all_blocks=1 00:09:42.263 --rc geninfo_unexecuted_blocks=1 00:09:42.263 00:09:42.263 ' 00:09:42.263 00:47:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:42.263 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:42.263 --rc genhtml_branch_coverage=1 00:09:42.263 --rc genhtml_function_coverage=1 00:09:42.263 --rc genhtml_legend=1 00:09:42.263 --rc geninfo_all_blocks=1 00:09:42.263 --rc geninfo_unexecuted_blocks=1 00:09:42.263 00:09:42.263 ' 00:09:42.263 00:47:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:42.263 00:47:14 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:09:42.263 00:47:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:42.263 00:47:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:42.263 00:47:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:42.263 00:47:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:42.263 00:47:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:42.263 00:47:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:42.263 00:47:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:42.263 00:47:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:42.263 00:47:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:42.263 00:47:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:42.263 00:47:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:09:42.263 00:47:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:09:42.263 00:47:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:42.263 00:47:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:42.263 00:47:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:42.263 00:47:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:42.263 00:47:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:42.263 00:47:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:09:42.263 00:47:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:42.263 00:47:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:42.263 00:47:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:42.263 00:47:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:42.263 00:47:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:42.263 00:47:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:42.263 00:47:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:09:42.263 00:47:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:42.263 00:47:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:09:42.263 00:47:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:42.263 00:47:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:42.263 00:47:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:42.263 00:47:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:42.263 00:47:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:42.263 00:47:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:42.263 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:42.263 00:47:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:42.263 00:47:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:42.263 00:47:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:42.263 00:47:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:42.263 00:47:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # 
MALLOC_BLOCK_SIZE=512 00:09:42.263 00:47:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:09:42.263 00:47:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:42.263 00:47:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:42.263 00:47:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:42.263 00:47:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:42.263 00:47:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:42.263 00:47:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:42.263 00:47:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:42.263 00:47:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:42.263 00:47:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:42.263 00:47:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:42.263 00:47:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:09:42.263 00:47:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:44.843 00:47:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:44.843 00:47:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:09:44.843 00:47:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:44.843 00:47:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:44.843 00:47:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:44.843 00:47:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:44.843 00:47:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:44.843 00:47:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:09:44.843 00:47:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:44.843 00:47:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:09:44.843 00:47:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:09:44.843 00:47:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:09:44.843 00:47:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:09:44.843 00:47:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:09:44.844 00:47:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:09:44.844 00:47:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:44.844 00:47:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:44.844 00:47:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:44.844 00:47:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:44.844 00:47:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:44.844 00:47:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:44.844 00:47:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:44.844 00:47:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:44.844 00:47:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:44.844 00:47:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:44.844 00:47:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:44.844 00:47:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:44.844 00:47:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:44.844 00:47:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:44.844 00:47:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:44.844 00:47:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:44.844 00:47:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:44.844 00:47:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:44.844 00:47:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:44.844 00:47:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:09:44.844 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:09:44.844 00:47:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:44.844 00:47:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:44.844 00:47:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:44.844 00:47:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:44.844 00:47:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:44.844 00:47:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:44.844 00:47:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:09:44.844 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:09:44.844 00:47:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:44.844 00:47:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:44.844 00:47:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:44.844 00:47:16 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:44.844 00:47:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:44.844 00:47:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:44.844 00:47:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:44.844 00:47:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:44.844 00:47:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:44.844 00:47:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:44.844 00:47:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:44.844 00:47:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:44.844 00:47:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:44.844 00:47:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:44.844 00:47:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:44.844 00:47:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:09:44.844 Found net devices under 0000:0a:00.0: cvl_0_0 00:09:44.844 00:47:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:44.844 00:47:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:44.844 00:47:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:44.844 00:47:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:44.844 00:47:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:44.844 00:47:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:44.844 00:47:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:44.844 00:47:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:44.844 00:47:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:09:44.844 Found net devices under 0000:0a:00.1: cvl_0_1 00:09:44.844 00:47:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:44.844 00:47:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:44.844 00:47:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:09:44.844 00:47:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:44.844 00:47:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:44.844 00:47:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:44.844 00:47:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # 
NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:44.844 00:47:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:44.844 00:47:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:44.844 00:47:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:44.844 00:47:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:44.844 00:47:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:44.844 00:47:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:44.844 00:47:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:44.844 00:47:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:44.844 00:47:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:44.844 00:47:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:44.844 00:47:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:44.844 00:47:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:44.844 00:47:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:44.844 00:47:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:44.844 00:47:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:44.844 00:47:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:44.844 00:47:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:44.844 00:47:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:44.844 00:47:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:44.844 00:47:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:44.844 00:47:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:44.844 00:47:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:44.844 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:44.844 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.170 ms 00:09:44.844 00:09:44.844 --- 10.0.0.2 ping statistics --- 00:09:44.844 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:44.844 rtt min/avg/max/mdev = 0.170/0.170/0.170/0.000 ms 00:09:44.844 00:47:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:44.844 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:44.844 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.066 ms 00:09:44.844 00:09:44.844 --- 10.0.0.1 ping statistics --- 00:09:44.844 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:44.844 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:09:44.844 00:47:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:44.844 00:47:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:09:44.844 00:47:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:44.844 00:47:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:44.844 00:47:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:44.844 00:47:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:44.844 00:47:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:44.844 00:47:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:44.844 00:47:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:44.844 00:47:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:09:44.844 00:47:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:44.844 00:47:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:44.844 00:47:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:44.844 00:47:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=2866152 00:09:44.844 00:47:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 2866152 00:09:44.844 00:47:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:09:44.845 00:47:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 2866152 ']' 00:09:44.845 00:47:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:44.845 00:47:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:44.845 00:47:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:44.845 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:44.845 00:47:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:44.845 00:47:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:44.845 [2024-12-06 00:47:17.199976] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 24.03.0 initialization... 
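At this point the log has confirmed two-way reachability between the initiator address (10.0.0.1 on cvl_0_1) and the target address (10.0.0.2 on cvl_0_0 inside the cvl_0_0_ns_spdk namespace), loaded nvme-tcp, and started nvmf_tgt for the bdev_io_wait tests. A minimal sketch of that namespace and addressing setup, restating only commands that appear earlier in this log (interface names and addresses are the ones from this run):

  # Sketch of the target/initiator addressing used by these tests
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ping -c 1 10.0.0.2                                   # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator
  # The test then creates the TCP transport as invoked later in this log:
  #   rpc.py nvmf_create_transport -t tcp -o -u 8192
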
00:09:44.845 [2024-12-06 00:47:17.200131] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:44.845 [2024-12-06 00:47:17.371139] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:44.845 [2024-12-06 00:47:17.515690] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:44.845 [2024-12-06 00:47:17.515796] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:44.845 [2024-12-06 00:47:17.515831] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:44.845 [2024-12-06 00:47:17.515858] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:44.845 [2024-12-06 00:47:17.515878] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:44.845 [2024-12-06 00:47:17.518740] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:44.845 [2024-12-06 00:47:17.518810] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:44.845 [2024-12-06 00:47:17.518899] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:44.845 [2024-12-06 00:47:17.518918] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:45.774 00:47:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:45.774 00:47:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:09:45.774 00:47:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:45.774 00:47:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:45.774 00:47:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:45.774 00:47:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:45.774 00:47:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:09:45.774 00:47:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.774 00:47:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:45.774 00:47:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.774 00:47:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:09:45.774 00:47:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.774 00:47:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:45.774 00:47:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.774 00:47:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:45.774 00:47:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.774 00:47:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- 
# set +x 00:09:45.774 [2024-12-06 00:47:18.463514] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:45.774 00:47:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.774 00:47:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:45.774 00:47:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.774 00:47:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:46.032 Malloc0 00:09:46.032 00:47:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.032 00:47:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:46.032 00:47:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.032 00:47:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:46.032 00:47:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.032 00:47:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:46.032 00:47:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.032 00:47:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:46.032 00:47:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.032 00:47:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:46.032 00:47:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.032 00:47:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:46.032 [2024-12-06 00:47:18.570534] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:46.032 00:47:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.032 00:47:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=2866360 00:09:46.032 00:47:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=2866362 00:09:46.032 00:47:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:09:46.032 00:47:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:09:46.032 00:47:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:09:46.032 00:47:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:09:46.032 00:47:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:46.032 00:47:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:46.032 { 00:09:46.032 "params": { 
00:09:46.032 "name": "Nvme$subsystem", 00:09:46.032 "trtype": "$TEST_TRANSPORT", 00:09:46.032 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:46.032 "adrfam": "ipv4", 00:09:46.032 "trsvcid": "$NVMF_PORT", 00:09:46.032 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:46.032 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:46.032 "hdgst": ${hdgst:-false}, 00:09:46.032 "ddgst": ${ddgst:-false} 00:09:46.032 }, 00:09:46.032 "method": "bdev_nvme_attach_controller" 00:09:46.032 } 00:09:46.032 EOF 00:09:46.032 )") 00:09:46.032 00:47:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=2866364 00:09:46.032 00:47:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:09:46.032 00:47:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:09:46.032 00:47:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:09:46.032 00:47:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:09:46.032 00:47:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:46.032 00:47:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=2866366 00:09:46.032 00:47:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:09:46.032 00:47:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:09:46.032 00:47:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:46.032 { 00:09:46.032 "params": { 00:09:46.032 "name": "Nvme$subsystem", 00:09:46.032 "trtype": "$TEST_TRANSPORT", 00:09:46.032 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:46.032 "adrfam": "ipv4", 00:09:46.032 "trsvcid": "$NVMF_PORT", 00:09:46.032 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:46.032 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:46.032 "hdgst": ${hdgst:-false}, 00:09:46.032 "ddgst": ${ddgst:-false} 00:09:46.032 }, 00:09:46.032 "method": "bdev_nvme_attach_controller" 00:09:46.032 } 00:09:46.032 EOF 00:09:46.032 )") 00:09:46.032 00:47:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:09:46.032 00:47:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:09:46.032 00:47:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:09:46.032 00:47:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:09:46.032 00:47:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:46.032 00:47:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:09:46.032 00:47:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:09:46.032 00:47:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:46.032 { 00:09:46.032 "params": { 
00:09:46.032 "name": "Nvme$subsystem", 00:09:46.032 "trtype": "$TEST_TRANSPORT", 00:09:46.032 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:46.032 "adrfam": "ipv4", 00:09:46.032 "trsvcid": "$NVMF_PORT", 00:09:46.032 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:46.032 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:46.032 "hdgst": ${hdgst:-false}, 00:09:46.032 "ddgst": ${ddgst:-false} 00:09:46.032 }, 00:09:46.032 "method": "bdev_nvme_attach_controller" 00:09:46.032 } 00:09:46.032 EOF 00:09:46.032 )") 00:09:46.032 00:47:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:09:46.032 00:47:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:09:46.032 00:47:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:46.032 00:47:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:46.032 { 00:09:46.032 "params": { 00:09:46.032 "name": "Nvme$subsystem", 00:09:46.032 "trtype": "$TEST_TRANSPORT", 00:09:46.032 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:46.032 "adrfam": "ipv4", 00:09:46.032 "trsvcid": "$NVMF_PORT", 00:09:46.032 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:46.032 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:46.032 "hdgst": ${hdgst:-false}, 00:09:46.032 "ddgst": ${ddgst:-false} 00:09:46.032 }, 00:09:46.032 "method": "bdev_nvme_attach_controller" 00:09:46.032 } 00:09:46.032 EOF 00:09:46.032 )") 00:09:46.032 00:47:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:09:46.032 00:47:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:09:46.032 00:47:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 2866360 00:09:46.032 00:47:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:09:46.032 00:47:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:09:46.033 00:47:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:09:46.033 00:47:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:09:46.033 00:47:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:09:46.033 00:47:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
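(Context note: the gen_nvmf_target_json heredocs traced above are collected via cat and jq into a small bdev JSON config that each bdevperf instance reads from /dev/fd/63 through process substitution. A rough standalone sketch of the same idea follows; the file name nvme1.json and the outer "subsystems"/"bdev" envelope are assumptions for illustration, while the attach parameters and workload flags are the ones substituted in this run.)

    # nvme1.json -- assumed envelope; attach params as substituted in this run
    {
      "subsystems": [ {
        "subsystem": "bdev",
        "config": [ {
          "method": "bdev_nvme_attach_controller",
          "params": { "name": "Nvme1", "trtype": "tcp", "traddr": "10.0.0.2",
                      "adrfam": "ipv4", "trsvcid": "4420",
                      "subnqn": "nqn.2016-06.io.spdk:cnode1",
                      "hostnqn": "nqn.2016-06.io.spdk:host1",
                      "hdgst": false, "ddgst": false } } ] } ]
    }

    # one of the four workloads, mirroring the write invocation traced above
    ./build/examples/bdevperf --json nvme1.json -q 128 -o 4096 -w write -t 1 -s 256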
00:09:46.033 00:47:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:46.033 "params": { 00:09:46.033 "name": "Nvme1", 00:09:46.033 "trtype": "tcp", 00:09:46.033 "traddr": "10.0.0.2", 00:09:46.033 "adrfam": "ipv4", 00:09:46.033 "trsvcid": "4420", 00:09:46.033 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:46.033 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:46.033 "hdgst": false, 00:09:46.033 "ddgst": false 00:09:46.033 }, 00:09:46.033 "method": "bdev_nvme_attach_controller" 00:09:46.033 }' 00:09:46.033 00:47:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:09:46.033 00:47:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:46.033 "params": { 00:09:46.033 "name": "Nvme1", 00:09:46.033 "trtype": "tcp", 00:09:46.033 "traddr": "10.0.0.2", 00:09:46.033 "adrfam": "ipv4", 00:09:46.033 "trsvcid": "4420", 00:09:46.033 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:46.033 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:46.033 "hdgst": false, 00:09:46.033 "ddgst": false 00:09:46.033 }, 00:09:46.033 "method": "bdev_nvme_attach_controller" 00:09:46.033 }' 00:09:46.033 00:47:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:09:46.033 00:47:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:46.033 "params": { 00:09:46.033 "name": "Nvme1", 00:09:46.033 "trtype": "tcp", 00:09:46.033 "traddr": "10.0.0.2", 00:09:46.033 "adrfam": "ipv4", 00:09:46.033 "trsvcid": "4420", 00:09:46.033 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:46.033 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:46.033 "hdgst": false, 00:09:46.033 "ddgst": false 00:09:46.033 }, 00:09:46.033 "method": "bdev_nvme_attach_controller" 00:09:46.033 }' 00:09:46.033 00:47:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:09:46.033 00:47:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:46.033 "params": { 00:09:46.033 "name": "Nvme1", 00:09:46.033 "trtype": "tcp", 00:09:46.033 "traddr": "10.0.0.2", 00:09:46.033 "adrfam": "ipv4", 00:09:46.033 "trsvcid": "4420", 00:09:46.033 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:46.033 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:46.033 "hdgst": false, 00:09:46.033 "ddgst": false 00:09:46.033 }, 00:09:46.033 "method": "bdev_nvme_attach_controller" 00:09:46.033 }' 00:09:46.033 [2024-12-06 00:47:18.659703] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 24.03.0 initialization... 00:09:46.033 [2024-12-06 00:47:18.659866] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:09:46.033 [2024-12-06 00:47:18.661673] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 24.03.0 initialization... 00:09:46.033 [2024-12-06 00:47:18.661823] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:09:46.033 [2024-12-06 00:47:18.680532] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 24.03.0 initialization... 00:09:46.033 [2024-12-06 00:47:18.680532] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 24.03.0 initialization... 
00:09:46.033 [2024-12-06 00:47:18.680692] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:09:46.033 [2024-12-06 00:47:18.680692] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:09:46.290 [2024-12-06 00:47:18.909512] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:46.290 [2024-12-06 00:47:18.979779] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:46.548 [2024-12-06 00:47:19.031563] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:09:46.548 [2024-12-06 00:47:19.080669] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:46.548 [2024-12-06 00:47:19.098147] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:09:46.548 [2024-12-06 00:47:19.181916] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:46.548 [2024-12-06 00:47:19.203716] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:09:46.806 [2024-12-06 00:47:19.303693] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:09:46.806 Running I/O for 1 seconds... 00:09:47.064 Running I/O for 1 seconds... 00:09:47.064 Running I/O for 1 seconds... 00:09:47.323 Running I/O for 1 seconds... 00:09:47.890 7418.00 IOPS, 28.98 MiB/s 00:09:47.890 Latency(us) 00:09:47.890 [2024-12-05T23:47:20.599Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:47.890 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:09:47.890 Nvme1n1 : 1.01 7480.07 29.22 0.00 0.00 17023.04 3859.34 35729.26 00:09:47.890 [2024-12-05T23:47:20.599Z] =================================================================================================================== 00:09:47.890 [2024-12-05T23:47:20.599Z] Total : 7480.07 29.22 0.00 0.00 17023.04 3859.34 35729.26 00:09:47.890 7313.00 IOPS, 28.57 MiB/s 00:09:47.890 Latency(us) 00:09:47.890 [2024-12-05T23:47:20.599Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:47.890 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:09:47.890 Nvme1n1 : 1.01 7377.19 28.82 0.00 0.00 17259.58 4805.97 26020.22 00:09:47.890 [2024-12-05T23:47:20.599Z] =================================================================================================================== 00:09:47.890 [2024-12-05T23:47:20.599Z] Total : 7377.19 28.82 0.00 0.00 17259.58 4805.97 26020.22 00:09:48.148 6821.00 IOPS, 26.64 MiB/s 00:09:48.148 Latency(us) 00:09:48.148 [2024-12-05T23:47:20.857Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:48.148 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:09:48.148 Nvme1n1 : 1.01 6872.01 26.84 0.00 0.00 18509.26 7281.78 26991.12 00:09:48.148 [2024-12-05T23:47:20.857Z] =================================================================================================================== 00:09:48.148 [2024-12-05T23:47:20.857Z] Total : 6872.01 26.84 0.00 0.00 18509.26 7281.78 26991.12 00:09:48.406 148288.00 IOPS, 579.25 MiB/s 00:09:48.406 Latency(us) 00:09:48.406 [2024-12-05T23:47:21.115Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:09:48.406 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:09:48.406 Nvme1n1 : 1.00 147973.27 578.02 0.00 0.00 860.49 412.63 2087.44 00:09:48.406 [2024-12-05T23:47:21.115Z] =================================================================================================================== 00:09:48.406 [2024-12-05T23:47:21.115Z] Total : 147973.27 578.02 0.00 0.00 860.49 412.63 2087.44 00:09:48.406 00:47:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 2866362 00:09:48.973 00:47:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 2866364 00:09:48.973 00:47:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 2866366 00:09:48.973 00:47:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:48.973 00:47:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.973 00:47:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:48.973 00:47:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.973 00:47:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:09:48.973 00:47:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:09:48.973 00:47:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:48.973 00:47:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:09:48.973 00:47:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:48.973 00:47:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:09:48.973 00:47:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:48.973 00:47:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:48.973 rmmod nvme_tcp 00:09:48.973 rmmod nvme_fabrics 00:09:48.973 rmmod nvme_keyring 00:09:48.973 00:47:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:48.973 00:47:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:09:48.974 00:47:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:09:48.974 00:47:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 2866152 ']' 00:09:48.974 00:47:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 2866152 00:09:48.974 00:47:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 2866152 ']' 00:09:48.974 00:47:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 2866152 00:09:48.974 00:47:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:09:48.974 00:47:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:48.974 00:47:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2866152 00:09:48.974 00:47:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:48.974 00:47:21 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:48.974 00:47:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2866152' 00:09:48.974 killing process with pid 2866152 00:09:48.974 00:47:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 2866152 00:09:48.974 00:47:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 2866152 00:09:50.375 00:47:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:50.375 00:47:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:50.375 00:47:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:50.375 00:47:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:09:50.375 00:47:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:09:50.375 00:47:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:50.375 00:47:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:09:50.375 00:47:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:50.375 00:47:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:50.375 00:47:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:50.375 00:47:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:50.375 00:47:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:52.278 00:47:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:52.278 00:09:52.278 real 0m10.047s 00:09:52.278 user 0m28.379s 00:09:52.278 sys 0m4.250s 00:09:52.278 00:47:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:52.278 00:47:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:52.278 ************************************ 00:09:52.278 END TEST nvmf_bdev_io_wait 00:09:52.278 ************************************ 00:09:52.278 00:47:24 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:09:52.278 00:47:24 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:52.278 00:47:24 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:52.278 00:47:24 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:52.278 ************************************ 00:09:52.278 START TEST nvmf_queue_depth 00:09:52.278 ************************************ 00:09:52.278 00:47:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:09:52.278 * Looking for test storage... 
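(Context note: the target that the nvmf_bdev_io_wait test above exercised was assembled entirely over JSON-RPC; the rpc_cmd calls in the trace are effectively wrappers around scripts/rpc.py. A rough manual equivalent, assuming the default /var/tmp/spdk.sock RPC socket and reusing the names, sizes and listen address from this run, would be the sketch below. The nvmf_queue_depth test starting here repeats the same pattern against a single-core target.)

    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # teardown, as the test does before nvmftestfini
    ./scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1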
00:09:52.278 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:52.278 00:47:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:52.278 00:47:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lcov --version 00:09:52.278 00:47:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:52.278 00:47:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:52.278 00:47:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:52.278 00:47:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:52.278 00:47:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:52.278 00:47:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:09:52.278 00:47:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:09:52.278 00:47:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:09:52.278 00:47:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:09:52.278 00:47:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:09:52.278 00:47:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:09:52.278 00:47:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:09:52.278 00:47:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:52.278 00:47:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:09:52.278 00:47:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:09:52.278 00:47:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:52.278 00:47:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:52.278 00:47:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:09:52.278 00:47:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:09:52.278 00:47:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:52.278 00:47:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:09:52.278 00:47:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:09:52.278 00:47:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:09:52.278 00:47:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:09:52.278 00:47:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:52.278 00:47:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:09:52.278 00:47:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:09:52.278 00:47:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:52.278 00:47:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:52.278 00:47:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:09:52.278 00:47:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:52.278 00:47:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:52.278 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:52.278 --rc genhtml_branch_coverage=1 00:09:52.278 --rc genhtml_function_coverage=1 00:09:52.278 --rc genhtml_legend=1 00:09:52.278 --rc geninfo_all_blocks=1 00:09:52.278 --rc geninfo_unexecuted_blocks=1 00:09:52.278 00:09:52.278 ' 00:09:52.278 00:47:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:52.278 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:52.278 --rc genhtml_branch_coverage=1 00:09:52.278 --rc genhtml_function_coverage=1 00:09:52.278 --rc genhtml_legend=1 00:09:52.278 --rc geninfo_all_blocks=1 00:09:52.278 --rc geninfo_unexecuted_blocks=1 00:09:52.278 00:09:52.278 ' 00:09:52.278 00:47:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:52.278 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:52.278 --rc genhtml_branch_coverage=1 00:09:52.278 --rc genhtml_function_coverage=1 00:09:52.278 --rc genhtml_legend=1 00:09:52.278 --rc geninfo_all_blocks=1 00:09:52.278 --rc geninfo_unexecuted_blocks=1 00:09:52.278 00:09:52.278 ' 00:09:52.278 00:47:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:52.278 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:52.278 --rc genhtml_branch_coverage=1 00:09:52.278 --rc genhtml_function_coverage=1 00:09:52.278 --rc genhtml_legend=1 00:09:52.278 --rc geninfo_all_blocks=1 00:09:52.278 --rc geninfo_unexecuted_blocks=1 00:09:52.278 00:09:52.278 ' 00:09:52.278 00:47:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:52.278 00:47:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth 
-- nvmf/common.sh@7 -- # uname -s 00:09:52.279 00:47:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:52.279 00:47:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:52.279 00:47:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:52.279 00:47:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:52.279 00:47:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:52.279 00:47:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:52.279 00:47:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:52.279 00:47:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:52.279 00:47:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:52.279 00:47:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:52.279 00:47:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:09:52.279 00:47:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:09:52.279 00:47:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:52.279 00:47:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:52.279 00:47:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:52.279 00:47:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:52.279 00:47:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:52.279 00:47:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:09:52.279 00:47:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:52.279 00:47:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:52.279 00:47:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:52.279 00:47:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:52.279 00:47:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:52.279 00:47:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:52.279 00:47:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:09:52.279 00:47:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:52.279 00:47:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:09:52.279 00:47:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:52.279 00:47:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:52.279 00:47:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:52.279 00:47:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:52.279 00:47:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:52.279 00:47:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:52.279 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:52.279 00:47:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:52.279 00:47:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:52.279 00:47:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:52.279 00:47:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:09:52.279 00:47:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # 
MALLOC_BLOCK_SIZE=512 00:09:52.279 00:47:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:09:52.279 00:47:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:09:52.279 00:47:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:52.279 00:47:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:52.279 00:47:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:52.279 00:47:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:52.279 00:47:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:52.279 00:47:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:52.279 00:47:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:52.279 00:47:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:52.279 00:47:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:52.279 00:47:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:52.279 00:47:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:09:52.279 00:47:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:54.810 00:47:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:54.810 00:47:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:09:54.810 00:47:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:54.810 00:47:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:54.810 00:47:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:54.810 00:47:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:54.810 00:47:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:54.810 00:47:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:09:54.810 00:47:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:54.810 00:47:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:09:54.810 00:47:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:09:54.810 00:47:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:09:54.810 00:47:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:09:54.810 00:47:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:09:54.810 00:47:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:09:54.810 00:47:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:54.810 00:47:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:54.810 00:47:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:54.810 00:47:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:54.810 00:47:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:54.810 00:47:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:54.810 00:47:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:54.810 00:47:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:54.810 00:47:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:54.810 00:47:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:54.810 00:47:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:54.811 00:47:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:54.811 00:47:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:54.811 00:47:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:54.811 00:47:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:54.811 00:47:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:54.811 00:47:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:54.811 00:47:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:54.811 00:47:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:54.811 00:47:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:09:54.811 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:09:54.811 00:47:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:54.811 00:47:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:54.811 00:47:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:54.811 00:47:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:54.811 00:47:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:54.811 00:47:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:54.811 00:47:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:09:54.811 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:09:54.811 00:47:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:54.811 00:47:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:54.811 00:47:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:54.811 00:47:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:54.811 00:47:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:54.811 00:47:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:54.811 00:47:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:54.811 00:47:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:54.811 00:47:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:54.811 00:47:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:54.811 00:47:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:54.811 00:47:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:54.811 00:47:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:54.811 00:47:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:54.811 00:47:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:54.811 00:47:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:09:54.811 Found net devices under 0000:0a:00.0: cvl_0_0 00:09:54.811 00:47:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:54.811 00:47:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:54.811 00:47:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:54.811 00:47:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:54.811 00:47:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:54.811 00:47:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:54.811 00:47:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:54.811 00:47:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:54.811 00:47:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:09:54.811 Found net devices under 0000:0a:00.1: cvl_0_1 00:09:54.811 00:47:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:54.811 00:47:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:54.811 00:47:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:09:54.811 00:47:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:54.811 00:47:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:54.811 00:47:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:54.811 00:47:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:54.811 00:47:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:54.811 00:47:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:54.811 00:47:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:54.811 00:47:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:54.811 00:47:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:54.811 00:47:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:54.811 00:47:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:54.811 00:47:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:54.811 00:47:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:54.811 00:47:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:54.811 00:47:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:54.811 00:47:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:54.811 00:47:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:54.811 00:47:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:54.811 00:47:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:54.811 00:47:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:54.811 00:47:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:54.811 00:47:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:54.811 00:47:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:54.811 00:47:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:54.811 00:47:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:54.811 00:47:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:54.811 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:54.811 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.339 ms 00:09:54.811 00:09:54.811 --- 10.0.0.2 ping statistics --- 00:09:54.811 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:54.811 rtt min/avg/max/mdev = 0.339/0.339/0.339/0.000 ms 00:09:54.811 00:47:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:54.811 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:54.811 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.131 ms 00:09:54.811 00:09:54.811 --- 10.0.0.1 ping statistics --- 00:09:54.811 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:54.811 rtt min/avg/max/mdev = 0.131/0.131/0.131/0.000 ms 00:09:54.811 00:47:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:54.811 00:47:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:09:54.811 00:47:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:54.811 00:47:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:54.811 00:47:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:54.811 00:47:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:54.811 00:47:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:54.811 00:47:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:54.811 00:47:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:54.811 00:47:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:09:54.811 00:47:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:54.811 00:47:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:54.811 00:47:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:54.811 00:47:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=2868750 00:09:54.811 00:47:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:09:54.811 00:47:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 2868750 00:09:54.811 00:47:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 2868750 ']' 00:09:54.811 00:47:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:54.811 00:47:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:54.811 00:47:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:54.811 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:54.811 00:47:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:54.811 00:47:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:54.811 [2024-12-06 00:47:27.206330] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 24.03.0 initialization... 
00:09:54.812 [2024-12-06 00:47:27.206477] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:54.812 [2024-12-06 00:47:27.361747] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:54.812 [2024-12-06 00:47:27.495218] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:54.812 [2024-12-06 00:47:27.495325] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:54.812 [2024-12-06 00:47:27.495358] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:54.812 [2024-12-06 00:47:27.495384] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:54.812 [2024-12-06 00:47:27.495405] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:54.812 [2024-12-06 00:47:27.497060] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:55.747 00:47:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:55.747 00:47:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:09:55.747 00:47:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:55.747 00:47:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:55.747 00:47:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:55.747 00:47:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:55.748 00:47:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:55.748 00:47:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.748 00:47:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:55.748 [2024-12-06 00:47:28.203983] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:55.748 00:47:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.748 00:47:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:55.748 00:47:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.748 00:47:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:55.748 Malloc0 00:09:55.748 00:47:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.748 00:47:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:55.748 00:47:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.748 00:47:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:55.748 00:47:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.748 00:47:28 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:55.748 00:47:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.748 00:47:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:55.748 00:47:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.748 00:47:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:55.748 00:47:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.748 00:47:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:55.748 [2024-12-06 00:47:28.325511] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:55.748 00:47:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.748 00:47:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=2868904 00:09:55.748 00:47:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:55.748 00:47:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:09:55.748 00:47:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 2868904 /var/tmp/bdevperf.sock 00:09:55.748 00:47:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 2868904 ']' 00:09:55.748 00:47:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:55.748 00:47:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:55.748 00:47:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:55.748 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:55.748 00:47:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:55.748 00:47:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:55.748 [2024-12-06 00:47:28.419793] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 24.03.0 initialization... 
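queue_depth.sh drives the target entirely over JSON-RPC: rpc_cmd in the trace is effectively a wrapper around scripts/rpc.py, talking to the /var/tmp/spdk.sock Unix socket of the nvmf_tgt started above (no ip netns exec is needed for that, since the socket lives in the filesystem rather than in the network namespace). A sketch mirroring the calls made in this run:

  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0            # 64 MiB RAM bdev with 512-byte blocks
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

bdevperf (started above with -q 1024 -o 4096 -w verify -t 10) then attaches to that listener via bdev_nvme_attach_controller on its own /var/tmp/bdevperf.sock, and bdevperf.py perform_tests kicks off the 10-second verify run whose IOPS samples follow.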
00:09:55.748 [2024-12-06 00:47:28.419965] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2868904 ] 00:09:56.007 [2024-12-06 00:47:28.566939] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:56.007 [2024-12-06 00:47:28.702754] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:56.983 00:47:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:56.983 00:47:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:09:56.983 00:47:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:09:56.983 00:47:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.983 00:47:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:56.983 NVMe0n1 00:09:56.983 00:47:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.983 00:47:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:57.256 Running I/O for 10 seconds... 00:09:59.123 5468.00 IOPS, 21.36 MiB/s [2024-12-05T23:47:32.766Z] 5822.00 IOPS, 22.74 MiB/s [2024-12-05T23:47:34.143Z] 6005.67 IOPS, 23.46 MiB/s [2024-12-05T23:47:35.078Z] 6017.50 IOPS, 23.51 MiB/s [2024-12-05T23:47:36.010Z] 5995.40 IOPS, 23.42 MiB/s [2024-12-05T23:47:36.945Z] 6012.50 IOPS, 23.49 MiB/s [2024-12-05T23:47:37.878Z] 6027.86 IOPS, 23.55 MiB/s [2024-12-05T23:47:38.811Z] 6050.12 IOPS, 23.63 MiB/s [2024-12-05T23:47:40.183Z] 6056.33 IOPS, 23.66 MiB/s [2024-12-05T23:47:40.183Z] 6061.00 IOPS, 23.68 MiB/s 00:10:07.474 Latency(us) 00:10:07.474 [2024-12-05T23:47:40.183Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:07.474 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:10:07.474 Verification LBA range: start 0x0 length 0x4000 00:10:07.474 NVMe0n1 : 10.14 6075.34 23.73 0.00 0.00 166835.63 18932.62 100197.26 00:10:07.474 [2024-12-05T23:47:40.183Z] =================================================================================================================== 00:10:07.474 [2024-12-05T23:47:40.183Z] Total : 6075.34 23.73 0.00 0.00 166835.63 18932.62 100197.26 00:10:07.474 { 00:10:07.474 "results": [ 00:10:07.474 { 00:10:07.474 "job": "NVMe0n1", 00:10:07.474 "core_mask": "0x1", 00:10:07.474 "workload": "verify", 00:10:07.474 "status": "finished", 00:10:07.474 "verify_range": { 00:10:07.474 "start": 0, 00:10:07.474 "length": 16384 00:10:07.474 }, 00:10:07.474 "queue_depth": 1024, 00:10:07.474 "io_size": 4096, 00:10:07.474 "runtime": 10.137545, 00:10:07.474 "iops": 6075.33678025597, 00:10:07.474 "mibps": 23.73178429787488, 00:10:07.474 "io_failed": 0, 00:10:07.474 "io_timeout": 0, 00:10:07.474 "avg_latency_us": 166835.63337185633, 00:10:07.474 "min_latency_us": 18932.62222222222, 00:10:07.474 "max_latency_us": 100197.26222222223 00:10:07.474 } 00:10:07.474 ], 00:10:07.474 "core_count": 1 00:10:07.474 } 00:10:07.474 00:47:39 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 2868904 00:10:07.474 00:47:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 2868904 ']' 00:10:07.474 00:47:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 2868904 00:10:07.474 00:47:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:10:07.474 00:47:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:07.474 00:47:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2868904 00:10:07.474 00:47:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:07.474 00:47:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:07.474 00:47:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2868904' 00:10:07.474 killing process with pid 2868904 00:10:07.474 00:47:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 2868904 00:10:07.474 Received shutdown signal, test time was about 10.000000 seconds 00:10:07.474 00:10:07.474 Latency(us) 00:10:07.474 [2024-12-05T23:47:40.183Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:07.474 [2024-12-05T23:47:40.183Z] =================================================================================================================== 00:10:07.474 [2024-12-05T23:47:40.183Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:10:07.474 00:47:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 2868904 00:10:08.407 00:47:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:10:08.407 00:47:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:10:08.407 00:47:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:08.407 00:47:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:10:08.407 00:47:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:08.407 00:47:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:10:08.407 00:47:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:08.407 00:47:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:08.407 rmmod nvme_tcp 00:10:08.407 rmmod nvme_fabrics 00:10:08.407 rmmod nvme_keyring 00:10:08.407 00:47:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:08.407 00:47:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:10:08.407 00:47:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:10:08.407 00:47:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 2868750 ']' 00:10:08.407 00:47:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 2868750 00:10:08.407 00:47:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 2868750 ']' 00:10:08.407 00:47:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
common/autotest_common.sh@958 -- # kill -0 2868750 00:10:08.407 00:47:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:10:08.407 00:47:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:08.407 00:47:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2868750 00:10:08.407 00:47:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:10:08.407 00:47:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:10:08.407 00:47:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2868750' 00:10:08.407 killing process with pid 2868750 00:10:08.407 00:47:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 2868750 00:10:08.407 00:47:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 2868750 00:10:09.783 00:47:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:09.783 00:47:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:09.783 00:47:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:09.783 00:47:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:10:09.783 00:47:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:10:09.783 00:47:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:09.783 00:47:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:10:09.783 00:47:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:09.783 00:47:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:09.783 00:47:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:09.783 00:47:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:09.783 00:47:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:12.330 00:47:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:12.330 00:10:12.330 real 0m19.674s 00:10:12.330 user 0m28.159s 00:10:12.330 sys 0m3.276s 00:10:12.330 00:47:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:12.330 00:47:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:12.330 ************************************ 00:10:12.330 END TEST nvmf_queue_depth 00:10:12.330 ************************************ 00:10:12.330 00:47:44 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:10:12.330 00:47:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:12.330 00:47:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:12.330 00:47:44 nvmf_tcp.nvmf_target_core -- 
common/autotest_common.sh@10 -- # set +x 00:10:12.330 ************************************ 00:10:12.330 START TEST nvmf_target_multipath 00:10:12.330 ************************************ 00:10:12.330 00:47:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:10:12.330 * Looking for test storage... 00:10:12.330 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:12.330 00:47:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:12.330 00:47:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lcov --version 00:10:12.330 00:47:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:12.330 00:47:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:12.330 00:47:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:12.330 00:47:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:12.330 00:47:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:12.331 00:47:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:10:12.331 00:47:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:10:12.331 00:47:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:10:12.331 00:47:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:10:12.331 00:47:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:10:12.331 00:47:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:10:12.331 00:47:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:10:12.331 00:47:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:12.331 00:47:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:10:12.331 00:47:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:10:12.331 00:47:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:12.331 00:47:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:12.331 00:47:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:10:12.331 00:47:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:10:12.331 00:47:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:12.331 00:47:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:10:12.331 00:47:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:10:12.331 00:47:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:10:12.331 00:47:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:10:12.331 00:47:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:12.331 00:47:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:10:12.331 00:47:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:10:12.331 00:47:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:12.331 00:47:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:12.331 00:47:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:10:12.331 00:47:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:12.331 00:47:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:12.331 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:12.331 --rc genhtml_branch_coverage=1 00:10:12.331 --rc genhtml_function_coverage=1 00:10:12.331 --rc genhtml_legend=1 00:10:12.331 --rc geninfo_all_blocks=1 00:10:12.331 --rc geninfo_unexecuted_blocks=1 00:10:12.331 00:10:12.331 ' 00:10:12.331 00:47:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:12.331 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:12.331 --rc genhtml_branch_coverage=1 00:10:12.331 --rc genhtml_function_coverage=1 00:10:12.331 --rc genhtml_legend=1 00:10:12.331 --rc geninfo_all_blocks=1 00:10:12.331 --rc geninfo_unexecuted_blocks=1 00:10:12.331 00:10:12.331 ' 00:10:12.331 00:47:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:12.331 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:12.331 --rc genhtml_branch_coverage=1 00:10:12.331 --rc genhtml_function_coverage=1 00:10:12.331 --rc genhtml_legend=1 00:10:12.331 --rc geninfo_all_blocks=1 00:10:12.331 --rc geninfo_unexecuted_blocks=1 00:10:12.331 00:10:12.331 ' 00:10:12.331 00:47:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:12.331 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:12.331 --rc genhtml_branch_coverage=1 00:10:12.331 --rc genhtml_function_coverage=1 00:10:12.331 --rc genhtml_legend=1 00:10:12.331 --rc geninfo_all_blocks=1 00:10:12.331 --rc geninfo_unexecuted_blocks=1 00:10:12.331 00:10:12.331 ' 00:10:12.331 00:47:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:12.331 00:47:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:10:12.331 00:47:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:12.331 00:47:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:12.331 00:47:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:12.331 00:47:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:12.331 00:47:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:12.331 00:47:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:12.331 00:47:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:12.331 00:47:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:12.331 00:47:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:12.331 00:47:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:12.331 00:47:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:10:12.331 00:47:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:10:12.331 00:47:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:12.331 00:47:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:12.331 00:47:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:12.331 00:47:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:12.331 00:47:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:12.331 00:47:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:10:12.331 00:47:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:12.331 00:47:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:12.331 00:47:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:12.331 00:47:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:12.331 00:47:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:12.331 00:47:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:12.331 00:47:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:10:12.331 00:47:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:12.331 00:47:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:10:12.331 00:47:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:12.331 00:47:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:12.331 00:47:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:12.331 00:47:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:12.331 00:47:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:12.331 00:47:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:12.331 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:12.331 00:47:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:12.331 00:47:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:12.331 00:47:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:12.331 00:47:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:12.331 00:47:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:12.331 00:47:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:10:12.331 00:47:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:12.331 00:47:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:10:12.331 00:47:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:12.331 00:47:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:12.331 00:47:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:12.332 00:47:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:12.332 00:47:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:12.332 00:47:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:12.332 00:47:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:12.332 00:47:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:12.332 00:47:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:12.332 00:47:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:12.332 00:47:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:10:12.332 00:47:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:10:14.231 00:47:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:14.231 00:47:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:10:14.231 00:47:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:14.231 00:47:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:14.231 00:47:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:14.231 00:47:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:14.231 00:47:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:14.231 00:47:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # 
net_devs=() 00:10:14.231 00:47:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:14.231 00:47:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:10:14.231 00:47:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:10:14.231 00:47:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:10:14.231 00:47:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:10:14.231 00:47:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:10:14.231 00:47:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:10:14.231 00:47:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:14.231 00:47:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:14.231 00:47:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:14.231 00:47:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:14.231 00:47:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:14.231 00:47:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:14.231 00:47:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:14.231 00:47:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:14.231 00:47:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:14.231 00:47:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:14.231 00:47:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:14.231 00:47:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:14.231 00:47:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:14.231 00:47:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:14.231 00:47:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:14.231 00:47:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:14.231 00:47:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:14.231 00:47:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:14.231 00:47:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:14.231 00:47:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:10:14.231 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:10:14.231 00:47:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:14.231 00:47:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:14.231 00:47:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:14.231 00:47:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:14.231 00:47:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:14.231 00:47:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:14.231 00:47:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:10:14.231 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:10:14.231 00:47:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:14.231 00:47:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:14.231 00:47:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:14.231 00:47:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:14.231 00:47:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:14.231 00:47:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:14.231 00:47:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:14.231 00:47:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:14.231 00:47:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:14.231 00:47:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:14.231 00:47:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:14.232 00:47:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:14.232 00:47:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:14.232 00:47:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:14.232 00:47:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:14.232 00:47:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:10:14.232 Found net devices under 0000:0a:00.0: cvl_0_0 00:10:14.232 00:47:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:14.232 00:47:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:14.232 00:47:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:14.232 00:47:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:14.232 00:47:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:14.232 00:47:46 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:14.232 00:47:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:14.232 00:47:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:14.232 00:47:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:10:14.232 Found net devices under 0000:0a:00.1: cvl_0_1 00:10:14.232 00:47:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:14.232 00:47:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:14.232 00:47:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:10:14.232 00:47:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:14.232 00:47:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:14.232 00:47:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:14.232 00:47:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:14.232 00:47:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:14.232 00:47:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:14.232 00:47:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:14.232 00:47:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:14.232 00:47:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:14.232 00:47:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:14.232 00:47:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:14.232 00:47:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:14.232 00:47:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:14.232 00:47:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:14.232 00:47:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:14.232 00:47:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:14.232 00:47:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:14.232 00:47:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:14.232 00:47:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:14.232 00:47:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:14.232 00:47:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip 
link set cvl_0_1 up 00:10:14.232 00:47:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:14.232 00:47:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:14.232 00:47:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:14.232 00:47:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:14.232 00:47:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:14.232 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:14.232 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.251 ms 00:10:14.232 00:10:14.232 --- 10.0.0.2 ping statistics --- 00:10:14.232 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:14.232 rtt min/avg/max/mdev = 0.251/0.251/0.251/0.000 ms 00:10:14.232 00:47:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:14.232 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:14.232 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.126 ms 00:10:14.232 00:10:14.232 --- 10.0.0.1 ping statistics --- 00:10:14.232 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:14.232 rtt min/avg/max/mdev = 0.126/0.126/0.126/0.000 ms 00:10:14.232 00:47:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:14.232 00:47:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:10:14.232 00:47:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:14.232 00:47:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:14.232 00:47:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:14.232 00:47:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:14.232 00:47:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:14.232 00:47:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:14.232 00:47:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:14.232 00:47:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:10:14.232 00:47:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:10:14.232 only one NIC for nvmf test 00:10:14.232 00:47:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:10:14.232 00:47:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:14.232 00:47:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:10:14.232 00:47:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:14.232 00:47:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 
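The multipath script never reaches its I/O stages on this host: nvmf_tcp_init found a single usable port pair, so NVMF_SECOND_TARGET_IP stayed empty (nvmf/common.sh@262 above) and the guard at multipath.sh@45-48 — the '[ -z ]' test in the trace, presumably on that variable — prints 'only one NIC for nvmf test' and falls through to nvmftestfini. The teardown that follows amounts to roughly this (a sketch; the final ip netns delete is an assumed effect of _remove_spdk_ns, which the trace only shows being eval'd):

  modprobe -v -r nvme-tcp                                  # rmmod output shows nvme_tcp, nvme_fabrics and nvme_keyring being unloaded
  modprobe -v -r nvme-fabrics
  iptables-save | grep -v SPDK_NVMF | iptables-restore     # drop only the rules the harness tagged earlier
  ip netns delete cvl_0_0_ns_spdk                          # assumption: what _remove_spdk_ns boils down to here
  ip -4 addr flush cvl_0_1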
00:10:14.232 00:47:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:14.232 00:47:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:14.232 rmmod nvme_tcp 00:10:14.232 rmmod nvme_fabrics 00:10:14.232 rmmod nvme_keyring 00:10:14.232 00:47:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:14.232 00:47:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:10:14.232 00:47:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:10:14.232 00:47:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:10:14.232 00:47:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:14.232 00:47:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:14.232 00:47:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:14.232 00:47:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:10:14.232 00:47:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:10:14.232 00:47:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:14.232 00:47:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:10:14.491 00:47:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:14.491 00:47:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:14.491 00:47:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:14.491 00:47:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:14.491 00:47:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:16.396 00:47:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:16.396 00:47:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:10:16.397 00:47:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:10:16.397 00:47:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:16.397 00:47:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:10:16.397 00:47:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:16.397 00:47:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:10:16.397 00:47:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:16.397 00:47:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:16.397 00:47:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:16.397 00:47:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:10:16.397 00:47:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@129 -- # return 0 00:10:16.397 00:47:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:10:16.397 00:47:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:16.397 00:47:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:16.397 00:47:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:16.397 00:47:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:10:16.397 00:47:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:10:16.397 00:47:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:16.397 00:47:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:10:16.397 00:47:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:16.397 00:47:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:16.397 00:47:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:16.397 00:47:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:16.397 00:47:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:16.397 00:47:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:16.397 00:10:16.397 real 0m4.520s 00:10:16.397 user 0m0.933s 00:10:16.397 sys 0m1.581s 00:10:16.397 00:47:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:16.397 00:47:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:10:16.397 ************************************ 00:10:16.397 END TEST nvmf_target_multipath 00:10:16.397 ************************************ 00:10:16.397 00:47:49 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:10:16.397 00:47:49 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:16.397 00:47:49 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:16.397 00:47:49 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:16.397 ************************************ 00:10:16.397 START TEST nvmf_zcopy 00:10:16.397 ************************************ 00:10:16.397 00:47:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:10:16.656 * Looking for test storage... 
00:10:16.656 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:16.656 00:47:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:16.656 00:47:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lcov --version 00:10:16.656 00:47:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:16.656 00:47:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:16.656 00:47:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:16.656 00:47:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:16.656 00:47:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:16.656 00:47:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:10:16.657 00:47:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:10:16.657 00:47:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:10:16.657 00:47:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:10:16.657 00:47:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:10:16.657 00:47:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:10:16.657 00:47:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:10:16.657 00:47:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:16.657 00:47:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:10:16.657 00:47:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:10:16.657 00:47:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:16.657 00:47:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:16.657 00:47:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:10:16.657 00:47:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:10:16.657 00:47:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:16.657 00:47:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:10:16.657 00:47:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:10:16.657 00:47:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:10:16.657 00:47:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:10:16.657 00:47:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:16.657 00:47:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:10:16.657 00:47:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:10:16.657 00:47:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:16.657 00:47:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:16.657 00:47:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:10:16.657 00:47:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:16.657 00:47:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:16.657 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:16.657 --rc genhtml_branch_coverage=1 00:10:16.657 --rc genhtml_function_coverage=1 00:10:16.657 --rc genhtml_legend=1 00:10:16.657 --rc geninfo_all_blocks=1 00:10:16.657 --rc geninfo_unexecuted_blocks=1 00:10:16.657 00:10:16.657 ' 00:10:16.657 00:47:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:16.657 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:16.657 --rc genhtml_branch_coverage=1 00:10:16.657 --rc genhtml_function_coverage=1 00:10:16.657 --rc genhtml_legend=1 00:10:16.657 --rc geninfo_all_blocks=1 00:10:16.657 --rc geninfo_unexecuted_blocks=1 00:10:16.657 00:10:16.657 ' 00:10:16.657 00:47:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:16.657 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:16.657 --rc genhtml_branch_coverage=1 00:10:16.657 --rc genhtml_function_coverage=1 00:10:16.657 --rc genhtml_legend=1 00:10:16.657 --rc geninfo_all_blocks=1 00:10:16.657 --rc geninfo_unexecuted_blocks=1 00:10:16.657 00:10:16.657 ' 00:10:16.657 00:47:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:16.657 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:16.657 --rc genhtml_branch_coverage=1 00:10:16.657 --rc genhtml_function_coverage=1 00:10:16.657 --rc genhtml_legend=1 00:10:16.657 --rc geninfo_all_blocks=1 00:10:16.657 --rc geninfo_unexecuted_blocks=1 00:10:16.657 00:10:16.657 ' 00:10:16.657 00:47:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:16.657 00:47:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:10:16.657 00:47:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:10:16.657 00:47:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:16.657 00:47:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:16.657 00:47:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:16.657 00:47:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:16.657 00:47:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:16.657 00:47:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:16.657 00:47:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:16.657 00:47:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:16.657 00:47:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:16.657 00:47:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:10:16.657 00:47:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:10:16.657 00:47:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:16.657 00:47:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:16.657 00:47:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:16.657 00:47:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:16.657 00:47:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:16.657 00:47:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:10:16.657 00:47:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:16.657 00:47:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:16.657 00:47:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:16.657 00:47:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:16.657 00:47:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:16.657 00:47:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:16.657 00:47:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:10:16.657 00:47:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:16.657 00:47:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:10:16.657 00:47:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:16.657 00:47:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:16.657 00:47:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:16.657 00:47:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:16.657 00:47:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:16.657 00:47:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:16.657 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:16.657 00:47:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:16.657 00:47:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:16.657 00:47:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:16.657 00:47:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:10:16.657 00:47:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:16.657 00:47:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT 
SIGTERM EXIT 00:10:16.657 00:47:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:16.657 00:47:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:16.657 00:47:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:16.657 00:47:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:16.657 00:47:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:16.657 00:47:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:16.657 00:47:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:16.657 00:47:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:16.657 00:47:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:10:16.657 00:47:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:18.560 00:47:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:18.560 00:47:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:10:18.560 00:47:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:18.560 00:47:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:18.560 00:47:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:18.560 00:47:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:18.560 00:47:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:18.560 00:47:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:10:18.560 00:47:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:18.560 00:47:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:10:18.560 00:47:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:10:18.560 00:47:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:10:18.560 00:47:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:10:18.560 00:47:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:10:18.560 00:47:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:10:18.560 00:47:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:18.560 00:47:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:18.560 00:47:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:18.560 00:47:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:18.560 00:47:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:18.560 00:47:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:18.560 00:47:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:18.560 00:47:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:18.560 00:47:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:18.560 00:47:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:18.560 00:47:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:18.560 00:47:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:18.560 00:47:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:18.560 00:47:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:18.560 00:47:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:18.560 00:47:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:18.560 00:47:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:18.560 00:47:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:18.560 00:47:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:18.560 00:47:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:10:18.560 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:10:18.560 00:47:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:18.560 00:47:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:18.560 00:47:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:18.560 00:47:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:18.819 00:47:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:18.819 00:47:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:18.819 00:47:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:10:18.819 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:10:18.819 00:47:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:18.819 00:47:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:18.819 00:47:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:18.819 00:47:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:18.819 00:47:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:18.819 00:47:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:18.819 00:47:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:18.819 00:47:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:18.819 00:47:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:18.819 00:47:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:18.819 00:47:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:18.819 00:47:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:18.819 00:47:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:18.819 00:47:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:18.819 00:47:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:18.819 00:47:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:10:18.819 Found net devices under 0000:0a:00.0: cvl_0_0 00:10:18.819 00:47:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:18.819 00:47:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:18.819 00:47:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:18.819 00:47:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:18.819 00:47:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:18.819 00:47:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:18.819 00:47:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:18.819 00:47:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:18.819 00:47:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:10:18.819 Found net devices under 0000:0a:00.1: cvl_0_1 00:10:18.819 00:47:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:18.819 00:47:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:18.819 00:47:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:10:18.819 00:47:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:18.819 00:47:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:18.819 00:47:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:18.820 00:47:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:18.820 00:47:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:18.820 00:47:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:18.820 00:47:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:18.820 00:47:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:18.820 00:47:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:18.820 00:47:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:18.820 00:47:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:18.820 00:47:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@263 -- # 
NVMF_SECOND_INITIATOR_IP= 00:10:18.820 00:47:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:18.820 00:47:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:18.820 00:47:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:18.820 00:47:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:18.820 00:47:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:18.820 00:47:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:18.820 00:47:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:18.820 00:47:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:18.820 00:47:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:18.820 00:47:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:18.820 00:47:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:18.820 00:47:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:18.820 00:47:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:18.820 00:47:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:18.820 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:18.820 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.304 ms 00:10:18.820 00:10:18.820 --- 10.0.0.2 ping statistics --- 00:10:18.820 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:18.820 rtt min/avg/max/mdev = 0.304/0.304/0.304/0.000 ms 00:10:18.820 00:47:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:18.820 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:18.820 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.095 ms 00:10:18.820 00:10:18.820 --- 10.0.0.1 ping statistics --- 00:10:18.820 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:18.820 rtt min/avg/max/mdev = 0.095/0.095/0.095/0.000 ms 00:10:18.820 00:47:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:18.820 00:47:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:10:18.820 00:47:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:18.820 00:47:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:18.820 00:47:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:18.820 00:47:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:18.820 00:47:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:18.820 00:47:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:18.820 00:47:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:18.820 00:47:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:10:18.820 00:47:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:18.820 00:47:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:18.820 00:47:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:18.820 00:47:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=2874499 00:10:18.820 00:47:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:10:18.820 00:47:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 2874499 00:10:18.820 00:47:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 2874499 ']' 00:10:18.820 00:47:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:18.820 00:47:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:18.820 00:47:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:18.820 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:18.820 00:47:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:18.820 00:47:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:19.078 [2024-12-06 00:47:51.533220] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 24.03.0 initialization... 
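At this point the trace has finished wiring up the test network: the first E810 port (cvl_0_0) was moved into the cvl_0_0_ns_spdk namespace and given 10.0.0.2/24, the second port (cvl_0_1) stayed in the root namespace with 10.0.0.1/24, TCP port 4420 was opened on cvl_0_1 via iptables, and both directions were verified with ping before nvmf_tgt was launched inside the namespace with -m 0x2. The same setup, condensed from the commands shown above into a stand-alone sketch (the cvl_0_* names are simply what this host's ice driver exposes; run as root):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1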
00:10:19.078 [2024-12-06 00:47:51.533380] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:19.078 [2024-12-06 00:47:51.678681] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:19.337 [2024-12-06 00:47:51.811928] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:19.337 [2024-12-06 00:47:51.812012] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:19.337 [2024-12-06 00:47:51.812038] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:19.337 [2024-12-06 00:47:51.812062] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:19.337 [2024-12-06 00:47:51.812091] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:19.337 [2024-12-06 00:47:51.813751] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:19.902 00:47:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:19.902 00:47:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:10:19.902 00:47:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:19.902 00:47:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:19.902 00:47:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:19.902 00:47:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:19.903 00:47:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:10:19.903 00:47:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:10:19.903 00:47:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.903 00:47:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:19.903 [2024-12-06 00:47:52.522010] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:19.903 00:47:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.903 00:47:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:10:19.903 00:47:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.903 00:47:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:19.903 00:47:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.903 00:47:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:19.903 00:47:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.903 00:47:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:19.903 [2024-12-06 00:47:52.538276] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:10:19.903 00:47:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.903 00:47:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:19.903 00:47:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.903 00:47:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:19.903 00:47:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.903 00:47:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:10:19.903 00:47:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.903 00:47:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:19.903 malloc0 00:10:19.903 00:47:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.903 00:47:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:10:19.903 00:47:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.903 00:47:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:19.903 00:47:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.903 00:47:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:10:19.903 00:47:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:10:19.903 00:47:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:10:19.903 00:47:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:10:19.903 00:47:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:19.903 00:47:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:19.903 { 00:10:19.903 "params": { 00:10:19.903 "name": "Nvme$subsystem", 00:10:19.903 "trtype": "$TEST_TRANSPORT", 00:10:19.903 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:19.903 "adrfam": "ipv4", 00:10:19.903 "trsvcid": "$NVMF_PORT", 00:10:19.903 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:19.903 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:19.903 "hdgst": ${hdgst:-false}, 00:10:19.903 "ddgst": ${ddgst:-false} 00:10:19.903 }, 00:10:19.903 "method": "bdev_nvme_attach_controller" 00:10:19.903 } 00:10:19.903 EOF 00:10:19.903 )") 00:10:19.903 00:47:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:10:19.903 00:47:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
00:10:20.161 00:47:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:10:20.161 00:47:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:20.161 "params": { 00:10:20.161 "name": "Nvme1", 00:10:20.161 "trtype": "tcp", 00:10:20.161 "traddr": "10.0.0.2", 00:10:20.161 "adrfam": "ipv4", 00:10:20.161 "trsvcid": "4420", 00:10:20.161 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:20.161 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:20.161 "hdgst": false, 00:10:20.161 "ddgst": false 00:10:20.161 }, 00:10:20.161 "method": "bdev_nvme_attach_controller" 00:10:20.161 }' 00:10:20.161 [2024-12-06 00:47:52.697436] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 24.03.0 initialization... 00:10:20.161 [2024-12-06 00:47:52.697587] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2874654 ] 00:10:20.161 [2024-12-06 00:47:52.855685] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:20.419 [2024-12-06 00:47:52.997924] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:20.985 Running I/O for 10 seconds... 00:10:22.936 4168.00 IOPS, 32.56 MiB/s [2024-12-05T23:47:57.021Z] 4245.50 IOPS, 33.17 MiB/s [2024-12-05T23:47:57.956Z] 4233.00 IOPS, 33.07 MiB/s [2024-12-05T23:47:58.914Z] 4256.75 IOPS, 33.26 MiB/s [2024-12-05T23:47:59.951Z] 4260.20 IOPS, 33.28 MiB/s [2024-12-05T23:48:00.888Z] 4260.83 IOPS, 33.29 MiB/s [2024-12-05T23:48:01.823Z] 4264.00 IOPS, 33.31 MiB/s [2024-12-05T23:48:02.755Z] 4271.88 IOPS, 33.37 MiB/s [2024-12-05T23:48:03.690Z] 4271.67 IOPS, 33.37 MiB/s [2024-12-05T23:48:03.690Z] 4278.10 IOPS, 33.42 MiB/s 00:10:30.981 Latency(us) 00:10:30.981 [2024-12-05T23:48:03.690Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:30.981 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:10:30.981 Verification LBA range: start 0x0 length 0x1000 00:10:30.981 Nvme1n1 : 10.02 4280.44 33.44 0.00 0.00 29821.09 4563.25 40195.41 00:10:30.981 [2024-12-05T23:48:03.690Z] =================================================================================================================== 00:10:30.981 [2024-12-05T23:48:03.690Z] Total : 4280.44 33.44 0.00 0.00 29821.09 4563.25 40195.41 00:10:31.916 00:48:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=2876102 00:10:31.916 00:48:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:10:31.916 00:48:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:31.916 00:48:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:10:31.916 00:48:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:10:31.916 00:48:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:10:31.916 00:48:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:10:31.916 00:48:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:31.916 00:48:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:31.916 { 00:10:31.916 "params": { 00:10:31.916 "name": 
"Nvme$subsystem", 00:10:31.916 "trtype": "$TEST_TRANSPORT", 00:10:31.916 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:31.916 "adrfam": "ipv4", 00:10:31.916 "trsvcid": "$NVMF_PORT", 00:10:31.916 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:31.916 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:31.916 "hdgst": ${hdgst:-false}, 00:10:31.916 "ddgst": ${ddgst:-false} 00:10:31.916 }, 00:10:31.916 "method": "bdev_nvme_attach_controller" 00:10:31.916 } 00:10:31.916 EOF 00:10:31.916 )") 00:10:31.916 00:48:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:10:31.916 [2024-12-06 00:48:04.623574] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.916 [2024-12-06 00:48:04.623636] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.916 00:48:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:10:32.175 00:48:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:10:32.175 00:48:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:32.175 "params": { 00:10:32.175 "name": "Nvme1", 00:10:32.175 "trtype": "tcp", 00:10:32.175 "traddr": "10.0.0.2", 00:10:32.175 "adrfam": "ipv4", 00:10:32.175 "trsvcid": "4420", 00:10:32.175 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:32.175 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:32.175 "hdgst": false, 00:10:32.175 "ddgst": false 00:10:32.175 }, 00:10:32.175 "method": "bdev_nvme_attach_controller" 00:10:32.175 }' 00:10:32.175 [2024-12-06 00:48:04.631521] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.175 [2024-12-06 00:48:04.631557] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.175 [2024-12-06 00:48:04.639499] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.175 [2024-12-06 00:48:04.639530] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.175 [2024-12-06 00:48:04.647525] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.175 [2024-12-06 00:48:04.647555] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.175 [2024-12-06 00:48:04.655581] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.175 [2024-12-06 00:48:04.655613] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.175 [2024-12-06 00:48:04.663593] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.175 [2024-12-06 00:48:04.663630] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.175 [2024-12-06 00:48:04.671594] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.175 [2024-12-06 00:48:04.671623] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.175 [2024-12-06 00:48:04.679621] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.175 [2024-12-06 00:48:04.679649] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.175 [2024-12-06 00:48:04.687632] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.175 [2024-12-06 00:48:04.687660] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.175 [2024-12-06 00:48:04.695667] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.175 [2024-12-06 00:48:04.695695] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.175 [2024-12-06 00:48:04.703679] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.175 [2024-12-06 00:48:04.703709] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.175 [2024-12-06 00:48:04.705672] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 24.03.0 initialization... 00:10:32.175 [2024-12-06 00:48:04.705800] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2876102 ] 00:10:32.175 [2024-12-06 00:48:04.711741] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.175 [2024-12-06 00:48:04.711772] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.175 [2024-12-06 00:48:04.719744] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.175 [2024-12-06 00:48:04.719773] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.175 [2024-12-06 00:48:04.727738] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.175 [2024-12-06 00:48:04.727766] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.175 [2024-12-06 00:48:04.735807] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.175 [2024-12-06 00:48:04.735846] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.175 [2024-12-06 00:48:04.743823] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.175 [2024-12-06 00:48:04.743866] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.175 [2024-12-06 00:48:04.751851] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.175 [2024-12-06 00:48:04.751892] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.175 [2024-12-06 00:48:04.759892] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.175 [2024-12-06 00:48:04.759921] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.175 [2024-12-06 00:48:04.767877] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.175 [2024-12-06 00:48:04.767906] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.175 [2024-12-06 00:48:04.775913] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.175 [2024-12-06 00:48:04.775942] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.175 [2024-12-06 00:48:04.783950] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.175 [2024-12-06 00:48:04.783980] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.175 [2024-12-06 00:48:04.791965] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.175 [2024-12-06 00:48:04.791995] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.175 
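The long run of "Requested NSID 1 already in use" / "Unable to add namespace" pairs appears alongside the startup of the second bdevperf job and continues into its run: nvmf_subsystem_add_ns keeps being re-issued for NSID 1 on nqn.2016-06.io.spdk:cnode1, which already received malloc0 as namespace 1 earlier in the trace, and the nvmf_rpc_ns_paused callback reports the failure after the subsystem has been paused for the attempted change. A minimal stand-alone sketch of the same RPC sequence, assuming the stock scripts/rpc.py client and the default /var/tmp/spdk.sock socket that waitforlisten polled above (the second add is expected to fail exactly as it does in this log):

    # mirror the earlier rpc_cmd calls: 32 MB malloc bdev with 4096-byte blocks
    ./scripts/rpc.py -s /var/tmp/spdk.sock bdev_malloc_create 32 4096 -b malloc0
    # first attach of NSID 1 to the subsystem succeeds
    ./scripts/rpc.py -s /var/tmp/spdk.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    # repeating the add with the same NSID fails: "Requested NSID 1 already in use"
    ./scripts/rpc.py -s /var/tmp/spdk.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1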
[2024-12-06 00:48:04.799992] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.175 [2024-12-06 00:48:04.800023] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.175 [2024-12-06 00:48:04.808010] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.175 [2024-12-06 00:48:04.808039] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.176 [2024-12-06 00:48:04.816007] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.176 [2024-12-06 00:48:04.816037] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.176 [2024-12-06 00:48:04.824039] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.176 [2024-12-06 00:48:04.824067] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.176 [2024-12-06 00:48:04.832061] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.176 [2024-12-06 00:48:04.832107] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.176 [2024-12-06 00:48:04.840086] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.176 [2024-12-06 00:48:04.840134] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.176 [2024-12-06 00:48:04.848142] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.176 [2024-12-06 00:48:04.848185] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.176 [2024-12-06 00:48:04.856139] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.176 [2024-12-06 00:48:04.856172] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.176 [2024-12-06 00:48:04.859423] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:32.176 [2024-12-06 00:48:04.864183] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.176 [2024-12-06 00:48:04.864216] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.176 [2024-12-06 00:48:04.872209] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.176 [2024-12-06 00:48:04.872243] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.176 [2024-12-06 00:48:04.880292] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.176 [2024-12-06 00:48:04.880350] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.434 [2024-12-06 00:48:04.888298] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.434 [2024-12-06 00:48:04.888341] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.434 [2024-12-06 00:48:04.896253] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.434 [2024-12-06 00:48:04.896287] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.434 [2024-12-06 00:48:04.904300] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.434 [2024-12-06 00:48:04.904334] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.434 [2024-12-06 00:48:04.912322] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.434 [2024-12-06 00:48:04.912356] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.434 [2024-12-06 00:48:04.920332] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.434 [2024-12-06 00:48:04.920366] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.434 [2024-12-06 00:48:04.928375] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.434 [2024-12-06 00:48:04.928408] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.434 [2024-12-06 00:48:04.936402] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.434 [2024-12-06 00:48:04.936437] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.434 [2024-12-06 00:48:04.944424] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.434 [2024-12-06 00:48:04.944458] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.434 [2024-12-06 00:48:04.952436] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.434 [2024-12-06 00:48:04.952469] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.434 [2024-12-06 00:48:04.960444] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.434 [2024-12-06 00:48:04.960477] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.434 [2024-12-06 00:48:04.968483] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.435 [2024-12-06 00:48:04.968517] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.435 [2024-12-06 00:48:04.976504] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.435 [2024-12-06 00:48:04.976537] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.435 [2024-12-06 00:48:04.984512] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.435 [2024-12-06 00:48:04.984544] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.435 [2024-12-06 00:48:04.992561] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.435 [2024-12-06 00:48:04.992594] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.435 [2024-12-06 00:48:04.999377] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:32.435 [2024-12-06 00:48:05.000577] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.435 [2024-12-06 00:48:05.000610] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.435 [2024-12-06 00:48:05.008585] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.435 [2024-12-06 00:48:05.008618] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.435 [2024-12-06 00:48:05.016686] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.435 [2024-12-06 00:48:05.016738] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.435 [2024-12-06 00:48:05.024703] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.435 [2024-12-06 00:48:05.024755] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.435 [2024-12-06 00:48:05.032676] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.435 [2024-12-06 00:48:05.032708] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.435 [2024-12-06 00:48:05.040707] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.435 [2024-12-06 00:48:05.040741] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.435 [2024-12-06 00:48:05.048720] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.435 [2024-12-06 00:48:05.048753] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.435 [2024-12-06 00:48:05.056733] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.435 [2024-12-06 00:48:05.056766] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.435 [2024-12-06 00:48:05.064754] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.435 [2024-12-06 00:48:05.064787] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.435 [2024-12-06 00:48:05.072762] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.435 [2024-12-06 00:48:05.072794] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.435 [2024-12-06 00:48:05.080825] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.435 [2024-12-06 00:48:05.080859] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.435 [2024-12-06 00:48:05.088882] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.435 [2024-12-06 00:48:05.088924] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.435 [2024-12-06 00:48:05.096926] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.435 [2024-12-06 00:48:05.096974] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.435 [2024-12-06 00:48:05.104967] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.435 [2024-12-06 00:48:05.105017] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.435 [2024-12-06 00:48:05.112969] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.435 [2024-12-06 00:48:05.113021] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.435 [2024-12-06 00:48:05.120977] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.435 [2024-12-06 00:48:05.121019] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.435 [2024-12-06 00:48:05.128957] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.435 [2024-12-06 00:48:05.128986] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.435 [2024-12-06 00:48:05.136991] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.435 [2024-12-06 00:48:05.137022] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:32.693 [2024-12-06 00:48:05.145016] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:32.693 [2024-12-06 00:48:05.145058] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same two-record pair (subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use / nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace) repeats from 00:48:05.153 through 00:48:05.498 ...]
00:10:32.951 Running I/O for 5 seconds...
[... error pair repeats from 00:48:05.506 through 00:48:06.502 ...]
00:10:33.982 8342.00 IOPS, 65.17 MiB/s [2024-12-05T23:48:06.691Z]
[... error pair repeats from 00:48:06.515 through 00:48:07.507 ...]
00:10:35.016 8502.00 IOPS, 66.42 MiB/s [2024-12-05T23:48:07.725Z]
[... error pair repeats from 00:48:07.521 through 00:48:08.504 ...]
00:10:36.053 8559.67 IOPS, 66.87 MiB/s [2024-12-05T23:48:08.762Z]
[... error pair repeats from 00:48:08.519 through 00:48:09.329 ...]
00:10:36.856 [2024-12-06 00:48:09.344921]
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.856 [2024-12-06 00:48:09.344958] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.856 [2024-12-06 00:48:09.359759] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.856 [2024-12-06 00:48:09.359801] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.856 [2024-12-06 00:48:09.375487] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.856 [2024-12-06 00:48:09.375527] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.856 [2024-12-06 00:48:09.390992] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.856 [2024-12-06 00:48:09.391028] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.856 [2024-12-06 00:48:09.406058] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.856 [2024-12-06 00:48:09.406119] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.856 [2024-12-06 00:48:09.422045] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.856 [2024-12-06 00:48:09.422082] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.856 [2024-12-06 00:48:09.438445] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.856 [2024-12-06 00:48:09.438486] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.856 [2024-12-06 00:48:09.453478] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.856 [2024-12-06 00:48:09.453518] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.856 [2024-12-06 00:48:09.469657] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.856 [2024-12-06 00:48:09.469699] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.856 [2024-12-06 00:48:09.485541] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.856 [2024-12-06 00:48:09.485581] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.856 [2024-12-06 00:48:09.501047] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.856 [2024-12-06 00:48:09.501083] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.856 [2024-12-06 00:48:09.514017] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.856 [2024-12-06 00:48:09.514054] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.856 8512.50 IOPS, 66.50 MiB/s [2024-12-05T23:48:09.565Z] [2024-12-06 00:48:09.529182] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.856 [2024-12-06 00:48:09.529222] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.856 [2024-12-06 00:48:09.544514] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.856 [2024-12-06 00:48:09.544554] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.856 [2024-12-06 00:48:09.559978] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:10:36.856 [2024-12-06 00:48:09.560026] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.116 [2024-12-06 00:48:09.574491] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.116 [2024-12-06 00:48:09.574531] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.116 [2024-12-06 00:48:09.589984] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.116 [2024-12-06 00:48:09.590027] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.116 [2024-12-06 00:48:09.605390] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.116 [2024-12-06 00:48:09.605430] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.116 [2024-12-06 00:48:09.621244] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.116 [2024-12-06 00:48:09.621285] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.116 [2024-12-06 00:48:09.636569] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.116 [2024-12-06 00:48:09.636609] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.116 [2024-12-06 00:48:09.651430] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.116 [2024-12-06 00:48:09.651470] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.116 [2024-12-06 00:48:09.666501] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.116 [2024-12-06 00:48:09.666541] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.116 [2024-12-06 00:48:09.681992] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.116 [2024-12-06 00:48:09.682029] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.116 [2024-12-06 00:48:09.697029] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.116 [2024-12-06 00:48:09.697065] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.116 [2024-12-06 00:48:09.710051] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.116 [2024-12-06 00:48:09.710102] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.116 [2024-12-06 00:48:09.724385] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.116 [2024-12-06 00:48:09.724425] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.116 [2024-12-06 00:48:09.739597] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.116 [2024-12-06 00:48:09.739637] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.116 [2024-12-06 00:48:09.755315] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.116 [2024-12-06 00:48:09.755356] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.116 [2024-12-06 00:48:09.770127] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.116 [2024-12-06 00:48:09.770168] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.116 [2024-12-06 00:48:09.785602] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.116 [2024-12-06 00:48:09.785643] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.116 [2024-12-06 00:48:09.801092] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.116 [2024-12-06 00:48:09.801149] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.116 [2024-12-06 00:48:09.816445] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.116 [2024-12-06 00:48:09.816484] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.375 [2024-12-06 00:48:09.831982] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.375 [2024-12-06 00:48:09.832019] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.375 [2024-12-06 00:48:09.846686] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.375 [2024-12-06 00:48:09.846726] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.375 [2024-12-06 00:48:09.862131] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.375 [2024-12-06 00:48:09.862173] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.375 [2024-12-06 00:48:09.877212] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.375 [2024-12-06 00:48:09.877262] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.375 [2024-12-06 00:48:09.892348] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.375 [2024-12-06 00:48:09.892389] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.375 [2024-12-06 00:48:09.908355] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.375 [2024-12-06 00:48:09.908396] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.375 [2024-12-06 00:48:09.923509] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.375 [2024-12-06 00:48:09.923550] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.375 [2024-12-06 00:48:09.938218] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.375 [2024-12-06 00:48:09.938258] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.375 [2024-12-06 00:48:09.953722] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.375 [2024-12-06 00:48:09.953763] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.375 [2024-12-06 00:48:09.969051] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.375 [2024-12-06 00:48:09.969104] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.375 [2024-12-06 00:48:09.984274] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.375 [2024-12-06 00:48:09.984315] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.375 [2024-12-06 00:48:09.997178] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.375 [2024-12-06 00:48:09.997218] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.375 [2024-12-06 00:48:10.012591] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.375 [2024-12-06 00:48:10.012640] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.376 [2024-12-06 00:48:10.027806] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.376 [2024-12-06 00:48:10.027872] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.376 [2024-12-06 00:48:10.043926] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.376 [2024-12-06 00:48:10.043968] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.376 [2024-12-06 00:48:10.058942] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.376 [2024-12-06 00:48:10.058979] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.376 [2024-12-06 00:48:10.074830] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.376 [2024-12-06 00:48:10.074886] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.633 [2024-12-06 00:48:10.090061] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.633 [2024-12-06 00:48:10.090114] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.633 [2024-12-06 00:48:10.105235] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.633 [2024-12-06 00:48:10.105276] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.633 [2024-12-06 00:48:10.120580] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.633 [2024-12-06 00:48:10.120620] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.633 [2024-12-06 00:48:10.135748] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.633 [2024-12-06 00:48:10.135788] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.633 [2024-12-06 00:48:10.150799] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.634 [2024-12-06 00:48:10.150863] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.634 [2024-12-06 00:48:10.165874] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.634 [2024-12-06 00:48:10.165923] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.634 [2024-12-06 00:48:10.180442] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.634 [2024-12-06 00:48:10.180482] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.634 [2024-12-06 00:48:10.196227] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.634 [2024-12-06 00:48:10.196263] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.634 [2024-12-06 00:48:10.211223] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.634 [2024-12-06 00:48:10.211264] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.634 [2024-12-06 00:48:10.226726] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.634 [2024-12-06 00:48:10.226765] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.634 [2024-12-06 00:48:10.242153] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.634 [2024-12-06 00:48:10.242194] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.634 [2024-12-06 00:48:10.257802] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.634 [2024-12-06 00:48:10.257866] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.634 [2024-12-06 00:48:10.273021] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.634 [2024-12-06 00:48:10.273058] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.634 [2024-12-06 00:48:10.286902] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.634 [2024-12-06 00:48:10.286939] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.634 [2024-12-06 00:48:10.301761] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.634 [2024-12-06 00:48:10.301801] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.634 [2024-12-06 00:48:10.316652] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.634 [2024-12-06 00:48:10.316691] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.634 [2024-12-06 00:48:10.331507] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.634 [2024-12-06 00:48:10.331547] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.892 [2024-12-06 00:48:10.346278] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.892 [2024-12-06 00:48:10.346319] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.892 [2024-12-06 00:48:10.361211] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.892 [2024-12-06 00:48:10.361253] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.892 [2024-12-06 00:48:10.376621] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.892 [2024-12-06 00:48:10.376662] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.892 [2024-12-06 00:48:10.388705] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.892 [2024-12-06 00:48:10.388745] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.892 [2024-12-06 00:48:10.403091] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.892 [2024-12-06 00:48:10.403146] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.892 [2024-12-06 00:48:10.418131] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.892 [2024-12-06 00:48:10.418172] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.892 [2024-12-06 00:48:10.432915] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.892 [2024-12-06 00:48:10.432952] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.892 [2024-12-06 00:48:10.448186] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.892 [2024-12-06 00:48:10.448246] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.892 [2024-12-06 00:48:10.462966] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.892 [2024-12-06 00:48:10.463004] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.892 [2024-12-06 00:48:10.477739] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.892 [2024-12-06 00:48:10.477780] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.892 [2024-12-06 00:48:10.493152] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.892 [2024-12-06 00:48:10.493193] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.892 [2024-12-06 00:48:10.506143] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.892 [2024-12-06 00:48:10.506184] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.892 8491.60 IOPS, 66.34 MiB/s [2024-12-05T23:48:10.601Z] [2024-12-06 00:48:10.521232] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.892 [2024-12-06 00:48:10.521286] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.892 [2024-12-06 00:48:10.532236] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.892 [2024-12-06 00:48:10.532276] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.892 00:10:37.892 Latency(us) 00:10:37.892 [2024-12-05T23:48:10.601Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:37.892 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:10:37.892 Nvme1n1 : 5.02 8490.33 66.33 0.00 0.00 15049.70 5461.33 25049.32 00:10:37.892 [2024-12-05T23:48:10.601Z] =================================================================================================================== 00:10:37.892 [2024-12-05T23:48:10.601Z] Total : 8490.33 66.33 0.00 0.00 15049.70 5461.33 25049.32 00:10:37.892 [2024-12-06 00:48:10.537004] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.892 [2024-12-06 00:48:10.537052] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.892 [2024-12-06 00:48:10.544931] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.892 [2024-12-06 00:48:10.544962] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.892 [2024-12-06 00:48:10.552967] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.892 [2024-12-06 00:48:10.552999] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.892 [2024-12-06 00:48:10.560968] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.892 [2024-12-06 00:48:10.560998] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.892 [2024-12-06 00:48:10.568977] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.892 [2024-12-06 00:48:10.569006] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.892 [2024-12-06 00:48:10.576998] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.892 [2024-12-06 00:48:10.577027] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.892 [2024-12-06 00:48:10.585156] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.892 [2024-12-06 00:48:10.585220] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.892 [2024-12-06 00:48:10.593144] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.892 [2024-12-06 00:48:10.593206] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.150 [2024-12-06 00:48:10.601205] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.150 [2024-12-06 00:48:10.601256] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.150 [2024-12-06 00:48:10.609116] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.150 [2024-12-06 00:48:10.609150] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.150 [2024-12-06 00:48:10.617166] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.150 [2024-12-06 00:48:10.617201] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.150 [2024-12-06 00:48:10.625236] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.150 [2024-12-06 00:48:10.625271] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.150 [2024-12-06 00:48:10.633237] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.150 [2024-12-06 00:48:10.633271] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.150 [2024-12-06 00:48:10.641275] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.150 [2024-12-06 00:48:10.641309] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.150 [2024-12-06 00:48:10.649298] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.150 [2024-12-06 00:48:10.649332] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.150 [2024-12-06 00:48:10.657304] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.150 [2024-12-06 00:48:10.657337] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.150 [2024-12-06 00:48:10.665347] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.150 [2024-12-06 00:48:10.665381] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.150 [2024-12-06 00:48:10.673354] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.150 [2024-12-06 00:48:10.673388] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.150 [2024-12-06 00:48:10.681430] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.150 [2024-12-06 00:48:10.681478] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.150 [2024-12-06 00:48:10.689532] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.150 [2024-12-06 00:48:10.689594] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.150 [2024-12-06 00:48:10.697534] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.150 [2024-12-06 00:48:10.697611] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.150 [2024-12-06 00:48:10.705465] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.150 [2024-12-06 00:48:10.705499] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.150 [2024-12-06 00:48:10.713502] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.150 [2024-12-06 00:48:10.713536] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.150 [2024-12-06 00:48:10.721493] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.150 [2024-12-06 00:48:10.721527] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.150 [2024-12-06 00:48:10.729529] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.150 [2024-12-06 00:48:10.729562] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.150 [2024-12-06 00:48:10.737533] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.150 [2024-12-06 00:48:10.737566] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.150 [2024-12-06 00:48:10.745573] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.150 [2024-12-06 00:48:10.745607] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.150 [2024-12-06 00:48:10.753599] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.150 [2024-12-06 00:48:10.753642] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.150 [2024-12-06 00:48:10.761602] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.150 [2024-12-06 00:48:10.761634] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.150 [2024-12-06 00:48:10.769667] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.150 [2024-12-06 00:48:10.769701] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.150 [2024-12-06 00:48:10.777667] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.150 [2024-12-06 00:48:10.777701] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.150 [2024-12-06 00:48:10.785712] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.150 [2024-12-06 00:48:10.785748] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.150 [2024-12-06 00:48:10.793741] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.150 [2024-12-06 00:48:10.793777] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.151 [2024-12-06 00:48:10.801724] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.151 [2024-12-06 00:48:10.801758] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.151 [2024-12-06 00:48:10.809768] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.151 [2024-12-06 00:48:10.809803] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.151 [2024-12-06 00:48:10.817799] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.151 [2024-12-06 00:48:10.817858] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.151 [2024-12-06 00:48:10.825793] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.151 [2024-12-06 00:48:10.825836] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.151 [2024-12-06 00:48:10.833841] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.151 [2024-12-06 00:48:10.833889] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.151 [2024-12-06 00:48:10.841884] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.151 [2024-12-06 00:48:10.841915] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.151 [2024-12-06 00:48:10.849880] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.151 [2024-12-06 00:48:10.849909] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.151 [2024-12-06 00:48:10.857925] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.151 [2024-12-06 00:48:10.857969] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.410 [2024-12-06 00:48:10.870147] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.410 [2024-12-06 00:48:10.870232] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.410 [2024-12-06 00:48:10.877987] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.410 [2024-12-06 00:48:10.878017] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.410 [2024-12-06 00:48:10.885981] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.410 [2024-12-06 00:48:10.886011] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.410 [2024-12-06 00:48:10.894005] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.410 [2024-12-06 00:48:10.894034] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.410 [2024-12-06 00:48:10.902012] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.410 [2024-12-06 00:48:10.902042] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.410 [2024-12-06 00:48:10.910028] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.410 [2024-12-06 00:48:10.910066] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.410 [2024-12-06 00:48:10.918063] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.410 [2024-12-06 00:48:10.918101] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.410 [2024-12-06 00:48:10.926221] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.410 [2024-12-06 00:48:10.926286] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.410 [2024-12-06 00:48:10.934207] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.410 [2024-12-06 00:48:10.934270] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.410 [2024-12-06 00:48:10.942251] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.410 [2024-12-06 00:48:10.942315] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.410 [2024-12-06 00:48:10.950170] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.410 [2024-12-06 00:48:10.950207] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.410 [2024-12-06 00:48:10.958166] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.410 [2024-12-06 00:48:10.958201] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.410 [2024-12-06 00:48:10.966222] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.410 [2024-12-06 00:48:10.966262] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.410 [2024-12-06 00:48:10.974236] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.410 [2024-12-06 00:48:10.974270] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.410 [2024-12-06 00:48:10.982260] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.410 [2024-12-06 00:48:10.982293] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.410 [2024-12-06 00:48:10.990289] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.410 [2024-12-06 00:48:10.990322] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.410 [2024-12-06 00:48:10.998288] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.410 [2024-12-06 00:48:10.998322] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.410 [2024-12-06 00:48:11.006336] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.410 [2024-12-06 00:48:11.006370] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.410 [2024-12-06 00:48:11.014361] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.410 [2024-12-06 00:48:11.014395] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.410 [2024-12-06 00:48:11.022366] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.410 [2024-12-06 00:48:11.022399] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.410 [2024-12-06 00:48:11.030401] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.410 [2024-12-06 00:48:11.030434] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.410 [2024-12-06 00:48:11.038428] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.410 [2024-12-06 00:48:11.038476] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.410 [2024-12-06 00:48:11.046436] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.410 [2024-12-06 00:48:11.046471] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.410 [2024-12-06 00:48:11.054484] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.410 [2024-12-06 00:48:11.054519] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.410 [2024-12-06 00:48:11.062484] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.410 [2024-12-06 00:48:11.062530] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.410 [2024-12-06 00:48:11.070519] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.410 [2024-12-06 00:48:11.070553] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.410 [2024-12-06 00:48:11.078543] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.410 [2024-12-06 00:48:11.078577] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.410 [2024-12-06 00:48:11.086572] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.410 [2024-12-06 00:48:11.086605] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.410 [2024-12-06 00:48:11.094567] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.410 [2024-12-06 00:48:11.094596] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.411 [2024-12-06 00:48:11.102721] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.411 [2024-12-06 00:48:11.102794] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.411 [2024-12-06 00:48:11.110680] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.411 [2024-12-06 00:48:11.110736] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.670 [2024-12-06 00:48:11.118668] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.670 [2024-12-06 00:48:11.118702] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.670 [2024-12-06 00:48:11.126658] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.670 [2024-12-06 00:48:11.126691] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.670 [2024-12-06 00:48:11.134716] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.670 [2024-12-06 00:48:11.134749] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.670 [2024-12-06 00:48:11.142725] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.670 [2024-12-06 00:48:11.142758] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.670 [2024-12-06 00:48:11.150737] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.670 [2024-12-06 00:48:11.150770] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.670 [2024-12-06 00:48:11.158775] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.670 [2024-12-06 00:48:11.158809] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.670 [2024-12-06 00:48:11.166802] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.670 [2024-12-06 00:48:11.166870] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.670 [2024-12-06 00:48:11.174801] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.670 [2024-12-06 00:48:11.174858] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.670 [2024-12-06 00:48:11.182885] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.670 [2024-12-06 00:48:11.182915] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.670 [2024-12-06 00:48:11.190873] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.670 [2024-12-06 00:48:11.190903] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.670 [2024-12-06 00:48:11.198904] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.670 [2024-12-06 00:48:11.198934] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.670 [2024-12-06 00:48:11.206923] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.670 [2024-12-06 00:48:11.206953] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.670 [2024-12-06 00:48:11.214942] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.670 [2024-12-06 00:48:11.214982] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.670 [2024-12-06 00:48:11.223034] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.670 [2024-12-06 00:48:11.223091] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.670 [2024-12-06 00:48:11.231048] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.670 [2024-12-06 00:48:11.231112] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.670 [2024-12-06 00:48:11.238984] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.670 [2024-12-06 00:48:11.239013] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.670 [2024-12-06 00:48:11.247023] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.670 [2024-12-06 00:48:11.247052] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.670 [2024-12-06 00:48:11.255024] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.670 [2024-12-06 00:48:11.255053] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.670 [2024-12-06 00:48:11.263060] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.670 [2024-12-06 00:48:11.263103] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.670 [2024-12-06 00:48:11.271079] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.670 [2024-12-06 00:48:11.271128] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.670 [2024-12-06 00:48:11.279132] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.670 [2024-12-06 00:48:11.279166] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.670 [2024-12-06 00:48:11.287149] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.670 [2024-12-06 00:48:11.287183] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.670 [2024-12-06 00:48:11.295174] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.670 [2024-12-06 00:48:11.295211] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.670 [2024-12-06 00:48:11.303184] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.670 [2024-12-06 00:48:11.303218] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.670 [2024-12-06 00:48:11.311199] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.670 [2024-12-06 00:48:11.311230] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.670 [2024-12-06 00:48:11.319307] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.670 [2024-12-06 00:48:11.319392] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.670 [2024-12-06 00:48:11.327289] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.670 [2024-12-06 00:48:11.327323] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.670 [2024-12-06 00:48:11.335293] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.670 [2024-12-06 00:48:11.335326] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.670 [2024-12-06 00:48:11.343297] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.670 [2024-12-06 00:48:11.343331] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.670 [2024-12-06 00:48:11.351341] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.670 [2024-12-06 00:48:11.351376] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.670 [2024-12-06 00:48:11.359364] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.670 [2024-12-06 00:48:11.359398] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.670 [2024-12-06 00:48:11.367369] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.670 [2024-12-06 00:48:11.367414] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.670 [2024-12-06 00:48:11.375423] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.671 [2024-12-06 00:48:11.375457] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.929 [2024-12-06 00:48:11.383418] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.929 [2024-12-06 00:48:11.383453] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.929 [2024-12-06 00:48:11.391459] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.929 [2024-12-06 00:48:11.391493] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.929 [2024-12-06 00:48:11.399471] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.929 [2024-12-06 00:48:11.399504] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.929 [2024-12-06 00:48:11.407481] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.929 [2024-12-06 00:48:11.407516] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.929 [2024-12-06 00:48:11.415535] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.929 [2024-12-06 00:48:11.415575] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.929 [2024-12-06 00:48:11.423554] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.929 [2024-12-06 00:48:11.423589] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.929 [2024-12-06 00:48:11.431560] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.929 [2024-12-06 00:48:11.431597] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.929 [2024-12-06 00:48:11.439597] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.929 [2024-12-06 00:48:11.439632] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.929 [2024-12-06 00:48:11.447601] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.929 [2024-12-06 00:48:11.447636] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.929 [2024-12-06 00:48:11.455637] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.929 [2024-12-06 00:48:11.455672] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.929 [2024-12-06 00:48:11.463657] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.929 [2024-12-06 00:48:11.463691] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.929 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (2876102) - No such process 00:10:38.929 00:48:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 2876102 00:10:38.929 00:48:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:38.929 00:48:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.929 00:48:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:38.929 00:48:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.929 00:48:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:10:38.929 00:48:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.929 00:48:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:38.929 delay0 00:10:38.929 00:48:11 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.929 00:48:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:10:38.929 00:48:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.929 00:48:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:38.929 00:48:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.929 00:48:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:10:39.186 [2024-12-06 00:48:11.647970] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:10:47.294 Initializing NVMe Controllers 00:10:47.294 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:10:47.294 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:10:47.294 Initialization complete. Launching workers. 00:10:47.294 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 242, failed: 15821 00:10:47.294 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 15945, failed to submit 118 00:10:47.294 success 15857, unsuccessful 88, failed 0 00:10:47.294 00:48:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:10:47.294 00:48:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:10:47.294 00:48:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:47.294 00:48:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:10:47.294 00:48:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:47.294 00:48:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:10:47.294 00:48:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:47.294 00:48:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:47.294 rmmod nvme_tcp 00:10:47.294 rmmod nvme_fabrics 00:10:47.294 rmmod nvme_keyring 00:10:47.294 00:48:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:47.294 00:48:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:10:47.294 00:48:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:10:47.294 00:48:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 2874499 ']' 00:10:47.294 00:48:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 2874499 00:10:47.294 00:48:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 2874499 ']' 00:10:47.294 00:48:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 2874499 00:10:47.294 00:48:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname 00:10:47.294 00:48:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:47.294 00:48:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- 
# ps --no-headers -o comm= 2874499 00:10:47.294 00:48:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:10:47.294 00:48:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:10:47.294 00:48:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2874499' 00:10:47.294 killing process with pid 2874499 00:10:47.294 00:48:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 2874499 00:10:47.294 00:48:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 2874499 00:10:47.863 00:48:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:47.863 00:48:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:47.863 00:48:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:47.863 00:48:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:10:47.863 00:48:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:10:47.863 00:48:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:47.863 00:48:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:10:47.863 00:48:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:47.863 00:48:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:47.863 00:48:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:47.863 00:48:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:47.863 00:48:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:49.770 00:48:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:49.770 00:10:49.770 real 0m33.274s 00:10:49.770 user 0m49.734s 00:10:49.770 sys 0m8.917s 00:10:49.770 00:48:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:49.770 00:48:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:49.770 ************************************ 00:10:49.770 END TEST nvmf_zcopy 00:10:49.770 ************************************ 00:10:49.770 00:48:22 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:10:49.770 00:48:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:49.770 00:48:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:49.770 00:48:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:49.770 ************************************ 00:10:49.770 START TEST nvmf_nmic 00:10:49.770 ************************************ 00:10:49.770 00:48:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:10:49.770 * Looking for test storage... 
00:10:49.770 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:49.770 00:48:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:49.770 00:48:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # lcov --version 00:10:49.770 00:48:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:50.030 00:48:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:50.030 00:48:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:50.030 00:48:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:50.030 00:48:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:50.030 00:48:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:10:50.030 00:48:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:10:50.030 00:48:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:10:50.030 00:48:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:10:50.030 00:48:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:10:50.030 00:48:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:10:50.030 00:48:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:10:50.030 00:48:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:50.030 00:48:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:10:50.030 00:48:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:10:50.030 00:48:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:50.030 00:48:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:50.030 00:48:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:10:50.030 00:48:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:10:50.030 00:48:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:50.030 00:48:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:10:50.030 00:48:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:10:50.030 00:48:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:10:50.030 00:48:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:10:50.030 00:48:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:50.030 00:48:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:10:50.030 00:48:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:10:50.030 00:48:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:50.030 00:48:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:50.030 00:48:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:10:50.030 00:48:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:50.030 00:48:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:50.030 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:50.030 --rc genhtml_branch_coverage=1 00:10:50.030 --rc genhtml_function_coverage=1 00:10:50.030 --rc genhtml_legend=1 00:10:50.030 --rc geninfo_all_blocks=1 00:10:50.030 --rc geninfo_unexecuted_blocks=1 00:10:50.030 00:10:50.030 ' 00:10:50.030 00:48:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:50.030 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:50.030 --rc genhtml_branch_coverage=1 00:10:50.030 --rc genhtml_function_coverage=1 00:10:50.030 --rc genhtml_legend=1 00:10:50.030 --rc geninfo_all_blocks=1 00:10:50.030 --rc geninfo_unexecuted_blocks=1 00:10:50.030 00:10:50.030 ' 00:10:50.030 00:48:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:50.030 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:50.030 --rc genhtml_branch_coverage=1 00:10:50.030 --rc genhtml_function_coverage=1 00:10:50.030 --rc genhtml_legend=1 00:10:50.030 --rc geninfo_all_blocks=1 00:10:50.030 --rc geninfo_unexecuted_blocks=1 00:10:50.030 00:10:50.030 ' 00:10:50.030 00:48:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:50.030 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:50.030 --rc genhtml_branch_coverage=1 00:10:50.030 --rc genhtml_function_coverage=1 00:10:50.030 --rc genhtml_legend=1 00:10:50.030 --rc geninfo_all_blocks=1 00:10:50.030 --rc geninfo_unexecuted_blocks=1 00:10:50.030 00:10:50.030 ' 00:10:50.030 00:48:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:50.030 00:48:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:10:50.030 00:48:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
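The lcov check traced above goes through cmp_versions in scripts/common.sh: both version strings are split on '.', '-' and ':' into arrays and compared component by component, with missing components treated as zero. A simplified standalone sketch of that comparison (assumes purely numeric components, as in the "lt 1.15 2" call above):

    # Sketch of the component-wise version compare traced above
    # (assumption: missing components compare as 0, mirroring cmp_versions).
    version_lt() {
        local IFS=.-: i n x y
        local -a a b
        a=($1)
        b=($2)
        n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
        for (( i = 0; i < n; i++ )); do
            x=${a[i]:-0}; y=${b[i]:-0}
            (( 10#$x < 10#$y )) && return 0    # strictly older
            (( 10#$x > 10#$y )) && return 1    # strictly newer
        done
        return 1    # equal versions are not "less than"
    }

    version_lt 1.15 2 && echo "lcov 1.15 is older than 2"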
00:10:50.030 00:48:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:50.030 00:48:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:50.030 00:48:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:50.030 00:48:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:50.030 00:48:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:50.030 00:48:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:50.030 00:48:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:50.031 00:48:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:50.031 00:48:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:50.031 00:48:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:10:50.031 00:48:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:10:50.031 00:48:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:50.031 00:48:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:50.031 00:48:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:50.031 00:48:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:50.031 00:48:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:50.031 00:48:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:10:50.031 00:48:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:50.031 00:48:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:50.031 00:48:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:50.031 00:48:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:50.031 00:48:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:50.031 00:48:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:50.031 00:48:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:10:50.031 00:48:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:50.031 00:48:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:10:50.031 00:48:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:50.031 00:48:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:50.031 00:48:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:50.031 00:48:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:50.031 00:48:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:50.031 00:48:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:50.031 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:50.031 00:48:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:50.031 00:48:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:50.031 00:48:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:50.031 00:48:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:50.031 00:48:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:50.031 00:48:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:10:50.031 
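Before any connections are made, common.sh derives a host identity once per run: `nvme gen-hostnqn` prints a fresh nqn.2014-08.org.nvmexpress:uuid:... string, the UUID suffix doubles as the host ID, and both are kept so every later `nvme connect` in the test can pass the same --hostnqn/--hostid pair. A hedged sketch of that derivation (variable names mirror the trace; deriving the host ID by stripping the prefix is an assumption):

    # Sketch of the host-identity setup traced above (assumption: the UUID
    # portion of the generated NQN is reused verbatim as the host ID).
    NVME_HOSTNQN=$(nvme gen-hostnqn)          # e.g. nqn.2014-08.org.nvmexpress:uuid:5b23e107-...
    NVME_HOSTID=${NVME_HOSTNQN##*uuid:}       # keep only the UUID part
    NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")

    # Later connects reuse the same identity, e.g.:
    # nvme connect "${NVME_HOST[@]}" -t tcp -n <subsystem-nqn> -a <target-ip> -s 4420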
00:48:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:50.031 00:48:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:50.031 00:48:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:50.031 00:48:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:50.031 00:48:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:50.031 00:48:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:50.031 00:48:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:50.031 00:48:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:50.031 00:48:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:50.031 00:48:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:50.031 00:48:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:10:50.031 00:48:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:52.565 00:48:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:52.565 00:48:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:10:52.565 00:48:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:52.565 00:48:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:52.565 00:48:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:52.565 00:48:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:52.565 00:48:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:52.565 00:48:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:10:52.565 00:48:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:52.565 00:48:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:10:52.565 00:48:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:10:52.565 00:48:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:10:52.565 00:48:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:10:52.565 00:48:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:10:52.565 00:48:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:10:52.565 00:48:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:52.565 00:48:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:52.565 00:48:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:52.565 00:48:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:52.565 00:48:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:52.565 00:48:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:52.565 00:48:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:52.565 00:48:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:52.565 00:48:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:52.565 00:48:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:52.565 00:48:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:52.565 00:48:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:52.565 00:48:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:52.565 00:48:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:52.565 00:48:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:52.565 00:48:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:52.565 00:48:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:52.565 00:48:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:52.565 00:48:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:52.565 00:48:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:10:52.565 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:10:52.565 00:48:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:52.565 00:48:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:52.565 00:48:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:52.565 00:48:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:52.565 00:48:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:52.565 00:48:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:52.565 00:48:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:10:52.565 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:10:52.565 00:48:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:52.565 00:48:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:52.565 00:48:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:52.565 00:48:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:52.565 00:48:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:52.565 00:48:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:52.565 00:48:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:52.565 00:48:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:52.566 00:48:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:52.566 00:48:24 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:52.566 00:48:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:52.566 00:48:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:52.566 00:48:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:52.566 00:48:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:52.566 00:48:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:52.566 00:48:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:10:52.566 Found net devices under 0000:0a:00.0: cvl_0_0 00:10:52.566 00:48:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:52.566 00:48:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:52.566 00:48:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:52.566 00:48:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:52.566 00:48:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:52.566 00:48:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:52.566 00:48:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:52.566 00:48:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:52.566 00:48:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:10:52.566 Found net devices under 0000:0a:00.1: cvl_0_1 00:10:52.566 00:48:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:52.566 00:48:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:52.566 00:48:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:10:52.566 00:48:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:52.566 00:48:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:52.566 00:48:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:52.566 00:48:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:52.566 00:48:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:52.566 00:48:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:52.566 00:48:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:52.566 00:48:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:52.566 00:48:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:52.566 00:48:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:52.566 00:48:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:52.566 00:48:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:52.566 00:48:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:52.566 00:48:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:52.566 00:48:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:52.566 00:48:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:52.566 00:48:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:52.566 00:48:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:52.566 00:48:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:52.566 00:48:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:52.566 00:48:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:52.566 00:48:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:52.566 00:48:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:52.566 00:48:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:52.566 00:48:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:52.566 00:48:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:52.566 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:52.566 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.227 ms 00:10:52.566 00:10:52.566 --- 10.0.0.2 ping statistics --- 00:10:52.566 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:52.566 rtt min/avg/max/mdev = 0.227/0.227/0.227/0.000 ms 00:10:52.566 00:48:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:52.566 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:52.566 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.118 ms 00:10:52.566 00:10:52.566 --- 10.0.0.1 ping statistics --- 00:10:52.566 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:52.566 rtt min/avg/max/mdev = 0.118/0.118/0.118/0.000 ms 00:10:52.566 00:48:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:52.566 00:48:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:10:52.566 00:48:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:52.566 00:48:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:52.566 00:48:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:52.566 00:48:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:52.566 00:48:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:52.566 00:48:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:52.566 00:48:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:52.566 00:48:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:10:52.566 00:48:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:52.566 00:48:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:52.566 00:48:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:52.566 00:48:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=2880403 00:10:52.566 00:48:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:52.566 00:48:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 2880403 00:10:52.566 00:48:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 2880403 ']' 00:10:52.566 00:48:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:52.566 00:48:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:52.566 00:48:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:52.566 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:52.566 00:48:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:52.566 00:48:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:52.566 [2024-12-06 00:48:24.952562] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 24.03.0 initialization... 
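The nvmf_tcp_init steps traced above carve the two e810 ports into a small point-to-point setup: cvl_0_0 is moved into a fresh cvl_0_0_ns_spdk namespace and given 10.0.0.2/24 for the target, cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1/24, an iptables ACCEPT rule opens TCP port 4420, and the two pings confirm both directions work before the target app is launched inside the namespace. A condensed sketch of the same setup (interface and namespace names as in this run):

    # Condensed from the nvmf_tcp_init commands traced above.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # target-side port lives in the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                  # host -> namespace
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # namespace -> host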
00:10:52.566 [2024-12-06 00:48:24.952699] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:52.566 [2024-12-06 00:48:25.106788] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:52.566 [2024-12-06 00:48:25.254170] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:52.566 [2024-12-06 00:48:25.254252] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:52.566 [2024-12-06 00:48:25.254278] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:52.566 [2024-12-06 00:48:25.254302] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:52.566 [2024-12-06 00:48:25.254322] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:52.566 [2024-12-06 00:48:25.257263] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:52.566 [2024-12-06 00:48:25.257343] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:52.566 [2024-12-06 00:48:25.257404] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:52.566 [2024-12-06 00:48:25.257406] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:53.498 00:48:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:53.498 00:48:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:10:53.498 00:48:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:53.498 00:48:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:53.498 00:48:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:53.498 00:48:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:53.498 00:48:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:53.498 00:48:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.498 00:48:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:53.498 [2024-12-06 00:48:25.977370] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:53.498 00:48:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.498 00:48:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:53.498 00:48:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.498 00:48:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:53.498 Malloc0 00:10:53.498 00:48:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.498 00:48:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:53.498 00:48:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.498 00:48:26 nvmf_tcp.nvmf_target_core.nvmf_nmic 
-- common/autotest_common.sh@10 -- # set +x 00:10:53.498 00:48:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.498 00:48:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:53.498 00:48:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.498 00:48:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:53.498 00:48:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.498 00:48:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:53.498 00:48:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.498 00:48:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:53.498 [2024-12-06 00:48:26.093129] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:53.498 00:48:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.498 00:48:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:10:53.498 test case1: single bdev can't be used in multiple subsystems 00:10:53.498 00:48:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:10:53.498 00:48:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.498 00:48:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:53.498 00:48:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.498 00:48:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:10:53.498 00:48:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.498 00:48:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:53.498 00:48:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.498 00:48:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:10:53.498 00:48:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:10:53.498 00:48:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.498 00:48:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:53.498 [2024-12-06 00:48:26.116791] bdev.c:8515:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:10:53.498 [2024-12-06 00:48:26.116880] subsystem.c:2160:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:10:53.498 [2024-12-06 00:48:26.116910] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.498 request: 00:10:53.498 { 00:10:53.498 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:10:53.498 "namespace": { 00:10:53.498 "bdev_name": "Malloc0", 00:10:53.498 "no_auto_visible": false, 
00:10:53.498 "hide_metadata": false 00:10:53.498 }, 00:10:53.499 "method": "nvmf_subsystem_add_ns", 00:10:53.499 "req_id": 1 00:10:53.499 } 00:10:53.499 Got JSON-RPC error response 00:10:53.499 response: 00:10:53.499 { 00:10:53.499 "code": -32602, 00:10:53.499 "message": "Invalid parameters" 00:10:53.499 } 00:10:53.499 00:48:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:10:53.499 00:48:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:10:53.499 00:48:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:10:53.499 00:48:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:10:53.499 Adding namespace failed - expected result. 00:10:53.499 00:48:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:10:53.499 test case2: host connect to nvmf target in multiple paths 00:10:53.499 00:48:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:10:53.499 00:48:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.499 00:48:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:53.499 [2024-12-06 00:48:26.124966] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:10:53.499 00:48:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.499 00:48:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:54.428 00:48:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:10:54.990 00:48:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:10:54.990 00:48:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:10:54.990 00:48:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:10:54.990 00:48:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:10:54.990 00:48:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 00:10:56.921 00:48:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:10:56.921 00:48:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:10:56.921 00:48:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:10:56.921 00:48:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:10:56.921 00:48:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:10:56.921 00:48:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:10:56.921 00:48:29 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:10:56.921 [global] 00:10:56.921 thread=1 00:10:56.921 invalidate=1 00:10:56.921 rw=write 00:10:56.921 time_based=1 00:10:56.921 runtime=1 00:10:56.921 ioengine=libaio 00:10:56.921 direct=1 00:10:56.921 bs=4096 00:10:56.921 iodepth=1 00:10:56.921 norandommap=0 00:10:56.921 numjobs=1 00:10:56.921 00:10:56.921 verify_dump=1 00:10:56.921 verify_backlog=512 00:10:56.921 verify_state_save=0 00:10:56.921 do_verify=1 00:10:56.921 verify=crc32c-intel 00:10:56.921 [job0] 00:10:56.921 filename=/dev/nvme0n1 00:10:56.921 Could not set queue depth (nvme0n1) 00:10:57.178 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:57.178 fio-3.35 00:10:57.178 Starting 1 thread 00:10:58.642 00:10:58.642 job0: (groupid=0, jobs=1): err= 0: pid=2881048: Fri Dec 6 00:48:30 2024 00:10:58.642 read: IOPS=21, BW=86.8KiB/s (88.9kB/s)(88.0KiB/1014msec) 00:10:58.642 slat (nsec): min=12480, max=33542, avg=24606.82, stdev=9555.44 00:10:58.642 clat (usec): min=40916, max=41986, avg=41243.67, stdev=438.07 00:10:58.642 lat (usec): min=40949, max=42020, avg=41268.27, stdev=443.27 00:10:58.642 clat percentiles (usec): 00:10:58.642 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:10:58.642 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:10:58.642 | 70.00th=[41157], 80.00th=[41681], 90.00th=[42206], 95.00th=[42206], 00:10:58.642 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:10:58.642 | 99.99th=[42206] 00:10:58.642 write: IOPS=504, BW=2020KiB/s (2068kB/s)(2048KiB/1014msec); 0 zone resets 00:10:58.642 slat (nsec): min=7031, max=45320, avg=13691.14, stdev=7634.77 00:10:58.642 clat (usec): min=160, max=334, avg=189.26, stdev=21.11 00:10:58.642 lat (usec): min=169, max=368, avg=202.95, stdev=25.22 00:10:58.642 clat percentiles (usec): 00:10:58.642 | 1.00th=[ 163], 5.00th=[ 167], 10.00th=[ 172], 20.00th=[ 176], 00:10:58.642 | 30.00th=[ 178], 40.00th=[ 182], 50.00th=[ 184], 60.00th=[ 188], 00:10:58.642 | 70.00th=[ 194], 80.00th=[ 198], 90.00th=[ 210], 95.00th=[ 243], 00:10:58.642 | 99.00th=[ 265], 99.50th=[ 277], 99.90th=[ 334], 99.95th=[ 334], 00:10:58.642 | 99.99th=[ 334] 00:10:58.642 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:10:58.642 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:58.642 lat (usec) : 250=92.88%, 500=3.00% 00:10:58.642 lat (msec) : 50=4.12% 00:10:58.642 cpu : usr=0.20%, sys=0.79%, ctx=534, majf=0, minf=1 00:10:58.642 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:58.642 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:58.642 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:58.642 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:58.642 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:58.642 00:10:58.642 Run status group 0 (all jobs): 00:10:58.642 READ: bw=86.8KiB/s (88.9kB/s), 86.8KiB/s-86.8KiB/s (88.9kB/s-88.9kB/s), io=88.0KiB (90.1kB), run=1014-1014msec 00:10:58.642 WRITE: bw=2020KiB/s (2068kB/s), 2020KiB/s-2020KiB/s (2068kB/s-2068kB/s), io=2048KiB (2097kB), run=1014-1014msec 00:10:58.642 00:10:58.642 Disk stats (read/write): 00:10:58.642 nvme0n1: ios=69/512, merge=0/0, ticks=814/95, in_queue=909, util=91.88% 00:10:58.642 00:48:30 
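The job file printed above is what fio-wrapper expands its flags into: -i 4096 becomes bs=4096, -d 1 becomes iodepth=1, -t write and -r 1 select a one-second time-based write pass, and -v enables crc32c-intel verification against the connected /dev/nvme0n1. Running the same workload without the wrapper is a matter of handing fio the equivalent command line, for example the sketch below (device path as in this run):

    # Equivalent standalone fio invocation for the job file shown above
    # (sketch; assumes /dev/nvme0n1 is the namespace connected earlier).
    fio --name=job0 --filename=/dev/nvme0n1 \
        --ioengine=libaio --direct=1 --thread=1 --invalidate=1 \
        --rw=write --bs=4096 --iodepth=1 --numjobs=1 \
        --time_based=1 --runtime=1 \
        --do_verify=1 --verify=crc32c-intel --verify_backlog=512 --verify_dump=1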
nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:58.642 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:10:58.642 00:48:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:58.642 00:48:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:10:58.642 00:48:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:10:58.642 00:48:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:58.642 00:48:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:10:58.642 00:48:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:58.642 00:48:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:10:58.642 00:48:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:10:58.642 00:48:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:10:58.642 00:48:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:58.643 00:48:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:10:58.643 00:48:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:58.643 00:48:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:10:58.643 00:48:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:58.643 00:48:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:58.643 rmmod nvme_tcp 00:10:58.643 rmmod nvme_fabrics 00:10:58.643 rmmod nvme_keyring 00:10:58.643 00:48:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:58.643 00:48:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:10:58.643 00:48:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:10:58.643 00:48:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 2880403 ']' 00:10:58.643 00:48:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 2880403 00:10:58.643 00:48:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 2880403 ']' 00:10:58.643 00:48:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 2880403 00:10:58.643 00:48:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:10:58.643 00:48:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:58.643 00:48:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2880403 00:10:58.643 00:48:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:58.643 00:48:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:58.643 00:48:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2880403' 00:10:58.643 killing process with pid 2880403 00:10:58.643 00:48:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 2880403 00:10:58.643 00:48:31 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 2880403 00:11:00.018 00:48:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:00.018 00:48:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:00.018 00:48:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:00.018 00:48:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:11:00.018 00:48:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:11:00.018 00:48:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:00.018 00:48:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:11:00.018 00:48:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:00.018 00:48:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:00.018 00:48:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:00.018 00:48:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:00.018 00:48:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:02.553 00:48:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:02.553 00:11:02.553 real 0m12.288s 00:11:02.553 user 0m29.322s 00:11:02.553 sys 0m2.740s 00:11:02.553 00:48:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:02.553 00:48:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:02.553 ************************************ 00:11:02.553 END TEST nvmf_nmic 00:11:02.553 ************************************ 00:11:02.553 00:48:34 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:11:02.553 00:48:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:02.553 00:48:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:02.553 00:48:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:02.553 ************************************ 00:11:02.553 START TEST nvmf_fio_target 00:11:02.553 ************************************ 00:11:02.553 00:48:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:11:02.553 * Looking for test storage... 
00:11:02.553 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:02.553 00:48:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:02.553 00:48:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lcov --version 00:11:02.553 00:48:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:02.553 00:48:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:02.553 00:48:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:02.553 00:48:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:02.553 00:48:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:02.553 00:48:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:11:02.553 00:48:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:11:02.553 00:48:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:11:02.553 00:48:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:11:02.553 00:48:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:11:02.553 00:48:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:11:02.553 00:48:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:11:02.553 00:48:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:02.553 00:48:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:11:02.553 00:48:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:11:02.553 00:48:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:02.553 00:48:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:02.553 00:48:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:11:02.553 00:48:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:11:02.553 00:48:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:02.553 00:48:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:11:02.553 00:48:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:11:02.553 00:48:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:11:02.553 00:48:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:11:02.553 00:48:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:02.553 00:48:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:11:02.553 00:48:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:11:02.553 00:48:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:02.553 00:48:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:02.553 00:48:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:11:02.553 00:48:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:02.553 00:48:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:02.553 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:02.553 --rc genhtml_branch_coverage=1 00:11:02.553 --rc genhtml_function_coverage=1 00:11:02.553 --rc genhtml_legend=1 00:11:02.553 --rc geninfo_all_blocks=1 00:11:02.553 --rc geninfo_unexecuted_blocks=1 00:11:02.553 00:11:02.553 ' 00:11:02.553 00:48:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:02.553 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:02.553 --rc genhtml_branch_coverage=1 00:11:02.553 --rc genhtml_function_coverage=1 00:11:02.553 --rc genhtml_legend=1 00:11:02.553 --rc geninfo_all_blocks=1 00:11:02.553 --rc geninfo_unexecuted_blocks=1 00:11:02.553 00:11:02.553 ' 00:11:02.553 00:48:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:02.553 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:02.553 --rc genhtml_branch_coverage=1 00:11:02.553 --rc genhtml_function_coverage=1 00:11:02.553 --rc genhtml_legend=1 00:11:02.553 --rc geninfo_all_blocks=1 00:11:02.553 --rc geninfo_unexecuted_blocks=1 00:11:02.553 00:11:02.553 ' 00:11:02.553 00:48:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:02.553 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:02.553 --rc genhtml_branch_coverage=1 00:11:02.553 --rc genhtml_function_coverage=1 00:11:02.553 --rc genhtml_legend=1 00:11:02.553 --rc geninfo_all_blocks=1 00:11:02.553 --rc geninfo_unexecuted_blocks=1 00:11:02.553 00:11:02.553 ' 00:11:02.553 00:48:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:02.553 00:48:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # 
uname -s 00:11:02.553 00:48:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:02.553 00:48:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:02.553 00:48:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:02.553 00:48:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:02.553 00:48:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:02.553 00:48:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:02.553 00:48:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:02.553 00:48:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:02.553 00:48:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:02.553 00:48:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:02.553 00:48:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:02.553 00:48:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:02.553 00:48:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:02.553 00:48:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:02.553 00:48:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:02.553 00:48:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:02.553 00:48:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:02.554 00:48:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:11:02.554 00:48:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:02.554 00:48:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:02.554 00:48:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:02.554 00:48:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:02.554 00:48:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:02.554 00:48:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:02.554 00:48:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:11:02.554 00:48:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:02.554 00:48:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:11:02.554 00:48:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:02.554 00:48:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:02.554 00:48:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:02.554 00:48:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:02.554 00:48:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:02.554 00:48:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:02.554 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:02.554 00:48:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:02.554 00:48:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:02.554 00:48:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:02.554 00:48:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:02.554 00:48:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:02.554 00:48:34 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:02.554 00:48:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:11:02.554 00:48:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:02.554 00:48:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:02.554 00:48:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:02.554 00:48:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:02.554 00:48:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:02.554 00:48:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:02.554 00:48:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:02.554 00:48:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:02.554 00:48:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:02.554 00:48:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:02.554 00:48:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:11:02.554 00:48:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:04.452 00:48:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:04.452 00:48:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:11:04.452 00:48:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:04.452 00:48:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:04.452 00:48:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:04.452 00:48:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:04.452 00:48:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:04.452 00:48:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:11:04.452 00:48:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:04.452 00:48:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:11:04.452 00:48:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:11:04.452 00:48:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:11:04.452 00:48:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:11:04.452 00:48:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:11:04.452 00:48:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:11:04.452 00:48:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:04.452 00:48:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:04.452 00:48:36 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:04.452 00:48:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:04.452 00:48:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:04.453 00:48:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:04.453 00:48:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:04.453 00:48:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:04.453 00:48:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:04.453 00:48:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:04.453 00:48:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:04.453 00:48:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:04.453 00:48:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:04.453 00:48:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:04.453 00:48:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:04.453 00:48:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:04.453 00:48:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:04.453 00:48:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:04.453 00:48:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:04.453 00:48:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:11:04.453 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:11:04.453 00:48:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:04.453 00:48:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:04.453 00:48:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:04.453 00:48:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:04.453 00:48:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:04.453 00:48:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:04.453 00:48:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:11:04.453 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:11:04.453 00:48:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:04.453 00:48:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:04.453 00:48:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:04.453 00:48:36 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:04.453 00:48:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:04.453 00:48:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:04.453 00:48:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:04.453 00:48:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:04.453 00:48:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:04.453 00:48:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:04.453 00:48:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:04.453 00:48:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:04.453 00:48:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:04.453 00:48:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:04.453 00:48:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:04.453 00:48:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:11:04.453 Found net devices under 0000:0a:00.0: cvl_0_0 00:11:04.453 00:48:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:04.453 00:48:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:04.453 00:48:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:04.453 00:48:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:04.453 00:48:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:04.453 00:48:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:04.453 00:48:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:04.453 00:48:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:04.453 00:48:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:11:04.453 Found net devices under 0000:0a:00.1: cvl_0_1 00:11:04.453 00:48:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:04.453 00:48:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:04.453 00:48:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:11:04.453 00:48:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:04.453 00:48:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:04.453 00:48:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:04.453 00:48:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:04.453 00:48:36 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:04.453 00:48:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:04.453 00:48:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:04.453 00:48:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:04.453 00:48:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:04.453 00:48:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:04.453 00:48:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:04.453 00:48:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:04.453 00:48:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:04.453 00:48:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:04.453 00:48:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:04.453 00:48:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:04.453 00:48:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:04.453 00:48:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:04.453 00:48:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:04.453 00:48:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:04.453 00:48:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:04.453 00:48:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:04.453 00:48:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:04.453 00:48:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:04.453 00:48:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:04.453 00:48:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:04.453 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:04.453 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.325 ms 00:11:04.453 00:11:04.453 --- 10.0.0.2 ping statistics --- 00:11:04.453 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:04.453 rtt min/avg/max/mdev = 0.325/0.325/0.325/0.000 ms 00:11:04.453 00:48:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:04.453 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:04.453 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.136 ms 00:11:04.453 00:11:04.453 --- 10.0.0.1 ping statistics --- 00:11:04.453 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:04.453 rtt min/avg/max/mdev = 0.136/0.136/0.136/0.000 ms 00:11:04.453 00:48:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:04.453 00:48:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:11:04.453 00:48:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:04.453 00:48:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:04.453 00:48:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:04.453 00:48:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:04.453 00:48:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:04.453 00:48:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:04.453 00:48:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:04.453 00:48:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:11:04.453 00:48:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:04.453 00:48:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:04.453 00:48:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:04.453 00:48:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=2883391 00:11:04.453 00:48:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:04.453 00:48:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 2883391 00:11:04.453 00:48:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 2883391 ']' 00:11:04.453 00:48:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:04.453 00:48:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:04.453 00:48:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:04.454 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:04.454 00:48:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:04.454 00:48:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:04.712 [2024-12-06 00:48:37.185735] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 24.03.0 initialization... 
00:11:04.712 [2024-12-06 00:48:37.185923] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:04.712 [2024-12-06 00:48:37.330560] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:04.970 [2024-12-06 00:48:37.461368] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:04.970 [2024-12-06 00:48:37.461441] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:04.970 [2024-12-06 00:48:37.461466] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:04.970 [2024-12-06 00:48:37.461490] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:04.970 [2024-12-06 00:48:37.461509] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:04.970 [2024-12-06 00:48:37.464282] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:04.970 [2024-12-06 00:48:37.464348] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:04.970 [2024-12-06 00:48:37.464437] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:04.970 [2024-12-06 00:48:37.464442] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:05.536 00:48:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:05.536 00:48:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:11:05.536 00:48:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:05.536 00:48:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:05.536 00:48:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:05.536 00:48:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:05.536 00:48:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:11:06.101 [2024-12-06 00:48:38.528366] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:06.101 00:48:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:06.359 00:48:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:11:06.359 00:48:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:06.618 00:48:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:11:06.618 00:48:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:07.185 00:48:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:11:07.185 00:48:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:07.443 00:48:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:11:07.443 00:48:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:11:07.700 00:48:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:07.958 00:48:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:11:07.958 00:48:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:08.216 00:48:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:11:08.216 00:48:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:08.782 00:48:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:11:08.782 00:48:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:11:08.782 00:48:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:09.346 00:48:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:11:09.346 00:48:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:09.346 00:48:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:11:09.346 00:48:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:09.912 00:48:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:09.912 [2024-12-06 00:48:42.567371] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:09.912 00:48:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:11:10.170 00:48:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:11:10.428 00:48:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:11.361 00:48:43 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:11:11.361 00:48:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:11:11.361 00:48:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:11.361 00:48:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:11:11.361 00:48:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:11:11.361 00:48:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:11:13.257 00:48:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:13.257 00:48:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:13.257 00:48:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:13.258 00:48:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:11:13.258 00:48:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:13.258 00:48:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 00:11:13.258 00:48:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:11:13.258 [global] 00:11:13.258 thread=1 00:11:13.258 invalidate=1 00:11:13.258 rw=write 00:11:13.258 time_based=1 00:11:13.258 runtime=1 00:11:13.258 ioengine=libaio 00:11:13.258 direct=1 00:11:13.258 bs=4096 00:11:13.258 iodepth=1 00:11:13.258 norandommap=0 00:11:13.258 numjobs=1 00:11:13.258 00:11:13.258 verify_dump=1 00:11:13.258 verify_backlog=512 00:11:13.258 verify_state_save=0 00:11:13.258 do_verify=1 00:11:13.258 verify=crc32c-intel 00:11:13.258 [job0] 00:11:13.258 filename=/dev/nvme0n1 00:11:13.258 [job1] 00:11:13.258 filename=/dev/nvme0n2 00:11:13.258 [job2] 00:11:13.258 filename=/dev/nvme0n3 00:11:13.258 [job3] 00:11:13.258 filename=/dev/nvme0n4 00:11:13.258 Could not set queue depth (nvme0n1) 00:11:13.258 Could not set queue depth (nvme0n2) 00:11:13.258 Could not set queue depth (nvme0n3) 00:11:13.258 Could not set queue depth (nvme0n4) 00:11:13.516 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:13.516 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:13.516 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:13.516 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:13.516 fio-3.35 00:11:13.516 Starting 4 threads 00:11:14.891 00:11:14.891 job0: (groupid=0, jobs=1): err= 0: pid=2884607: Fri Dec 6 00:48:47 2024 00:11:14.891 read: IOPS=1513, BW=6055KiB/s (6200kB/s)(6152KiB/1016msec) 00:11:14.891 slat (nsec): min=4583, max=54407, avg=10680.91, stdev=6268.57 00:11:14.891 clat (usec): min=225, max=40783, avg=308.67, stdev=1160.05 00:11:14.891 lat (usec): min=230, max=40801, avg=319.35, stdev=1160.35 00:11:14.891 clat percentiles (usec): 00:11:14.891 | 1.00th=[ 231], 5.00th=[ 237], 10.00th=[ 241], 20.00th=[ 247], 
00:11:14.891 | 30.00th=[ 251], 40.00th=[ 255], 50.00th=[ 260], 60.00th=[ 265], 00:11:14.891 | 70.00th=[ 273], 80.00th=[ 285], 90.00th=[ 306], 95.00th=[ 359], 00:11:14.891 | 99.00th=[ 396], 99.50th=[ 404], 99.90th=[20841], 99.95th=[40633], 00:11:14.891 | 99.99th=[40633] 00:11:14.891 write: IOPS=2015, BW=8063KiB/s (8257kB/s)(8192KiB/1016msec); 0 zone resets 00:11:14.891 slat (usec): min=5, max=41634, avg=45.92, stdev=1078.23 00:11:14.891 clat (usec): min=166, max=414, avg=203.60, stdev=23.38 00:11:14.891 lat (usec): min=174, max=41856, avg=249.53, stdev=1079.75 00:11:14.891 clat percentiles (usec): 00:11:14.891 | 1.00th=[ 172], 5.00th=[ 176], 10.00th=[ 180], 20.00th=[ 186], 00:11:14.891 | 30.00th=[ 190], 40.00th=[ 196], 50.00th=[ 202], 60.00th=[ 206], 00:11:14.891 | 70.00th=[ 212], 80.00th=[ 219], 90.00th=[ 229], 95.00th=[ 241], 00:11:14.891 | 99.00th=[ 289], 99.50th=[ 318], 99.90th=[ 359], 99.95th=[ 388], 00:11:14.891 | 99.99th=[ 416] 00:11:14.891 bw ( KiB/s): min= 8192, max= 8192, per=51.95%, avg=8192.00, stdev= 0.00, samples=2 00:11:14.891 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=2 00:11:14.891 lat (usec) : 250=67.85%, 500=32.10% 00:11:14.891 lat (msec) : 50=0.06% 00:11:14.891 cpu : usr=1.67%, sys=5.12%, ctx=3590, majf=0, minf=1 00:11:14.891 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:14.891 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:14.891 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:14.891 issued rwts: total=1538,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:14.891 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:14.891 job1: (groupid=0, jobs=1): err= 0: pid=2884608: Fri Dec 6 00:48:47 2024 00:11:14.891 read: IOPS=19, BW=79.1KiB/s (81.0kB/s)(80.0KiB/1011msec) 00:11:14.891 slat (nsec): min=13687, max=35130, avg=18851.00, stdev=6653.07 00:11:14.891 clat (usec): min=40798, max=42029, avg=41171.03, stdev=421.92 00:11:14.891 lat (usec): min=40823, max=42043, avg=41189.88, stdev=422.27 00:11:14.891 clat percentiles (usec): 00:11:14.891 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:11:14.891 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:11:14.891 | 70.00th=[41157], 80.00th=[41157], 90.00th=[42206], 95.00th=[42206], 00:11:14.891 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:11:14.891 | 99.99th=[42206] 00:11:14.891 write: IOPS=506, BW=2026KiB/s (2074kB/s)(2048KiB/1011msec); 0 zone resets 00:11:14.891 slat (nsec): min=6886, max=74785, avg=24873.54, stdev=13396.48 00:11:14.891 clat (usec): min=187, max=563, avg=331.94, stdev=75.09 00:11:14.891 lat (usec): min=216, max=584, avg=356.81, stdev=70.98 00:11:14.891 clat percentiles (usec): 00:11:14.891 | 1.00th=[ 200], 5.00th=[ 221], 10.00th=[ 233], 20.00th=[ 255], 00:11:14.892 | 30.00th=[ 277], 40.00th=[ 297], 50.00th=[ 326], 60.00th=[ 371], 00:11:14.892 | 70.00th=[ 383], 80.00th=[ 408], 90.00th=[ 429], 95.00th=[ 445], 00:11:14.892 | 99.00th=[ 465], 99.50th=[ 478], 99.90th=[ 562], 99.95th=[ 562], 00:11:14.892 | 99.99th=[ 562] 00:11:14.892 bw ( KiB/s): min= 4096, max= 4096, per=25.98%, avg=4096.00, stdev= 0.00, samples=1 00:11:14.892 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:11:14.892 lat (usec) : 250=17.29%, 500=78.57%, 750=0.38% 00:11:14.892 lat (msec) : 50=3.76% 00:11:14.892 cpu : usr=0.69%, sys=1.19%, ctx=533, majf=0, minf=1 00:11:14.892 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
>=64=0.0% 00:11:14.892 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:14.892 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:14.892 issued rwts: total=20,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:14.892 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:14.892 job2: (groupid=0, jobs=1): err= 0: pid=2884609: Fri Dec 6 00:48:47 2024 00:11:14.892 read: IOPS=541, BW=2165KiB/s (2217kB/s)(2180KiB/1007msec) 00:11:14.892 slat (nsec): min=4440, max=44076, avg=8091.87, stdev=5431.95 00:11:14.892 clat (usec): min=218, max=42066, avg=1232.78, stdev=6121.32 00:11:14.892 lat (usec): min=224, max=42086, avg=1240.88, stdev=6123.35 00:11:14.892 clat percentiles (usec): 00:11:14.892 | 1.00th=[ 225], 5.00th=[ 229], 10.00th=[ 233], 20.00th=[ 237], 00:11:14.892 | 30.00th=[ 243], 40.00th=[ 247], 50.00th=[ 253], 60.00th=[ 260], 00:11:14.892 | 70.00th=[ 269], 80.00th=[ 322], 90.00th=[ 408], 95.00th=[ 494], 00:11:14.892 | 99.00th=[41157], 99.50th=[41681], 99.90th=[42206], 99.95th=[42206], 00:11:14.892 | 99.99th=[42206] 00:11:14.892 write: IOPS=1016, BW=4068KiB/s (4165kB/s)(4096KiB/1007msec); 0 zone resets 00:11:14.892 slat (nsec): min=6154, max=69425, avg=17880.83, stdev=12720.51 00:11:14.892 clat (usec): min=174, max=542, avg=297.08, stdev=73.16 00:11:14.892 lat (usec): min=181, max=586, avg=314.96, stdev=77.32 00:11:14.892 clat percentiles (usec): 00:11:14.892 | 1.00th=[ 178], 5.00th=[ 186], 10.00th=[ 204], 20.00th=[ 239], 00:11:14.892 | 30.00th=[ 255], 40.00th=[ 269], 50.00th=[ 285], 60.00th=[ 306], 00:11:14.892 | 70.00th=[ 322], 80.00th=[ 371], 90.00th=[ 404], 95.00th=[ 433], 00:11:14.892 | 99.00th=[ 469], 99.50th=[ 486], 99.90th=[ 537], 99.95th=[ 545], 00:11:14.892 | 99.99th=[ 545] 00:11:14.892 bw ( KiB/s): min= 4016, max= 4176, per=25.98%, avg=4096.00, stdev=113.14, samples=2 00:11:14.892 iops : min= 1004, max= 1044, avg=1024.00, stdev=28.28, samples=2 00:11:14.892 lat (usec) : 250=32.70%, 500=65.46%, 750=0.96%, 1000=0.06% 00:11:14.892 lat (msec) : 50=0.83% 00:11:14.892 cpu : usr=2.29%, sys=1.79%, ctx=1570, majf=0, minf=1 00:11:14.892 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:14.892 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:14.892 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:14.892 issued rwts: total=545,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:14.892 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:14.892 job3: (groupid=0, jobs=1): err= 0: pid=2884610: Fri Dec 6 00:48:47 2024 00:11:14.892 read: IOPS=26, BW=108KiB/s (110kB/s)(112KiB/1039msec) 00:11:14.892 slat (nsec): min=7180, max=34745, avg=15830.36, stdev=6516.37 00:11:14.892 clat (usec): min=312, max=42059, avg=32360.71, stdev=17016.76 00:11:14.892 lat (usec): min=329, max=42076, avg=32376.54, stdev=17018.41 00:11:14.892 clat percentiles (usec): 00:11:14.892 | 1.00th=[ 314], 5.00th=[ 322], 10.00th=[ 355], 20.00th=[ 461], 00:11:14.892 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:11:14.892 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[42206], 00:11:14.892 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:11:14.892 | 99.99th=[42206] 00:11:14.892 write: IOPS=492, BW=1971KiB/s (2018kB/s)(2048KiB/1039msec); 0 zone resets 00:11:14.892 slat (nsec): min=6278, max=67306, avg=14018.50, stdev=7098.35 00:11:14.892 clat (usec): min=188, max=487, avg=236.50, stdev=32.54 00:11:14.892 lat (usec): 
min=197, max=527, avg=250.52, stdev=33.75 00:11:14.892 clat percentiles (usec): 00:11:14.892 | 1.00th=[ 196], 5.00th=[ 204], 10.00th=[ 208], 20.00th=[ 215], 00:11:14.892 | 30.00th=[ 223], 40.00th=[ 229], 50.00th=[ 235], 60.00th=[ 239], 00:11:14.892 | 70.00th=[ 243], 80.00th=[ 249], 90.00th=[ 262], 95.00th=[ 277], 00:11:14.892 | 99.00th=[ 404], 99.50th=[ 453], 99.90th=[ 486], 99.95th=[ 486], 00:11:14.892 | 99.99th=[ 486] 00:11:14.892 bw ( KiB/s): min= 4096, max= 4096, per=25.98%, avg=4096.00, stdev= 0.00, samples=1 00:11:14.892 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:11:14.892 lat (usec) : 250=77.96%, 500=17.96% 00:11:14.892 lat (msec) : 50=4.07% 00:11:14.892 cpu : usr=0.67%, sys=0.29%, ctx=542, majf=0, minf=1 00:11:14.892 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:14.892 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:14.892 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:14.892 issued rwts: total=28,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:14.892 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:14.892 00:11:14.892 Run status group 0 (all jobs): 00:11:14.892 READ: bw=8204KiB/s (8401kB/s), 79.1KiB/s-6055KiB/s (81.0kB/s-6200kB/s), io=8524KiB (8729kB), run=1007-1039msec 00:11:14.892 WRITE: bw=15.4MiB/s (16.1MB/s), 1971KiB/s-8063KiB/s (2018kB/s-8257kB/s), io=16.0MiB (16.8MB), run=1007-1039msec 00:11:14.892 00:11:14.892 Disk stats (read/write): 00:11:14.892 nvme0n1: ios=1561/1626, merge=0/0, ticks=1241/318, in_queue=1559, util=87.37% 00:11:14.892 nvme0n2: ios=51/512, merge=0/0, ticks=1573/157, in_queue=1730, util=89.74% 00:11:14.892 nvme0n3: ios=549/1024, merge=0/0, ticks=1405/286, in_queue=1691, util=93.54% 00:11:14.892 nvme0n4: ios=80/512, merge=0/0, ticks=790/114, in_queue=904, util=95.80% 00:11:14.892 00:48:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:11:14.892 [global] 00:11:14.892 thread=1 00:11:14.892 invalidate=1 00:11:14.892 rw=randwrite 00:11:14.892 time_based=1 00:11:14.892 runtime=1 00:11:14.892 ioengine=libaio 00:11:14.892 direct=1 00:11:14.892 bs=4096 00:11:14.892 iodepth=1 00:11:14.892 norandommap=0 00:11:14.892 numjobs=1 00:11:14.892 00:11:14.892 verify_dump=1 00:11:14.892 verify_backlog=512 00:11:14.892 verify_state_save=0 00:11:14.892 do_verify=1 00:11:14.892 verify=crc32c-intel 00:11:14.892 [job0] 00:11:14.892 filename=/dev/nvme0n1 00:11:14.892 [job1] 00:11:14.892 filename=/dev/nvme0n2 00:11:14.892 [job2] 00:11:14.892 filename=/dev/nvme0n3 00:11:14.892 [job3] 00:11:14.892 filename=/dev/nvme0n4 00:11:14.892 Could not set queue depth (nvme0n1) 00:11:14.892 Could not set queue depth (nvme0n2) 00:11:14.892 Could not set queue depth (nvme0n3) 00:11:14.892 Could not set queue depth (nvme0n4) 00:11:14.892 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:14.892 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:14.892 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:14.892 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:14.892 fio-3.35 00:11:14.892 Starting 4 threads 00:11:16.267 00:11:16.267 job0: (groupid=0, jobs=1): err= 0: pid=2884834: Fri Dec 6 
00:48:48 2024 00:11:16.267 read: IOPS=19, BW=78.3KiB/s (80.2kB/s)(80.0KiB/1022msec) 00:11:16.267 slat (nsec): min=14061, max=35847, avg=18892.55, stdev=6209.69 00:11:16.267 clat (usec): min=40857, max=42056, avg=41633.29, stdev=481.13 00:11:16.267 lat (usec): min=40872, max=42070, avg=41652.18, stdev=482.25 00:11:16.267 clat percentiles (usec): 00:11:16.267 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:11:16.267 | 30.00th=[41157], 40.00th=[41681], 50.00th=[41681], 60.00th=[42206], 00:11:16.267 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:11:16.267 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:11:16.267 | 99.99th=[42206] 00:11:16.267 write: IOPS=500, BW=2004KiB/s (2052kB/s)(2048KiB/1022msec); 0 zone resets 00:11:16.267 slat (nsec): min=6457, max=76541, avg=19854.81, stdev=9972.83 00:11:16.267 clat (usec): min=183, max=616, avg=342.91, stdev=86.02 00:11:16.267 lat (usec): min=194, max=636, avg=362.77, stdev=84.29 00:11:16.267 clat percentiles (usec): 00:11:16.267 | 1.00th=[ 194], 5.00th=[ 217], 10.00th=[ 241], 20.00th=[ 277], 00:11:16.267 | 30.00th=[ 289], 40.00th=[ 306], 50.00th=[ 326], 60.00th=[ 347], 00:11:16.267 | 70.00th=[ 379], 80.00th=[ 420], 90.00th=[ 465], 95.00th=[ 510], 00:11:16.267 | 99.00th=[ 562], 99.50th=[ 586], 99.90th=[ 619], 99.95th=[ 619], 00:11:16.267 | 99.99th=[ 619] 00:11:16.267 bw ( KiB/s): min= 4096, max= 4096, per=25.83%, avg=4096.00, stdev= 0.00, samples=1 00:11:16.267 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:11:16.267 lat (usec) : 250=12.97%, 500=77.63%, 750=5.64% 00:11:16.267 lat (msec) : 50=3.76% 00:11:16.267 cpu : usr=0.39%, sys=1.08%, ctx=533, majf=0, minf=1 00:11:16.267 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:16.267 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:16.267 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:16.267 issued rwts: total=20,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:16.267 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:16.267 job1: (groupid=0, jobs=1): err= 0: pid=2884835: Fri Dec 6 00:48:48 2024 00:11:16.267 read: IOPS=76, BW=308KiB/s (315kB/s)(308KiB/1001msec) 00:11:16.267 slat (nsec): min=7099, max=36169, avg=15638.91, stdev=7172.61 00:11:16.267 clat (usec): min=246, max=42168, avg=10970.82, stdev=18132.66 00:11:16.267 lat (usec): min=254, max=42184, avg=10986.46, stdev=18134.39 00:11:16.267 clat percentiles (usec): 00:11:16.267 | 1.00th=[ 247], 5.00th=[ 251], 10.00th=[ 265], 20.00th=[ 273], 00:11:16.267 | 30.00th=[ 293], 40.00th=[ 306], 50.00th=[ 314], 60.00th=[ 326], 00:11:16.267 | 70.00th=[ 363], 80.00th=[41157], 90.00th=[41681], 95.00th=[42206], 00:11:16.267 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:11:16.267 | 99.99th=[42206] 00:11:16.267 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:11:16.267 slat (nsec): min=6938, max=52560, avg=14180.86, stdev=6946.11 00:11:16.267 clat (usec): min=182, max=565, avg=282.81, stdev=100.71 00:11:16.267 lat (usec): min=192, max=597, avg=296.99, stdev=102.18 00:11:16.267 clat percentiles (usec): 00:11:16.267 | 1.00th=[ 188], 5.00th=[ 192], 10.00th=[ 196], 20.00th=[ 202], 00:11:16.267 | 30.00th=[ 208], 40.00th=[ 217], 50.00th=[ 231], 60.00th=[ 247], 00:11:16.267 | 70.00th=[ 343], 80.00th=[ 388], 90.00th=[ 441], 95.00th=[ 490], 00:11:16.267 | 99.00th=[ 519], 99.50th=[ 529], 99.90th=[ 570], 99.95th=[ 570], 00:11:16.267 
| 99.99th=[ 570] 00:11:16.267 bw ( KiB/s): min= 4096, max= 4096, per=25.83%, avg=4096.00, stdev= 0.00, samples=1 00:11:16.267 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:11:16.267 lat (usec) : 250=53.31%, 500=40.58%, 750=2.72% 00:11:16.267 lat (msec) : 50=3.40% 00:11:16.267 cpu : usr=0.70%, sys=0.80%, ctx=590, majf=0, minf=2 00:11:16.268 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:16.268 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:16.268 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:16.268 issued rwts: total=77,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:16.268 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:16.268 job2: (groupid=0, jobs=1): err= 0: pid=2884836: Fri Dec 6 00:48:48 2024 00:11:16.268 read: IOPS=1625, BW=6501KiB/s (6658kB/s)(6508KiB/1001msec) 00:11:16.268 slat (nsec): min=4399, max=64555, avg=14530.04, stdev=10235.38 00:11:16.268 clat (usec): min=210, max=1584, avg=297.92, stdev=100.04 00:11:16.268 lat (usec): min=218, max=1603, avg=312.45, stdev=103.62 00:11:16.268 clat percentiles (usec): 00:11:16.268 | 1.00th=[ 225], 5.00th=[ 231], 10.00th=[ 235], 20.00th=[ 241], 00:11:16.268 | 30.00th=[ 245], 40.00th=[ 251], 50.00th=[ 260], 60.00th=[ 277], 00:11:16.268 | 70.00th=[ 314], 80.00th=[ 343], 90.00th=[ 392], 95.00th=[ 478], 00:11:16.268 | 99.00th=[ 553], 99.50th=[ 709], 99.90th=[ 1418], 99.95th=[ 1582], 00:11:16.268 | 99.99th=[ 1582] 00:11:16.268 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:11:16.268 slat (nsec): min=5781, max=70135, avg=15239.21, stdev=10259.35 00:11:16.268 clat (usec): min=166, max=582, avg=217.66, stdev=47.30 00:11:16.268 lat (usec): min=177, max=627, avg=232.90, stdev=51.34 00:11:16.268 clat percentiles (usec): 00:11:16.268 | 1.00th=[ 174], 5.00th=[ 178], 10.00th=[ 182], 20.00th=[ 188], 00:11:16.268 | 30.00th=[ 192], 40.00th=[ 196], 50.00th=[ 202], 60.00th=[ 212], 00:11:16.268 | 70.00th=[ 223], 80.00th=[ 241], 90.00th=[ 277], 95.00th=[ 289], 00:11:16.268 | 99.00th=[ 465], 99.50th=[ 498], 99.90th=[ 510], 99.95th=[ 515], 00:11:16.268 | 99.99th=[ 586] 00:11:16.268 bw ( KiB/s): min= 8175, max= 8175, per=51.54%, avg=8175.00, stdev= 0.00, samples=1 00:11:16.268 iops : min= 2043, max= 2043, avg=2043.00, stdev= 0.00, samples=1 00:11:16.268 lat (usec) : 250=64.30%, 500=34.20%, 750=1.28%, 1000=0.03% 00:11:16.268 lat (msec) : 2=0.19% 00:11:16.268 cpu : usr=2.50%, sys=5.90%, ctx=3677, majf=0, minf=1 00:11:16.268 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:16.268 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:16.268 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:16.268 issued rwts: total=1627,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:16.268 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:16.268 job3: (groupid=0, jobs=1): err= 0: pid=2884837: Fri Dec 6 00:48:48 2024 00:11:16.268 read: IOPS=509, BW=2037KiB/s (2086kB/s)(2104KiB/1033msec) 00:11:16.268 slat (nsec): min=4554, max=42151, avg=8738.19, stdev=5058.92 00:11:16.268 clat (usec): min=228, max=41996, avg=1384.96, stdev=6619.47 00:11:16.268 lat (usec): min=234, max=42016, avg=1393.70, stdev=6621.92 00:11:16.268 clat percentiles (usec): 00:11:16.268 | 1.00th=[ 235], 5.00th=[ 241], 10.00th=[ 245], 20.00th=[ 251], 00:11:16.268 | 30.00th=[ 258], 40.00th=[ 265], 50.00th=[ 269], 60.00th=[ 277], 00:11:16.268 | 70.00th=[ 302], 80.00th=[ 
359], 90.00th=[ 392], 95.00th=[ 408], 00:11:16.268 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:11:16.268 | 99.99th=[42206] 00:11:16.268 write: IOPS=991, BW=3965KiB/s (4060kB/s)(4096KiB/1033msec); 0 zone resets 00:11:16.268 slat (nsec): min=6485, max=77216, avg=13085.14, stdev=8435.74 00:11:16.268 clat (usec): min=178, max=837, avg=274.48, stdev=91.78 00:11:16.268 lat (usec): min=185, max=859, avg=287.56, stdev=95.74 00:11:16.268 clat percentiles (usec): 00:11:16.268 | 1.00th=[ 182], 5.00th=[ 188], 10.00th=[ 192], 20.00th=[ 198], 00:11:16.268 | 30.00th=[ 204], 40.00th=[ 215], 50.00th=[ 233], 60.00th=[ 277], 00:11:16.268 | 70.00th=[ 310], 80.00th=[ 359], 90.00th=[ 408], 95.00th=[ 453], 00:11:16.268 | 99.00th=[ 529], 99.50th=[ 553], 99.90th=[ 635], 99.95th=[ 840], 00:11:16.268 | 99.99th=[ 840] 00:11:16.268 bw ( KiB/s): min= 8192, max= 8192, per=51.65%, avg=8192.00, stdev= 0.00, samples=1 00:11:16.268 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:11:16.268 lat (usec) : 250=41.29%, 500=56.32%, 750=1.42%, 1000=0.06% 00:11:16.268 lat (msec) : 50=0.90% 00:11:16.268 cpu : usr=0.97%, sys=1.94%, ctx=1551, majf=0, minf=1 00:11:16.268 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:16.268 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:16.268 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:16.268 issued rwts: total=526,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:16.268 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:16.268 00:11:16.268 Run status group 0 (all jobs): 00:11:16.268 READ: bw=8712KiB/s (8922kB/s), 78.3KiB/s-6501KiB/s (80.2kB/s-6658kB/s), io=9000KiB (9216kB), run=1001-1033msec 00:11:16.268 WRITE: bw=15.5MiB/s (16.2MB/s), 2004KiB/s-8184KiB/s (2052kB/s-8380kB/s), io=16.0MiB (16.8MB), run=1001-1033msec 00:11:16.268 00:11:16.268 Disk stats (read/write): 00:11:16.268 nvme0n1: ios=61/512, merge=0/0, ticks=734/163, in_queue=897, util=87.27% 00:11:16.268 nvme0n2: ios=121/512, merge=0/0, ticks=928/140, in_queue=1068, util=90.06% 00:11:16.268 nvme0n3: ios=1595/1575, merge=0/0, ticks=684/337, in_queue=1021, util=94.18% 00:11:16.268 nvme0n4: ios=569/1024, merge=0/0, ticks=596/268, in_queue=864, util=95.50% 00:11:16.268 00:48:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:11:16.268 [global] 00:11:16.268 thread=1 00:11:16.268 invalidate=1 00:11:16.268 rw=write 00:11:16.268 time_based=1 00:11:16.268 runtime=1 00:11:16.268 ioengine=libaio 00:11:16.268 direct=1 00:11:16.268 bs=4096 00:11:16.268 iodepth=128 00:11:16.268 norandommap=0 00:11:16.268 numjobs=1 00:11:16.268 00:11:16.268 verify_dump=1 00:11:16.268 verify_backlog=512 00:11:16.268 verify_state_save=0 00:11:16.268 do_verify=1 00:11:16.268 verify=crc32c-intel 00:11:16.268 [job0] 00:11:16.268 filename=/dev/nvme0n1 00:11:16.268 [job1] 00:11:16.268 filename=/dev/nvme0n2 00:11:16.268 [job2] 00:11:16.268 filename=/dev/nvme0n3 00:11:16.268 [job3] 00:11:16.268 filename=/dev/nvme0n4 00:11:16.268 Could not set queue depth (nvme0n1) 00:11:16.268 Could not set queue depth (nvme0n2) 00:11:16.268 Could not set queue depth (nvme0n3) 00:11:16.268 Could not set queue depth (nvme0n4) 00:11:16.268 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:16.268 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 
4096B-4096B, ioengine=libaio, iodepth=128 00:11:16.268 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:16.268 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:16.268 fio-3.35 00:11:16.268 Starting 4 threads 00:11:17.642 00:11:17.642 job0: (groupid=0, jobs=1): err= 0: pid=2885068: Fri Dec 6 00:48:50 2024 00:11:17.642 read: IOPS=2527, BW=9.87MiB/s (10.4MB/s)(10.0MiB/1013msec) 00:11:17.642 slat (usec): min=2, max=13769, avg=167.16, stdev=1093.58 00:11:17.642 clat (usec): min=10612, max=55475, avg=19416.28, stdev=5538.84 00:11:17.642 lat (usec): min=10622, max=56289, avg=19583.44, stdev=5678.53 00:11:17.643 clat percentiles (usec): 00:11:17.643 | 1.00th=[11994], 5.00th=[13566], 10.00th=[15270], 20.00th=[15533], 00:11:17.643 | 30.00th=[16450], 40.00th=[16712], 50.00th=[17171], 60.00th=[17695], 00:11:17.643 | 70.00th=[21365], 80.00th=[24773], 90.00th=[25560], 95.00th=[27919], 00:11:17.643 | 99.00th=[39584], 99.50th=[47973], 99.90th=[55313], 99.95th=[55313], 00:11:17.643 | 99.99th=[55313] 00:11:17.643 write: IOPS=2898, BW=11.3MiB/s (11.9MB/s)(11.5MiB/1013msec); 0 zone resets 00:11:17.643 slat (usec): min=3, max=32331, avg=190.29, stdev=1192.86 00:11:17.643 clat (usec): min=739, max=70683, avg=26815.23, stdev=13062.40 00:11:17.643 lat (usec): min=747, max=71593, avg=27005.52, stdev=13148.75 00:11:17.643 clat percentiles (usec): 00:11:17.643 | 1.00th=[ 6587], 5.00th=[13173], 10.00th=[13960], 20.00th=[14222], 00:11:17.643 | 30.00th=[15926], 40.00th=[22152], 50.00th=[25297], 60.00th=[28181], 00:11:17.643 | 70.00th=[30016], 80.00th=[38011], 90.00th=[45876], 95.00th=[50594], 00:11:17.643 | 99.00th=[66323], 99.50th=[69731], 99.90th=[70779], 99.95th=[70779], 00:11:17.643 | 99.99th=[70779] 00:11:17.643 bw ( KiB/s): min=10176, max=12288, per=25.59%, avg=11232.00, stdev=1493.41, samples=2 00:11:17.643 iops : min= 2544, max= 3072, avg=2808.00, stdev=373.35, samples=2 00:11:17.643 lat (usec) : 750=0.04% 00:11:17.643 lat (msec) : 4=0.02%, 10=1.42%, 20=48.85%, 50=46.14%, 100=3.53% 00:11:17.643 cpu : usr=2.57%, sys=3.95%, ctx=282, majf=0, minf=1 00:11:17.643 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:11:17.643 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:17.643 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:17.643 issued rwts: total=2560,2936,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:17.643 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:17.643 job1: (groupid=0, jobs=1): err= 0: pid=2885069: Fri Dec 6 00:48:50 2024 00:11:17.643 read: IOPS=2649, BW=10.4MiB/s (10.9MB/s)(10.5MiB/1014msec) 00:11:17.643 slat (usec): min=2, max=28921, avg=198.58, stdev=1328.53 00:11:17.643 clat (usec): min=4155, max=82294, avg=21497.71, stdev=16605.79 00:11:17.643 lat (usec): min=5827, max=82311, avg=21696.30, stdev=16731.13 00:11:17.643 clat percentiles (usec): 00:11:17.643 | 1.00th=[ 7767], 5.00th=[10028], 10.00th=[11076], 20.00th=[12518], 00:11:17.643 | 30.00th=[12911], 40.00th=[12911], 50.00th=[13173], 60.00th=[16057], 00:11:17.643 | 70.00th=[17957], 80.00th=[30016], 90.00th=[44827], 95.00th=[63701], 00:11:17.643 | 99.00th=[79168], 99.50th=[81265], 99.90th=[82314], 99.95th=[82314], 00:11:17.643 | 99.99th=[82314] 00:11:17.643 write: IOPS=3029, BW=11.8MiB/s (12.4MB/s)(12.0MiB/1014msec); 0 zone resets 00:11:17.643 slat (usec): min=3, max=20449, avg=130.19, stdev=825.32 00:11:17.643 clat (usec): 
min=4361, max=82320, avg=23095.01, stdev=12559.54 00:11:17.643 lat (usec): min=4368, max=82342, avg=23225.20, stdev=12603.87 00:11:17.643 clat percentiles (usec): 00:11:17.643 | 1.00th=[ 5604], 5.00th=[10159], 10.00th=[12518], 20.00th=[13698], 00:11:17.643 | 30.00th=[16319], 40.00th=[18220], 50.00th=[21103], 60.00th=[24773], 00:11:17.643 | 70.00th=[25560], 80.00th=[25822], 90.00th=[35390], 95.00th=[52691], 00:11:17.643 | 99.00th=[70779], 99.50th=[71828], 99.90th=[81265], 99.95th=[82314], 00:11:17.643 | 99.99th=[82314] 00:11:17.643 bw ( KiB/s): min=12280, max=12288, per=27.98%, avg=12284.00, stdev= 5.66, samples=2 00:11:17.643 iops : min= 3070, max= 3072, avg=3071.00, stdev= 1.41, samples=2 00:11:17.643 lat (msec) : 10=4.88%, 20=53.17%, 50=35.65%, 100=6.30% 00:11:17.643 cpu : usr=3.65%, sys=6.02%, ctx=285, majf=0, minf=1 00:11:17.643 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:11:17.643 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:17.643 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:17.643 issued rwts: total=2687,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:17.643 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:17.643 job2: (groupid=0, jobs=1): err= 0: pid=2885070: Fri Dec 6 00:48:50 2024 00:11:17.643 read: IOPS=2331, BW=9325KiB/s (9549kB/s)(9372KiB/1005msec) 00:11:17.643 slat (usec): min=2, max=17463, avg=154.00, stdev=1028.20 00:11:17.643 clat (usec): min=789, max=55239, avg=18458.96, stdev=7478.47 00:11:17.643 lat (usec): min=6520, max=68590, avg=18612.95, stdev=7584.57 00:11:17.643 clat percentiles (usec): 00:11:17.643 | 1.00th=[ 6849], 5.00th=[11207], 10.00th=[11338], 20.00th=[13042], 00:11:17.643 | 30.00th=[13960], 40.00th=[14353], 50.00th=[15008], 60.00th=[17695], 00:11:17.643 | 70.00th=[20317], 80.00th=[23462], 90.00th=[31851], 95.00th=[33817], 00:11:17.643 | 99.00th=[38011], 99.50th=[51119], 99.90th=[51119], 99.95th=[51119], 00:11:17.643 | 99.99th=[55313] 00:11:17.643 write: IOPS=2547, BW=9.95MiB/s (10.4MB/s)(10.0MiB/1005msec); 0 zone resets 00:11:17.643 slat (usec): min=4, max=25945, avg=236.60, stdev=1356.71 00:11:17.643 clat (msec): min=4, max=103, avg=32.86, stdev=17.18 00:11:17.643 lat (msec): min=4, max=105, avg=33.10, stdev=17.25 00:11:17.643 clat percentiles (msec): 00:11:17.643 | 1.00th=[ 7], 5.00th=[ 13], 10.00th=[ 18], 20.00th=[ 25], 00:11:17.643 | 30.00th=[ 26], 40.00th=[ 26], 50.00th=[ 26], 60.00th=[ 29], 00:11:17.643 | 70.00th=[ 29], 80.00th=[ 47], 90.00th=[ 59], 95.00th=[ 64], 00:11:17.643 | 99.00th=[ 100], 99.50th=[ 103], 99.90th=[ 104], 99.95th=[ 104], 00:11:17.643 | 99.99th=[ 104] 00:11:17.643 bw ( KiB/s): min= 8880, max=11600, per=23.33%, avg=10240.00, stdev=1923.33, samples=2 00:11:17.643 iops : min= 2220, max= 2900, avg=2560.00, stdev=480.83, samples=2 00:11:17.643 lat (usec) : 1000=0.02% 00:11:17.643 lat (msec) : 10=2.79%, 20=36.51%, 50=50.99%, 100=9.20%, 250=0.49% 00:11:17.643 cpu : usr=4.58%, sys=4.98%, ctx=305, majf=0, minf=1 00:11:17.643 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:11:17.643 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:17.643 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:17.643 issued rwts: total=2343,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:17.643 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:17.643 job3: (groupid=0, jobs=1): err= 0: pid=2885071: Fri Dec 6 00:48:50 2024 00:11:17.643 read: IOPS=2289, 
BW=9158KiB/s (9378kB/s)(9268KiB/1012msec) 00:11:17.643 slat (usec): min=2, max=25057, avg=177.51, stdev=1290.22 00:11:17.643 clat (usec): min=5479, max=61393, avg=23161.99, stdev=7502.98 00:11:17.643 lat (usec): min=11846, max=61396, avg=23339.50, stdev=7590.11 00:11:17.643 clat percentiles (usec): 00:11:17.643 | 1.00th=[12518], 5.00th=[16319], 10.00th=[17433], 20.00th=[17433], 00:11:17.643 | 30.00th=[17695], 40.00th=[17957], 50.00th=[21365], 60.00th=[24773], 00:11:17.643 | 70.00th=[25297], 80.00th=[27657], 90.00th=[36439], 95.00th=[38536], 00:11:17.643 | 99.00th=[50070], 99.50th=[59507], 99.90th=[61080], 99.95th=[61080], 00:11:17.643 | 99.99th=[61604] 00:11:17.643 write: IOPS=2529, BW=9.88MiB/s (10.4MB/s)(10.0MiB/1012msec); 0 zone resets 00:11:17.643 slat (usec): min=3, max=28675, avg=217.58, stdev=1143.31 00:11:17.643 clat (usec): min=7616, max=62349, avg=28967.37, stdev=14023.28 00:11:17.643 lat (usec): min=7637, max=62361, avg=29184.96, stdev=14117.63 00:11:17.643 clat percentiles (usec): 00:11:17.643 | 1.00th=[10421], 5.00th=[13566], 10.00th=[14746], 20.00th=[15139], 00:11:17.643 | 30.00th=[17433], 40.00th=[22152], 50.00th=[25822], 60.00th=[28705], 00:11:17.643 | 70.00th=[35914], 80.00th=[42730], 90.00th=[52691], 95.00th=[57410], 00:11:17.643 | 99.00th=[61080], 99.50th=[62129], 99.90th=[62129], 99.95th=[62129], 00:11:17.643 | 99.99th=[62129] 00:11:17.643 bw ( KiB/s): min=10040, max=10440, per=23.33%, avg=10240.00, stdev=282.84, samples=2 00:11:17.643 iops : min= 2510, max= 2610, avg=2560.00, stdev=70.71, samples=2 00:11:17.643 lat (msec) : 10=0.45%, 20=40.60%, 50=52.70%, 100=6.25% 00:11:17.643 cpu : usr=1.98%, sys=3.66%, ctx=257, majf=0, minf=1 00:11:17.643 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:11:17.643 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:17.643 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:17.643 issued rwts: total=2317,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:17.643 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:17.643 00:11:17.643 Run status group 0 (all jobs): 00:11:17.643 READ: bw=38.2MiB/s (40.0MB/s), 9158KiB/s-10.4MiB/s (9378kB/s-10.9MB/s), io=38.7MiB (40.6MB), run=1005-1014msec 00:11:17.643 WRITE: bw=42.9MiB/s (44.9MB/s), 9.88MiB/s-11.8MiB/s (10.4MB/s-12.4MB/s), io=43.5MiB (45.6MB), run=1005-1014msec 00:11:17.643 00:11:17.643 Disk stats (read/write): 00:11:17.643 nvme0n1: ios=2163/2560, merge=0/0, ticks=22146/37921, in_queue=60067, util=98.20% 00:11:17.643 nvme0n2: ios=2048/2559, merge=0/0, ticks=40134/57300, in_queue=97434, util=84.37% 00:11:17.643 nvme0n3: ios=2094/2071, merge=0/0, ticks=38328/67043, in_queue=105371, util=97.81% 00:11:17.643 nvme0n4: ios=2048/2191, merge=0/0, ticks=21235/29461, in_queue=50696, util=88.98% 00:11:17.643 00:48:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:11:17.643 [global] 00:11:17.643 thread=1 00:11:17.643 invalidate=1 00:11:17.643 rw=randwrite 00:11:17.643 time_based=1 00:11:17.643 runtime=1 00:11:17.643 ioengine=libaio 00:11:17.643 direct=1 00:11:17.643 bs=4096 00:11:17.643 iodepth=128 00:11:17.643 norandommap=0 00:11:17.643 numjobs=1 00:11:17.643 00:11:17.643 verify_dump=1 00:11:17.643 verify_backlog=512 00:11:17.643 verify_state_save=0 00:11:17.643 do_verify=1 00:11:17.643 verify=crc32c-intel 00:11:17.643 [job0] 00:11:17.643 filename=/dev/nvme0n1 00:11:17.643 
[job1] 00:11:17.643 filename=/dev/nvme0n2 00:11:17.643 [job2] 00:11:17.643 filename=/dev/nvme0n3 00:11:17.643 [job3] 00:11:17.643 filename=/dev/nvme0n4 00:11:17.643 Could not set queue depth (nvme0n1) 00:11:17.643 Could not set queue depth (nvme0n2) 00:11:17.643 Could not set queue depth (nvme0n3) 00:11:17.643 Could not set queue depth (nvme0n4) 00:11:17.901 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:17.901 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:17.901 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:17.901 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:17.901 fio-3.35 00:11:17.901 Starting 4 threads 00:11:19.272 00:11:19.272 job0: (groupid=0, jobs=1): err= 0: pid=2885423: Fri Dec 6 00:48:51 2024 00:11:19.272 read: IOPS=3576, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1002msec) 00:11:19.272 slat (usec): min=2, max=15356, avg=127.63, stdev=821.95 00:11:19.273 clat (usec): min=5205, max=66553, avg=16441.72, stdev=8302.44 00:11:19.273 lat (usec): min=5216, max=66594, avg=16569.35, stdev=8388.71 00:11:19.273 clat percentiles (usec): 00:11:19.273 | 1.00th=[ 6390], 5.00th=[11207], 10.00th=[11994], 20.00th=[12518], 00:11:19.273 | 30.00th=[12780], 40.00th=[13042], 50.00th=[13829], 60.00th=[15270], 00:11:19.273 | 70.00th=[16188], 80.00th=[17171], 90.00th=[21365], 95.00th=[34866], 00:11:19.273 | 99.00th=[51643], 99.50th=[53216], 99.90th=[58459], 99.95th=[60031], 00:11:19.273 | 99.99th=[66323] 00:11:19.273 write: IOPS=3881, BW=15.2MiB/s (15.9MB/s)(15.2MiB/1002msec); 0 zone resets 00:11:19.273 slat (usec): min=3, max=12084, avg=121.96, stdev=651.60 00:11:19.273 clat (usec): min=306, max=80549, avg=17518.11, stdev=11898.45 00:11:19.273 lat (usec): min=1160, max=80590, avg=17640.07, stdev=11973.06 00:11:19.273 clat percentiles (usec): 00:11:19.273 | 1.00th=[ 2802], 5.00th=[ 6783], 10.00th=[ 8979], 20.00th=[12256], 00:11:19.273 | 30.00th=[12649], 40.00th=[12911], 50.00th=[13566], 60.00th=[14222], 00:11:19.273 | 70.00th=[15270], 80.00th=[24511], 90.00th=[27919], 95.00th=[29754], 00:11:19.273 | 99.00th=[74974], 99.50th=[74974], 99.90th=[80217], 99.95th=[80217], 00:11:19.273 | 99.99th=[80217] 00:11:19.273 bw ( KiB/s): min=14968, max=15128, per=30.28%, avg=15048.00, stdev=113.14, samples=2 00:11:19.273 iops : min= 3742, max= 3782, avg=3762.00, stdev=28.28, samples=2 00:11:19.273 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.01% 00:11:19.273 lat (msec) : 2=0.13%, 4=0.40%, 10=6.89%, 20=73.96%, 50=15.66% 00:11:19.273 lat (msec) : 100=2.92% 00:11:19.273 cpu : usr=5.59%, sys=6.79%, ctx=409, majf=0, minf=1 00:11:19.273 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:11:19.273 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:19.273 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:19.273 issued rwts: total=3584,3889,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:19.273 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:19.273 job1: (groupid=0, jobs=1): err= 0: pid=2885424: Fri Dec 6 00:48:51 2024 00:11:19.273 read: IOPS=2529, BW=9.88MiB/s (10.4MB/s)(10.0MiB/1012msec) 00:11:19.273 slat (usec): min=3, max=23117, avg=153.23, stdev=1124.59 00:11:19.273 clat (usec): min=5526, max=46965, avg=18237.45, stdev=6524.33 00:11:19.273 lat (usec): min=5544, max=47003, 
avg=18390.68, stdev=6602.29 00:11:19.273 clat percentiles (usec): 00:11:19.273 | 1.00th=[ 7635], 5.00th=[10945], 10.00th=[12649], 20.00th=[12911], 00:11:19.273 | 30.00th=[14091], 40.00th=[14746], 50.00th=[15926], 60.00th=[17957], 00:11:19.273 | 70.00th=[19792], 80.00th=[23987], 90.00th=[28705], 95.00th=[31851], 00:11:19.273 | 99.00th=[37487], 99.50th=[39060], 99.90th=[40633], 99.95th=[45351], 00:11:19.273 | 99.99th=[46924] 00:11:19.273 write: IOPS=2891, BW=11.3MiB/s (11.8MB/s)(11.4MiB/1012msec); 0 zone resets 00:11:19.273 slat (usec): min=4, max=39018, avg=192.08, stdev=1255.56 00:11:19.273 clat (usec): min=3113, max=68317, avg=27846.18, stdev=14501.04 00:11:19.273 lat (usec): min=3121, max=68333, avg=28038.26, stdev=14617.00 00:11:19.273 clat percentiles (usec): 00:11:19.273 | 1.00th=[ 4359], 5.00th=[ 7635], 10.00th=[10945], 20.00th=[13960], 00:11:19.273 | 30.00th=[17695], 40.00th=[22676], 50.00th=[27919], 60.00th=[29230], 00:11:19.273 | 70.00th=[30540], 80.00th=[41157], 90.00th=[51119], 95.00th=[57410], 00:11:19.273 | 99.00th=[59507], 99.50th=[60031], 99.90th=[61604], 99.95th=[61604], 00:11:19.273 | 99.99th=[68682] 00:11:19.273 bw ( KiB/s): min=10376, max=12016, per=22.53%, avg=11196.00, stdev=1159.66, samples=2 00:11:19.273 iops : min= 2594, max= 3004, avg=2799.00, stdev=289.91, samples=2 00:11:19.273 lat (msec) : 4=0.18%, 10=5.98%, 20=44.33%, 50=43.49%, 100=6.02% 00:11:19.273 cpu : usr=3.36%, sys=6.43%, ctx=309, majf=0, minf=1 00:11:19.273 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:11:19.273 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:19.273 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:19.273 issued rwts: total=2560,2926,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:19.273 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:19.273 job2: (groupid=0, jobs=1): err= 0: pid=2885425: Fri Dec 6 00:48:51 2024 00:11:19.273 read: IOPS=2446, BW=9787KiB/s (10.0MB/s)(9.97MiB/1043msec) 00:11:19.273 slat (usec): min=2, max=17907, avg=167.82, stdev=1142.02 00:11:19.273 clat (usec): min=7486, max=62187, avg=22638.31, stdev=11057.39 00:11:19.273 lat (usec): min=7495, max=71134, avg=22806.13, stdev=11104.04 00:11:19.273 clat percentiles (usec): 00:11:19.273 | 1.00th=[10028], 5.00th=[14222], 10.00th=[14615], 20.00th=[14877], 00:11:19.273 | 30.00th=[15664], 40.00th=[18220], 50.00th=[19006], 60.00th=[19530], 00:11:19.273 | 70.00th=[21627], 80.00th=[27919], 90.00th=[39584], 95.00th=[47449], 00:11:19.273 | 99.00th=[61604], 99.50th=[62129], 99.90th=[62129], 99.95th=[62129], 00:11:19.273 | 99.99th=[62129] 00:11:19.273 write: IOPS=2454, BW=9818KiB/s (10.1MB/s)(10.0MiB/1043msec); 0 zone resets 00:11:19.273 slat (usec): min=3, max=28173, avg=203.67, stdev=1289.74 00:11:19.273 clat (usec): min=4498, max=74103, avg=29138.65, stdev=13702.67 00:11:19.273 lat (usec): min=4506, max=74111, avg=29342.32, stdev=13815.27 00:11:19.273 clat percentiles (usec): 00:11:19.273 | 1.00th=[ 6652], 5.00th=[10159], 10.00th=[13304], 20.00th=[16909], 00:11:19.273 | 30.00th=[25297], 40.00th=[27395], 50.00th=[28181], 60.00th=[29230], 00:11:19.273 | 70.00th=[29754], 80.00th=[34866], 90.00th=[50070], 95.00th=[55837], 00:11:19.273 | 99.00th=[73925], 99.50th=[73925], 99.90th=[73925], 99.95th=[73925], 00:11:19.273 | 99.99th=[73925] 00:11:19.273 bw ( KiB/s): min= 8200, max=12280, per=20.60%, avg=10240.00, stdev=2885.00, samples=2 00:11:19.273 iops : min= 2050, max= 3070, avg=2560.00, stdev=721.25, samples=2 00:11:19.273 lat (msec) : 
10=2.58%, 20=42.86%, 50=46.99%, 100=7.57% 00:11:19.273 cpu : usr=2.59%, sys=5.37%, ctx=266, majf=0, minf=1 00:11:19.273 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:11:19.273 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:19.273 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:19.273 issued rwts: total=2552,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:19.273 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:19.273 job3: (groupid=0, jobs=1): err= 0: pid=2885426: Fri Dec 6 00:48:51 2024 00:11:19.273 read: IOPS=3403, BW=13.3MiB/s (13.9MB/s)(13.3MiB/1003msec) 00:11:19.273 slat (usec): min=3, max=16023, avg=131.75, stdev=825.37 00:11:19.273 clat (usec): min=735, max=49724, avg=16157.32, stdev=4354.18 00:11:19.273 lat (usec): min=4845, max=49749, avg=16289.07, stdev=4407.30 00:11:19.273 clat percentiles (usec): 00:11:19.273 | 1.00th=[ 5407], 5.00th=[12256], 10.00th=[14353], 20.00th=[14877], 00:11:19.273 | 30.00th=[15008], 40.00th=[15139], 50.00th=[15401], 60.00th=[15926], 00:11:19.273 | 70.00th=[16188], 80.00th=[16909], 90.00th=[18482], 95.00th=[21365], 00:11:19.273 | 99.00th=[33817], 99.50th=[49546], 99.90th=[49546], 99.95th=[49546], 00:11:19.273 | 99.99th=[49546] 00:11:19.273 write: IOPS=3573, BW=14.0MiB/s (14.6MB/s)(14.0MiB/1003msec); 0 zone resets 00:11:19.273 slat (usec): min=3, max=16312, avg=142.21, stdev=897.21 00:11:19.273 clat (usec): min=2791, max=66834, avg=20091.47, stdev=11477.01 00:11:19.273 lat (usec): min=2798, max=66855, avg=20233.68, stdev=11550.38 00:11:19.273 clat percentiles (usec): 00:11:19.273 | 1.00th=[ 6652], 5.00th=[11863], 10.00th=[13435], 20.00th=[14746], 00:11:19.273 | 30.00th=[14877], 40.00th=[15270], 50.00th=[15664], 60.00th=[16057], 00:11:19.273 | 70.00th=[16581], 80.00th=[26084], 90.00th=[32375], 95.00th=[47973], 00:11:19.273 | 99.00th=[66323], 99.50th=[66847], 99.90th=[66847], 99.95th=[66847], 00:11:19.273 | 99.99th=[66847] 00:11:19.273 bw ( KiB/s): min=13760, max=14912, per=28.85%, avg=14336.00, stdev=814.59, samples=2 00:11:19.273 iops : min= 3440, max= 3728, avg=3584.00, stdev=203.65, samples=2 00:11:19.273 lat (usec) : 750=0.01% 00:11:19.273 lat (msec) : 4=0.20%, 10=1.61%, 20=82.14%, 50=13.70%, 100=2.33% 00:11:19.273 cpu : usr=4.19%, sys=8.18%, ctx=291, majf=0, minf=1 00:11:19.273 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:11:19.273 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:19.273 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:19.273 issued rwts: total=3414,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:19.273 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:19.273 00:11:19.273 Run status group 0 (all jobs): 00:11:19.273 READ: bw=45.4MiB/s (47.6MB/s), 9787KiB/s-14.0MiB/s (10.0MB/s-14.7MB/s), io=47.3MiB (49.6MB), run=1002-1043msec 00:11:19.273 WRITE: bw=48.5MiB/s (50.9MB/s), 9818KiB/s-15.2MiB/s (10.1MB/s-15.9MB/s), io=50.6MiB (53.1MB), run=1002-1043msec 00:11:19.273 00:11:19.273 Disk stats (read/write): 00:11:19.273 nvme0n1: ios=2913/3072, merge=0/0, ticks=31511/35355, in_queue=66866, util=89.38% 00:11:19.273 nvme0n2: ios=2099/2553, merge=0/0, ticks=36813/66808, in_queue=103621, util=93.39% 00:11:19.273 nvme0n3: ios=2105/2423, merge=0/0, ticks=34050/54143, in_queue=88193, util=94.87% 00:11:19.273 nvme0n4: ios=2679/3072, merge=0/0, ticks=21313/33715, in_queue=55028, util=91.56% 00:11:19.273 00:48:51 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:11:19.273 00:48:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=2885558 00:11:19.273 00:48:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:11:19.273 00:48:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:11:19.273 [global] 00:11:19.273 thread=1 00:11:19.273 invalidate=1 00:11:19.273 rw=read 00:11:19.273 time_based=1 00:11:19.273 runtime=10 00:11:19.273 ioengine=libaio 00:11:19.273 direct=1 00:11:19.273 bs=4096 00:11:19.273 iodepth=1 00:11:19.273 norandommap=1 00:11:19.273 numjobs=1 00:11:19.273 00:11:19.273 [job0] 00:11:19.273 filename=/dev/nvme0n1 00:11:19.273 [job1] 00:11:19.273 filename=/dev/nvme0n2 00:11:19.273 [job2] 00:11:19.273 filename=/dev/nvme0n3 00:11:19.273 [job3] 00:11:19.273 filename=/dev/nvme0n4 00:11:19.273 Could not set queue depth (nvme0n1) 00:11:19.273 Could not set queue depth (nvme0n2) 00:11:19.273 Could not set queue depth (nvme0n3) 00:11:19.273 Could not set queue depth (nvme0n4) 00:11:19.273 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:19.273 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:19.274 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:19.274 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:19.274 fio-3.35 00:11:19.274 Starting 4 threads 00:11:22.549 00:48:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:11:22.549 00:48:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:11:22.549 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=18522112, buflen=4096 00:11:22.549 fio: pid=2885655, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:11:22.805 00:48:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:22.805 00:48:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:11:22.805 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=32337920, buflen=4096 00:11:22.805 fio: pid=2885654, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:11:23.063 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=37294080, buflen=4096 00:11:23.063 fio: pid=2885652, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:11:23.063 00:48:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:23.063 00:48:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:11:23.321 fio: io_u error on file /dev/nvme0n2: Input/output error: read offset=1576960, buflen=4096 00:11:23.321 fio: pid=2885653, err=5/file:io_u.c:1889, func=io_u 
error, error=Input/output error 00:11:23.321 00:48:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:23.321 00:48:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:11:23.321 00:11:23.321 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2885652: Fri Dec 6 00:48:55 2024 00:11:23.321 read: IOPS=2597, BW=10.1MiB/s (10.6MB/s)(35.6MiB/3506msec) 00:11:23.321 slat (usec): min=3, max=16803, avg=22.52, stdev=274.98 00:11:23.321 clat (usec): min=206, max=40911, avg=356.09, stdev=446.89 00:11:23.321 lat (usec): min=210, max=40917, avg=378.61, stdev=525.63 00:11:23.321 clat percentiles (usec): 00:11:23.321 | 1.00th=[ 215], 5.00th=[ 225], 10.00th=[ 233], 20.00th=[ 243], 00:11:23.321 | 30.00th=[ 262], 40.00th=[ 302], 50.00th=[ 363], 60.00th=[ 388], 00:11:23.321 | 70.00th=[ 416], 80.00th=[ 437], 90.00th=[ 465], 95.00th=[ 506], 00:11:23.321 | 99.00th=[ 611], 99.50th=[ 644], 99.90th=[ 734], 99.95th=[ 889], 00:11:23.321 | 99.99th=[41157] 00:11:23.321 bw ( KiB/s): min= 8600, max=13856, per=44.19%, avg=10022.67, stdev=1945.48, samples=6 00:11:23.321 iops : min= 2150, max= 3464, avg=2505.67, stdev=486.37, samples=6 00:11:23.321 lat (usec) : 250=25.43%, 500=69.24%, 750=5.23%, 1000=0.05% 00:11:23.321 lat (msec) : 10=0.02%, 50=0.01% 00:11:23.321 cpu : usr=1.97%, sys=4.88%, ctx=9112, majf=0, minf=1 00:11:23.321 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:23.321 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:23.321 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:23.321 issued rwts: total=9106,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:23.321 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:23.321 job1: (groupid=0, jobs=1): err= 5 (file:io_u.c:1889, func=io_u error, error=Input/output error): pid=2885653: Fri Dec 6 00:48:55 2024 00:11:23.321 read: IOPS=99, BW=399KiB/s (408kB/s)(1540KiB/3864msec) 00:11:23.321 slat (usec): min=8, max=11869, avg=114.29, stdev=907.16 00:11:23.321 clat (usec): min=287, max=42402, avg=9913.94, stdev=17090.86 00:11:23.321 lat (usec): min=306, max=50973, avg=10010.58, stdev=17165.86 00:11:23.321 clat percentiles (usec): 00:11:23.321 | 1.00th=[ 343], 5.00th=[ 404], 10.00th=[ 433], 20.00th=[ 519], 00:11:23.321 | 30.00th=[ 545], 40.00th=[ 562], 50.00th=[ 578], 60.00th=[ 594], 00:11:23.321 | 70.00th=[ 619], 80.00th=[40633], 90.00th=[41157], 95.00th=[41157], 00:11:23.321 | 99.00th=[41681], 99.50th=[41681], 99.90th=[42206], 99.95th=[42206], 00:11:23.321 | 99.99th=[42206] 00:11:23.321 bw ( KiB/s): min= 165, max= 896, per=1.89%, avg=428.14, stdev=250.87, samples=7 00:11:23.321 iops : min= 41, max= 224, avg=107.00, stdev=62.76, samples=7 00:11:23.321 lat (usec) : 500=17.36%, 750=57.51%, 1000=1.55% 00:11:23.321 lat (msec) : 4=0.26%, 50=23.06% 00:11:23.321 cpu : usr=0.26%, sys=0.44%, ctx=389, majf=0, minf=1 00:11:23.321 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:23.321 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:23.321 complete : 0=0.3%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:23.321 issued rwts: total=386,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:23.321 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:23.321 job2: 
(groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2885654: Fri Dec 6 00:48:55 2024 00:11:23.322 read: IOPS=2448, BW=9792KiB/s (10.0MB/s)(30.8MiB/3225msec) 00:11:23.322 slat (nsec): min=4368, max=70540, avg=16321.77, stdev=9583.92 00:11:23.322 clat (usec): min=231, max=42106, avg=385.29, stdev=1659.70 00:11:23.322 lat (usec): min=235, max=42117, avg=401.61, stdev=1659.96 00:11:23.322 clat percentiles (usec): 00:11:23.322 | 1.00th=[ 249], 5.00th=[ 262], 10.00th=[ 269], 20.00th=[ 281], 00:11:23.322 | 30.00th=[ 289], 40.00th=[ 297], 50.00th=[ 306], 60.00th=[ 314], 00:11:23.322 | 70.00th=[ 330], 80.00th=[ 351], 90.00th=[ 375], 95.00th=[ 408], 00:11:23.322 | 99.00th=[ 537], 99.50th=[ 619], 99.90th=[41157], 99.95th=[41157], 00:11:23.322 | 99.99th=[42206] 00:11:23.322 bw ( KiB/s): min= 176, max=12536, per=42.48%, avg=9634.67, stdev=4715.02, samples=6 00:11:23.322 iops : min= 44, max= 3134, avg=2408.67, stdev=1178.76, samples=6 00:11:23.322 lat (usec) : 250=1.13%, 500=97.34%, 750=1.28%, 1000=0.05% 00:11:23.322 lat (msec) : 2=0.01%, 4=0.01%, 50=0.16% 00:11:23.322 cpu : usr=1.67%, sys=4.59%, ctx=7896, majf=0, minf=2 00:11:23.322 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:23.322 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:23.322 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:23.322 issued rwts: total=7896,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:23.322 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:23.322 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2885655: Fri Dec 6 00:48:55 2024 00:11:23.322 read: IOPS=1539, BW=6157KiB/s (6304kB/s)(17.7MiB/2938msec) 00:11:23.322 slat (nsec): min=4392, max=71446, avg=21306.23, stdev=11611.95 00:11:23.322 clat (usec): min=231, max=42296, avg=618.18, stdev=2842.75 00:11:23.322 lat (usec): min=237, max=42311, avg=639.49, stdev=2842.64 00:11:23.322 clat percentiles (usec): 00:11:23.322 | 1.00th=[ 251], 5.00th=[ 293], 10.00th=[ 343], 20.00th=[ 371], 00:11:23.322 | 30.00th=[ 388], 40.00th=[ 404], 50.00th=[ 416], 60.00th=[ 429], 00:11:23.322 | 70.00th=[ 441], 80.00th=[ 461], 90.00th=[ 515], 95.00th=[ 578], 00:11:23.322 | 99.00th=[ 693], 99.50th=[ 938], 99.90th=[42206], 99.95th=[42206], 00:11:23.322 | 99.99th=[42206] 00:11:23.322 bw ( KiB/s): min= 200, max= 7992, per=27.00%, avg=6124.80, stdev=3321.54, samples=5 00:11:23.322 iops : min= 50, max= 1998, avg=1531.20, stdev=830.38, samples=5 00:11:23.322 lat (usec) : 250=0.86%, 500=88.35%, 750=10.06%, 1000=0.22% 00:11:23.322 lat (msec) : 50=0.49% 00:11:23.322 cpu : usr=1.29%, sys=3.78%, ctx=4523, majf=0, minf=2 00:11:23.322 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:23.322 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:23.322 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:23.322 issued rwts: total=4523,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:23.322 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:23.322 00:11:23.322 Run status group 0 (all jobs): 00:11:23.322 READ: bw=22.1MiB/s (23.2MB/s), 399KiB/s-10.1MiB/s (408kB/s-10.6MB/s), io=85.6MiB (89.7MB), run=2938-3864msec 00:11:23.322 00:11:23.322 Disk stats (read/write): 00:11:23.322 nvme0n1: ios=8727/0, merge=0/0, ticks=3230/0, in_queue=3230, util=98.88% 00:11:23.322 nvme0n2: ios=385/0, merge=0/0, ticks=3809/0, in_queue=3809, util=96.23% 
00:11:23.322 nvme0n3: ios=7572/0, merge=0/0, ticks=2885/0, in_queue=2885, util=96.82% 00:11:23.322 nvme0n4: ios=4520/0, merge=0/0, ticks=2557/0, in_queue=2557, util=96.75% 00:11:23.888 00:48:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:23.888 00:48:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:11:24.145 00:48:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:24.145 00:48:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:11:24.404 00:48:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:24.404 00:48:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:11:24.662 00:48:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:24.662 00:48:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:11:24.919 00:48:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:11:24.919 00:48:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 2885558 00:11:24.919 00:48:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:11:24.920 00:48:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:25.853 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:25.853 00:48:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:25.854 00:48:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:11:25.854 00:48:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:25.854 00:48:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:25.854 00:48:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:25.854 00:48:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:25.854 00:48:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:11:25.854 00:48:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:11:25.854 00:48:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:11:25.854 nvmf hotplug test: fio failed as expected 00:11:25.854 00:48:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:26.112 00:48:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f 
./local-job0-0-verify.state 00:11:26.112 00:48:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:11:26.112 00:48:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:11:26.112 00:48:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:11:26.112 00:48:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:11:26.112 00:48:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:26.112 00:48:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:11:26.112 00:48:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:26.112 00:48:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:11:26.112 00:48:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:26.112 00:48:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:26.112 rmmod nvme_tcp 00:11:26.112 rmmod nvme_fabrics 00:11:26.112 rmmod nvme_keyring 00:11:26.112 00:48:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:26.112 00:48:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:11:26.112 00:48:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:11:26.112 00:48:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 2883391 ']' 00:11:26.112 00:48:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 2883391 00:11:26.112 00:48:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 2883391 ']' 00:11:26.112 00:48:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 2883391 00:11:26.112 00:48:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:11:26.112 00:48:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:26.112 00:48:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2883391 00:11:26.370 00:48:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:26.371 00:48:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:26.371 00:48:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2883391' 00:11:26.371 killing process with pid 2883391 00:11:26.371 00:48:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 2883391 00:11:26.371 00:48:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 2883391 00:11:27.367 00:48:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:27.367 00:48:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:27.367 00:48:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:27.367 00:48:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:11:27.367 00:48:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 
00:11:27.367 00:48:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:27.367 00:48:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:11:27.367 00:48:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:27.367 00:48:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:27.367 00:48:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:27.367 00:48:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:27.367 00:48:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:29.899 00:49:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:29.900 00:11:29.900 real 0m27.311s 00:11:29.900 user 1m35.578s 00:11:29.900 sys 0m7.309s 00:11:29.900 00:49:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:29.900 00:49:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:29.900 ************************************ 00:11:29.900 END TEST nvmf_fio_target 00:11:29.900 ************************************ 00:11:29.900 00:49:02 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:11:29.900 00:49:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:29.900 00:49:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:29.900 00:49:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:29.900 ************************************ 00:11:29.900 START TEST nvmf_bdevio 00:11:29.900 ************************************ 00:11:29.900 00:49:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:11:29.900 * Looking for test storage... 
00:11:29.900 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:29.900 00:49:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:29.900 00:49:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lcov --version 00:11:29.900 00:49:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:29.900 00:49:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:29.900 00:49:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:29.900 00:49:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:29.900 00:49:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:29.900 00:49:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:11:29.900 00:49:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:11:29.900 00:49:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:11:29.900 00:49:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:11:29.900 00:49:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:11:29.900 00:49:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:11:29.900 00:49:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:11:29.900 00:49:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:29.900 00:49:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:11:29.900 00:49:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:11:29.900 00:49:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:29.900 00:49:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:29.900 00:49:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:11:29.900 00:49:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:11:29.900 00:49:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:29.900 00:49:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:11:29.900 00:49:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:11:29.900 00:49:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:11:29.900 00:49:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:11:29.900 00:49:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:29.900 00:49:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:11:29.900 00:49:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:11:29.900 00:49:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:29.900 00:49:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:29.900 00:49:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:11:29.900 00:49:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:29.900 00:49:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:29.900 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:29.900 --rc genhtml_branch_coverage=1 00:11:29.900 --rc genhtml_function_coverage=1 00:11:29.900 --rc genhtml_legend=1 00:11:29.900 --rc geninfo_all_blocks=1 00:11:29.900 --rc geninfo_unexecuted_blocks=1 00:11:29.900 00:11:29.900 ' 00:11:29.900 00:49:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:29.900 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:29.900 --rc genhtml_branch_coverage=1 00:11:29.900 --rc genhtml_function_coverage=1 00:11:29.900 --rc genhtml_legend=1 00:11:29.900 --rc geninfo_all_blocks=1 00:11:29.900 --rc geninfo_unexecuted_blocks=1 00:11:29.900 00:11:29.900 ' 00:11:29.900 00:49:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:29.900 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:29.900 --rc genhtml_branch_coverage=1 00:11:29.900 --rc genhtml_function_coverage=1 00:11:29.900 --rc genhtml_legend=1 00:11:29.900 --rc geninfo_all_blocks=1 00:11:29.900 --rc geninfo_unexecuted_blocks=1 00:11:29.900 00:11:29.900 ' 00:11:29.900 00:49:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:29.900 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:29.900 --rc genhtml_branch_coverage=1 00:11:29.900 --rc genhtml_function_coverage=1 00:11:29.900 --rc genhtml_legend=1 00:11:29.900 --rc geninfo_all_blocks=1 00:11:29.900 --rc geninfo_unexecuted_blocks=1 00:11:29.900 00:11:29.900 ' 00:11:29.900 00:49:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:29.900 00:49:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:11:29.900 00:49:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:29.900 00:49:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:29.900 00:49:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:29.900 00:49:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:29.900 00:49:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:29.900 00:49:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:29.900 00:49:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:29.900 00:49:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:29.900 00:49:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:29.900 00:49:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:29.900 00:49:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:29.900 00:49:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:29.900 00:49:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:29.900 00:49:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:29.900 00:49:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:29.900 00:49:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:29.900 00:49:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:29.900 00:49:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:11:29.900 00:49:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:29.900 00:49:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:29.900 00:49:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:29.900 00:49:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:29.900 00:49:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:29.900 00:49:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:29.900 00:49:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:11:29.900 00:49:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:29.901 00:49:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:11:29.901 00:49:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:29.901 00:49:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:29.901 00:49:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:29.901 00:49:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:29.901 00:49:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:29.901 00:49:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:29.901 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:29.901 00:49:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:29.901 00:49:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:29.901 00:49:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:29.901 00:49:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:29.901 00:49:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:29.901 00:49:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 
-- # nvmftestinit 00:11:29.901 00:49:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:29.901 00:49:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:29.901 00:49:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:29.901 00:49:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:29.901 00:49:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:29.901 00:49:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:29.901 00:49:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:29.901 00:49:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:29.901 00:49:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:29.901 00:49:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:29.901 00:49:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:11:29.901 00:49:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:31.806 00:49:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:31.806 00:49:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:11:31.806 00:49:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:31.806 00:49:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:31.806 00:49:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:31.806 00:49:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:31.806 00:49:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:31.806 00:49:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:11:31.806 00:49:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:31.806 00:49:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:11:31.806 00:49:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:11:31.806 00:49:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:11:31.806 00:49:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:11:31.806 00:49:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:11:31.806 00:49:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:11:31.806 00:49:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:31.806 00:49:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:31.806 00:49:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:31.806 00:49:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:31.806 00:49:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@332 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:31.806 00:49:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:31.806 00:49:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:31.807 00:49:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:31.807 00:49:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:31.807 00:49:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:31.807 00:49:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:31.807 00:49:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:31.807 00:49:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:31.807 00:49:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:31.807 00:49:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:31.807 00:49:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:31.807 00:49:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:31.807 00:49:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:31.807 00:49:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:31.807 00:49:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:11:31.807 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:11:31.807 00:49:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:31.807 00:49:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:31.807 00:49:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:31.807 00:49:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:31.807 00:49:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:31.807 00:49:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:31.807 00:49:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:11:31.807 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:11:31.807 00:49:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:31.807 00:49:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:31.807 00:49:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:31.807 00:49:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:31.807 00:49:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:31.807 00:49:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:31.807 00:49:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:31.807 00:49:04 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:31.807 00:49:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:31.807 00:49:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:31.807 00:49:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:31.807 00:49:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:31.807 00:49:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:31.807 00:49:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:31.807 00:49:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:31.807 00:49:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:11:31.807 Found net devices under 0000:0a:00.0: cvl_0_0 00:11:31.807 00:49:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:31.807 00:49:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:31.807 00:49:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:31.807 00:49:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:31.807 00:49:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:31.807 00:49:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:31.807 00:49:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:31.807 00:49:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:31.807 00:49:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:11:31.807 Found net devices under 0000:0a:00.1: cvl_0_1 00:11:31.807 00:49:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:31.807 00:49:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:31.807 00:49:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:11:31.807 00:49:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:31.807 00:49:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:31.807 00:49:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:31.807 00:49:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:31.807 00:49:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:31.807 00:49:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:31.807 00:49:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:31.807 00:49:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:31.807 00:49:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:31.807 
00:49:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:31.807 00:49:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:31.807 00:49:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:31.807 00:49:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:31.807 00:49:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:31.807 00:49:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:31.807 00:49:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:31.807 00:49:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:31.807 00:49:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:31.807 00:49:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:31.807 00:49:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:31.807 00:49:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:31.807 00:49:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:31.807 00:49:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:31.807 00:49:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:31.807 00:49:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:31.807 00:49:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:31.807 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:31.807 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.247 ms 00:11:31.807 00:11:31.807 --- 10.0.0.2 ping statistics --- 00:11:31.807 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:31.807 rtt min/avg/max/mdev = 0.247/0.247/0.247/0.000 ms 00:11:31.807 00:49:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:31.807 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:31.807 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.086 ms 00:11:31.807 00:11:31.807 --- 10.0.0.1 ping statistics --- 00:11:31.807 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:31.807 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms 00:11:31.807 00:49:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:31.807 00:49:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:11:31.807 00:49:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:31.807 00:49:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:31.807 00:49:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:31.807 00:49:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:31.807 00:49:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:31.807 00:49:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:31.807 00:49:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:31.807 00:49:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:11:31.807 00:49:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:31.807 00:49:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:31.807 00:49:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:31.807 00:49:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=2888559 00:11:31.807 00:49:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 2888559 00:11:31.807 00:49:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 2888559 ']' 00:11:31.807 00:49:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:11:31.807 00:49:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:31.807 00:49:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:31.807 00:49:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:31.807 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:31.807 00:49:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:31.807 00:49:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:31.807 [2024-12-06 00:49:04.459376] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 24.03.0 initialization... 
00:11:31.808 [2024-12-06 00:49:04.459511] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:32.066 [2024-12-06 00:49:04.613720] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:32.066 [2024-12-06 00:49:04.760840] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:32.066 [2024-12-06 00:49:04.760944] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:32.066 [2024-12-06 00:49:04.760972] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:32.066 [2024-12-06 00:49:04.760998] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:32.066 [2024-12-06 00:49:04.761018] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:32.066 [2024-12-06 00:49:04.764138] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:11:32.066 [2024-12-06 00:49:04.764200] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:11:32.066 [2024-12-06 00:49:04.764255] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:32.066 [2024-12-06 00:49:04.764264] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:11:33.005 00:49:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:33.006 00:49:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:11:33.006 00:49:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:33.006 00:49:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:33.006 00:49:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:33.006 00:49:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:33.006 00:49:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:33.006 00:49:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.006 00:49:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:33.006 [2024-12-06 00:49:05.475028] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:33.006 00:49:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.006 00:49:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:33.006 00:49:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.006 00:49:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:33.006 Malloc0 00:11:33.006 00:49:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.006 00:49:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:33.006 00:49:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.006 00:49:05 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:33.006 00:49:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.006 00:49:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:33.006 00:49:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.006 00:49:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:33.006 00:49:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.006 00:49:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:33.006 00:49:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.006 00:49:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:33.006 [2024-12-06 00:49:05.591853] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:33.006 00:49:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.006 00:49:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:11:33.006 00:49:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:11:33.006 00:49:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:11:33.006 00:49:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:11:33.006 00:49:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:11:33.006 00:49:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:11:33.006 { 00:11:33.006 "params": { 00:11:33.006 "name": "Nvme$subsystem", 00:11:33.006 "trtype": "$TEST_TRANSPORT", 00:11:33.006 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:33.006 "adrfam": "ipv4", 00:11:33.006 "trsvcid": "$NVMF_PORT", 00:11:33.006 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:33.006 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:33.006 "hdgst": ${hdgst:-false}, 00:11:33.006 "ddgst": ${ddgst:-false} 00:11:33.006 }, 00:11:33.006 "method": "bdev_nvme_attach_controller" 00:11:33.006 } 00:11:33.006 EOF 00:11:33.006 )") 00:11:33.006 00:49:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:11:33.006 00:49:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 00:11:33.006 00:49:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:11:33.006 00:49:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:11:33.006 "params": { 00:11:33.006 "name": "Nvme1", 00:11:33.006 "trtype": "tcp", 00:11:33.006 "traddr": "10.0.0.2", 00:11:33.006 "adrfam": "ipv4", 00:11:33.006 "trsvcid": "4420", 00:11:33.006 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:33.006 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:33.006 "hdgst": false, 00:11:33.006 "ddgst": false 00:11:33.006 }, 00:11:33.006 "method": "bdev_nvme_attach_controller" 00:11:33.006 }' 00:11:33.006 [2024-12-06 00:49:05.677661] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 24.03.0 initialization... 
00:11:33.006 [2024-12-06 00:49:05.677802] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2888714 ] 00:11:33.265 [2024-12-06 00:49:05.815367] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:33.265 [2024-12-06 00:49:05.950634] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:33.265 [2024-12-06 00:49:05.950683] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:33.265 [2024-12-06 00:49:05.950687] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:33.829 I/O targets: 00:11:33.829 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:11:33.829 00:11:33.829 00:11:33.829 CUnit - A unit testing framework for C - Version 2.1-3 00:11:33.829 http://cunit.sourceforge.net/ 00:11:33.829 00:11:33.829 00:11:33.829 Suite: bdevio tests on: Nvme1n1 00:11:33.829 Test: blockdev write read block ...passed 00:11:33.829 Test: blockdev write zeroes read block ...passed 00:11:33.829 Test: blockdev write zeroes read no split ...passed 00:11:34.087 Test: blockdev write zeroes read split ...passed 00:11:34.087 Test: blockdev write zeroes read split partial ...passed 00:11:34.087 Test: blockdev reset ...[2024-12-06 00:49:06.633717] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:11:34.087 [2024-12-06 00:49:06.633933] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2f00 (9): Bad file descriptor 00:11:34.087 [2024-12-06 00:49:06.746212] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:11:34.087 passed 00:11:34.087 Test: blockdev write read 8 blocks ...passed 00:11:34.345 Test: blockdev write read size > 128k ...passed 00:11:34.345 Test: blockdev write read invalid size ...passed 00:11:34.345 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:34.345 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:34.345 Test: blockdev write read max offset ...passed 00:11:34.345 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:34.345 Test: blockdev writev readv 8 blocks ...passed 00:11:34.345 Test: blockdev writev readv 30 x 1block ...passed 00:11:34.345 Test: blockdev writev readv block ...passed 00:11:34.345 Test: blockdev writev readv size > 128k ...passed 00:11:34.345 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:34.345 Test: blockdev comparev and writev ...[2024-12-06 00:49:07.004972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:34.345 [2024-12-06 00:49:07.005050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:11:34.345 [2024-12-06 00:49:07.005090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:34.345 [2024-12-06 00:49:07.005117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:11:34.345 [2024-12-06 00:49:07.005578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:34.345 [2024-12-06 00:49:07.005611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:11:34.345 [2024-12-06 00:49:07.005647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:34.345 [2024-12-06 00:49:07.005672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:11:34.345 [2024-12-06 00:49:07.006116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:34.345 [2024-12-06 00:49:07.006187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:11:34.345 [2024-12-06 00:49:07.006237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:34.345 [2024-12-06 00:49:07.006265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:11:34.345 [2024-12-06 00:49:07.006696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:34.345 [2024-12-06 00:49:07.006730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:11:34.345 [2024-12-06 00:49:07.006769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:34.345 [2024-12-06 00:49:07.006796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:11:34.345 passed 00:11:34.603 Test: blockdev nvme passthru rw ...passed 00:11:34.603 Test: blockdev nvme passthru vendor specific ...[2024-12-06 00:49:07.089261] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:34.603 [2024-12-06 00:49:07.089323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:11:34.603 [2024-12-06 00:49:07.089571] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:34.603 [2024-12-06 00:49:07.089604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:11:34.603 [2024-12-06 00:49:07.089802] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:34.604 [2024-12-06 00:49:07.089849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:11:34.604 [2024-12-06 00:49:07.090060] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:34.604 [2024-12-06 00:49:07.090092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:11:34.604 passed 00:11:34.604 Test: blockdev nvme admin passthru ...passed 00:11:34.604 Test: blockdev copy ...passed 00:11:34.604 00:11:34.604 Run Summary: Type Total Ran Passed Failed Inactive 00:11:34.604 suites 1 1 n/a 0 0 00:11:34.604 tests 23 23 23 0 0 00:11:34.604 asserts 152 152 152 0 n/a 00:11:34.604 00:11:34.604 Elapsed time = 1.556 seconds 00:11:35.538 00:49:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:35.538 00:49:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.538 00:49:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:35.538 00:49:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.538 00:49:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:11:35.538 00:49:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:11:35.538 00:49:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:35.538 00:49:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:11:35.538 00:49:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:35.538 00:49:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:11:35.538 00:49:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:35.538 00:49:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:35.538 rmmod nvme_tcp 00:11:35.538 rmmod nvme_fabrics 00:11:35.538 rmmod nvme_keyring 00:11:35.538 00:49:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:35.538 00:49:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:11:35.538 00:49:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 
00:11:35.538 00:49:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 2888559 ']' 00:11:35.538 00:49:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 2888559 00:11:35.538 00:49:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 2888559 ']' 00:11:35.538 00:49:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 2888559 00:11:35.538 00:49:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:11:35.538 00:49:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:35.538 00:49:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2888559 00:11:35.538 00:49:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:11:35.538 00:49:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:11:35.538 00:49:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2888559' 00:11:35.538 killing process with pid 2888559 00:11:35.538 00:49:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 2888559 00:11:35.538 00:49:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 2888559 00:11:36.913 00:49:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:36.913 00:49:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:36.913 00:49:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:36.913 00:49:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:11:36.913 00:49:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:11:36.913 00:49:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:11:36.913 00:49:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:36.913 00:49:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:36.913 00:49:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:36.913 00:49:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:36.913 00:49:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:36.913 00:49:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:38.814 00:49:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:38.815 00:11:38.815 real 0m9.380s 00:11:38.815 user 0m22.973s 00:11:38.815 sys 0m2.442s 00:11:38.815 00:49:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:38.815 00:49:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:38.815 ************************************ 00:11:38.815 END TEST nvmf_bdevio 00:11:38.815 ************************************ 00:11:38.815 00:49:11 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:11:38.815 00:11:38.815 real 4m32.715s 00:11:38.815 user 11m58.353s 00:11:38.815 sys 1m10.418s 
00:11:38.815 00:49:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:38.815 00:49:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:38.815 ************************************ 00:11:38.815 END TEST nvmf_target_core 00:11:38.815 ************************************ 00:11:38.815 00:49:11 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:11:38.815 00:49:11 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:38.815 00:49:11 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:38.815 00:49:11 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:39.073 ************************************ 00:11:39.073 START TEST nvmf_target_extra 00:11:39.073 ************************************ 00:11:39.073 00:49:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:11:39.073 * Looking for test storage... 00:11:39.073 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:11:39.073 00:49:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:39.073 00:49:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1711 -- # lcov --version 00:11:39.073 00:49:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:39.073 00:49:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:39.073 00:49:11 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:39.073 00:49:11 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:39.073 00:49:11 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:39.073 00:49:11 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:11:39.073 00:49:11 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:11:39.073 00:49:11 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:11:39.073 00:49:11 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:11:39.073 00:49:11 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:11:39.073 00:49:11 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:11:39.073 00:49:11 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:11:39.073 00:49:11 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:39.073 00:49:11 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 00:11:39.073 00:49:11 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:11:39.074 00:49:11 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:39.074 00:49:11 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:39.074 00:49:11 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:11:39.074 00:49:11 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:11:39.074 00:49:11 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:39.074 00:49:11 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:11:39.074 00:49:11 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:11:39.074 00:49:11 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:11:39.074 00:49:11 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:11:39.074 00:49:11 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:39.074 00:49:11 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:11:39.074 00:49:11 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:11:39.074 00:49:11 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:39.074 00:49:11 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:39.074 00:49:11 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:11:39.074 00:49:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:39.074 00:49:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:39.074 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:39.074 --rc genhtml_branch_coverage=1 00:11:39.074 --rc genhtml_function_coverage=1 00:11:39.074 --rc genhtml_legend=1 00:11:39.074 --rc geninfo_all_blocks=1 00:11:39.074 --rc geninfo_unexecuted_blocks=1 00:11:39.074 00:11:39.074 ' 00:11:39.074 00:49:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:39.074 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:39.074 --rc genhtml_branch_coverage=1 00:11:39.074 --rc genhtml_function_coverage=1 00:11:39.074 --rc genhtml_legend=1 00:11:39.074 --rc geninfo_all_blocks=1 00:11:39.074 --rc geninfo_unexecuted_blocks=1 00:11:39.074 00:11:39.074 ' 00:11:39.074 00:49:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:39.074 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:39.074 --rc genhtml_branch_coverage=1 00:11:39.074 --rc genhtml_function_coverage=1 00:11:39.074 --rc genhtml_legend=1 00:11:39.074 --rc geninfo_all_blocks=1 00:11:39.074 --rc geninfo_unexecuted_blocks=1 00:11:39.074 00:11:39.074 ' 00:11:39.074 00:49:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:39.074 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:39.074 --rc genhtml_branch_coverage=1 00:11:39.074 --rc genhtml_function_coverage=1 00:11:39.074 --rc genhtml_legend=1 00:11:39.074 --rc geninfo_all_blocks=1 00:11:39.074 --rc geninfo_unexecuted_blocks=1 00:11:39.074 00:11:39.074 ' 00:11:39.074 00:49:11 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:39.074 00:49:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:11:39.074 00:49:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:39.074 00:49:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:39.074 00:49:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 
00:11:39.074 00:49:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:39.074 00:49:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:39.074 00:49:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:39.074 00:49:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:39.074 00:49:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:39.074 00:49:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:39.074 00:49:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:39.074 00:49:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:39.074 00:49:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:39.074 00:49:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:39.074 00:49:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:39.074 00:49:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:39.074 00:49:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:39.074 00:49:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:39.074 00:49:11 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:11:39.074 00:49:11 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:39.074 00:49:11 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:39.074 00:49:11 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:39.074 00:49:11 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:39.074 00:49:11 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:39.074 00:49:11 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:39.074 00:49:11 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:11:39.074 00:49:11 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:39.074 00:49:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:11:39.074 00:49:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:39.074 00:49:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:39.074 00:49:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:39.074 00:49:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:39.074 00:49:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:39.074 00:49:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:39.074 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:39.074 00:49:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:39.074 00:49:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:39.074 00:49:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:39.074 00:49:11 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:11:39.074 00:49:11 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:11:39.074 00:49:11 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:11:39.074 00:49:11 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:11:39.074 00:49:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:39.074 00:49:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:39.074 00:49:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:39.074 ************************************ 00:11:39.074 START TEST nvmf_example 00:11:39.074 ************************************ 00:11:39.074 00:49:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:11:39.333 * Looking for test storage... 
00:11:39.333 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:39.333 00:49:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:39.333 00:49:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1711 -- # lcov --version 00:11:39.333 00:49:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:39.333 00:49:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:39.334 00:49:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:39.334 00:49:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:39.334 00:49:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:39.334 00:49:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # IFS=.-: 00:11:39.334 00:49:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # read -ra ver1 00:11:39.334 00:49:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # IFS=.-: 00:11:39.334 00:49:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # read -ra ver2 00:11:39.334 00:49:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@338 -- # local 'op=<' 00:11:39.334 00:49:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@340 -- # ver1_l=2 00:11:39.334 00:49:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@341 -- # ver2_l=1 00:11:39.334 00:49:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:39.334 00:49:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@344 -- # case "$op" in 00:11:39.334 00:49:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@345 -- # : 1 00:11:39.334 00:49:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:39.334 00:49:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:39.334 00:49:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # decimal 1 00:11:39.334 00:49:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=1 00:11:39.334 00:49:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:39.334 00:49:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 1 00:11:39.334 00:49:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # ver1[v]=1 00:11:39.334 00:49:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # decimal 2 00:11:39.334 00:49:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=2 00:11:39.334 00:49:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:39.334 00:49:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 2 00:11:39.334 00:49:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # ver2[v]=2 00:11:39.334 00:49:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:39.334 00:49:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:39.334 00:49:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # return 0 00:11:39.334 00:49:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:39.334 00:49:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:39.334 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:39.334 --rc genhtml_branch_coverage=1 00:11:39.334 --rc genhtml_function_coverage=1 00:11:39.334 --rc genhtml_legend=1 00:11:39.334 --rc geninfo_all_blocks=1 00:11:39.334 --rc geninfo_unexecuted_blocks=1 00:11:39.334 00:11:39.334 ' 00:11:39.334 00:49:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:39.334 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:39.334 --rc genhtml_branch_coverage=1 00:11:39.334 --rc genhtml_function_coverage=1 00:11:39.334 --rc genhtml_legend=1 00:11:39.334 --rc geninfo_all_blocks=1 00:11:39.334 --rc geninfo_unexecuted_blocks=1 00:11:39.334 00:11:39.334 ' 00:11:39.334 00:49:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:39.334 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:39.334 --rc genhtml_branch_coverage=1 00:11:39.334 --rc genhtml_function_coverage=1 00:11:39.334 --rc genhtml_legend=1 00:11:39.334 --rc geninfo_all_blocks=1 00:11:39.334 --rc geninfo_unexecuted_blocks=1 00:11:39.334 00:11:39.334 ' 00:11:39.334 00:49:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:39.334 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:39.334 --rc genhtml_branch_coverage=1 00:11:39.334 --rc genhtml_function_coverage=1 00:11:39.334 --rc genhtml_legend=1 00:11:39.334 --rc geninfo_all_blocks=1 00:11:39.334 --rc geninfo_unexecuted_blocks=1 00:11:39.334 00:11:39.334 ' 00:11:39.334 00:49:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:39.334 00:49:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:11:39.334 00:49:11 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:39.334 00:49:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:39.334 00:49:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:39.334 00:49:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:39.334 00:49:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:39.334 00:49:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:39.334 00:49:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:39.334 00:49:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:39.334 00:49:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:39.334 00:49:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:39.334 00:49:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:39.334 00:49:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:39.334 00:49:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:39.334 00:49:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:39.334 00:49:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:39.334 00:49:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:39.334 00:49:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:39.334 00:49:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@15 -- # shopt -s extglob 00:11:39.334 00:49:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:39.334 00:49:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:39.334 00:49:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:39.334 00:49:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:39.334 00:49:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:39.334 00:49:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:39.334 00:49:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:11:39.334 00:49:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:39.334 00:49:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # : 0 00:11:39.334 00:49:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:39.334 00:49:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:39.334 00:49:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:39.334 00:49:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:39.334 00:49:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:39.334 00:49:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:39.334 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:39.334 00:49:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:39.334 00:49:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:39.334 00:49:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:39.334 00:49:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:11:39.334 00:49:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:11:39.334 00:49:11 
nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:11:39.334 00:49:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:11:39.334 00:49:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:11:39.334 00:49:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:11:39.334 00:49:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:11:39.334 00:49:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:11:39.334 00:49:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:39.334 00:49:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:39.334 00:49:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:11:39.334 00:49:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:39.334 00:49:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:39.334 00:49:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:39.334 00:49:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:39.334 00:49:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:39.334 00:49:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:39.334 00:49:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:39.334 00:49:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:39.334 00:49:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:39.334 00:49:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:39.334 00:49:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@309 -- # xtrace_disable 00:11:39.334 00:49:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:41.234 00:49:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:41.234 00:49:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # pci_devs=() 00:11:41.234 00:49:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:41.234 00:49:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:41.234 00:49:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:41.234 00:49:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:41.234 00:49:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:41.234 00:49:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # net_devs=() 00:11:41.234 00:49:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:41.234 00:49:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # e810=() 00:11:41.234 00:49:13 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # local -ga e810 00:11:41.234 00:49:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # x722=() 00:11:41.234 00:49:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # local -ga x722 00:11:41.234 00:49:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # mlx=() 00:11:41.234 00:49:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # local -ga mlx 00:11:41.234 00:49:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:41.234 00:49:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:41.234 00:49:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:41.234 00:49:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:41.234 00:49:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:41.234 00:49:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:41.234 00:49:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:41.234 00:49:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:41.234 00:49:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:41.234 00:49:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:41.234 00:49:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:41.234 00:49:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:41.234 00:49:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:41.234 00:49:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:41.234 00:49:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:41.234 00:49:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:41.234 00:49:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:41.234 00:49:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:41.234 00:49:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:41.234 00:49:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:11:41.234 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:11:41.234 00:49:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:41.234 00:49:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:41.234 00:49:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:41.234 00:49:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:41.234 00:49:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:41.234 00:49:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:41.234 00:49:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:11:41.234 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:11:41.234 00:49:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:41.234 00:49:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:41.234 00:49:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:41.234 00:49:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:41.234 00:49:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:41.234 00:49:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:41.234 00:49:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:41.234 00:49:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:41.234 00:49:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:41.234 00:49:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:41.234 00:49:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:41.234 00:49:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:41.234 00:49:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:41.234 00:49:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:41.234 00:49:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:41.234 00:49:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:11:41.234 Found net devices under 0000:0a:00.0: cvl_0_0 00:11:41.235 00:49:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:41.235 00:49:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:41.235 00:49:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:41.235 00:49:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:41.235 00:49:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:41.235 00:49:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:41.235 00:49:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:41.235 00:49:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:41.235 00:49:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:11:41.235 Found net devices under 0000:0a:00.1: cvl_0_1 00:11:41.235 00:49:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:41.235 00:49:13 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:41.235 00:49:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # is_hw=yes 00:11:41.235 00:49:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:41.235 00:49:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:41.235 00:49:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:41.235 00:49:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:41.235 00:49:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:41.235 00:49:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:41.235 00:49:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:41.235 00:49:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:41.235 00:49:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:41.235 00:49:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:41.235 00:49:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:41.235 00:49:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:41.235 00:49:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:41.235 00:49:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:41.235 00:49:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:41.235 00:49:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:41.235 00:49:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:41.235 00:49:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:41.235 00:49:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:41.235 00:49:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:41.235 00:49:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:41.235 00:49:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:41.235 00:49:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:41.492 00:49:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:41.492 00:49:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:41.493 00:49:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:41.493 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:41.493 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.220 ms 00:11:41.493 00:11:41.493 --- 10.0.0.2 ping statistics --- 00:11:41.493 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:41.493 rtt min/avg/max/mdev = 0.220/0.220/0.220/0.000 ms 00:11:41.493 00:49:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:41.493 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:41.493 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.076 ms 00:11:41.493 00:11:41.493 --- 10.0.0.1 ping statistics --- 00:11:41.493 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:41.493 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:11:41.493 00:49:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:41.493 00:49:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@450 -- # return 0 00:11:41.493 00:49:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:41.493 00:49:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:41.493 00:49:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:41.493 00:49:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:41.493 00:49:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:41.493 00:49:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:41.493 00:49:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:41.493 00:49:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:11:41.493 00:49:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:11:41.493 00:49:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:41.493 00:49:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:41.493 00:49:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:11:41.493 00:49:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:11:41.493 00:49:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=2891117 00:11:41.493 00:49:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:11:41.493 00:49:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:11:41.493 00:49:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 2891117 00:11:41.493 00:49:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@835 -- # '[' -z 2891117 ']' 00:11:41.493 00:49:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:41.493 00:49:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:41.493 00:49:13 
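The nvmf_tcp_init sequence traced above amounts to moving one E810 port into a private network namespace for the target, addressing both sides, opening the NVMe/TCP port, and then starting the example target inside that namespace. A hand-run sketch of the same wiring (interface names, addresses, and the example-app invocation are taken from the trace; the iptables comment tag the harness adds is omitted):

  TGT_IF=cvl_0_0; INI_IF=cvl_0_1; NS=cvl_0_0_ns_spdk

  ip -4 addr flush "$TGT_IF"; ip -4 addr flush "$INI_IF"
  ip netns add "$NS"
  ip link set "$TGT_IF" netns "$NS"                            # target-facing port moves into the namespace
  ip addr add 10.0.0.1/24 dev "$INI_IF"                        # initiator side stays in the root namespace
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"    # target side lives inside the namespace
  ip link set "$INI_IF" up
  ip netns exec "$NS" ip link set "$TGT_IF" up
  ip netns exec "$NS" ip link set lo up
  iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT    # let NVMe/TCP traffic in
  ping -c 1 10.0.0.2 && ip netns exec "$NS" ping -c 1 10.0.0.1      # verify both directions

  # the example target then runs inside the namespace with core mask 0xF
  ip netns exec "$NS" ./build/examples/nvmf -i 0 -g 10000 -m 0xF &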
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:41.493 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:41.493 00:49:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:41.493 00:49:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:42.427 00:49:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:42.427 00:49:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@868 -- # return 0 00:11:42.427 00:49:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:11:42.427 00:49:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:42.427 00:49:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:42.427 00:49:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:42.427 00:49:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.427 00:49:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:42.427 00:49:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.427 00:49:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:11:42.427 00:49:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.427 00:49:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:42.684 00:49:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.684 00:49:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:11:42.684 00:49:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:42.684 00:49:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.684 00:49:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:42.684 00:49:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.684 00:49:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:11:42.684 00:49:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:42.684 00:49:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.684 00:49:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:42.684 00:49:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.685 00:49:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:42.685 00:49:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:11:42.685 00:49:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:42.685 00:49:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.685 00:49:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:11:42.685 00:49:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:11:54.870 Initializing NVMe Controllers 00:11:54.870 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:11:54.870 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:11:54.870 Initialization complete. Launching workers. 00:11:54.870 ======================================================== 00:11:54.870 Latency(us) 00:11:54.870 Device Information : IOPS MiB/s Average min max 00:11:54.870 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 11716.30 45.77 5464.21 1295.38 20055.29 00:11:54.870 ======================================================== 00:11:54.870 Total : 11716.30 45.77 5464.21 1295.38 20055.29 00:11:54.870 00:11:54.870 00:49:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:11:54.870 00:49:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:11:54.870 00:49:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:54.870 00:49:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # sync 00:11:54.870 00:49:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:54.870 00:49:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set +e 00:11:54.870 00:49:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:54.870 00:49:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:54.870 rmmod nvme_tcp 00:11:54.870 rmmod nvme_fabrics 00:11:54.870 rmmod nvme_keyring 00:11:54.870 00:49:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:54.870 00:49:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@128 -- # set -e 00:11:54.870 00:49:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@129 -- # return 0 00:11:54.870 00:49:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@517 -- # '[' -n 2891117 ']' 00:11:54.870 00:49:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@518 -- # killprocess 2891117 00:11:54.870 00:49:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # '[' -z 2891117 ']' 00:11:54.870 00:49:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@958 -- # kill -0 2891117 00:11:54.870 00:49:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # uname 00:11:54.870 00:49:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:54.870 00:49:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2891117 00:11:54.870 00:49:25 
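Everything the example test configures over RPC, and the load it then applies, can be replayed by hand. A rough equivalent is sketched below; rpc_cmd in the trace is a thin wrapper around scripts/rpc.py talking to /var/tmp/spdk.sock, and the perf flags are copied from the invocation above:

  # create the TCP transport and a 64 MiB malloc bdev with 512-byte blocks
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py bdev_malloc_create 64 512                 # returns the bdev name, Malloc0

  # expose it through subsystem cnode1 on 10.0.0.2:4420
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

  # drive it from the initiator side: queue depth 64, 4 KiB random read/write mix, 10 seconds
  build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'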
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # process_name=nvmf 00:11:54.870 00:49:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@964 -- # '[' nvmf = sudo ']' 00:11:54.870 00:49:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2891117' 00:11:54.870 killing process with pid 2891117 00:11:54.870 00:49:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@973 -- # kill 2891117 00:11:54.870 00:49:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@978 -- # wait 2891117 00:11:54.870 nvmf threads initialize successfully 00:11:54.870 bdev subsystem init successfully 00:11:54.870 created a nvmf target service 00:11:54.870 create targets's poll groups done 00:11:54.870 all subsystems of target started 00:11:54.870 nvmf target is running 00:11:54.870 all subsystems of target stopped 00:11:54.870 destroy targets's poll groups done 00:11:54.870 destroyed the nvmf target service 00:11:54.870 bdev subsystem finish successfully 00:11:54.870 nvmf threads destroy successfully 00:11:54.870 00:49:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:54.870 00:49:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:54.870 00:49:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:54.870 00:49:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # iptr 00:11:54.870 00:49:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-save 00:11:54.870 00:49:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:54.870 00:49:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-restore 00:11:54.870 00:49:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:54.870 00:49:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:54.870 00:49:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:54.870 00:49:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:54.870 00:49:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:56.245 00:49:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:56.245 00:49:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:11:56.245 00:49:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:56.245 00:49:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:56.245 00:11:56.245 real 0m17.203s 00:11:56.245 user 0m48.194s 00:11:56.245 sys 0m3.593s 00:11:56.245 00:49:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:56.245 00:49:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:56.245 ************************************ 00:11:56.245 END TEST nvmf_example 00:11:56.245 ************************************ 00:11:56.506 00:49:28 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:11:56.506 00:49:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:56.506 00:49:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:56.506 00:49:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:56.506 ************************************ 00:11:56.506 START TEST nvmf_filesystem 00:11:56.506 ************************************ 00:11:56.506 00:49:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:11:56.506 * Looking for test storage... 00:11:56.506 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:56.506 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:56.506 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lcov --version 00:11:56.506 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:56.506 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:56.506 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:56.506 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:56.506 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:56.506 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:11:56.506 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:11:56.506 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:11:56.506 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:11:56.506 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:11:56.506 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:11:56.506 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:11:56.507 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:56.507 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:11:56.507 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:11:56.507 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:56.507 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:56.507 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:11:56.507 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:11:56.507 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:56.507 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:11:56.507 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:11:56.507 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:11:56.507 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:11:56.507 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:56.507 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:11:56.507 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:11:56.507 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:56.507 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:56.507 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:11:56.507 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:56.507 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:56.507 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:56.507 --rc genhtml_branch_coverage=1 00:11:56.507 --rc genhtml_function_coverage=1 00:11:56.507 --rc genhtml_legend=1 00:11:56.507 --rc geninfo_all_blocks=1 00:11:56.507 --rc geninfo_unexecuted_blocks=1 00:11:56.507 00:11:56.507 ' 00:11:56.507 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:56.507 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:56.507 --rc genhtml_branch_coverage=1 00:11:56.507 --rc genhtml_function_coverage=1 00:11:56.507 --rc genhtml_legend=1 00:11:56.507 --rc geninfo_all_blocks=1 00:11:56.507 --rc geninfo_unexecuted_blocks=1 00:11:56.507 00:11:56.507 ' 00:11:56.507 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:56.507 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:56.507 --rc genhtml_branch_coverage=1 00:11:56.507 --rc genhtml_function_coverage=1 00:11:56.507 --rc genhtml_legend=1 00:11:56.507 --rc geninfo_all_blocks=1 00:11:56.507 --rc geninfo_unexecuted_blocks=1 00:11:56.507 00:11:56.507 ' 00:11:56.507 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:56.507 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:56.507 --rc genhtml_branch_coverage=1 00:11:56.507 --rc genhtml_function_coverage=1 00:11:56.507 --rc genhtml_legend=1 00:11:56.507 --rc geninfo_all_blocks=1 00:11:56.507 --rc geninfo_unexecuted_blocks=1 00:11:56.507 00:11:56.507 ' 00:11:56.507 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:11:56.507 00:49:29 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:11:56.507 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:11:56.507 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:11:56.507 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:11:56.507 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:11:56.507 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:11:56.507 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:11:56.507 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:11:56.507 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:11:56.507 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=y 00:11:56.507 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:11:56.507 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:11:56.507 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:11:56.507 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:11:56.507 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:11:56.507 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:11:56.507 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:11:56.507 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:11:56.507 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:11:56.507 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:11:56.507 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:11:56.507 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:11:56.507 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:11:56.507 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:11:56.507 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:11:56.507 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:11:56.507 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:11:56.507 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:11:56.507 
00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:11:56.507 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:11:56.507 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_CET=n 00:11:56.507 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:11:56.507 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:11:56.507 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:11:56.507 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:11:56.507 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:11:56.507 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:11:56.507 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:11:56.507 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:11:56.507 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:11:56.507 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:11:56.507 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:11:56.507 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:11:56.507 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:11:56.507 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:11:56.507 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:11:56.507 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:11:56.507 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:11:56.507 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:11:56.507 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_VHOST=y 00:11:56.507 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:11:56.507 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:11:56.507 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:11:56.507 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:11:56.507 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:11:56.507 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:11:56.507 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:11:56.507 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 
-- # CONFIG_COVERAGE=y 00:11:56.507 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:11:56.507 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:11:56.507 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:11:56.507 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:11:56.507 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:11:56.507 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_XNVME=n 00:11:56.507 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=n 00:11:56.507 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:11:56.507 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:11:56.508 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:11:56.508 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:11:56.508 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:11:56.508 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:11:56.508 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:11:56.508 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:11:56.508 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:11:56.508 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:11:56.508 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:11:56.508 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:11:56.508 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:11:56.508 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:11:56.508 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:11:56.508 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:11:56.508 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:11:56.508 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:11:56.508 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_FC=n 00:11:56.508 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:11:56.508 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:11:56.508 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:11:56.508 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:11:56.508 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:11:56.508 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:11:56.508 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:11:56.508 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:11:56.508 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:11:56.508 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:11:56.508 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:11:56.508 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:11:56.508 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:11:56.508 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@90 -- # CONFIG_URING=n 00:11:56.508 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:11:56.508 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:11:56.508 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:11:56.508 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:11:56.508 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:11:56.508 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:11:56.508 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:11:56.508 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:11:56.508 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:11:56.508 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:11:56.508 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:11:56.508 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:11:56.508 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:11:56.508 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:11:56.508 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:11:56.508 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:11:56.508 #define SPDK_CONFIG_H 00:11:56.508 #define SPDK_CONFIG_AIO_FSDEV 1 00:11:56.508 #define SPDK_CONFIG_APPS 1 00:11:56.508 #define SPDK_CONFIG_ARCH native 00:11:56.508 #define SPDK_CONFIG_ASAN 1 00:11:56.508 #undef SPDK_CONFIG_AVAHI 00:11:56.508 #undef SPDK_CONFIG_CET 00:11:56.508 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:11:56.508 #define SPDK_CONFIG_COVERAGE 1 00:11:56.508 #define SPDK_CONFIG_CROSS_PREFIX 00:11:56.508 #undef SPDK_CONFIG_CRYPTO 00:11:56.508 #undef SPDK_CONFIG_CRYPTO_MLX5 00:11:56.508 #undef SPDK_CONFIG_CUSTOMOCF 00:11:56.508 #undef SPDK_CONFIG_DAOS 00:11:56.508 #define SPDK_CONFIG_DAOS_DIR 00:11:56.508 #define SPDK_CONFIG_DEBUG 1 00:11:56.508 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:11:56.508 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:11:56.508 #define SPDK_CONFIG_DPDK_INC_DIR 00:11:56.508 #define SPDK_CONFIG_DPDK_LIB_DIR 00:11:56.508 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:11:56.508 #undef SPDK_CONFIG_DPDK_UADK 00:11:56.508 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:11:56.508 #define SPDK_CONFIG_EXAMPLES 1 00:11:56.508 #undef SPDK_CONFIG_FC 00:11:56.508 #define SPDK_CONFIG_FC_PATH 00:11:56.508 #define SPDK_CONFIG_FIO_PLUGIN 1 00:11:56.508 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:11:56.508 #define SPDK_CONFIG_FSDEV 1 00:11:56.508 #undef SPDK_CONFIG_FUSE 00:11:56.508 #undef SPDK_CONFIG_FUZZER 00:11:56.508 #define SPDK_CONFIG_FUZZER_LIB 00:11:56.508 #undef SPDK_CONFIG_GOLANG 00:11:56.508 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:11:56.508 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:11:56.508 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:11:56.508 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:11:56.508 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:11:56.508 #undef SPDK_CONFIG_HAVE_LIBBSD 00:11:56.508 #undef SPDK_CONFIG_HAVE_LZ4 00:11:56.508 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:11:56.508 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:11:56.508 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:11:56.508 #define SPDK_CONFIG_IDXD 1 00:11:56.508 #define SPDK_CONFIG_IDXD_KERNEL 1 00:11:56.508 #undef SPDK_CONFIG_IPSEC_MB 00:11:56.508 #define SPDK_CONFIG_IPSEC_MB_DIR 00:11:56.508 #define SPDK_CONFIG_ISAL 1 00:11:56.508 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:11:56.508 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:11:56.508 #define SPDK_CONFIG_LIBDIR 00:11:56.508 #undef SPDK_CONFIG_LTO 00:11:56.508 #define SPDK_CONFIG_MAX_LCORES 128 00:11:56.508 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:11:56.508 #define SPDK_CONFIG_NVME_CUSE 1 00:11:56.508 #undef SPDK_CONFIG_OCF 00:11:56.508 #define SPDK_CONFIG_OCF_PATH 00:11:56.508 #define SPDK_CONFIG_OPENSSL_PATH 00:11:56.508 #undef SPDK_CONFIG_PGO_CAPTURE 00:11:56.508 #define SPDK_CONFIG_PGO_DIR 00:11:56.508 #undef SPDK_CONFIG_PGO_USE 00:11:56.508 #define SPDK_CONFIG_PREFIX /usr/local 00:11:56.508 #undef SPDK_CONFIG_RAID5F 00:11:56.508 #undef SPDK_CONFIG_RBD 00:11:56.508 #define SPDK_CONFIG_RDMA 1 00:11:56.508 #define SPDK_CONFIG_RDMA_PROV verbs 00:11:56.508 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:11:56.508 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:11:56.508 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:11:56.508 #define SPDK_CONFIG_SHARED 1 00:11:56.508 #undef SPDK_CONFIG_SMA 00:11:56.508 #define SPDK_CONFIG_TESTS 1 00:11:56.508 #undef SPDK_CONFIG_TSAN 
00:11:56.508 #define SPDK_CONFIG_UBLK 1 00:11:56.508 #define SPDK_CONFIG_UBSAN 1 00:11:56.508 #undef SPDK_CONFIG_UNIT_TESTS 00:11:56.508 #undef SPDK_CONFIG_URING 00:11:56.508 #define SPDK_CONFIG_URING_PATH 00:11:56.508 #undef SPDK_CONFIG_URING_ZNS 00:11:56.508 #undef SPDK_CONFIG_USDT 00:11:56.508 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:11:56.508 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:11:56.508 #undef SPDK_CONFIG_VFIO_USER 00:11:56.508 #define SPDK_CONFIG_VFIO_USER_DIR 00:11:56.508 #define SPDK_CONFIG_VHOST 1 00:11:56.508 #define SPDK_CONFIG_VIRTIO 1 00:11:56.508 #undef SPDK_CONFIG_VTUNE 00:11:56.508 #define SPDK_CONFIG_VTUNE_DIR 00:11:56.508 #define SPDK_CONFIG_WERROR 1 00:11:56.508 #define SPDK_CONFIG_WPDK_DIR 00:11:56.508 #undef SPDK_CONFIG_XNVME 00:11:56.508 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:11:56.508 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:11:56.508 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:56.508 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:11:56.508 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:56.508 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:56.508 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:56.508 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:56.509 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:56.509 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:56.509 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:11:56.509 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:56.509 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:11:56.509 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:11:56.509 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:11:56.509 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:11:56.509 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:11:56.509 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:11:56.509 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:11:56.509 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:11:56.509 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:11:56.509 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:11:56.509 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:11:56.509 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:11:56.509 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:11:56.509 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:11:56.509 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:11:56.509 00:49:29 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:11:56.509 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:11:56.509 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:11:56.509 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:11:56.509 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:11:56.509 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:11:56.509 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:11:56.509 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:11:56.509 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:11:56.509 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:11:56.509 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:11:56.509 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:11:56.509 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 1 00:11:56.509 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:11:56.509 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:11:56.509 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:11:56.509 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:11:56.509 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:11:56.509 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:11:56.509 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:11:56.509 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:11:56.509 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:11:56.509 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:11:56.509 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:11:56.509 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:11:56.509 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:11:56.509 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:11:56.509 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:11:56.509 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:11:56.509 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 
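The long run of ": <value>" / "export SPDK_TEST_*" pairs in this stretch is autotest_common.sh stamping a default on every test switch and exporting it; the trace prints the value each variable ends up with for this job (1, 0, tcp, ...). The underlying idiom is presumably the usual default-then-export pattern; the variable names below come from the trace, while the exact parameter-expansion form and the default values shown are an assumption, not quoted from the script:

  : "${RUN_NIGHTLY:=0}";                export RUN_NIGHTLY
  : "${SPDK_TEST_NVMF:=0}";             export SPDK_TEST_NVMF
  : "${SPDK_TEST_NVMF_TRANSPORT:=tcp}"; export SPDK_TEST_NVMF_TRANSPORT
  # ':=' only assigns when the variable is unset or empty, so flags exported
  # earlier by the job configuration override these script-side defaults.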
00:11:56.509 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:11:56.509 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:11:56.509 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:11:56.509 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:11:56.509 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:11:56.509 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:11:56.509 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:11:56.509 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:11:56.509 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:11:56.509 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:11:56.509 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:11:56.509 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:11:56.509 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:11:56.509 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:11:56.509 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:11:56.509 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:11:56.509 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 0 00:11:56.509 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:11:56.509 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:11:56.509 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:11:56.509 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:11:56.509 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:11:56.509 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:11:56.509 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:11:56.509 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:11:56.509 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:11:56.509 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:11:56.509 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:11:56.509 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:11:56.509 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:11:56.509 00:49:29 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:11:56.509 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:11:56.509 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:11:56.509 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:11:56.509 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:11:56.509 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:11:56.509 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:11:56.509 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:11:56.509 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:11:56.509 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:11:56.509 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:11:56.509 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:11:56.509 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:11:56.509 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:11:56.509 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 1 00:11:56.509 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:11:56.509 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 1 00:11:56.510 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:11:56.510 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 00:11:56.510 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:11:56.510 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:11:56.510 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:11:56.510 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:11:56.510 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:11:56.510 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:11:56.510 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:11:56.510 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:11:56.510 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:11:56.510 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:11:56.510 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:11:56.510 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@138 -- # : 0 00:11:56.510 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:11:56.510 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : 00:11:56.510 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:11:56.510 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : true 00:11:56.510 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:11:56.510 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:11:56.510 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:11:56.510 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:11:56.510 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:11:56.510 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:11:56.510 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:11:56.510 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:11:56.510 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:11:56.510 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:11:56.510 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:11:56.510 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:11:56.510 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:11:56.510 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:11:56.510 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:11:56.510 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:11:56.510 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:11:56.510 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:11:56.510 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:11:56.510 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:11:56.510 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:11:56.510 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:11:56.510 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:11:56.510 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:11:56.510 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:11:56.510 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@169 -- # : 00:11:56.510 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:11:56.510 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:11:56.510 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:11:56.510 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 0 00:11:56.510 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:11:56.510 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # : 0 00:11:56.510 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:11:56.510 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # : 0 00:11:56.510 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT 00:11:56.510 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:11:56.510 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:11:56.510 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:11:56.510 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:11:56.510 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:56.510 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:56.510 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:56.510 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:56.510 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:11:56.510 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:11:56.510 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:11:56.510 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:11:56.510 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1 00:11:56.510 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1 00:11:56.510 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:11:56.510 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:11:56.510 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 
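The environment just exported (LD_LIBRARY_PATH, PYTHONPATH, and the ASAN/UBSAN option strings) is what the test binaries later in this log run under; the heavy repetition inside the logged values appears to be the same directories being prepended again each time the export snippet is re-sourced by a nested script, so a single pass of each directory is enough when recreating it. A hedged sketch of reproducing that environment by hand for a one-off debug run; the option strings are copied verbatim from the trace, and the *_LIB_DIR variables are the ones exported at autotest_common.sh@181-183 above:

    # Recreate the sanitizer/runtime environment from the trace for a manual run.
    export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0
    export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134
    export LD_LIBRARY_PATH="$SPDK_LIB_DIR:$DPDK_LIB_DIR:$VFIO_LIB_DIR:$LD_LIBRARY_PATH"
    export PYTHONDONTWRITEBYTECODE=1
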
00:11:56.510 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:11:56.510 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:11:56.510 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file 00:11:56.510 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@206 -- # cat 00:11:56.510 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so 00:11:56.510 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:11:56.510 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:11:56.510 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:11:56.510 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:11:56.510 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@248 -- # '[' -z /var/spdk/dependencies ']' 00:11:56.510 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR 00:11:56.511 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:11:56.511 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:11:56.511 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:11:56.511 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:11:56.511 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:11:56.511 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:11:56.511 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:11:56.511 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:11:56.511 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:11:56.511 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:11:56.511 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 
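Immediately above, the trace rebuilds a LeakSanitizer suppression file so that a known libfuse3 leak does not fail ASAN-enabled runs, then points LSAN_OPTIONS at it. A short sketch of that sequence as the xtrace shows it (the real script also writes other entries through a cat/here-doc step that the trace only hints at):

    # Rebuild the leak-suppression file and hand it to LeakSanitizer.
    asan_suppression_file=/var/tmp/asan_suppression_file
    rm -rf "$asan_suppression_file"
    echo "leak:libfuse3.so" >> "$asan_suppression_file"
    export LSAN_OPTIONS=suppressions=$asan_suppression_file
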
00:11:56.511 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:11:56.511 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0 00:11:56.511 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@268 -- # _LCOV_LLVM=1 00:11:56.511 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@269 -- # _LCOV= 00:11:56.511 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]] 00:11:56.511 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ 0 -eq 1 ]] 00:11:56.511 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:11:56.511 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]= 00:11:56.511 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@275 -- # lcov_opt= 00:11:56.511 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']' 00:11:56.511 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # export valgrind= 00:11:56.511 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # valgrind= 00:11:56.511 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # uname -s 00:11:56.770 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']' 00:11:56.770 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@286 -- # HUGEMEM=4096 00:11:56.770 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes 00:11:56.770 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes 00:11:56.770 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@289 -- # MAKE=make 00:11:56.770 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@290 -- # MAKEFLAGS=-j48 00:11:56.770 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # export HUGEMEM=4096 00:11:56.770 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # HUGEMEM=4096 00:11:56.770 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # NO_HUGE=() 00:11:56.770 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@309 -- # TEST_MODE= 00:11:56.770 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@310 -- # for i in "$@" 00:11:56.770 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@311 -- # case "$i" in 00:11:56.770 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@316 -- # TEST_TRANSPORT=tcp 00:11:56.770 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # [[ -z 2892953 ]] 00:11:56.770 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # kill -0 2892953 00:11:56.770 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1696 -- # set_test_storage 2147483648 
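The "[[ -z 2892953 ]]" / "kill -0 2892953" pair at autotest_common.sh@331 probes the autotest PID just before set_test_storage requests 2 GiB (2147483648 bytes) of scratch space. A minimal illustration of the probe itself, not the script's exact control flow:

    # kill -0 delivers no signal; it only checks that the PID still exists
    # (and that the caller is allowed to signal it).
    autotest_pid=2892953          # PID taken from the trace above
    if kill -0 "$autotest_pid" 2>/dev/null; then
        echo "autotest process $autotest_pid is still running"
    fi
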
00:11:56.770 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@341 -- # [[ -v testdir ]] 00:11:56.770 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@343 -- # local requested_size=2147483648 00:11:56.770 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@344 -- # local mount target_dir 00:11:56.770 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses 00:11:56.770 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@347 -- # local source fs size avail mount use 00:11:56.771 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@349 -- # local storage_fallback storage_candidates 00:11:56.771 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX 00:11:56.771 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.wT7TzV 00:11:56.771 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:11:56.771 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@358 -- # [[ -n '' ]] 00:11:56.771 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # [[ -n '' ]] 00:11:56.771 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@368 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.wT7TzV/tests/target /tmp/spdk.wT7TzV 00:11:56.771 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # requested_size=2214592512 00:11:56.771 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:56.771 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # df -T 00:11:56.771 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # grep -v Filesystem 00:11:56.771 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_devtmpfs 00:11:56.771 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=devtmpfs 00:11:56.771 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=67108864 00:11:56.771 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=67108864 00:11:56.771 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=0 00:11:56.771 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:56.771 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/pmem0 00:11:56.771 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=ext2 00:11:56.771 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=4096 00:11:56.771 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=5284429824 00:11:56.771 00:49:29 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=5284425728 00:11:56.771 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:56.771 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_root 00:11:56.771 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=overlay 00:11:56.771 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=55049658368 00:11:56.771 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=61988528128 00:11:56.771 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=6938869760 00:11:56.771 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:56.771 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:11:56.771 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:11:56.771 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=30982897664 00:11:56.771 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=30994264064 00:11:56.771 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=11366400 00:11:56.771 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:56.771 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:11:56.771 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:11:56.771 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=12375269376 00:11:56.771 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=12397707264 00:11:56.771 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=22437888 00:11:56.771 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:56.771 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:11:56.771 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:11:56.771 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=30993854464 00:11:56.771 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=30994264064 00:11:56.771 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=409600 00:11:56.771 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:56.771 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:11:56.771 00:49:29 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:11:56.771 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=6198837248 00:11:56.771 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=6198849536 00:11:56.771 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=12288 00:11:56.771 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:56.771 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n' 00:11:56.771 * Looking for test storage... 00:11:56.771 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@381 -- # local target_space new_size 00:11:56.771 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}" 00:11:56.771 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:56.771 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}' 00:11:56.771 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # mount=/ 00:11:56.771 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@387 -- # target_space=55049658368 00:11:56.771 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size )) 00:11:56.771 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # (( target_space >= requested_size )) 00:11:56.771 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == tmpfs ]] 00:11:56.771 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == ramfs ]] 00:11:56.771 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ / == / ]] 00:11:56.771 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@394 -- # new_size=9153462272 00:11:56.771 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@395 -- # (( new_size * 100 / sizes[/] > 95 )) 00:11:56.771 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:56.771 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:56.771 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:56.771 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:56.771 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@402 -- # return 0 00:11:56.771 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1698 -- # set -o errtrace 00:11:56.771 00:49:29 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1699 -- # shopt -s extdebug 00:11:56.771 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1700 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:11:56.771 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1702 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:11:56.771 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1703 -- # true 00:11:56.771 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1705 -- # xtrace_fd 00:11:56.771 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:11:56.771 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:11:56.771 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:11:56.771 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:11:56.771 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:11:56.771 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:11:56.771 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:11:56.771 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:11:56.771 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:56.771 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lcov --version 00:11:56.771 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:56.771 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:56.771 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:56.771 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:56.771 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:56.771 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:11:56.771 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:11:56.771 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:11:56.771 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:11:56.771 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:11:56.771 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:11:56.772 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:11:56.772 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:56.772 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:11:56.772 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:11:56.772 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:11:56.772 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:56.772 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:11:56.772 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:11:56.772 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:56.772 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:11:56.772 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:11:56.772 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:11:56.772 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:11:56.772 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:56.772 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:11:56.772 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:11:56.772 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:56.772 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:56.772 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:11:56.772 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:56.772 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:56.772 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:56.772 --rc genhtml_branch_coverage=1 00:11:56.772 --rc genhtml_function_coverage=1 00:11:56.772 --rc genhtml_legend=1 00:11:56.772 --rc geninfo_all_blocks=1 00:11:56.772 --rc geninfo_unexecuted_blocks=1 00:11:56.772 00:11:56.772 ' 00:11:56.772 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:56.772 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:56.772 --rc genhtml_branch_coverage=1 00:11:56.772 --rc genhtml_function_coverage=1 00:11:56.772 --rc genhtml_legend=1 00:11:56.772 --rc geninfo_all_blocks=1 00:11:56.772 --rc geninfo_unexecuted_blocks=1 00:11:56.772 00:11:56.772 ' 00:11:56.772 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:56.772 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:56.772 --rc genhtml_branch_coverage=1 00:11:56.772 --rc genhtml_function_coverage=1 00:11:56.772 --rc genhtml_legend=1 00:11:56.772 --rc geninfo_all_blocks=1 00:11:56.772 --rc geninfo_unexecuted_blocks=1 00:11:56.772 00:11:56.772 ' 00:11:56.772 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:56.772 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:56.772 --rc genhtml_branch_coverage=1 00:11:56.772 --rc genhtml_function_coverage=1 00:11:56.772 --rc genhtml_legend=1 00:11:56.772 --rc geninfo_all_blocks=1 00:11:56.772 --rc geninfo_unexecuted_blocks=1 00:11:56.772 00:11:56.772 ' 00:11:56.772 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:56.772 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:11:56.772 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:56.772 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:56.772 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:56.772 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:56.772 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:56.772 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:56.772 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:56.772 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:56.772 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:56.772 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:56.772 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:56.772 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:56.772 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:56.772 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:56.772 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:56.772 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:56.772 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:56.772 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:11:56.772 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:56.772 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:56.772 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:56.772 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:56.772 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:56.772 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:56.772 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:11:56.772 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:56.772 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # : 0 00:11:56.772 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:56.772 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:56.772 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:56.772 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" 
-e 0xFFFF) 00:11:56.772 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:56.772 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:56.772 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:56.772 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:56.772 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:56.772 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:56.772 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:11:56.772 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:11:56.772 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:11:56.772 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:56.772 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:56.772 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:56.772 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:56.772 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:56.772 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:56.772 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:56.772 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:56.772 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:56.772 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:56.772 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@309 -- # xtrace_disable 00:11:56.772 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:58.674 00:49:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:58.674 00:49:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # pci_devs=() 00:11:58.674 00:49:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:58.674 00:49:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:58.674 00:49:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:58.674 00:49:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:58.674 00:49:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:58.674 00:49:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # net_devs=() 00:11:58.674 00:49:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:58.674 00:49:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # e810=() 00:11:58.674 
00:49:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # local -ga e810 00:11:58.674 00:49:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # x722=() 00:11:58.674 00:49:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # local -ga x722 00:11:58.674 00:49:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # mlx=() 00:11:58.674 00:49:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # local -ga mlx 00:11:58.674 00:49:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:58.674 00:49:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:58.674 00:49:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:58.674 00:49:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:58.674 00:49:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:58.674 00:49:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:58.674 00:49:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:58.674 00:49:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:58.674 00:49:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:58.674 00:49:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:58.674 00:49:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:58.674 00:49:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:58.674 00:49:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:58.674 00:49:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:58.674 00:49:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:58.674 00:49:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:58.674 00:49:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:58.674 00:49:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:58.674 00:49:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:58.674 00:49:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:11:58.674 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:11:58.674 00:49:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:58.674 00:49:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:58.674 00:49:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:58.674 00:49:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:11:58.674 00:49:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:58.674 00:49:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:58.674 00:49:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:11:58.674 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:11:58.674 00:49:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:58.674 00:49:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:58.674 00:49:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:58.674 00:49:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:58.674 00:49:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:58.674 00:49:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:58.674 00:49:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:58.674 00:49:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:58.674 00:49:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:58.674 00:49:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:58.674 00:49:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:58.674 00:49:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:58.674 00:49:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:58.674 00:49:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:58.674 00:49:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:58.674 00:49:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:11:58.674 Found net devices under 0000:0a:00.0: cvl_0_0 00:11:58.674 00:49:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:58.674 00:49:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:58.674 00:49:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:58.674 00:49:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:58.674 00:49:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:58.675 00:49:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:58.675 00:49:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:58.675 00:49:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:58.675 00:49:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:11:58.675 Found net devices under 
0000:0a:00.1: cvl_0_1 00:11:58.675 00:49:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:58.675 00:49:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:58.675 00:49:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # is_hw=yes 00:11:58.675 00:49:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:58.675 00:49:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:58.675 00:49:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:58.675 00:49:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:58.675 00:49:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:58.675 00:49:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:58.675 00:49:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:58.675 00:49:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:58.675 00:49:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:58.675 00:49:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:58.675 00:49:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:58.675 00:49:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:58.675 00:49:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:58.675 00:49:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:58.675 00:49:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:58.675 00:49:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:58.675 00:49:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:58.675 00:49:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:58.933 00:49:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:58.933 00:49:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:58.933 00:49:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:58.933 00:49:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:58.933 00:49:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:58.933 00:49:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:58.933 00:49:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:58.933 00:49:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:58.933 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:58.933 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.254 ms 00:11:58.933 00:11:58.933 --- 10.0.0.2 ping statistics --- 00:11:58.933 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:58.933 rtt min/avg/max/mdev = 0.254/0.254/0.254/0.000 ms 00:11:58.933 00:49:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:58.933 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:58.933 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.130 ms 00:11:58.933 00:11:58.933 --- 10.0.0.1 ping statistics --- 00:11:58.933 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:58.933 rtt min/avg/max/mdev = 0.130/0.130/0.130/0.000 ms 00:11:58.933 00:49:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:58.933 00:49:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@450 -- # return 0 00:11:58.933 00:49:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:58.933 00:49:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:58.933 00:49:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:58.933 00:49:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:58.933 00:49:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:58.933 00:49:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:58.933 00:49:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:58.933 00:49:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:11:58.933 00:49:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:58.933 00:49:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:58.933 00:49:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:58.933 ************************************ 00:11:58.933 START TEST nvmf_filesystem_no_in_capsule 00:11:58.933 ************************************ 00:11:58.933 00:49:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 0 00:11:58.933 00:49:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:11:58.934 00:49:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:11:58.934 00:49:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:58.934 00:49:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:58.934 00:49:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 
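The nvmf_tcp_init sequence traced above builds the TCP loopback topology that every test in this file runs against: the first E810 port (cvl_0_0) is moved into a private network namespace and becomes the target side at 10.0.0.2, while the second port (cvl_0_1) stays in the host namespace as the initiator at 10.0.0.1. A condensed sketch of that setup, using the interface names, namespace, and addresses from this run (the initial addr flushes, the iptables comment string, and nvmf/common.sh's error handling are omitted):

    # target side: isolate cvl_0_0 in its own namespace with the target IP
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up

    # initiator side: cvl_0_1 keeps the host namespace and the initiator IP
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip link set cvl_0_1 up

    # open the NVMe/TCP port and verify reachability in both directions
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                   # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator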
00:11:58.934 00:49:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=2894711 00:11:58.934 00:49:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:58.934 00:49:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 2894711 00:11:58.934 00:49:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@835 -- # '[' -z 2894711 ']' 00:11:58.934 00:49:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:58.934 00:49:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:58.934 00:49:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:58.934 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:58.934 00:49:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:58.934 00:49:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:58.934 [2024-12-06 00:49:31.628840] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 24.03.0 initialization... 00:11:58.934 [2024-12-06 00:49:31.628964] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:59.192 [2024-12-06 00:49:31.780052] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:59.449 [2024-12-06 00:49:31.912693] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:59.449 [2024-12-06 00:49:31.912780] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:59.449 [2024-12-06 00:49:31.912807] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:59.449 [2024-12-06 00:49:31.912842] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:59.449 [2024-12-06 00:49:31.912863] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
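From this point each test group follows the same export-and-connect pattern, which the surrounding trace walks through: nvmf_tgt runs inside the target namespace, a 512 MiB malloc bdev is exported through subsystem nqn.2016-06.io.spdk:cnode1 listening on 10.0.0.2:4420, and the host attaches as an NVMe/TCP initiator. A condensed sketch of those steps as they appear in this run (rpc_cmd is the test helper that issues SPDK JSON-RPCs to /var/tmp/spdk.sock):

    # start the target application inside the namespace and wait for its RPC socket
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    waitforlisten "$nvmfpid"

    # export a 512 MiB malloc bdev (512-byte blocks) over NVMe/TCP
    rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0
    rpc_cmd bdev_malloc_create 512 512 -b Malloc1
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    # attach from the host namespace as an initiator
    nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
        --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 \
        -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420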
00:11:59.449 [2024-12-06 00:49:31.915722] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:59.449 [2024-12-06 00:49:31.915792] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:59.449 [2024-12-06 00:49:31.915892] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:59.449 [2024-12-06 00:49:31.915897] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:00.015 00:49:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:00.015 00:49:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:12:00.015 00:49:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:00.015 00:49:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:00.015 00:49:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:00.015 00:49:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:00.015 00:49:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:12:00.015 00:49:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:12:00.015 00:49:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.015 00:49:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:00.016 [2024-12-06 00:49:32.673953] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:00.016 00:49:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.016 00:49:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:12:00.016 00:49:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.016 00:49:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:00.581 Malloc1 00:12:00.581 00:49:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.581 00:49:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:00.581 00:49:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.581 00:49:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:00.581 00:49:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.581 00:49:33 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:00.581 00:49:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.581 00:49:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:00.581 00:49:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.581 00:49:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:00.581 00:49:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.581 00:49:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:00.581 [2024-12-06 00:49:33.268681] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:00.582 00:49:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.582 00:49:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:12:00.582 00:49:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:12:00.582 00:49:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:12:00.582 00:49:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:12:00.582 00:49:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:12:00.582 00:49:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:12:00.582 00:49:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.582 00:49:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:00.582 00:49:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.582 00:49:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:12:00.582 { 00:12:00.582 "name": "Malloc1", 00:12:00.582 "aliases": [ 00:12:00.582 "c5d3ce3c-d32e-4a36-acf8-96d21d1011b5" 00:12:00.582 ], 00:12:00.582 "product_name": "Malloc disk", 00:12:00.582 "block_size": 512, 00:12:00.582 "num_blocks": 1048576, 00:12:00.582 "uuid": "c5d3ce3c-d32e-4a36-acf8-96d21d1011b5", 00:12:00.582 "assigned_rate_limits": { 00:12:00.582 "rw_ios_per_sec": 0, 00:12:00.582 "rw_mbytes_per_sec": 0, 00:12:00.582 "r_mbytes_per_sec": 0, 00:12:00.582 "w_mbytes_per_sec": 0 00:12:00.582 }, 00:12:00.582 "claimed": true, 00:12:00.582 "claim_type": "exclusive_write", 00:12:00.582 "zoned": false, 00:12:00.582 "supported_io_types": { 00:12:00.582 "read": 
true, 00:12:00.582 "write": true, 00:12:00.582 "unmap": true, 00:12:00.582 "flush": true, 00:12:00.582 "reset": true, 00:12:00.582 "nvme_admin": false, 00:12:00.582 "nvme_io": false, 00:12:00.582 "nvme_io_md": false, 00:12:00.582 "write_zeroes": true, 00:12:00.582 "zcopy": true, 00:12:00.582 "get_zone_info": false, 00:12:00.582 "zone_management": false, 00:12:00.582 "zone_append": false, 00:12:00.582 "compare": false, 00:12:00.582 "compare_and_write": false, 00:12:00.582 "abort": true, 00:12:00.582 "seek_hole": false, 00:12:00.582 "seek_data": false, 00:12:00.582 "copy": true, 00:12:00.582 "nvme_iov_md": false 00:12:00.582 }, 00:12:00.582 "memory_domains": [ 00:12:00.582 { 00:12:00.582 "dma_device_id": "system", 00:12:00.582 "dma_device_type": 1 00:12:00.582 }, 00:12:00.582 { 00:12:00.582 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:00.582 "dma_device_type": 2 00:12:00.582 } 00:12:00.582 ], 00:12:00.582 "driver_specific": {} 00:12:00.582 } 00:12:00.582 ]' 00:12:00.839 00:49:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:12:00.839 00:49:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:12:00.839 00:49:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:12:00.839 00:49:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:12:00.839 00:49:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:12:00.839 00:49:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:12:00.839 00:49:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:12:00.839 00:49:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:01.404 00:49:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:12:01.404 00:49:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:12:01.404 00:49:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:01.404 00:49:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:12:01.404 00:49:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:12:03.301 00:49:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:03.301 00:49:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:03.301 00:49:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # grep -c 
SPDKISFASTANDAWESOME 00:12:03.301 00:49:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:03.301 00:49:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:03.301 00:49:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:12:03.301 00:49:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:12:03.301 00:49:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:12:03.558 00:49:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:12:03.559 00:49:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:12:03.559 00:49:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:12:03.559 00:49:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:12:03.559 00:49:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:12:03.559 00:49:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:12:03.559 00:49:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:12:03.559 00:49:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:12:03.559 00:49:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:12:03.559 00:49:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:12:03.877 00:49:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:12:05.251 00:49:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:12:05.251 00:49:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:12:05.251 00:49:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:05.251 00:49:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:05.251 00:49:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:05.251 ************************************ 00:12:05.251 START TEST filesystem_ext4 00:12:05.251 ************************************ 00:12:05.251 00:49:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 
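The filesystem_ext4 subtest beginning above, and the btrfs and xfs subtests after it, all run the same create/verify cycle from target/filesystem.sh against the GPT partition created earlier. Reconstructed from the trace, with the device and mountpoint names used in this run, the cycle is roughly:

    # make_filesystem picks the right force flag per fstype (-F for ext4, -f for btrfs/xfs)
    mkfs.ext4 -F /dev/nvme0n1p1

    mount /dev/nvme0n1p1 /mnt/device
    touch /mnt/device/aaa
    sync
    rm /mnt/device/aaa
    sync
    umount /mnt/device

    # the target must still be running and the namespace still visible to the initiator
    kill -0 "$nvmfpid"                         # 2894711 in this run
    lsblk -l -o NAME | grep -q -w nvme0n1
    lsblk -l -o NAME | grep -q -w nvme0n1p1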
00:12:05.251 00:49:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:12:05.251 00:49:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:05.251 00:49:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:12:05.251 00:49:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:12:05.251 00:49:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:12:05.251 00:49:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:12:05.251 00:49:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@933 -- # local force 00:12:05.251 00:49:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:12:05.251 00:49:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:12:05.251 00:49:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:12:05.251 mke2fs 1.47.0 (5-Feb-2023) 00:12:05.251 Discarding device blocks: 0/522240 done 00:12:05.251 Creating filesystem with 522240 1k blocks and 130560 inodes 00:12:05.251 Filesystem UUID: 6305efd3-22a8-4fe7-8f43-4251a690eb26 00:12:05.251 Superblock backups stored on blocks: 00:12:05.251 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:12:05.251 00:12:05.251 Allocating group tables: 0/64 done 00:12:05.251 Writing inode tables: 0/64 done 00:12:05.251 Creating journal (8192 blocks): done 00:12:05.509 Writing superblocks and filesystem accounting information: 0/64 done 00:12:05.509 00:12:05.509 00:49:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@949 -- # return 0 00:12:05.509 00:49:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:10.771 00:49:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:10.771 00:49:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:12:10.771 00:49:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:10.771 00:49:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:12:10.771 00:49:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:12:10.771 00:49:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:10.771 
00:49:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 2894711 00:12:10.771 00:49:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:10.771 00:49:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:10.771 00:49:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:10.771 00:49:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:10.771 00:12:10.771 real 0m5.891s 00:12:10.771 user 0m0.014s 00:12:10.771 sys 0m0.072s 00:12:10.771 00:49:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:10.771 00:49:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:12:10.771 ************************************ 00:12:10.771 END TEST filesystem_ext4 00:12:10.771 ************************************ 00:12:10.771 00:49:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:12:10.771 00:49:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:10.771 00:49:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:10.771 00:49:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:11.028 ************************************ 00:12:11.028 START TEST filesystem_btrfs 00:12:11.028 ************************************ 00:12:11.028 00:49:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:12:11.028 00:49:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:12:11.028 00:49:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:11.028 00:49:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:12:11.028 00:49:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:12:11.028 00:49:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:12:11.028 00:49:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:12:11.028 00:49:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@933 -- # local force 00:12:11.028 00:49:43 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:12:11.028 00:49:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:12:11.028 00:49:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:12:11.028 btrfs-progs v6.8.1 00:12:11.028 See https://btrfs.readthedocs.io for more information. 00:12:11.028 00:12:11.028 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:12:11.028 NOTE: several default settings have changed in version 5.15, please make sure 00:12:11.028 this does not affect your deployments: 00:12:11.028 - DUP for metadata (-m dup) 00:12:11.028 - enabled no-holes (-O no-holes) 00:12:11.028 - enabled free-space-tree (-R free-space-tree) 00:12:11.028 00:12:11.028 Label: (null) 00:12:11.028 UUID: a8d1706a-0866-4d4d-983c-4f8a4bbdaaeb 00:12:11.028 Node size: 16384 00:12:11.028 Sector size: 4096 (CPU page size: 4096) 00:12:11.028 Filesystem size: 510.00MiB 00:12:11.028 Block group profiles: 00:12:11.028 Data: single 8.00MiB 00:12:11.028 Metadata: DUP 32.00MiB 00:12:11.028 System: DUP 8.00MiB 00:12:11.028 SSD detected: yes 00:12:11.028 Zoned device: no 00:12:11.028 Features: extref, skinny-metadata, no-holes, free-space-tree 00:12:11.028 Checksum: crc32c 00:12:11.028 Number of devices: 1 00:12:11.028 Devices: 00:12:11.028 ID SIZE PATH 00:12:11.028 1 510.00MiB /dev/nvme0n1p1 00:12:11.028 00:12:11.028 00:49:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@949 -- # return 0 00:12:11.028 00:49:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:11.590 00:49:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:11.590 00:49:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:12:11.590 00:49:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:11.590 00:49:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:12:11.590 00:49:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:12:11.590 00:49:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:11.590 00:49:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 2894711 00:12:11.590 00:49:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:11.590 00:49:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:11.590 00:49:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:11.590 
00:49:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:11.847 00:12:11.847 real 0m0.799s 00:12:11.847 user 0m0.017s 00:12:11.847 sys 0m0.108s 00:12:11.847 00:49:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:11.847 00:49:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:12:11.847 ************************************ 00:12:11.847 END TEST filesystem_btrfs 00:12:11.847 ************************************ 00:12:11.847 00:49:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:12:11.847 00:49:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:11.847 00:49:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:11.847 00:49:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:11.847 ************************************ 00:12:11.847 START TEST filesystem_xfs 00:12:11.847 ************************************ 00:12:11.847 00:49:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:12:11.847 00:49:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:12:11.847 00:49:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:11.847 00:49:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:12:11.847 00:49:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:12:11.847 00:49:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:12:11.847 00:49:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # local i=0 00:12:11.847 00:49:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@933 -- # local force 00:12:11.847 00:49:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:12:11.847 00:49:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@938 -- # force=-f 00:12:11.847 00:49:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:12:11.847 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:12:11.847 = sectsz=512 attr=2, projid32bit=1 00:12:11.847 = crc=1 finobt=1, sparse=1, rmapbt=0 00:12:11.847 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:12:11.848 data 
= bsize=4096 blocks=130560, imaxpct=25 00:12:11.848 = sunit=0 swidth=0 blks 00:12:11.848 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:12:11.848 log =internal log bsize=4096 blocks=16384, version=2 00:12:11.848 = sectsz=512 sunit=0 blks, lazy-count=1 00:12:11.848 realtime =none extsz=4096 blocks=0, rtextents=0 00:12:13.216 Discarding blocks...Done. 00:12:13.216 00:49:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@949 -- # return 0 00:12:13.216 00:49:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:15.745 00:49:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:15.745 00:49:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:12:15.745 00:49:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:15.745 00:49:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:12:15.745 00:49:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:12:15.745 00:49:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:15.745 00:49:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 2894711 00:12:15.745 00:49:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:15.745 00:49:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:15.745 00:49:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:15.745 00:49:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:15.745 00:12:15.745 real 0m3.579s 00:12:15.745 user 0m0.018s 00:12:15.745 sys 0m0.060s 00:12:15.745 00:49:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:15.745 00:49:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:12:15.745 ************************************ 00:12:15.745 END TEST filesystem_xfs 00:12:15.745 ************************************ 00:12:15.745 00:49:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:12:15.745 00:49:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:12:15.745 00:49:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:15.745 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:15.745 00:49:48 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:15.745 00:49:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1223 -- # local i=0 00:12:15.745 00:49:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:15.745 00:49:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:15.745 00:49:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:15.745 00:49:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:15.745 00:49:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:12:15.745 00:49:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:15.745 00:49:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.745 00:49:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:15.745 00:49:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.745 00:49:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:12:15.745 00:49:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 2894711 00:12:15.745 00:49:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 2894711 ']' 00:12:15.745 00:49:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # kill -0 2894711 00:12:15.745 00:49:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # uname 00:12:15.745 00:49:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:15.745 00:49:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2894711 00:12:15.745 00:49:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:15.745 00:49:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:15.745 00:49:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2894711' 00:12:15.745 killing process with pid 2894711 00:12:15.745 00:49:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@973 -- # kill 2894711 00:12:15.745 00:49:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@978 -- # wait 2894711 00:12:18.279 00:49:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:12:18.279 00:12:18.279 real 0m19.252s 00:12:18.279 user 1m12.792s 00:12:18.279 sys 0m2.534s 00:12:18.279 00:49:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:18.279 00:49:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:18.279 ************************************ 00:12:18.279 END TEST nvmf_filesystem_no_in_capsule 00:12:18.279 ************************************ 00:12:18.279 00:49:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:12:18.279 00:49:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:18.279 00:49:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:18.279 00:49:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:12:18.279 ************************************ 00:12:18.279 START TEST nvmf_filesystem_in_capsule 00:12:18.279 ************************************ 00:12:18.279 00:49:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 4096 00:12:18.279 00:49:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:12:18.279 00:49:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:12:18.279 00:49:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:18.279 00:49:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:18.279 00:49:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:18.279 00:49:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=2897201 00:12:18.279 00:49:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:18.279 00:49:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 2897201 00:12:18.279 00:49:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # '[' -z 2897201 ']' 00:12:18.279 00:49:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:18.279 00:49:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:18.279 00:49:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:18.279 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
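The nvmf_filesystem_in_capsule group now starting repeats the same export, connect, and filesystem cycle; the only functional difference from the previous group is the in_capsule argument (4096 instead of 0), which is passed straight through as the -c option when the TCP transport is created. Both invocations appear verbatim in the trace:

    # nvmf_filesystem_no_in_capsule (in_capsule=0): in-capsule data disabled
    rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0

    # nvmf_filesystem_in_capsule (in_capsule=4096): allow in-capsule data up to 4096 bytes
    rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096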
00:12:18.279 00:49:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:18.279 00:49:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:18.279 [2024-12-06 00:49:50.938830] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 24.03.0 initialization... 00:12:18.280 [2024-12-06 00:49:50.938989] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:18.538 [2024-12-06 00:49:51.082000] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:18.538 [2024-12-06 00:49:51.205467] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:18.538 [2024-12-06 00:49:51.205549] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:18.538 [2024-12-06 00:49:51.205576] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:18.538 [2024-12-06 00:49:51.205602] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:18.538 [2024-12-06 00:49:51.205622] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:18.538 [2024-12-06 00:49:51.208488] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:18.538 [2024-12-06 00:49:51.208559] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:18.538 [2024-12-06 00:49:51.208649] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:18.538 [2024-12-06 00:49:51.208656] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:19.475 00:49:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:19.475 00:49:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:12:19.475 00:49:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:19.475 00:49:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:19.475 00:49:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:19.475 00:49:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:19.475 00:49:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:12:19.475 00:49:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:12:19.475 00:49:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.475 00:49:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:19.475 [2024-12-06 00:49:51.967538] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:19.475 00:49:51 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.475 00:49:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:12:19.475 00:49:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.475 00:49:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:20.041 Malloc1 00:12:20.041 00:49:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.041 00:49:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:20.041 00:49:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.041 00:49:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:20.041 00:49:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.041 00:49:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:20.041 00:49:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.041 00:49:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:20.041 00:49:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.041 00:49:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:20.041 00:49:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.041 00:49:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:20.041 [2024-12-06 00:49:52.558935] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:20.041 00:49:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.041 00:49:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:12:20.041 00:49:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:12:20.041 00:49:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:12:20.041 00:49:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:12:20.041 00:49:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:12:20.041 00:49:52 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:12:20.041 00:49:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.041 00:49:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:20.041 00:49:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.041 00:49:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:12:20.041 { 00:12:20.041 "name": "Malloc1", 00:12:20.041 "aliases": [ 00:12:20.041 "c8314bb6-59a5-4a8c-94c3-24fa8e6734ff" 00:12:20.042 ], 00:12:20.042 "product_name": "Malloc disk", 00:12:20.042 "block_size": 512, 00:12:20.042 "num_blocks": 1048576, 00:12:20.042 "uuid": "c8314bb6-59a5-4a8c-94c3-24fa8e6734ff", 00:12:20.042 "assigned_rate_limits": { 00:12:20.042 "rw_ios_per_sec": 0, 00:12:20.042 "rw_mbytes_per_sec": 0, 00:12:20.042 "r_mbytes_per_sec": 0, 00:12:20.042 "w_mbytes_per_sec": 0 00:12:20.042 }, 00:12:20.042 "claimed": true, 00:12:20.042 "claim_type": "exclusive_write", 00:12:20.042 "zoned": false, 00:12:20.042 "supported_io_types": { 00:12:20.042 "read": true, 00:12:20.042 "write": true, 00:12:20.042 "unmap": true, 00:12:20.042 "flush": true, 00:12:20.042 "reset": true, 00:12:20.042 "nvme_admin": false, 00:12:20.042 "nvme_io": false, 00:12:20.042 "nvme_io_md": false, 00:12:20.042 "write_zeroes": true, 00:12:20.042 "zcopy": true, 00:12:20.042 "get_zone_info": false, 00:12:20.042 "zone_management": false, 00:12:20.042 "zone_append": false, 00:12:20.042 "compare": false, 00:12:20.042 "compare_and_write": false, 00:12:20.042 "abort": true, 00:12:20.042 "seek_hole": false, 00:12:20.042 "seek_data": false, 00:12:20.042 "copy": true, 00:12:20.042 "nvme_iov_md": false 00:12:20.042 }, 00:12:20.042 "memory_domains": [ 00:12:20.042 { 00:12:20.042 "dma_device_id": "system", 00:12:20.042 "dma_device_type": 1 00:12:20.042 }, 00:12:20.042 { 00:12:20.042 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:20.042 "dma_device_type": 2 00:12:20.042 } 00:12:20.042 ], 00:12:20.042 "driver_specific": {} 00:12:20.042 } 00:12:20.042 ]' 00:12:20.042 00:49:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:12:20.042 00:49:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:12:20.042 00:49:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:12:20.042 00:49:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:12:20.042 00:49:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:12:20.042 00:49:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:12:20.042 00:49:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:12:20.042 00:49:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:20.608 00:49:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:12:20.608 00:49:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:12:20.608 00:49:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:20.608 00:49:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:12:20.608 00:49:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:12:23.135 00:49:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:23.135 00:49:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:23.135 00:49:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:23.135 00:49:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:23.135 00:49:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:23.135 00:49:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:12:23.135 00:49:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:12:23.136 00:49:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:12:23.136 00:49:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:12:23.136 00:49:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:12:23.136 00:49:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:12:23.136 00:49:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:12:23.136 00:49:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:12:23.136 00:49:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:12:23.136 00:49:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:12:23.136 00:49:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:12:23.136 00:49:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:12:23.136 00:49:55 
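For reference, the target setup and host attach traced above reduce to the following SPDK RPC and nvme-cli sequence. This is a condensed sketch: the log drives these calls through its rpc_cmd wrapper, so the scripts/rpc.py path shown here is illustrative, while the bdev name, subsystem NQN, serial, address and port are taken from the trace.

# Target side: 512 MiB malloc bdev, exported through one TCP subsystem listening on port 4420
scripts/rpc.py bdev_malloc_create 512 512 -b Malloc1
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# Host side: connect, wait until the namespace shows up by serial, then partition it
nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
    --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
    --hostid=5b23e107-7094-e311-b1cb-001e67a97d55
until lsblk -l -o NAME,SERIAL | grep -q SPDKISFASTANDAWESOME; do sleep 2; done
parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100%
partprobe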
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:12:23.701 00:49:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:12:24.634 00:49:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:12:24.634 00:49:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:12:24.634 00:49:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:24.634 00:49:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:24.634 00:49:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:24.634 ************************************ 00:12:24.634 START TEST filesystem_in_capsule_ext4 00:12:24.634 ************************************ 00:12:24.634 00:49:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 00:12:24.634 00:49:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:12:24.634 00:49:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:24.634 00:49:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:12:24.634 00:49:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:12:24.635 00:49:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:12:24.635 00:49:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:12:24.635 00:49:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@933 -- # local force 00:12:24.635 00:49:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:12:24.635 00:49:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:12:24.635 00:49:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:12:24.635 mke2fs 1.47.0 (5-Feb-2023) 00:12:24.635 Discarding device blocks: 0/522240 done 00:12:24.635 Creating filesystem with 522240 1k blocks and 130560 inodes 00:12:24.635 Filesystem UUID: 29ab4919-90c6-4d05-abe7-c8195d7b60d3 00:12:24.635 Superblock backups stored on blocks: 00:12:24.635 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:12:24.635 00:12:24.635 Allocating group tables: 0/64 done 00:12:24.635 Writing inode tables: 
0/64 done 00:12:24.893 Creating journal (8192 blocks): done 00:12:27.089 Writing superblocks and filesystem accounting information: 0/64 4/64 done 00:12:27.089 00:12:27.089 00:49:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@949 -- # return 0 00:12:27.089 00:49:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:33.676 00:50:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:33.676 00:50:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:12:33.676 00:50:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:33.676 00:50:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:12:33.676 00:50:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:12:33.676 00:50:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:33.676 00:50:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 2897201 00:12:33.676 00:50:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:33.676 00:50:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:33.676 00:50:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:33.676 00:50:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:33.676 00:12:33.676 real 0m8.488s 00:12:33.676 user 0m0.018s 00:12:33.676 sys 0m0.068s 00:12:33.676 00:50:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:33.676 00:50:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:12:33.676 ************************************ 00:12:33.676 END TEST filesystem_in_capsule_ext4 00:12:33.676 ************************************ 00:12:33.676 00:50:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:12:33.676 00:50:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:33.676 00:50:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:33.676 00:50:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:33.676 
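The ext4 pass above, like the btrfs and xfs passes that follow, runs the same create-and-verify loop from target/filesystem.sh. A condensed sketch of one pass (device, mount point and target pid as in the log; retry handling trimmed):

mkfs.ext4 -F /dev/nvme0n1p1              # the btrfs and xfs passes use mkfs.<fstype> -f instead
mount /dev/nvme0n1p1 /mnt/device
touch /mnt/device/aaa                    # prove the filesystem accepts writes
sync
rm /mnt/device/aaa
sync
umount /mnt/device
kill -0 2897201                          # the nvmf_tgt process must still be alive
lsblk -l -o NAME | grep -q -w nvme0n1    # namespace still visible to the host
lsblk -l -o NAME | grep -q -w nvme0n1p1  # partition still visible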
************************************ 00:12:33.676 START TEST filesystem_in_capsule_btrfs 00:12:33.676 ************************************ 00:12:33.676 00:50:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:12:33.676 00:50:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:12:33.676 00:50:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:33.676 00:50:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:12:33.676 00:50:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:12:33.676 00:50:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:12:33.676 00:50:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:12:33.676 00:50:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@933 -- # local force 00:12:33.676 00:50:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:12:33.676 00:50:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:12:33.676 00:50:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:12:33.676 btrfs-progs v6.8.1 00:12:33.676 See https://btrfs.readthedocs.io for more information. 00:12:33.676 00:12:33.676 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:12:33.676 NOTE: several default settings have changed in version 5.15, please make sure 00:12:33.676 this does not affect your deployments: 00:12:33.676 - DUP for metadata (-m dup) 00:12:33.676 - enabled no-holes (-O no-holes) 00:12:33.676 - enabled free-space-tree (-R free-space-tree) 00:12:33.676 00:12:33.676 Label: (null) 00:12:33.676 UUID: 0b1672d9-f4e6-475f-9bc7-1c7c6f04c800 00:12:33.676 Node size: 16384 00:12:33.676 Sector size: 4096 (CPU page size: 4096) 00:12:33.676 Filesystem size: 510.00MiB 00:12:33.676 Block group profiles: 00:12:33.677 Data: single 8.00MiB 00:12:33.677 Metadata: DUP 32.00MiB 00:12:33.677 System: DUP 8.00MiB 00:12:33.677 SSD detected: yes 00:12:33.677 Zoned device: no 00:12:33.677 Features: extref, skinny-metadata, no-holes, free-space-tree 00:12:33.677 Checksum: crc32c 00:12:33.677 Number of devices: 1 00:12:33.677 Devices: 00:12:33.677 ID SIZE PATH 00:12:33.677 1 510.00MiB /dev/nvme0n1p1 00:12:33.677 00:12:33.677 00:50:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@949 -- # return 0 00:12:33.677 00:50:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:33.677 00:50:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:33.677 00:50:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:12:33.677 00:50:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:33.677 00:50:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:12:33.677 00:50:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:12:33.677 00:50:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:33.677 00:50:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 2897201 00:12:33.677 00:50:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:33.677 00:50:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:33.677 00:50:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:33.677 00:50:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:33.677 00:12:33.677 real 0m0.666s 00:12:33.677 user 0m0.023s 00:12:33.677 sys 0m0.094s 00:12:33.677 00:50:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:33.677 00:50:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 
-- # set +x 00:12:33.677 ************************************ 00:12:33.677 END TEST filesystem_in_capsule_btrfs 00:12:33.677 ************************************ 00:12:33.677 00:50:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:12:33.677 00:50:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:33.677 00:50:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:33.677 00:50:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:33.935 ************************************ 00:12:33.935 START TEST filesystem_in_capsule_xfs 00:12:33.935 ************************************ 00:12:33.935 00:50:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:12:33.935 00:50:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:12:33.935 00:50:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:33.935 00:50:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:12:33.935 00:50:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:12:33.935 00:50:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:12:33.935 00:50:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # local i=0 00:12:33.935 00:50:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@933 -- # local force 00:12:33.935 00:50:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:12:33.935 00:50:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@938 -- # force=-f 00:12:33.935 00:50:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:12:33.935 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:12:33.935 = sectsz=512 attr=2, projid32bit=1 00:12:33.935 = crc=1 finobt=1, sparse=1, rmapbt=0 00:12:33.935 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:12:33.935 data = bsize=4096 blocks=130560, imaxpct=25 00:12:33.935 = sunit=0 swidth=0 blks 00:12:33.935 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:12:33.935 log =internal log bsize=4096 blocks=16384, version=2 00:12:33.935 = sectsz=512 sunit=0 blks, lazy-count=1 00:12:33.935 realtime =none extsz=4096 blocks=0, rtextents=0 00:12:34.870 Discarding blocks...Done. 
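All three mkfs invocations above go through the make_filesystem helper in common/autotest_common.sh, which only varies the force flag per filesystem. A simplified sketch of that helper follows; the retry count and sleep interval are illustrative rather than the exact values from the script:

make_filesystem() {
    local fstype=$1
    local dev_name=$2
    local i=0
    local force
    # ext4 spells "force" as -F, btrfs and xfs as -f
    if [ "$fstype" = ext4 ]; then force=-F; else force=-f; fi
    until mkfs.$fstype $force "$dev_name"; do
        i=$((i + 1))
        [ "$i" -ge 5 ] && return 1       # give up after a few attempts
        sleep 1
    done
    return 0
}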
00:12:34.870 00:50:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@949 -- # return 0 00:12:34.870 00:50:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:37.400 00:50:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:37.400 00:50:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:12:37.400 00:50:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:37.400 00:50:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:12:37.400 00:50:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:12:37.400 00:50:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:37.400 00:50:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 2897201 00:12:37.400 00:50:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:37.400 00:50:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:37.400 00:50:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:37.400 00:50:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:37.400 00:12:37.400 real 0m3.464s 00:12:37.400 user 0m0.011s 00:12:37.400 sys 0m0.068s 00:12:37.400 00:50:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:37.400 00:50:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:12:37.400 ************************************ 00:12:37.400 END TEST filesystem_in_capsule_xfs 00:12:37.400 ************************************ 00:12:37.400 00:50:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:12:37.658 00:50:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:12:37.658 00:50:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:37.916 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:37.916 00:50:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:37.916 00:50:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
common/autotest_common.sh@1223 -- # local i=0 00:12:37.916 00:50:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:37.917 00:50:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:37.917 00:50:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:37.917 00:50:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:37.917 00:50:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:12:37.917 00:50:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:37.917 00:50:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.917 00:50:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:37.917 00:50:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.917 00:50:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:12:37.917 00:50:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 2897201 00:12:37.917 00:50:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 2897201 ']' 00:12:37.917 00:50:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # kill -0 2897201 00:12:37.917 00:50:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # uname 00:12:37.917 00:50:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:37.917 00:50:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2897201 00:12:37.917 00:50:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:37.917 00:50:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:37.917 00:50:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2897201' 00:12:37.917 killing process with pid 2897201 00:12:37.917 00:50:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@973 -- # kill 2897201 00:12:37.917 00:50:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@978 -- # wait 2897201 00:12:40.445 00:50:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:12:40.445 00:12:40.445 real 0m21.997s 00:12:40.445 user 1m23.726s 00:12:40.445 sys 0m2.579s 00:12:40.445 00:50:12 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:40.445 00:50:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:40.445 ************************************ 00:12:40.445 END TEST nvmf_filesystem_in_capsule 00:12:40.445 ************************************ 00:12:40.445 00:50:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:12:40.445 00:50:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:40.445 00:50:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # sync 00:12:40.445 00:50:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:40.445 00:50:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set +e 00:12:40.445 00:50:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:40.445 00:50:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:40.445 rmmod nvme_tcp 00:12:40.445 rmmod nvme_fabrics 00:12:40.445 rmmod nvme_keyring 00:12:40.445 00:50:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:40.445 00:50:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@128 -- # set -e 00:12:40.445 00:50:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@129 -- # return 0 00:12:40.445 00:50:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:12:40.445 00:50:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:40.445 00:50:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:40.445 00:50:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:40.445 00:50:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # iptr 00:12:40.445 00:50:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-save 00:12:40.445 00:50:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:40.445 00:50:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-restore 00:12:40.445 00:50:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:40.445 00:50:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:40.445 00:50:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:40.445 00:50:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:40.445 00:50:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:42.407 00:50:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:42.407 00:12:42.407 real 0m45.978s 00:12:42.407 user 2m37.614s 00:12:42.407 sys 0m6.759s 00:12:42.407 00:50:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:42.407 00:50:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:12:42.407 
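The teardown traced above condenses to the usual reverse order: disconnect the host, delete the subsystem, stop the target, unload the kernel modules, and undo the firewall and namespace changes. A rough sketch (pid, NQN and interface names from the log; the namespace deletion performed inside _remove_spdk_ns is an assumption):

nvme disconnect -n nqn.2016-06.io.spdk:cnode1
scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
kill 2897201 && wait 2897201                            # stop the nvmf_tgt reactor
sync
modprobe -v -r nvme-tcp nvme-fabrics nvme-keyring       # shows up as the rmmod lines above
iptables-save | grep -v SPDK_NVMF | iptables-restore    # drop the test's ACCEPT rule
ip netns del cvl_0_0_ns_spdk                            # assumed content of _remove_spdk_ns
ip -4 addr flush cvl_0_1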
************************************ 00:12:42.407 END TEST nvmf_filesystem 00:12:42.407 ************************************ 00:12:42.407 00:50:14 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:12:42.407 00:50:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:42.407 00:50:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:42.407 00:50:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:42.407 ************************************ 00:12:42.407 START TEST nvmf_target_discovery 00:12:42.407 ************************************ 00:12:42.407 00:50:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:12:42.407 * Looking for test storage... 00:12:42.407 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:42.407 00:50:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:42.407 00:50:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1711 -- # lcov --version 00:12:42.407 00:50:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:42.665 00:50:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:42.666 00:50:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:42.666 00:50:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:42.666 00:50:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:42.666 00:50:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:12:42.666 00:50:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:12:42.666 00:50:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:12:42.666 00:50:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:12:42.666 00:50:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:12:42.666 00:50:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:12:42.666 00:50:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:12:42.666 00:50:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:42.666 00:50:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@344 -- # case "$op" in 00:12:42.666 00:50:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@345 -- # : 1 00:12:42.666 00:50:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:42.666 00:50:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:42.666 00:50:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # decimal 1 00:12:42.666 00:50:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=1 00:12:42.666 00:50:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:42.666 00:50:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 1 00:12:42.666 00:50:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:12:42.666 00:50:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # decimal 2 00:12:42.666 00:50:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=2 00:12:42.666 00:50:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:42.666 00:50:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 2 00:12:42.666 00:50:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:12:42.666 00:50:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:42.666 00:50:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:42.666 00:50:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # return 0 00:12:42.666 00:50:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:42.666 00:50:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:42.666 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:42.666 --rc genhtml_branch_coverage=1 00:12:42.666 --rc genhtml_function_coverage=1 00:12:42.666 --rc genhtml_legend=1 00:12:42.666 --rc geninfo_all_blocks=1 00:12:42.666 --rc geninfo_unexecuted_blocks=1 00:12:42.666 00:12:42.666 ' 00:12:42.666 00:50:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:42.666 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:42.666 --rc genhtml_branch_coverage=1 00:12:42.666 --rc genhtml_function_coverage=1 00:12:42.666 --rc genhtml_legend=1 00:12:42.666 --rc geninfo_all_blocks=1 00:12:42.666 --rc geninfo_unexecuted_blocks=1 00:12:42.666 00:12:42.666 ' 00:12:42.666 00:50:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:42.666 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:42.666 --rc genhtml_branch_coverage=1 00:12:42.666 --rc genhtml_function_coverage=1 00:12:42.666 --rc genhtml_legend=1 00:12:42.666 --rc geninfo_all_blocks=1 00:12:42.666 --rc geninfo_unexecuted_blocks=1 00:12:42.666 00:12:42.666 ' 00:12:42.666 00:50:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:42.666 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:42.666 --rc genhtml_branch_coverage=1 00:12:42.666 --rc genhtml_function_coverage=1 00:12:42.666 --rc genhtml_legend=1 00:12:42.666 --rc geninfo_all_blocks=1 00:12:42.666 --rc geninfo_unexecuted_blocks=1 00:12:42.666 00:12:42.666 ' 00:12:42.666 00:50:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:42.666 00:50:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:12:42.666 00:50:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:42.666 00:50:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:42.666 00:50:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:42.666 00:50:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:42.666 00:50:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:42.666 00:50:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:42.666 00:50:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:42.666 00:50:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:42.666 00:50:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:42.666 00:50:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:42.666 00:50:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:42.666 00:50:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:12:42.666 00:50:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:42.666 00:50:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:42.666 00:50:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:42.666 00:50:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:42.666 00:50:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:42.666 00:50:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:12:42.666 00:50:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:42.666 00:50:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:42.666 00:50:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:42.666 00:50:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:42.666 00:50:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:42.666 00:50:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:42.666 00:50:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:12:42.666 00:50:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:42.666 00:50:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # : 0 00:12:42.666 00:50:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:42.666 00:50:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:42.666 00:50:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:42.666 00:50:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:42.666 00:50:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:42.666 00:50:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:42.666 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:42.666 00:50:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:42.666 00:50:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:42.666 00:50:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:42.666 00:50:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:12:42.666 00:50:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:12:42.666 00:50:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:12:42.666 00:50:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:12:42.666 00:50:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:12:42.666 00:50:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:42.666 00:50:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:42.666 00:50:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:42.666 00:50:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:42.666 00:50:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:42.666 00:50:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:42.666 00:50:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:42.666 00:50:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:42.666 00:50:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:42.666 00:50:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:42.666 00:50:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:12:42.666 00:50:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:45.196 00:50:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:45.196 00:50:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:12:45.196 00:50:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:45.196 00:50:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:45.196 00:50:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:45.196 00:50:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:45.196 00:50:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:45.196 00:50:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:12:45.196 00:50:17 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:45.196 00:50:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # e810=() 00:12:45.196 00:50:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:12:45.197 00:50:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # x722=() 00:12:45.197 00:50:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:12:45.197 00:50:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # mlx=() 00:12:45.197 00:50:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:12:45.197 00:50:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:45.197 00:50:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:45.197 00:50:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:45.197 00:50:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:45.197 00:50:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:45.197 00:50:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:45.197 00:50:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:45.197 00:50:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:45.197 00:50:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:45.197 00:50:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:45.197 00:50:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:45.197 00:50:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:45.197 00:50:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:45.197 00:50:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:45.197 00:50:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:45.197 00:50:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:45.197 00:50:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:45.197 00:50:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:45.197 00:50:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:45.197 00:50:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:12:45.197 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:12:45.197 00:50:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:45.197 00:50:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:45.197 00:50:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:45.197 00:50:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:45.197 00:50:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:45.197 00:50:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:45.197 00:50:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:12:45.197 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:12:45.197 00:50:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:45.197 00:50:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:45.197 00:50:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:45.197 00:50:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:45.197 00:50:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:45.197 00:50:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:45.197 00:50:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:45.197 00:50:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:45.197 00:50:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:45.197 00:50:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:45.197 00:50:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:45.197 00:50:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:45.197 00:50:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:45.197 00:50:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:45.197 00:50:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:45.197 00:50:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:12:45.197 Found net devices under 0000:0a:00.0: cvl_0_0 00:12:45.197 00:50:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:45.197 00:50:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:45.197 00:50:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:45.197 00:50:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:45.197 00:50:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 
00:12:45.197 00:50:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:45.197 00:50:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:45.197 00:50:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:45.197 00:50:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:12:45.197 Found net devices under 0000:0a:00.1: cvl_0_1 00:12:45.197 00:50:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:45.197 00:50:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:45.197 00:50:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:12:45.197 00:50:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:45.197 00:50:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:45.197 00:50:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:45.197 00:50:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:45.197 00:50:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:45.197 00:50:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:45.197 00:50:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:45.197 00:50:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:45.197 00:50:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:45.197 00:50:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:45.197 00:50:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:45.197 00:50:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:45.197 00:50:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:45.197 00:50:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:45.197 00:50:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:45.197 00:50:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:45.197 00:50:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:45.198 00:50:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:45.198 00:50:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:45.198 00:50:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:45.198 00:50:17 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:45.198 00:50:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:45.198 00:50:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:45.198 00:50:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:45.198 00:50:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:45.198 00:50:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:45.198 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:45.198 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.239 ms 00:12:45.198 00:12:45.198 --- 10.0.0.2 ping statistics --- 00:12:45.198 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:45.198 rtt min/avg/max/mdev = 0.239/0.239/0.239/0.000 ms 00:12:45.198 00:50:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:45.198 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:45.198 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.120 ms 00:12:45.198 00:12:45.198 --- 10.0.0.1 ping statistics --- 00:12:45.198 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:45.198 rtt min/avg/max/mdev = 0.120/0.120/0.120/0.000 ms 00:12:45.198 00:50:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:45.198 00:50:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@450 -- # return 0 00:12:45.198 00:50:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:45.198 00:50:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:45.198 00:50:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:45.198 00:50:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:45.198 00:50:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:45.198 00:50:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:45.198 00:50:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:45.198 00:50:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:12:45.198 00:50:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:45.198 00:50:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:45.198 00:50:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:45.198 00:50:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@509 -- # nvmfpid=2901904 00:12:45.198 00:50:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:45.198 00:50:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@510 -- # waitforlisten 2901904 00:12:45.198 00:50:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@835 -- # '[' -z 2901904 ']' 00:12:45.198 00:50:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:45.198 00:50:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:45.198 00:50:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:45.198 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:45.198 00:50:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:45.198 00:50:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:45.198 [2024-12-06 00:50:17.547135] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 24.03.0 initialization... 00:12:45.198 [2024-12-06 00:50:17.547277] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:45.198 [2024-12-06 00:50:17.689127] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:45.198 [2024-12-06 00:50:17.823614] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:45.198 [2024-12-06 00:50:17.823715] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:45.198 [2024-12-06 00:50:17.823742] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:45.198 [2024-12-06 00:50:17.823776] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:45.198 [2024-12-06 00:50:17.823795] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
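For readers following the trace, the discovery test below drives the freshly started nvmf_tgt entirely through SPDK's RPC layer. The condensed sketch here is illustrative only: it mirrors the rpc_cmd calls that appear verbatim in the trace (null bdevs, four subsystems, TCP listeners on 10.0.0.2:4420, a referral on port 4430, then an nvme discover from the host side), but the direct use of scripts/rpc.py against the default /var/tmp/spdk.sock is an assumption about the standard SPDK layout rather than something shown in this log.

    # Illustrative sketch (assumed scripts/rpc.py invocation); the trace below uses
    # the rpc_cmd wrapper with the same RPC names and arguments.
    rpc="./scripts/rpc.py -s /var/tmp/spdk.sock"
    $rpc nvmf_create_transport -t tcp -o -u 8192
    for i in 1 2 3 4; do
        $rpc bdev_null_create Null$i 102400 512                    # null bdev backing namespace $i (same size/block args as the test)
        $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK0000000000000$i
        $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Null$i
        $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
    done
    $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430
    # Host side: the discovery log should then report 6 records (4 NVMe subsystems,
    # the current discovery subsystem, and the referral), as seen further down.
    nvme discover -t tcp -a 10.0.0.2 -s 4420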
00:12:45.198 [2024-12-06 00:50:17.826862] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:45.198 [2024-12-06 00:50:17.826926] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:45.198 [2024-12-06 00:50:17.827006] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:45.198 [2024-12-06 00:50:17.827011] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:46.135 00:50:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:46.135 00:50:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@868 -- # return 0 00:12:46.135 00:50:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:46.135 00:50:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:46.135 00:50:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:46.135 00:50:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:46.135 00:50:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:46.135 00:50:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.135 00:50:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:46.135 [2024-12-06 00:50:18.556235] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:46.135 00:50:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.135 00:50:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:12:46.135 00:50:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:46.135 00:50:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:12:46.135 00:50:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.135 00:50:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:46.135 Null1 00:12:46.135 00:50:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.135 00:50:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:46.135 00:50:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.135 00:50:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:46.135 00:50:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.135 00:50:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:12:46.135 00:50:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.135 00:50:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:46.135 00:50:18 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.135 00:50:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:46.135 00:50:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.135 00:50:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:46.135 [2024-12-06 00:50:18.605503] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:46.135 00:50:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.135 00:50:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:46.135 00:50:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:12:46.135 00:50:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.135 00:50:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:46.135 Null2 00:12:46.135 00:50:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.135 00:50:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:12:46.135 00:50:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.135 00:50:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:46.135 00:50:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.135 00:50:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:12:46.135 00:50:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.135 00:50:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:46.135 00:50:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.135 00:50:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:12:46.135 00:50:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.135 00:50:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:46.135 00:50:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.135 00:50:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:46.135 00:50:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:12:46.135 00:50:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.135 00:50:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- 
# set +x 00:12:46.135 Null3 00:12:46.135 00:50:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.135 00:50:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:12:46.135 00:50:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.135 00:50:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:46.135 00:50:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.135 00:50:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:12:46.135 00:50:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.135 00:50:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:46.135 00:50:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.135 00:50:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:12:46.135 00:50:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.135 00:50:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:46.135 00:50:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.135 00:50:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:46.135 00:50:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:12:46.136 00:50:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.136 00:50:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:46.136 Null4 00:12:46.136 00:50:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.136 00:50:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:12:46.136 00:50:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.136 00:50:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:46.136 00:50:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.136 00:50:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:12:46.136 00:50:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.136 00:50:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:46.136 00:50:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.136 00:50:18 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:12:46.136 00:50:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.136 00:50:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:46.136 00:50:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.136 00:50:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:46.136 00:50:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.136 00:50:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:46.136 00:50:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.136 00:50:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:12:46.136 00:50:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.136 00:50:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:46.136 00:50:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.136 00:50:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 4420 00:12:46.394 00:12:46.395 Discovery Log Number of Records 6, Generation counter 6 00:12:46.395 =====Discovery Log Entry 0====== 00:12:46.395 trtype: tcp 00:12:46.395 adrfam: ipv4 00:12:46.395 subtype: current discovery subsystem 00:12:46.395 treq: not required 00:12:46.395 portid: 0 00:12:46.395 trsvcid: 4420 00:12:46.395 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:12:46.395 traddr: 10.0.0.2 00:12:46.395 eflags: explicit discovery connections, duplicate discovery information 00:12:46.395 sectype: none 00:12:46.395 =====Discovery Log Entry 1====== 00:12:46.395 trtype: tcp 00:12:46.395 adrfam: ipv4 00:12:46.395 subtype: nvme subsystem 00:12:46.395 treq: not required 00:12:46.395 portid: 0 00:12:46.395 trsvcid: 4420 00:12:46.395 subnqn: nqn.2016-06.io.spdk:cnode1 00:12:46.395 traddr: 10.0.0.2 00:12:46.395 eflags: none 00:12:46.395 sectype: none 00:12:46.395 =====Discovery Log Entry 2====== 00:12:46.395 trtype: tcp 00:12:46.395 adrfam: ipv4 00:12:46.395 subtype: nvme subsystem 00:12:46.395 treq: not required 00:12:46.395 portid: 0 00:12:46.395 trsvcid: 4420 00:12:46.395 subnqn: nqn.2016-06.io.spdk:cnode2 00:12:46.395 traddr: 10.0.0.2 00:12:46.395 eflags: none 00:12:46.395 sectype: none 00:12:46.395 =====Discovery Log Entry 3====== 00:12:46.395 trtype: tcp 00:12:46.395 adrfam: ipv4 00:12:46.395 subtype: nvme subsystem 00:12:46.395 treq: not required 00:12:46.395 portid: 0 00:12:46.395 trsvcid: 4420 00:12:46.395 subnqn: nqn.2016-06.io.spdk:cnode3 00:12:46.395 traddr: 10.0.0.2 00:12:46.395 eflags: none 00:12:46.395 sectype: none 00:12:46.395 =====Discovery Log Entry 4====== 00:12:46.395 trtype: tcp 00:12:46.395 adrfam: ipv4 00:12:46.395 subtype: nvme subsystem 
00:12:46.395 treq: not required 00:12:46.395 portid: 0 00:12:46.395 trsvcid: 4420 00:12:46.395 subnqn: nqn.2016-06.io.spdk:cnode4 00:12:46.395 traddr: 10.0.0.2 00:12:46.395 eflags: none 00:12:46.395 sectype: none 00:12:46.395 =====Discovery Log Entry 5====== 00:12:46.395 trtype: tcp 00:12:46.395 adrfam: ipv4 00:12:46.395 subtype: discovery subsystem referral 00:12:46.395 treq: not required 00:12:46.395 portid: 0 00:12:46.395 trsvcid: 4430 00:12:46.395 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:12:46.395 traddr: 10.0.0.2 00:12:46.395 eflags: none 00:12:46.395 sectype: none 00:12:46.395 00:50:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:12:46.395 Perform nvmf subsystem discovery via RPC 00:12:46.395 00:50:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:12:46.395 00:50:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.395 00:50:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:46.395 [ 00:12:46.395 { 00:12:46.395 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:12:46.395 "subtype": "Discovery", 00:12:46.395 "listen_addresses": [ 00:12:46.395 { 00:12:46.395 "trtype": "TCP", 00:12:46.395 "adrfam": "IPv4", 00:12:46.395 "traddr": "10.0.0.2", 00:12:46.395 "trsvcid": "4420" 00:12:46.395 } 00:12:46.395 ], 00:12:46.395 "allow_any_host": true, 00:12:46.395 "hosts": [] 00:12:46.395 }, 00:12:46.395 { 00:12:46.395 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:46.395 "subtype": "NVMe", 00:12:46.395 "listen_addresses": [ 00:12:46.395 { 00:12:46.395 "trtype": "TCP", 00:12:46.395 "adrfam": "IPv4", 00:12:46.395 "traddr": "10.0.0.2", 00:12:46.395 "trsvcid": "4420" 00:12:46.395 } 00:12:46.395 ], 00:12:46.395 "allow_any_host": true, 00:12:46.395 "hosts": [], 00:12:46.395 "serial_number": "SPDK00000000000001", 00:12:46.395 "model_number": "SPDK bdev Controller", 00:12:46.395 "max_namespaces": 32, 00:12:46.395 "min_cntlid": 1, 00:12:46.395 "max_cntlid": 65519, 00:12:46.395 "namespaces": [ 00:12:46.395 { 00:12:46.395 "nsid": 1, 00:12:46.395 "bdev_name": "Null1", 00:12:46.395 "name": "Null1", 00:12:46.395 "nguid": "DA0B19AA794A44EE82F0E50B6CCB5D48", 00:12:46.395 "uuid": "da0b19aa-794a-44ee-82f0-e50b6ccb5d48" 00:12:46.395 } 00:12:46.395 ] 00:12:46.395 }, 00:12:46.395 { 00:12:46.395 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:12:46.395 "subtype": "NVMe", 00:12:46.395 "listen_addresses": [ 00:12:46.395 { 00:12:46.395 "trtype": "TCP", 00:12:46.395 "adrfam": "IPv4", 00:12:46.395 "traddr": "10.0.0.2", 00:12:46.395 "trsvcid": "4420" 00:12:46.395 } 00:12:46.395 ], 00:12:46.395 "allow_any_host": true, 00:12:46.395 "hosts": [], 00:12:46.395 "serial_number": "SPDK00000000000002", 00:12:46.395 "model_number": "SPDK bdev Controller", 00:12:46.395 "max_namespaces": 32, 00:12:46.395 "min_cntlid": 1, 00:12:46.395 "max_cntlid": 65519, 00:12:46.395 "namespaces": [ 00:12:46.395 { 00:12:46.395 "nsid": 1, 00:12:46.395 "bdev_name": "Null2", 00:12:46.395 "name": "Null2", 00:12:46.395 "nguid": "C4096D7F73A04306B6AC6CA1C0805CA6", 00:12:46.395 "uuid": "c4096d7f-73a0-4306-b6ac-6ca1c0805ca6" 00:12:46.395 } 00:12:46.395 ] 00:12:46.395 }, 00:12:46.395 { 00:12:46.395 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:12:46.395 "subtype": "NVMe", 00:12:46.395 "listen_addresses": [ 00:12:46.395 { 00:12:46.395 "trtype": "TCP", 00:12:46.395 "adrfam": "IPv4", 00:12:46.395 "traddr": "10.0.0.2", 
00:12:46.395 "trsvcid": "4420" 00:12:46.395 } 00:12:46.395 ], 00:12:46.395 "allow_any_host": true, 00:12:46.395 "hosts": [], 00:12:46.395 "serial_number": "SPDK00000000000003", 00:12:46.395 "model_number": "SPDK bdev Controller", 00:12:46.395 "max_namespaces": 32, 00:12:46.395 "min_cntlid": 1, 00:12:46.395 "max_cntlid": 65519, 00:12:46.395 "namespaces": [ 00:12:46.395 { 00:12:46.395 "nsid": 1, 00:12:46.395 "bdev_name": "Null3", 00:12:46.395 "name": "Null3", 00:12:46.395 "nguid": "FB38633FF9584B27AA33B63A90B23E75", 00:12:46.395 "uuid": "fb38633f-f958-4b27-aa33-b63a90b23e75" 00:12:46.395 } 00:12:46.395 ] 00:12:46.395 }, 00:12:46.395 { 00:12:46.395 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:12:46.395 "subtype": "NVMe", 00:12:46.395 "listen_addresses": [ 00:12:46.395 { 00:12:46.395 "trtype": "TCP", 00:12:46.395 "adrfam": "IPv4", 00:12:46.395 "traddr": "10.0.0.2", 00:12:46.395 "trsvcid": "4420" 00:12:46.396 } 00:12:46.396 ], 00:12:46.396 "allow_any_host": true, 00:12:46.396 "hosts": [], 00:12:46.396 "serial_number": "SPDK00000000000004", 00:12:46.396 "model_number": "SPDK bdev Controller", 00:12:46.396 "max_namespaces": 32, 00:12:46.396 "min_cntlid": 1, 00:12:46.396 "max_cntlid": 65519, 00:12:46.396 "namespaces": [ 00:12:46.396 { 00:12:46.396 "nsid": 1, 00:12:46.396 "bdev_name": "Null4", 00:12:46.396 "name": "Null4", 00:12:46.396 "nguid": "6BC3AE0C7D9C451AAE5CE38FAEE27D3D", 00:12:46.396 "uuid": "6bc3ae0c-7d9c-451a-ae5c-e38faee27d3d" 00:12:46.396 } 00:12:46.396 ] 00:12:46.396 } 00:12:46.396 ] 00:12:46.396 00:50:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.396 00:50:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:12:46.396 00:50:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:46.396 00:50:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:46.396 00:50:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.396 00:50:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:46.396 00:50:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.396 00:50:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:12:46.396 00:50:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.396 00:50:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:46.396 00:50:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.396 00:50:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:46.396 00:50:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:12:46.396 00:50:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.396 00:50:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:46.396 00:50:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.396 00:50:18 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:12:46.396 00:50:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.396 00:50:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:46.396 00:50:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.396 00:50:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:46.396 00:50:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:12:46.396 00:50:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.396 00:50:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:46.396 00:50:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.396 00:50:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:12:46.396 00:50:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.396 00:50:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:46.396 00:50:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.396 00:50:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:46.396 00:50:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:12:46.396 00:50:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.396 00:50:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:46.396 00:50:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.396 00:50:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:12:46.396 00:50:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.396 00:50:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:46.396 00:50:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.396 00:50:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:12:46.396 00:50:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.396 00:50:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:46.396 00:50:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.396 00:50:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:12:46.396 00:50:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:12:46.396 00:50:19 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.396 00:50:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:46.396 00:50:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.396 00:50:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:12:46.396 00:50:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:12:46.396 00:50:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:12:46.396 00:50:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:12:46.396 00:50:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:46.396 00:50:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # sync 00:12:46.396 00:50:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:46.396 00:50:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set +e 00:12:46.396 00:50:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:46.396 00:50:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:46.396 rmmod nvme_tcp 00:12:46.396 rmmod nvme_fabrics 00:12:46.396 rmmod nvme_keyring 00:12:46.655 00:50:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:46.655 00:50:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@128 -- # set -e 00:12:46.655 00:50:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@129 -- # return 0 00:12:46.655 00:50:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@517 -- # '[' -n 2901904 ']' 00:12:46.655 00:50:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@518 -- # killprocess 2901904 00:12:46.655 00:50:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # '[' -z 2901904 ']' 00:12:46.655 00:50:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@958 -- # kill -0 2901904 00:12:46.655 00:50:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # uname 00:12:46.655 00:50:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:46.655 00:50:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2901904 00:12:46.655 00:50:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:46.655 00:50:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:46.655 00:50:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2901904' 00:12:46.655 killing process with pid 2901904 00:12:46.655 00:50:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@973 -- # kill 2901904 00:12:46.655 00:50:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@978 -- # wait 2901904 00:12:47.590 00:50:20 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:47.590 00:50:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:47.590 00:50:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:47.590 00:50:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # iptr 00:12:47.590 00:50:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-save 00:12:47.590 00:50:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:47.590 00:50:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:12:47.590 00:50:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:47.590 00:50:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:47.590 00:50:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:47.590 00:50:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:47.590 00:50:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:50.127 00:50:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:50.127 00:12:50.127 real 0m7.316s 00:12:50.127 user 0m9.878s 00:12:50.127 sys 0m2.092s 00:12:50.127 00:50:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:50.127 00:50:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:50.127 ************************************ 00:12:50.127 END TEST nvmf_target_discovery 00:12:50.127 ************************************ 00:12:50.127 00:50:22 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:12:50.127 00:50:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:50.127 00:50:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:50.127 00:50:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:50.127 ************************************ 00:12:50.127 START TEST nvmf_referrals 00:12:50.127 ************************************ 00:12:50.127 00:50:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:12:50.127 * Looking for test storage... 
00:12:50.127 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:50.127 00:50:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:50.127 00:50:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1711 -- # lcov --version 00:12:50.127 00:50:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:50.127 00:50:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:50.127 00:50:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:50.127 00:50:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:50.127 00:50:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:50.127 00:50:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # IFS=.-: 00:12:50.127 00:50:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # read -ra ver1 00:12:50.127 00:50:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # IFS=.-: 00:12:50.127 00:50:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # read -ra ver2 00:12:50.127 00:50:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@338 -- # local 'op=<' 00:12:50.127 00:50:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@340 -- # ver1_l=2 00:12:50.127 00:50:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@341 -- # ver2_l=1 00:12:50.127 00:50:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:50.127 00:50:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@344 -- # case "$op" in 00:12:50.127 00:50:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@345 -- # : 1 00:12:50.127 00:50:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:50.127 00:50:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:50.127 00:50:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # decimal 1 00:12:50.127 00:50:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=1 00:12:50.127 00:50:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:50.127 00:50:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 1 00:12:50.127 00:50:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # ver1[v]=1 00:12:50.127 00:50:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # decimal 2 00:12:50.127 00:50:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=2 00:12:50.127 00:50:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:50.127 00:50:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 2 00:12:50.127 00:50:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # ver2[v]=2 00:12:50.127 00:50:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:50.127 00:50:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:50.127 00:50:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # return 0 00:12:50.127 00:50:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:50.127 00:50:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:50.127 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:50.127 --rc genhtml_branch_coverage=1 00:12:50.127 --rc genhtml_function_coverage=1 00:12:50.127 --rc genhtml_legend=1 00:12:50.127 --rc geninfo_all_blocks=1 00:12:50.127 --rc geninfo_unexecuted_blocks=1 00:12:50.127 00:12:50.127 ' 00:12:50.127 00:50:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:50.127 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:50.127 --rc genhtml_branch_coverage=1 00:12:50.127 --rc genhtml_function_coverage=1 00:12:50.127 --rc genhtml_legend=1 00:12:50.127 --rc geninfo_all_blocks=1 00:12:50.127 --rc geninfo_unexecuted_blocks=1 00:12:50.127 00:12:50.127 ' 00:12:50.127 00:50:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:50.127 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:50.127 --rc genhtml_branch_coverage=1 00:12:50.127 --rc genhtml_function_coverage=1 00:12:50.127 --rc genhtml_legend=1 00:12:50.127 --rc geninfo_all_blocks=1 00:12:50.127 --rc geninfo_unexecuted_blocks=1 00:12:50.127 00:12:50.127 ' 00:12:50.127 00:50:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:50.127 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:50.127 --rc genhtml_branch_coverage=1 00:12:50.127 --rc genhtml_function_coverage=1 00:12:50.127 --rc genhtml_legend=1 00:12:50.127 --rc geninfo_all_blocks=1 00:12:50.127 --rc geninfo_unexecuted_blocks=1 00:12:50.127 00:12:50.127 ' 00:12:50.127 00:50:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:50.127 00:50:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- 
# uname -s 00:12:50.127 00:50:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:50.128 00:50:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:50.128 00:50:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:50.128 00:50:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:50.128 00:50:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:50.128 00:50:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:50.128 00:50:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:50.128 00:50:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:50.128 00:50:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:50.128 00:50:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:50.128 00:50:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:50.128 00:50:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:12:50.128 00:50:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:50.128 00:50:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:50.128 00:50:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:50.128 00:50:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:50.128 00:50:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:50.128 00:50:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@15 -- # shopt -s extglob 00:12:50.128 00:50:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:50.128 00:50:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:50.128 00:50:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:50.128 00:50:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:50.128 00:50:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:50.128 00:50:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:50.128 00:50:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:12:50.128 00:50:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:50.128 00:50:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # : 0 00:12:50.128 00:50:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:50.128 00:50:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:50.128 00:50:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:50.128 00:50:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:50.128 00:50:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:50.128 00:50:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:50.128 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:50.128 00:50:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:50.128 00:50:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:50.128 00:50:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:50.128 00:50:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:12:50.128 00:50:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 
00:12:50.128 00:50:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:12:50.128 00:50:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:12:50.128 00:50:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:12:50.128 00:50:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:12:50.128 00:50:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:12:50.128 00:50:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:50.128 00:50:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:50.128 00:50:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:50.128 00:50:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:50.128 00:50:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:50.128 00:50:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:50.128 00:50:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:50.128 00:50:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:50.128 00:50:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:50.128 00:50:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:50.128 00:50:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@309 -- # xtrace_disable 00:12:50.128 00:50:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:52.031 00:50:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:52.031 00:50:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # pci_devs=() 00:12:52.031 00:50:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:52.031 00:50:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:52.031 00:50:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:52.031 00:50:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:52.031 00:50:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:52.031 00:50:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # net_devs=() 00:12:52.031 00:50:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:52.031 00:50:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # e810=() 00:12:52.031 00:50:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # local -ga e810 00:12:52.031 00:50:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # x722=() 00:12:52.031 00:50:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # local -ga x722 00:12:52.031 00:50:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # mlx=() 00:12:52.031 00:50:24 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # local -ga mlx 00:12:52.031 00:50:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:52.031 00:50:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:52.031 00:50:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:52.031 00:50:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:52.031 00:50:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:52.031 00:50:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:52.031 00:50:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:52.031 00:50:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:52.031 00:50:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:52.031 00:50:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:52.031 00:50:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:52.031 00:50:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:52.031 00:50:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:52.031 00:50:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:52.031 00:50:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:52.031 00:50:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:52.031 00:50:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:52.031 00:50:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:52.031 00:50:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:52.031 00:50:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:12:52.031 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:12:52.031 00:50:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:52.031 00:50:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:52.031 00:50:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:52.031 00:50:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:52.031 00:50:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:52.031 00:50:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:52.031 00:50:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:12:52.031 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:12:52.031 
00:50:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:52.031 00:50:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:52.031 00:50:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:52.031 00:50:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:52.031 00:50:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:52.031 00:50:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:52.031 00:50:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:52.031 00:50:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:52.031 00:50:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:52.031 00:50:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:52.031 00:50:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:52.031 00:50:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:52.031 00:50:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:52.031 00:50:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:52.031 00:50:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:52.031 00:50:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:12:52.031 Found net devices under 0000:0a:00.0: cvl_0_0 00:12:52.031 00:50:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:52.031 00:50:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:52.031 00:50:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:52.031 00:50:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:52.031 00:50:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:52.031 00:50:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:52.031 00:50:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:52.031 00:50:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:52.031 00:50:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:12:52.031 Found net devices under 0000:0a:00.1: cvl_0_1 00:12:52.031 00:50:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:52.031 00:50:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:52.031 00:50:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # is_hw=yes 00:12:52.031 00:50:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:52.031 00:50:24 
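
The trace above is nvmf/common.sh enumerating NICs the harness supports: it fills the e810/x722/mlx arrays from PCI vendor:device IDs, keeps the family the job asked for (SPDK_TEST_NVMF_NICS=e810 here), and then resolves each PCI function to its kernel net device through sysfs. A minimal sketch of that resolution step, assuming the PCI addresses printed above (0000:0a:00.0 and 0000:0a:00.1) and nothing else from the real script:

    # Map an E810 PCI function to its bound net device via sysfs (sketch;
    # the PCI addresses are taken from this log, the loop is illustrative).
    for pci in 0000:0a:00.0 0000:0a:00.1; do
        for netdir in "/sys/bus/pci/devices/$pci/net/"*; do
            [ -e "$netdir" ] || continue        # skip functions with no netdev bound
            echo "Found net device under $pci: ${netdir##*/}"
        done
    done
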
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:52.031 00:50:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:52.031 00:50:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:52.031 00:50:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:52.031 00:50:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:52.031 00:50:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:52.031 00:50:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:52.031 00:50:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:52.031 00:50:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:52.031 00:50:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:52.031 00:50:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:52.031 00:50:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:52.031 00:50:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:52.031 00:50:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:52.031 00:50:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:52.031 00:50:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:52.031 00:50:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:52.290 00:50:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:52.290 00:50:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:52.290 00:50:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:52.290 00:50:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:52.290 00:50:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:52.290 00:50:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:52.290 00:50:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:52.290 00:50:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:52.290 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:52.290 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.260 ms 00:12:52.290 00:12:52.290 --- 10.0.0.2 ping statistics --- 00:12:52.290 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:52.290 rtt min/avg/max/mdev = 0.260/0.260/0.260/0.000 ms 00:12:52.290 00:50:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:52.290 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:52.290 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.135 ms 00:12:52.290 00:12:52.290 --- 10.0.0.1 ping statistics --- 00:12:52.290 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:52.290 rtt min/avg/max/mdev = 0.135/0.135/0.135/0.000 ms 00:12:52.290 00:50:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:52.290 00:50:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@450 -- # return 0 00:12:52.290 00:50:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:52.290 00:50:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:52.290 00:50:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:52.290 00:50:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:52.290 00:50:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:52.290 00:50:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:52.290 00:50:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:52.290 00:50:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:12:52.290 00:50:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:52.290 00:50:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:52.290 00:50:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:52.290 00:50:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@509 -- # nvmfpid=2904202 00:12:52.290 00:50:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:52.290 00:50:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@510 -- # waitforlisten 2904202 00:12:52.290 00:50:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@835 -- # '[' -z 2904202 ']' 00:12:52.291 00:50:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:52.291 00:50:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:52.291 00:50:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:52.291 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
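
The block above is nvmf_tcp_init plus nvmfappstart condensed: one port of the card (cvl_0_0) is moved into a private network namespace and becomes the target side at 10.0.0.2, the other port (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, an iptables rule admits NVMe/TCP traffic on 4420, connectivity is ping-checked in both directions, and nvmf_tgt is started inside the namespace with core mask 0xF. A condensed sketch of the same steps, assuming those interface names and addresses (the nvmf_tgt path is the one printed in the log):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port lives in the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
             -m comment --comment 'SPDK_NVMF:...'        # tagged so teardown can strip it later
    ping -c 1 10.0.0.2                                   # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator
    modprobe nvme-tcp
    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    # the harness then waits for the RPC socket /var/tmp/spdk.sock to appear
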
00:12:52.291 00:50:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:52.291 00:50:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:52.291 [2024-12-06 00:50:24.950389] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 24.03.0 initialization... 00:12:52.291 [2024-12-06 00:50:24.950553] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:52.552 [2024-12-06 00:50:25.116280] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:52.809 [2024-12-06 00:50:25.260408] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:52.809 [2024-12-06 00:50:25.260480] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:52.809 [2024-12-06 00:50:25.260506] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:52.809 [2024-12-06 00:50:25.260531] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:52.809 [2024-12-06 00:50:25.260552] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:52.809 [2024-12-06 00:50:25.263303] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:52.809 [2024-12-06 00:50:25.263373] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:52.809 [2024-12-06 00:50:25.263483] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:52.809 [2024-12-06 00:50:25.263488] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:53.373 00:50:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:53.373 00:50:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@868 -- # return 0 00:12:53.373 00:50:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:53.373 00:50:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:53.373 00:50:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:53.373 00:50:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:53.373 00:50:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:53.373 00:50:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.373 00:50:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:53.373 [2024-12-06 00:50:26.009849] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:53.373 00:50:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.373 00:50:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:12:53.373 00:50:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.373 00:50:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 
00:12:53.373 [2024-12-06 00:50:26.030248] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:12:53.373 00:50:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.373 00:50:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:12:53.373 00:50:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.373 00:50:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:53.373 00:50:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.373 00:50:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:12:53.373 00:50:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.373 00:50:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:53.373 00:50:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.373 00:50:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:12:53.373 00:50:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.373 00:50:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:53.373 00:50:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.373 00:50:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:53.373 00:50:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:12:53.373 00:50:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.373 00:50:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:53.373 00:50:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.629 00:50:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:12:53.630 00:50:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:12:53.630 00:50:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:53.630 00:50:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:53.630 00:50:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:53.630 00:50:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.630 00:50:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:53.630 00:50:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:53.630 00:50:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.630 00:50:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:12:53.630 00:50:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
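
On the RPC side the referrals test does four things in the trace above: it creates the TCP transport, exposes the discovery service on 10.0.0.2:8009, registers three discovery referrals pointing at 127.0.0.2 through 127.0.0.4 on port 4430, and counts them back. rpc_cmd in these scripts appears to wrap SPDK's scripts/rpc.py against the default /var/tmp/spdk.sock socket; a sketch using rpc.py directly, mirroring the exact arguments from the trace:

    RPC=./scripts/rpc.py                                 # assumption: default RPC socket
    $RPC nvmf_create_transport -t tcp -o -u 8192
    $RPC nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery
    for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
        $RPC nvmf_discovery_add_referral -t tcp -a "$ip" -s 4430
    done
    $RPC nvmf_discovery_get_referrals | jq length                       # expect 3
    $RPC nvmf_discovery_get_referrals | jq -r '.[].address.traddr' | sort
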
target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:12:53.630 00:50:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:12:53.630 00:50:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:53.630 00:50:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:53.630 00:50:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:53.630 00:50:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:53.630 00:50:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:53.630 00:50:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:12:53.630 00:50:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:12:53.630 00:50:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:12:53.630 00:50:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.630 00:50:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:53.630 00:50:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.630 00:50:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:12:53.630 00:50:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.630 00:50:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:53.630 00:50:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.630 00:50:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:12:53.630 00:50:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.630 00:50:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:53.630 00:50:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.630 00:50:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:53.630 00:50:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:12:53.630 00:50:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.630 00:50:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:53.630 00:50:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.887 00:50:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:12:53.887 00:50:26 
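
The counterpart check runs from the initiator: get_referral_ips nvme queries the discovery service itself and expects the same three addresses. A sketch of that path, using the host identity and jq filter shown in the trace (the UUID is the one this run generated with nvme gen-hostnqn):

    nvme discover \
        --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
        --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 \
        -t tcp -a 10.0.0.2 -s 8009 -o json |
      jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' |
      sort
    # -> 127.0.0.2 127.0.0.3 127.0.0.4 while the referrals are registered,
    #    and empty output again after nvmf_discovery_remove_referral deletes them
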
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:12:53.887 00:50:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:53.887 00:50:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:53.887 00:50:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:53.887 00:50:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:53.887 00:50:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:53.887 00:50:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:12:53.887 00:50:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:12:53.887 00:50:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:12:53.887 00:50:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.887 00:50:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:53.887 00:50:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.887 00:50:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:12:53.887 00:50:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.887 00:50:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:54.144 00:50:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.144 00:50:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:12:54.144 00:50:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:54.144 00:50:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:54.144 00:50:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:54.144 00:50:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.144 00:50:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:54.145 00:50:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:54.145 00:50:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.145 00:50:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:12:54.145 00:50:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:12:54.145 00:50:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:12:54.145 00:50:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == 
\r\p\c ]] 00:12:54.145 00:50:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:54.145 00:50:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:54.145 00:50:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:54.145 00:50:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:54.145 00:50:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:12:54.145 00:50:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:12:54.145 00:50:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:12:54.145 00:50:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:12:54.145 00:50:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:12:54.145 00:50:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:54.145 00:50:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:12:54.403 00:50:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:12:54.403 00:50:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:12:54.403 00:50:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:12:54.403 00:50:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:12:54.403 00:50:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:54.403 00:50:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:12:54.661 00:50:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:12:54.661 00:50:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:12:54.661 00:50:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.661 00:50:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:54.661 00:50:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.661 00:50:27 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:12:54.661 00:50:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:54.661 00:50:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:54.661 00:50:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.661 00:50:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:54.661 00:50:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:54.661 00:50:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:54.661 00:50:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.661 00:50:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:12:54.661 00:50:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:12:54.661 00:50:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:12:54.661 00:50:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:54.661 00:50:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:54.661 00:50:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:54.661 00:50:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:54.661 00:50:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:54.919 00:50:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:12:54.919 00:50:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:12:54.919 00:50:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:12:54.919 00:50:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:12:54.919 00:50:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:12:54.919 00:50:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:54.919 00:50:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:12:54.919 00:50:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:12:54.919 00:50:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:12:54.919 00:50:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:12:54.919 00:50:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 
'subtype=discovery subsystem referral' 00:12:54.919 00:50:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:54.919 00:50:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:12:55.177 00:50:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:12:55.177 00:50:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:12:55.177 00:50:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.177 00:50:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:55.177 00:50:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.177 00:50:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:55.177 00:50:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:12:55.177 00:50:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.177 00:50:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:55.177 00:50:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.177 00:50:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:12:55.177 00:50:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:12:55.177 00:50:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:55.177 00:50:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:55.177 00:50:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:55.177 00:50:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:55.177 00:50:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:55.436 00:50:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:12:55.436 00:50:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:12:55.436 00:50:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:12:55.436 00:50:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:12:55.436 00:50:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:55.436 00:50:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # sync 00:12:55.436 00:50:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 
00:12:55.436 00:50:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # set +e 00:12:55.436 00:50:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:55.436 00:50:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:55.436 rmmod nvme_tcp 00:12:55.436 rmmod nvme_fabrics 00:12:55.436 rmmod nvme_keyring 00:12:55.436 00:50:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:55.436 00:50:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@128 -- # set -e 00:12:55.436 00:50:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@129 -- # return 0 00:12:55.436 00:50:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@517 -- # '[' -n 2904202 ']' 00:12:55.436 00:50:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@518 -- # killprocess 2904202 00:12:55.436 00:50:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # '[' -z 2904202 ']' 00:12:55.436 00:50:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@958 -- # kill -0 2904202 00:12:55.436 00:50:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # uname 00:12:55.436 00:50:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:55.436 00:50:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2904202 00:12:55.436 00:50:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:55.436 00:50:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:55.436 00:50:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2904202' 00:12:55.436 killing process with pid 2904202 00:12:55.436 00:50:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@973 -- # kill 2904202 00:12:55.436 00:50:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@978 -- # wait 2904202 00:12:56.812 00:50:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:56.812 00:50:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:56.812 00:50:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:56.812 00:50:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # iptr 00:12:56.812 00:50:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-save 00:12:56.812 00:50:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:56.812 00:50:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-restore 00:12:56.812 00:50:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:56.812 00:50:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:56.813 00:50:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:56.813 00:50:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:56.813 00:50:29 
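
nvmftestfini then unwinds what the setup created: the nvme-tcp and nvme-fabrics modules are unloaded, the nvmf_tgt process (pid 2904202 in this run) is killed and reaped, the SPDK_NVMF-tagged iptables rule is stripped by re-filtering the saved ruleset, and the target namespace is torn down. A condensed sketch under the same names (the explicit ip netns delete is an assumption about what remove_spdk_ns amounts to):

    modprobe -v -r nvme-tcp
    modprobe -v -r nvme-fabrics
    kill "$nvmfpid" && wait "$nvmfpid"                    # nvmfpid=2904202 in this log
    iptables-save | grep -v SPDK_NVMF | iptables-restore  # drop only the tagged rule
    ip netns delete cvl_0_0_ns_spdk                       # assumption, see lead-in
    ip -4 addr flush cvl_0_1
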
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:58.762 00:50:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:58.762 00:12:58.762 real 0m8.903s 00:12:58.762 user 0m16.560s 00:12:58.762 sys 0m2.506s 00:12:58.762 00:50:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:58.762 00:50:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:58.762 ************************************ 00:12:58.762 END TEST nvmf_referrals 00:12:58.762 ************************************ 00:12:58.762 00:50:31 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:12:58.762 00:50:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:58.762 00:50:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:58.762 00:50:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:58.762 ************************************ 00:12:58.762 START TEST nvmf_connect_disconnect 00:12:58.762 ************************************ 00:12:58.762 00:50:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:12:58.762 * Looking for test storage... 00:12:58.762 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:58.762 00:50:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:58.762 00:50:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1711 -- # lcov --version 00:12:58.762 00:50:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:59.073 00:50:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:59.073 00:50:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:59.073 00:50:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:59.073 00:50:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:59.073 00:50:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:12:59.073 00:50:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:12:59.073 00:50:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:12:59.073 00:50:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:12:59.073 00:50:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:12:59.073 00:50:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:12:59.073 00:50:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:12:59.073 00:50:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:59.073 00:50:31 
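
Between the two tests the harness prints a timing summary plus the END TEST / START TEST banners seen above; run_test is the wrapper that does this around every per-test script. A minimal stand-in for that visible behaviour (illustrative only, not SPDK's actual run_test implementation):

    run_test_sketch() {                # hypothetical helper mimicking the banners in this log
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
    }
    run_test_sketch nvmf_connect_disconnect \
        ./test/nvmf/target/connect_disconnect.sh --transport=tcp
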
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:12:59.073 00:50:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@345 -- # : 1 00:12:59.073 00:50:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:59.073 00:50:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:59.073 00:50:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # decimal 1 00:12:59.073 00:50:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=1 00:12:59.073 00:50:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:59.073 00:50:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 1 00:12:59.073 00:50:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:12:59.073 00:50:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # decimal 2 00:12:59.073 00:50:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=2 00:12:59.073 00:50:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:59.073 00:50:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 2 00:12:59.073 00:50:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:12:59.073 00:50:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:59.073 00:50:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:59.073 00:50:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # return 0 00:12:59.073 00:50:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:59.074 00:50:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:59.074 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:59.074 --rc genhtml_branch_coverage=1 00:12:59.074 --rc genhtml_function_coverage=1 00:12:59.074 --rc genhtml_legend=1 00:12:59.074 --rc geninfo_all_blocks=1 00:12:59.074 --rc geninfo_unexecuted_blocks=1 00:12:59.074 00:12:59.074 ' 00:12:59.074 00:50:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:59.074 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:59.074 --rc genhtml_branch_coverage=1 00:12:59.074 --rc genhtml_function_coverage=1 00:12:59.074 --rc genhtml_legend=1 00:12:59.074 --rc geninfo_all_blocks=1 00:12:59.074 --rc geninfo_unexecuted_blocks=1 00:12:59.074 00:12:59.074 ' 00:12:59.074 00:50:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:59.074 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:59.074 --rc genhtml_branch_coverage=1 00:12:59.074 --rc genhtml_function_coverage=1 00:12:59.074 --rc genhtml_legend=1 00:12:59.074 --rc geninfo_all_blocks=1 00:12:59.074 --rc geninfo_unexecuted_blocks=1 00:12:59.074 00:12:59.074 ' 00:12:59.074 00:50:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
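
The cmp_versions machinery traced above splits dotted version strings into fields (IFS set to ".-:") and compares them numerically field by field; here it decides whether the installed lcov is older than 2.x so the right coverage options can be exported. A field-wise sketch of the same idea, assuming purely numeric fields (it is not the exact scripts/common.sh code):

    version_lt() {
        # return 0 (true) when $1 sorts before $2
        local IFS=.-:
        local -a a b
        read -ra a <<< "$1"
        read -ra b <<< "$2"
        local i x y
        for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
            x=${a[i]:-0}; y=${b[i]:-0}
            (( x < y )) && return 0
            (( x > y )) && return 1
        done
        return 1      # versions are equal
    }
    version_lt 1.15 2 && echo "lcov older than 2.x: use legacy LCOV_OPTS"
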
common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:59.074 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:59.074 --rc genhtml_branch_coverage=1 00:12:59.074 --rc genhtml_function_coverage=1 00:12:59.074 --rc genhtml_legend=1 00:12:59.074 --rc geninfo_all_blocks=1 00:12:59.074 --rc geninfo_unexecuted_blocks=1 00:12:59.074 00:12:59.074 ' 00:12:59.074 00:50:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:59.074 00:50:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:12:59.074 00:50:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:59.074 00:50:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:59.074 00:50:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:59.074 00:50:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:59.074 00:50:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:59.074 00:50:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:59.074 00:50:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:59.074 00:50:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:59.074 00:50:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:59.074 00:50:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:59.074 00:50:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:59.074 00:50:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:12:59.074 00:50:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:59.074 00:50:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:59.074 00:50:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:59.074 00:50:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:59.074 00:50:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:59.074 00:50:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:12:59.074 00:50:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:59.074 00:50:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:59.074 00:50:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:59.074 00:50:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 
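
Every nvme-cli call in these tests carries the same host identity, generated once per run while nvmf/common.sh is sourced above. A sketch of that setup; deriving the host ID from the NQN suffix is an assumption, the log only shows that the two values end up sharing the same UUID:

    NVME_HOSTNQN=$(nvme gen-hostnqn)       # e.g. nqn.2014-08.org.nvmexpress:uuid:5b23e107-...
    NVME_HOSTID=${NVME_HOSTNQN##*:}        # keep just the UUID part (assumption)
    NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
    nvme discover "${NVME_HOST[@]}" -t tcp -a 10.0.0.2 -s 8009 -o json
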
-- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:59.074 00:50:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:59.074 00:50:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:59.074 00:50:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:12:59.074 00:50:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:59.074 00:50:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # : 0 00:12:59.074 00:50:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:59.074 00:50:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:59.074 00:50:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:59.074 00:50:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:59.074 00:50:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:59.074 00:50:31 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:59.074 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:59.074 00:50:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:59.074 00:50:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:59.074 00:50:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:59.074 00:50:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:59.074 00:50:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:59.074 00:50:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:12:59.074 00:50:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:59.074 00:50:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:59.074 00:50:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:59.074 00:50:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:59.074 00:50:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:59.074 00:50:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:59.074 00:50:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:59.074 00:50:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:59.074 00:50:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:59.074 00:50:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:59.074 00:50:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:12:59.074 00:50:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:00.973 00:50:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:00.973 00:50:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:13:00.973 00:50:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:00.973 00:50:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:00.973 00:50:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:00.973 00:50:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:00.973 00:50:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:00.973 00:50:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:13:00.973 00:50:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:00.973 
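
One detail in the block above: the line "[: : integer expression expected" is not test output but a real bash diagnostic from nvmf/common.sh line 33, where a numeric test of the form [ "$X" -eq 1 ] runs while the variable is empty; test cannot parse the empty string as an integer, prints the complaint, and returns false, which is the branch the script wants anyway. A guarded form avoids the noise (SOME_FLAG below is a stand-in name, not the actual variable used by nvmf/common.sh):

    # Noisy: an empty value is not an integer
    [ "$SOME_FLAG" -eq 1 ] && echo enabled       # -> "[: : integer expression expected"
    # Quiet: default the value before comparing
    [ "${SOME_FLAG:-0}" -eq 1 ] && echo enabled
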
00:50:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # e810=() 00:13:00.973 00:50:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:13:00.973 00:50:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # x722=() 00:13:00.973 00:50:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:13:00.973 00:50:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:13:00.973 00:50:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:13:00.973 00:50:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:00.973 00:50:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:00.973 00:50:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:00.973 00:50:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:00.973 00:50:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:00.973 00:50:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:00.973 00:50:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:00.973 00:50:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:00.973 00:50:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:00.973 00:50:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:00.973 00:50:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:00.973 00:50:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:00.973 00:50:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:00.973 00:50:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:00.973 00:50:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:00.973 00:50:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:00.973 00:50:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:00.973 00:50:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:00.973 00:50:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:00.973 00:50:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:13:00.973 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:13:00.973 00:50:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:00.973 
00:50:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:00.973 00:50:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:00.973 00:50:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:00.973 00:50:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:00.973 00:50:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:00.973 00:50:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:13:00.973 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:13:00.973 00:50:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:00.973 00:50:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:00.973 00:50:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:00.973 00:50:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:00.973 00:50:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:00.973 00:50:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:00.973 00:50:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:00.973 00:50:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:00.973 00:50:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:00.973 00:50:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:00.973 00:50:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:00.973 00:50:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:00.973 00:50:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:00.973 00:50:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:00.973 00:50:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:00.974 00:50:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:13:00.974 Found net devices under 0000:0a:00.0: cvl_0_0 00:13:00.974 00:50:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:00.974 00:50:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:00.974 00:50:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:00.974 00:50:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:00.974 00:50:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 
00:13:00.974 00:50:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:00.974 00:50:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:00.974 00:50:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:00.974 00:50:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:13:00.974 Found net devices under 0000:0a:00.1: cvl_0_1 00:13:00.974 00:50:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:00.974 00:50:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:00.974 00:50:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:13:00.974 00:50:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:00.974 00:50:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:00.974 00:50:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:00.974 00:50:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:00.974 00:50:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:00.974 00:50:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:00.974 00:50:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:00.974 00:50:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:00.974 00:50:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:00.974 00:50:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:00.974 00:50:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:00.974 00:50:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:00.974 00:50:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:00.974 00:50:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:00.974 00:50:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:00.974 00:50:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:00.974 00:50:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:00.974 00:50:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:01.232 00:50:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:01.232 00:50:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev 
cvl_0_0 00:13:01.232 00:50:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:01.232 00:50:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:01.232 00:50:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:01.232 00:50:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:01.232 00:50:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:01.232 00:50:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:01.232 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:01.232 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.240 ms 00:13:01.232 00:13:01.232 --- 10.0.0.2 ping statistics --- 00:13:01.232 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:01.232 rtt min/avg/max/mdev = 0.240/0.240/0.240/0.000 ms 00:13:01.232 00:50:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:01.232 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:01.232 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.106 ms 00:13:01.232 00:13:01.232 --- 10.0.0.1 ping statistics --- 00:13:01.232 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:01.232 rtt min/avg/max/mdev = 0.106/0.106/0.106/0.000 ms 00:13:01.232 00:50:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:01.232 00:50:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # return 0 00:13:01.232 00:50:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:01.232 00:50:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:01.232 00:50:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:01.232 00:50:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:01.232 00:50:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:01.232 00:50:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:01.232 00:50:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:01.232 00:50:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:13:01.232 00:50:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:01.232 00:50:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:01.232 00:50:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:01.232 00:50:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@509 -- # nvmfpid=2906730 00:13:01.232 00:50:33 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:01.232 00:50:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@510 -- # waitforlisten 2906730 00:13:01.232 00:50:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # '[' -z 2906730 ']' 00:13:01.232 00:50:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:01.232 00:50:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:01.232 00:50:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:01.232 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:01.232 00:50:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:01.232 00:50:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:01.490 [2024-12-06 00:50:33.942604] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 24.03.0 initialization... 00:13:01.490 [2024-12-06 00:50:33.942738] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:01.490 [2024-12-06 00:50:34.086639] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:01.748 [2024-12-06 00:50:34.229962] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:01.748 [2024-12-06 00:50:34.230061] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:01.748 [2024-12-06 00:50:34.230087] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:01.748 [2024-12-06 00:50:34.230112] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:01.748 [2024-12-06 00:50:34.230133] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:13:01.748 [2024-12-06 00:50:34.233310] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:01.748 [2024-12-06 00:50:34.233391] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:01.748 [2024-12-06 00:50:34.233446] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:01.748 [2024-12-06 00:50:34.233454] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:02.313 00:50:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:02.313 00:50:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@868 -- # return 0 00:13:02.313 00:50:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:02.313 00:50:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:02.314 00:50:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:02.314 00:50:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:02.314 00:50:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:13:02.314 00:50:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.314 00:50:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:02.314 [2024-12-06 00:50:34.919720] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:02.314 00:50:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.314 00:50:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:13:02.314 00:50:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.314 00:50:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:02.314 00:50:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.314 00:50:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:13:02.314 00:50:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:02.314 00:50:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.314 00:50:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:02.593 00:50:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.593 00:50:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:02.593 00:50:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.593 00:50:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:02.593 00:50:35 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.593 00:50:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:02.593 00:50:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.593 00:50:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:02.593 [2024-12-06 00:50:35.043506] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:02.593 00:50:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.594 00:50:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 1 -eq 1 ']' 00:13:02.594 00:50:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@27 -- # num_iterations=100 00:13:02.594 00:50:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@29 -- # NVME_CONNECT='nvme connect -i 8' 00:13:02.594 00:50:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:13:05.125 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:07.023 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:09.553 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:12.081 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:14.605 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:17.132 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:19.031 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:21.559 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:24.083 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:26.610 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:28.509 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:31.092 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:32.993 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:35.518 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:38.044 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:39.942 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:42.467 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:44.994 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:46.896 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:49.425 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:51.953 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:54.480 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:56.379 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:58.906 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:01.432 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:03.455 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:05.979 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:08.506 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:10.403 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:12.929 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:15.454 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:17.354 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:19.881 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:22.406 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:24.927 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:26.823 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:29.367 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:31.894 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:34.420 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:36.948 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:38.900 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:41.430 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:43.955 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:45.853 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:48.381 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:50.907 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:52.799 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:55.323 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:57.850 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:59.748 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:02.274 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:04.795 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:07.337 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:09.234 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:11.761 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:14.284 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:16.180 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:18.710 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:21.237 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:23.133 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:25.658 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:28.257 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:30.153 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:32.686 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:35.212 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:37.736 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:40.258 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:42.151 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:44.676 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:47.205 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:49.734 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:51.634 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:54.161 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:56.689 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:59.218 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:01.112 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:03.635 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:06.159 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:08.085 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 
controller(s) 00:16:10.671 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:13.203 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:15.730 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:17.627 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:20.156 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:22.685 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:24.578 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:27.103 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:29.629 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:31.529 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:34.049 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:36.571 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:39.098 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:41.625 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:43.521 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:46.046 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:48.573 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:50.470 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:53.089 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:55.613 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:58.140 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:58.140 00:54:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:16:58.140 00:54:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:16:58.140 00:54:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:58.140 00:54:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # sync 00:16:58.140 00:54:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:58.140 00:54:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set +e 00:16:58.140 00:54:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:58.140 00:54:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:58.140 rmmod nvme_tcp 00:16:58.140 rmmod nvme_fabrics 00:16:58.140 rmmod nvme_keyring 00:16:58.140 00:54:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:58.140 00:54:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@128 -- # set -e 00:16:58.140 00:54:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@129 -- # return 0 00:16:58.140 00:54:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@517 -- # '[' -n 2906730 ']' 00:16:58.140 00:54:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@518 -- # killprocess 2906730 00:16:58.140 00:54:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # '[' -z 2906730 ']' 00:16:58.140 00:54:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # kill -0 2906730 00:16:58.140 00:54:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # uname 
00:16:58.140 00:54:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:58.140 00:54:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2906730 00:16:58.140 00:54:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:58.140 00:54:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:58.140 00:54:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2906730' 00:16:58.140 killing process with pid 2906730 00:16:58.140 00:54:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@973 -- # kill 2906730 00:16:58.140 00:54:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@978 -- # wait 2906730 00:16:59.075 00:54:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:59.075 00:54:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:16:59.075 00:54:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:16:59.075 00:54:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # iptr 00:16:59.075 00:54:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:16:59.075 00:54:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:16:59.075 00:54:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:16:59.075 00:54:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:59.075 00:54:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:16:59.075 00:54:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:59.075 00:54:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:59.075 00:54:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:01.612 00:54:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:17:01.612 00:17:01.612 real 4m2.469s 00:17:01.612 user 15m14.464s 00:17:01.612 sys 0m41.415s 00:17:01.612 00:54:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:01.612 00:54:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:17:01.612 ************************************ 00:17:01.612 END TEST nvmf_connect_disconnect 00:17:01.612 ************************************ 00:17:01.612 00:54:33 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:17:01.612 00:54:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:01.612 00:54:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:01.612 00:54:33 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:01.612 ************************************ 00:17:01.612 START TEST nvmf_multitarget 00:17:01.612 ************************************ 00:17:01.612 00:54:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:17:01.612 * Looking for test storage... 00:17:01.612 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:01.612 00:54:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:17:01.612 00:54:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1711 -- # lcov --version 00:17:01.612 00:54:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:17:01.612 00:54:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:17:01.612 00:54:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:01.612 00:54:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:01.612 00:54:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:01.612 00:54:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # IFS=.-: 00:17:01.612 00:54:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # read -ra ver1 00:17:01.612 00:54:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # IFS=.-: 00:17:01.612 00:54:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # read -ra ver2 00:17:01.612 00:54:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@338 -- # local 'op=<' 00:17:01.612 00:54:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@340 -- # ver1_l=2 00:17:01.612 00:54:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@341 -- # ver2_l=1 00:17:01.612 00:54:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:01.612 00:54:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@344 -- # case "$op" in 00:17:01.612 00:54:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@345 -- # : 1 00:17:01.612 00:54:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:01.612 00:54:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:01.612 00:54:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # decimal 1 00:17:01.612 00:54:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=1 00:17:01.612 00:54:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:01.612 00:54:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 1 00:17:01.612 00:54:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # ver1[v]=1 00:17:01.612 00:54:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # decimal 2 00:17:01.612 00:54:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=2 00:17:01.612 00:54:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:01.612 00:54:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 2 00:17:01.612 00:54:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # ver2[v]=2 00:17:01.612 00:54:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:01.612 00:54:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:01.612 00:54:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # return 0 00:17:01.612 00:54:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:01.612 00:54:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:17:01.612 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:01.612 --rc genhtml_branch_coverage=1 00:17:01.612 --rc genhtml_function_coverage=1 00:17:01.612 --rc genhtml_legend=1 00:17:01.612 --rc geninfo_all_blocks=1 00:17:01.612 --rc geninfo_unexecuted_blocks=1 00:17:01.612 00:17:01.612 ' 00:17:01.612 00:54:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:17:01.612 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:01.612 --rc genhtml_branch_coverage=1 00:17:01.612 --rc genhtml_function_coverage=1 00:17:01.612 --rc genhtml_legend=1 00:17:01.612 --rc geninfo_all_blocks=1 00:17:01.612 --rc geninfo_unexecuted_blocks=1 00:17:01.612 00:17:01.612 ' 00:17:01.612 00:54:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:17:01.612 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:01.612 --rc genhtml_branch_coverage=1 00:17:01.612 --rc genhtml_function_coverage=1 00:17:01.612 --rc genhtml_legend=1 00:17:01.612 --rc geninfo_all_blocks=1 00:17:01.612 --rc geninfo_unexecuted_blocks=1 00:17:01.612 00:17:01.612 ' 00:17:01.612 00:54:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:17:01.612 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:01.612 --rc genhtml_branch_coverage=1 00:17:01.612 --rc genhtml_function_coverage=1 00:17:01.612 --rc genhtml_legend=1 00:17:01.613 --rc geninfo_all_blocks=1 00:17:01.613 --rc geninfo_unexecuted_blocks=1 00:17:01.613 00:17:01.613 ' 00:17:01.613 00:54:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:01.613 00:54:34 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:17:01.613 00:54:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:01.613 00:54:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:01.613 00:54:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:01.613 00:54:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:01.613 00:54:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:01.613 00:54:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:01.613 00:54:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:01.613 00:54:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:01.613 00:54:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:01.613 00:54:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:01.613 00:54:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:01.613 00:54:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:01.613 00:54:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:01.613 00:54:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:01.613 00:54:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:01.613 00:54:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:01.613 00:54:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:01.613 00:54:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@15 -- # shopt -s extglob 00:17:01.613 00:54:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:01.613 00:54:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:01.613 00:54:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:01.613 00:54:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:01.613 00:54:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:01.613 00:54:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:01.613 00:54:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:17:01.613 00:54:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:01.613 00:54:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # : 0 00:17:01.613 00:54:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:01.613 00:54:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:01.613 00:54:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:01.613 00:54:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:01.613 00:54:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:01.613 00:54:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:01.613 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:01.613 00:54:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:01.613 00:54:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:01.613 00:54:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:01.613 00:54:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:17:01.613 00:54:34 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:17:01.613 00:54:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:01.613 00:54:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:01.613 00:54:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:01.613 00:54:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:01.613 00:54:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:01.613 00:54:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:01.613 00:54:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:01.613 00:54:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:01.613 00:54:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:17:01.613 00:54:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:17:01.613 00:54:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@309 -- # xtrace_disable 00:17:01.613 00:54:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:17:03.511 00:54:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:03.511 00:54:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # pci_devs=() 00:17:03.511 00:54:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:03.511 00:54:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:03.511 00:54:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:03.511 00:54:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:03.511 00:54:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:03.511 00:54:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # net_devs=() 00:17:03.511 00:54:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:03.511 00:54:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # e810=() 00:17:03.511 00:54:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # local -ga e810 00:17:03.511 00:54:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # x722=() 00:17:03.511 00:54:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # local -ga x722 00:17:03.511 00:54:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # mlx=() 00:17:03.511 00:54:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # local -ga mlx 00:17:03.511 00:54:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:03.511 00:54:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:03.511 00:54:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 
00:17:03.511 00:54:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:03.511 00:54:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:03.511 00:54:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:03.511 00:54:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:03.511 00:54:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:03.511 00:54:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:03.511 00:54:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:03.511 00:54:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:03.511 00:54:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:03.511 00:54:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:03.511 00:54:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:17:03.511 00:54:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:17:03.511 00:54:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:17:03.511 00:54:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:17:03.511 00:54:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:03.511 00:54:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:03.511 00:54:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:17:03.511 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:17:03.511 00:54:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:03.511 00:54:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:03.511 00:54:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:03.511 00:54:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:03.511 00:54:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:03.511 00:54:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:03.511 00:54:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:17:03.511 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:17:03.511 00:54:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:03.511 00:54:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:03.511 00:54:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:03.511 00:54:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:17:03.511 00:54:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:03.511 00:54:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:03.511 00:54:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:17:03.511 00:54:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:17:03.511 00:54:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:03.511 00:54:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:03.511 00:54:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:03.511 00:54:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:03.511 00:54:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:03.511 00:54:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:03.511 00:54:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:03.511 00:54:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:17:03.511 Found net devices under 0000:0a:00.0: cvl_0_0 00:17:03.511 00:54:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:03.511 00:54:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:03.511 00:54:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:03.511 00:54:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:03.511 00:54:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:03.511 00:54:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:03.511 00:54:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:03.511 00:54:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:03.511 00:54:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:17:03.511 Found net devices under 0000:0a:00.1: cvl_0_1 00:17:03.512 00:54:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:03.512 00:54:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:17:03.512 00:54:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # is_hw=yes 00:17:03.512 00:54:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:17:03.512 00:54:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:17:03.512 00:54:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:17:03.512 00:54:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:03.512 00:54:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget 
-- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:03.512 00:54:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:03.512 00:54:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:03.512 00:54:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:03.512 00:54:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:03.512 00:54:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:03.512 00:54:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:03.512 00:54:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:17:03.512 00:54:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:03.512 00:54:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:03.512 00:54:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:03.512 00:54:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:03.512 00:54:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:03.512 00:54:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:03.512 00:54:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:03.512 00:54:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:03.512 00:54:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:03.512 00:54:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:03.512 00:54:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:03.512 00:54:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:03.512 00:54:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:03.512 00:54:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:17:03.512 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:03.512 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.273 ms 00:17:03.512 00:17:03.512 --- 10.0.0.2 ping statistics --- 00:17:03.512 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:03.512 rtt min/avg/max/mdev = 0.273/0.273/0.273/0.000 ms 00:17:03.512 00:54:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:03.512 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:03.512 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.080 ms 00:17:03.512 00:17:03.512 --- 10.0.0.1 ping statistics --- 00:17:03.512 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:03.512 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 00:17:03.512 00:54:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:03.512 00:54:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@450 -- # return 0 00:17:03.512 00:54:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:03.512 00:54:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:03.512 00:54:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:03.512 00:54:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:03.512 00:54:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:03.512 00:54:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:03.512 00:54:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:03.512 00:54:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:17:03.512 00:54:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:03.512 00:54:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:03.512 00:54:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:17:03.512 00:54:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@509 -- # nvmfpid=2939083 00:17:03.512 00:54:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:03.512 00:54:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@510 -- # waitforlisten 2939083 00:17:03.512 00:54:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@835 -- # '[' -z 2939083 ']' 00:17:03.512 00:54:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:03.512 00:54:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:03.512 00:54:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:03.512 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:03.512 00:54:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:03.512 00:54:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:17:03.769 [2024-12-06 00:54:36.283646] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 24.03.0 initialization... 
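Condensing the nvmf_tcp_init steps traced above: one port of the two-port E810 NIC (cvl_0_0) is moved into a private network namespace and becomes the target side at 10.0.0.2, the sibling port (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, TCP port 4420 is opened with a tagged iptables rule, reachability is verified with ping in both directions, and only then is nvmf_tgt launched inside the namespace. A hedged replay of those steps (interface names, addresses and flags as in the trace; the nvmf_tgt path is shortened):

    NS=cvl_0_0_ns_spdk
    ip netns add "$NS"
    ip link set cvl_0_0 netns "$NS"                      # target-side port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side stays in the root ns
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec "$NS" ip link set cvl_0_0 up
    ip netns exec "$NS" ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2                                   # initiator -> target
    ip netns exec "$NS" ping -c 1 10.0.0.1               # target -> initiator
    modprobe nvme-tcp
    ip netns exec "$NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &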
00:17:03.769 [2024-12-06 00:54:36.283792] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:03.769 [2024-12-06 00:54:36.438055] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:04.025 [2024-12-06 00:54:36.583633] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:04.025 [2024-12-06 00:54:36.583734] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:04.025 [2024-12-06 00:54:36.583760] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:04.025 [2024-12-06 00:54:36.583784] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:04.025 [2024-12-06 00:54:36.583805] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:04.025 [2024-12-06 00:54:36.586754] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:04.025 [2024-12-06 00:54:36.586790] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:04.025 [2024-12-06 00:54:36.586850] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:04.025 [2024-12-06 00:54:36.586855] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:17:04.590 00:54:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:04.590 00:54:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@868 -- # return 0 00:17:04.590 00:54:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:04.590 00:54:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:04.590 00:54:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:17:04.590 00:54:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:04.590 00:54:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:17:04.590 00:54:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:17:04.590 00:54:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:17:04.847 00:54:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:17:04.847 00:54:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:17:04.847 "nvmf_tgt_1" 00:17:04.847 00:54:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:17:05.104 "nvmf_tgt_2" 00:17:05.104 00:54:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 
00:17:05.104 00:54:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:17:05.104 00:54:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:17:05.104 00:54:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:17:05.361 true 00:17:05.361 00:54:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:17:05.361 true 00:17:05.361 00:54:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:17:05.361 00:54:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:17:05.361 00:54:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:17:05.361 00:54:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:17:05.361 00:54:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:17:05.361 00:54:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:05.361 00:54:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # sync 00:17:05.361 00:54:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:05.361 00:54:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set +e 00:17:05.361 00:54:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:05.361 00:54:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:05.361 rmmod nvme_tcp 00:17:05.618 rmmod nvme_fabrics 00:17:05.618 rmmod nvme_keyring 00:17:05.618 00:54:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:05.618 00:54:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@128 -- # set -e 00:17:05.618 00:54:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@129 -- # return 0 00:17:05.618 00:54:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@517 -- # '[' -n 2939083 ']' 00:17:05.618 00:54:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@518 -- # killprocess 2939083 00:17:05.618 00:54:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # '[' -z 2939083 ']' 00:17:05.618 00:54:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@958 -- # kill -0 2939083 00:17:05.618 00:54:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # uname 00:17:05.618 00:54:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:05.618 00:54:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2939083 00:17:05.618 00:54:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:05.618 00:54:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:05.618 00:54:38 
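The multitarget assertions above reduce to: ask the running target how many nvmf targets exist (one by default), create two more, confirm the count went to three, delete both, and confirm it fell back to one. A condensed sketch using the same RPC helper the test calls (path shortened from the trace):

    RPC=./test/nvmf/target/multitarget_rpc.py            # helper invoked in the trace

    [ "$($RPC nvmf_get_targets | jq length)" -eq 1 ]     # only the default target exists
    $RPC nvmf_create_target -n nvmf_tgt_1 -s 32          # -s 32: max subsystems, per the trace
    $RPC nvmf_create_target -n nvmf_tgt_2 -s 32
    [ "$($RPC nvmf_get_targets | jq length)" -eq 3 ]
    $RPC nvmf_delete_target -n nvmf_tgt_1
    $RPC nvmf_delete_target -n nvmf_tgt_2
    [ "$($RPC nvmf_get_targets | jq length)" -eq 1 ]     # back to just the default target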
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2939083' 00:17:05.618 killing process with pid 2939083 00:17:05.618 00:54:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@973 -- # kill 2939083 00:17:05.618 00:54:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@978 -- # wait 2939083 00:17:06.552 00:54:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:06.552 00:54:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:06.552 00:54:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:06.552 00:54:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # iptr 00:17:06.552 00:54:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-save 00:17:06.552 00:54:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:06.552 00:54:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-restore 00:17:06.552 00:54:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:06.552 00:54:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@302 -- # remove_spdk_ns 00:17:06.552 00:54:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:06.552 00:54:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:06.552 00:54:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:09.088 00:54:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:17:09.088 00:17:09.088 real 0m7.432s 00:17:09.088 user 0m11.773s 00:17:09.088 sys 0m2.135s 00:17:09.088 00:54:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:09.088 00:54:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:17:09.088 ************************************ 00:17:09.088 END TEST nvmf_multitarget 00:17:09.088 ************************************ 00:17:09.088 00:54:41 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:17:09.088 00:54:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:09.088 00:54:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:09.088 00:54:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:09.088 ************************************ 00:17:09.088 START TEST nvmf_rpc 00:17:09.088 ************************************ 00:17:09.088 00:54:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:17:09.088 * Looking for test storage... 
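The teardown traced just before nvmf_rpc starts (nvmftestfini plus nvmf_tcp_fini) unwinds the setup in order: unload the initiator kernel modules, kill the nvmf_tgt it started, strip only the iptables rules tagged SPDK_NVMF, drop the namespace and flush the addresses it assigned. Roughly, with the namespace removal made explicit (the trace hides that step behind remove_spdk_ns):

    modprobe -v -r nvme-tcp
    modprobe -v -r nvme-fabrics
    kill -0 "$nvmfpid" && kill "$nvmfpid" && wait "$nvmfpid"   # pid recorded at startup (2939083 above)
    iptables-save | grep -v SPDK_NVMF | iptables-restore       # keep everything except the tagged rules
    ip netns delete cvl_0_0_ns_spdk
    ip -4 addr flush cvl_0_1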
00:17:09.088 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:09.088 00:54:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:17:09.088 00:54:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:17:09.088 00:54:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:17:09.089 00:54:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:17:09.089 00:54:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:09.089 00:54:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:09.089 00:54:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:09.089 00:54:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:17:09.089 00:54:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:17:09.089 00:54:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:17:09.089 00:54:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:17:09.089 00:54:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:17:09.089 00:54:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:17:09.089 00:54:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:17:09.089 00:54:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:09.089 00:54:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@344 -- # case "$op" in 00:17:09.089 00:54:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@345 -- # : 1 00:17:09.089 00:54:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:09.089 00:54:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:09.089 00:54:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # decimal 1 00:17:09.089 00:54:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=1 00:17:09.089 00:54:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:09.089 00:54:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 1 00:17:09.089 00:54:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:17:09.089 00:54:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # decimal 2 00:17:09.089 00:54:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=2 00:17:09.089 00:54:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:09.089 00:54:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 2 00:17:09.089 00:54:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:17:09.089 00:54:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:09.089 00:54:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:09.089 00:54:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # return 0 00:17:09.089 00:54:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:09.089 00:54:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:17:09.089 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:09.089 --rc genhtml_branch_coverage=1 00:17:09.089 --rc genhtml_function_coverage=1 00:17:09.089 --rc genhtml_legend=1 00:17:09.089 --rc geninfo_all_blocks=1 00:17:09.089 --rc geninfo_unexecuted_blocks=1 00:17:09.089 00:17:09.089 ' 00:17:09.089 00:54:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:17:09.089 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:09.089 --rc genhtml_branch_coverage=1 00:17:09.089 --rc genhtml_function_coverage=1 00:17:09.089 --rc genhtml_legend=1 00:17:09.089 --rc geninfo_all_blocks=1 00:17:09.089 --rc geninfo_unexecuted_blocks=1 00:17:09.089 00:17:09.089 ' 00:17:09.089 00:54:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:17:09.089 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:09.089 --rc genhtml_branch_coverage=1 00:17:09.089 --rc genhtml_function_coverage=1 00:17:09.089 --rc genhtml_legend=1 00:17:09.089 --rc geninfo_all_blocks=1 00:17:09.089 --rc geninfo_unexecuted_blocks=1 00:17:09.089 00:17:09.089 ' 00:17:09.089 00:54:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:17:09.089 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:09.089 --rc genhtml_branch_coverage=1 00:17:09.089 --rc genhtml_function_coverage=1 00:17:09.089 --rc genhtml_legend=1 00:17:09.089 --rc geninfo_all_blocks=1 00:17:09.089 --rc geninfo_unexecuted_blocks=1 00:17:09.089 00:17:09.089 ' 00:17:09.089 00:54:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:09.089 00:54:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:17:09.089 00:54:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
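The lcov gate above uses the cmp_versions helper from scripts/common.sh: both version strings are split on '.', '-' and ':' and compared field by field, so "lt 1.15 2" holds because 1 < 2 in the first field and the remaining fields are never examined. A self-contained sketch of that comparison, assuming purely numeric fields (the real helper also validates each field and supports other operators):

    version_lt() {                            # true (exit 0) when $1 < $2
        local -a a b
        IFS='.-:' read -ra a <<< "$1"
        IFS='.-:' read -ra b <<< "$2"
        local i n
        (( n = ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
        for (( i = 0; i < n; i++ )); do
            local x=${a[i]:-0} y=${b[i]:-0}   # missing fields count as 0
            if (( x > y )); then return 1; fi
            if (( x < y )); then return 0; fi
        done
        return 1                              # equal versions are not less-than
    }

    version_lt 1.15 2 && echo "lcov 1.15 is older than 2"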
00:17:09.089 00:54:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:09.089 00:54:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:09.089 00:54:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:09.089 00:54:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:09.089 00:54:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:09.089 00:54:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:09.089 00:54:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:09.089 00:54:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:09.089 00:54:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:09.089 00:54:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:09.089 00:54:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:09.089 00:54:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:09.089 00:54:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:09.089 00:54:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:09.089 00:54:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:09.089 00:54:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:09.089 00:54:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@15 -- # shopt -s extglob 00:17:09.089 00:54:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:09.089 00:54:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:09.089 00:54:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:09.089 00:54:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:09.089 00:54:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:09.089 00:54:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:09.089 00:54:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:17:09.089 00:54:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:09.089 00:54:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # : 0 00:17:09.089 00:54:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:09.089 00:54:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:09.089 00:54:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:09.089 00:54:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:09.089 00:54:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:09.089 00:54:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:09.089 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:09.089 00:54:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:09.089 00:54:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:09.089 00:54:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:09.089 00:54:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:17:09.089 00:54:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:17:09.089 00:54:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:09.089 00:54:41 
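One harness wart is visible above: while assembling the nvmf_tgt argument list, common.sh line 33 evaluates '[' '' -eq 1 ']' and test(1) prints "integer expression expected" because the variable under test is empty; the check falls through harmlessly, but the message appears on every run. A defensive version of that kind of numeric flag test (SOME_FLAG is a hypothetical name, not the harness's variable):

    # Guard the numeric comparison so an unset/empty flag cannot trip test(1).
    if [[ -n "${SOME_FLAG:-}" && "${SOME_FLAG}" -eq 1 ]]; then
        echo "flag enabled"
    fi

    # Or default it numerically up front:
    if (( ${SOME_FLAG:-0} == 1 )); then
        echo "flag enabled"
    fi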
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:09.089 00:54:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:09.089 00:54:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:09.089 00:54:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:09.089 00:54:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:09.089 00:54:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:09.090 00:54:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:09.090 00:54:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:17:09.090 00:54:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:17:09.090 00:54:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@309 -- # xtrace_disable 00:17:09.090 00:54:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:10.992 00:54:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:10.992 00:54:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # pci_devs=() 00:17:10.992 00:54:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:10.992 00:54:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:10.992 00:54:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:10.992 00:54:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:10.992 00:54:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:10.992 00:54:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # net_devs=() 00:17:10.992 00:54:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:10.992 00:54:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # e810=() 00:17:10.992 00:54:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # local -ga e810 00:17:10.992 00:54:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # x722=() 00:17:10.992 00:54:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # local -ga x722 00:17:10.992 00:54:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # mlx=() 00:17:10.992 00:54:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # local -ga mlx 00:17:10.992 00:54:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:10.992 00:54:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:10.992 00:54:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:10.992 00:54:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:10.992 00:54:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:10.992 00:54:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:10.992 00:54:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:10.992 00:54:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:10.992 00:54:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:10.992 00:54:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:10.992 00:54:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:10.992 00:54:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:10.992 00:54:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:10.992 00:54:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:17:10.992 00:54:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:17:10.992 00:54:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:17:10.992 00:54:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:17:10.992 00:54:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:10.992 00:54:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:10.992 00:54:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:17:10.992 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:17:10.993 00:54:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:10.993 00:54:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:10.993 00:54:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:10.993 00:54:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:10.993 00:54:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:10.993 00:54:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:10.993 00:54:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:17:10.993 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:17:10.993 00:54:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:10.993 00:54:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:10.993 00:54:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:10.993 00:54:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:10.993 00:54:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:10.993 00:54:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:10.993 00:54:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:17:10.993 00:54:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:17:10.993 00:54:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:10.993 00:54:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # 
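The arrays being filled above key NICs by PCI vendor:device ID: 0x8086/0x1592 and 0x8086/0x159b are the Intel E810 family this job is pinned to, 0x8086/0x37d2 would be X722, and the 0x15b3 entries are Mellanox parts; the two 0x159b functions found at 0000:0a:00.x are what the rest of the run uses. A small sketch of the same classification driven by lspci instead of the harness's cached bus map (the ID table mirrors a subset of the IDs visible in the trace):

    # Map "vendor:device" to a NIC family; IDs copied from the trace.
    declare -A nic_family=(
        [8086:1592]=e810  [8086:159b]=e810
        [8086:37d2]=x722
        [15b3:1017]=mlx   [15b3:1019]=mlx   [15b3:101d]=mlx
    )

    # lspci -Dnn prints the full PCI address plus numeric [vendor:device] codes.
    lspci -Dnn | grep -i ethernet | while read -r addr rest; do
        id=$(grep -o '\[[0-9a-f]\{4\}:[0-9a-f]\{4\}\]' <<< "$rest" | tr -d '[]')
        [ -n "$id" ] || continue
        echo "$addr -> $id (${nic_family[$id]:-unsupported})"
    done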
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:10.993 00:54:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:10.993 00:54:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:10.993 00:54:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:10.993 00:54:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:10.993 00:54:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:10.993 00:54:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:17:10.993 Found net devices under 0000:0a:00.0: cvl_0_0 00:17:10.993 00:54:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:10.993 00:54:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:10.993 00:54:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:10.993 00:54:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:10.993 00:54:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:10.993 00:54:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:10.993 00:54:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:10.993 00:54:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:10.993 00:54:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:17:10.993 Found net devices under 0000:0a:00.1: cvl_0_1 00:17:10.993 00:54:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:10.993 00:54:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:17:10.993 00:54:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # is_hw=yes 00:17:10.993 00:54:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:17:10.993 00:54:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:17:10.993 00:54:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:17:10.993 00:54:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:10.993 00:54:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:10.993 00:54:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:10.993 00:54:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:10.993 00:54:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:10.993 00:54:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:10.993 00:54:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:10.993 00:54:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:10.993 00:54:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:17:10.993 00:54:43 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:10.993 00:54:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:10.993 00:54:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:10.993 00:54:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:10.993 00:54:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:10.993 00:54:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:10.993 00:54:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:10.993 00:54:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:10.993 00:54:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:10.993 00:54:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:11.252 00:54:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:11.252 00:54:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:11.252 00:54:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:11.252 00:54:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:17:11.252 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:11.252 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.202 ms 00:17:11.252 00:17:11.252 --- 10.0.0.2 ping statistics --- 00:17:11.252 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:11.252 rtt min/avg/max/mdev = 0.202/0.202/0.202/0.000 ms 00:17:11.252 00:54:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:11.252 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:11.252 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.082 ms 00:17:11.252 00:17:11.252 --- 10.0.0.1 ping statistics --- 00:17:11.252 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:11.252 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 00:17:11.252 00:54:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:11.252 00:54:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@450 -- # return 0 00:17:11.252 00:54:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:11.252 00:54:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:11.252 00:54:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:11.252 00:54:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:11.252 00:54:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:11.252 00:54:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:11.252 00:54:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:11.252 00:54:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:17:11.252 00:54:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:11.252 00:54:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:11.252 00:54:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:11.252 00:54:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@509 -- # nvmfpid=2941441 00:17:11.252 00:54:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:11.252 00:54:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@510 -- # waitforlisten 2941441 00:17:11.252 00:54:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@835 -- # '[' -z 2941441 ']' 00:17:11.252 00:54:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:11.252 00:54:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:11.252 00:54:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:11.252 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:11.252 00:54:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:11.252 00:54:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:11.252 [2024-12-06 00:54:43.841597] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 24.03.0 initialization... 
00:17:11.252 [2024-12-06 00:54:43.841726] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:11.511 [2024-12-06 00:54:43.994776] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:11.511 [2024-12-06 00:54:44.141753] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:11.511 [2024-12-06 00:54:44.141849] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:11.511 [2024-12-06 00:54:44.141884] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:11.511 [2024-12-06 00:54:44.141909] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:11.511 [2024-12-06 00:54:44.141929] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:11.511 [2024-12-06 00:54:44.144769] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:11.511 [2024-12-06 00:54:44.144835] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:11.511 [2024-12-06 00:54:44.144900] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:11.511 [2024-12-06 00:54:44.144905] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:17:12.446 00:54:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:12.446 00:54:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@868 -- # return 0 00:17:12.446 00:54:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:12.446 00:54:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:12.446 00:54:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:12.446 00:54:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:12.446 00:54:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:17:12.446 00:54:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.446 00:54:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:12.446 00:54:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.446 00:54:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:17:12.446 "tick_rate": 2700000000, 00:17:12.446 "poll_groups": [ 00:17:12.446 { 00:17:12.446 "name": "nvmf_tgt_poll_group_000", 00:17:12.446 "admin_qpairs": 0, 00:17:12.446 "io_qpairs": 0, 00:17:12.446 "current_admin_qpairs": 0, 00:17:12.446 "current_io_qpairs": 0, 00:17:12.446 "pending_bdev_io": 0, 00:17:12.446 "completed_nvme_io": 0, 00:17:12.446 "transports": [] 00:17:12.446 }, 00:17:12.446 { 00:17:12.446 "name": "nvmf_tgt_poll_group_001", 00:17:12.446 "admin_qpairs": 0, 00:17:12.446 "io_qpairs": 0, 00:17:12.446 "current_admin_qpairs": 0, 00:17:12.446 "current_io_qpairs": 0, 00:17:12.446 "pending_bdev_io": 0, 00:17:12.446 "completed_nvme_io": 0, 00:17:12.446 "transports": [] 00:17:12.446 }, 00:17:12.446 { 00:17:12.446 "name": "nvmf_tgt_poll_group_002", 00:17:12.446 "admin_qpairs": 0, 00:17:12.446 "io_qpairs": 0, 00:17:12.446 
"current_admin_qpairs": 0, 00:17:12.446 "current_io_qpairs": 0, 00:17:12.446 "pending_bdev_io": 0, 00:17:12.446 "completed_nvme_io": 0, 00:17:12.446 "transports": [] 00:17:12.446 }, 00:17:12.446 { 00:17:12.446 "name": "nvmf_tgt_poll_group_003", 00:17:12.446 "admin_qpairs": 0, 00:17:12.446 "io_qpairs": 0, 00:17:12.446 "current_admin_qpairs": 0, 00:17:12.446 "current_io_qpairs": 0, 00:17:12.446 "pending_bdev_io": 0, 00:17:12.446 "completed_nvme_io": 0, 00:17:12.446 "transports": [] 00:17:12.446 } 00:17:12.446 ] 00:17:12.446 }' 00:17:12.446 00:54:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:17:12.446 00:54:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:17:12.446 00:54:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:17:12.446 00:54:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:17:12.446 00:54:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:17:12.446 00:54:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:17:12.446 00:54:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:17:12.446 00:54:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:12.446 00:54:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.446 00:54:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:12.447 [2024-12-06 00:54:44.968323] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:12.447 00:54:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.447 00:54:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:17:12.447 00:54:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.447 00:54:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:12.447 00:54:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.447 00:54:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:17:12.447 "tick_rate": 2700000000, 00:17:12.447 "poll_groups": [ 00:17:12.447 { 00:17:12.447 "name": "nvmf_tgt_poll_group_000", 00:17:12.447 "admin_qpairs": 0, 00:17:12.447 "io_qpairs": 0, 00:17:12.447 "current_admin_qpairs": 0, 00:17:12.447 "current_io_qpairs": 0, 00:17:12.447 "pending_bdev_io": 0, 00:17:12.447 "completed_nvme_io": 0, 00:17:12.447 "transports": [ 00:17:12.447 { 00:17:12.447 "trtype": "TCP" 00:17:12.447 } 00:17:12.447 ] 00:17:12.447 }, 00:17:12.447 { 00:17:12.447 "name": "nvmf_tgt_poll_group_001", 00:17:12.447 "admin_qpairs": 0, 00:17:12.447 "io_qpairs": 0, 00:17:12.447 "current_admin_qpairs": 0, 00:17:12.447 "current_io_qpairs": 0, 00:17:12.447 "pending_bdev_io": 0, 00:17:12.447 "completed_nvme_io": 0, 00:17:12.447 "transports": [ 00:17:12.447 { 00:17:12.447 "trtype": "TCP" 00:17:12.447 } 00:17:12.447 ] 00:17:12.447 }, 00:17:12.447 { 00:17:12.447 "name": "nvmf_tgt_poll_group_002", 00:17:12.447 "admin_qpairs": 0, 00:17:12.447 "io_qpairs": 0, 00:17:12.447 "current_admin_qpairs": 0, 00:17:12.447 "current_io_qpairs": 0, 00:17:12.447 "pending_bdev_io": 0, 00:17:12.447 "completed_nvme_io": 0, 00:17:12.447 "transports": [ 00:17:12.447 { 00:17:12.447 "trtype": "TCP" 
00:17:12.447 } 00:17:12.447 ] 00:17:12.447 }, 00:17:12.447 { 00:17:12.447 "name": "nvmf_tgt_poll_group_003", 00:17:12.447 "admin_qpairs": 0, 00:17:12.447 "io_qpairs": 0, 00:17:12.447 "current_admin_qpairs": 0, 00:17:12.447 "current_io_qpairs": 0, 00:17:12.447 "pending_bdev_io": 0, 00:17:12.447 "completed_nvme_io": 0, 00:17:12.447 "transports": [ 00:17:12.447 { 00:17:12.447 "trtype": "TCP" 00:17:12.447 } 00:17:12.447 ] 00:17:12.447 } 00:17:12.447 ] 00:17:12.447 }' 00:17:12.447 00:54:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:17:12.447 00:54:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:17:12.447 00:54:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:17:12.447 00:54:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:17:12.447 00:54:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:17:12.447 00:54:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:17:12.447 00:54:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:17:12.447 00:54:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:17:12.447 00:54:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:17:12.447 00:54:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:17:12.447 00:54:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:17:12.447 00:54:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:17:12.447 00:54:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:17:12.447 00:54:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:17:12.447 00:54:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.447 00:54:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:12.704 Malloc1 00:17:12.704 00:54:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.704 00:54:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:17:12.704 00:54:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.704 00:54:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:12.704 00:54:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.704 00:54:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:12.704 00:54:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.704 00:54:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:12.704 00:54:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.704 00:54:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:17:12.704 00:54:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
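The nvmf_get_stats assertions above lean on two small helpers from rpc.sh: jcount runs a jq filter over the stats JSON and counts the emitted lines (four poll groups, one per reactor core in the 0xF mask), and jsum sums a numeric per-poll-group field with awk (every qpair counter is expected to be 0 before anything connects). A loose reconstruction; rpc_cmd is stood in by a direct rpc.py call, and the real helpers reuse the stats blob captured earlier rather than re-querying:

    rpc_cmd() { ./scripts/rpc.py "$@"; }                 # illustrative stand-in for the harness wrapper

    jcount() {                                           # how many values does this jq filter yield?
        rpc_cmd nvmf_get_stats | jq "$1" | wc -l
    }

    jsum() {                                             # sum a numeric field across poll groups
        rpc_cmd nvmf_get_stats | jq "$1" | awk '{s+=$1} END {print s}'
    }

    [ "$(jcount '.poll_groups[].name')" -eq 4 ]          # one poll group per core in -m 0xF
    [ "$(jsum '.poll_groups[].io_qpairs')" -eq 0 ]       # no I/O queue pairs yet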
common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.704 00:54:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:12.704 00:54:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.704 00:54:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:12.704 00:54:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.704 00:54:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:12.704 [2024-12-06 00:54:45.189294] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:12.704 00:54:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.704 00:54:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:17:12.704 00:54:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:17:12.704 00:54:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:17:12.704 00:54:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:17:12.704 00:54:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:12.704 00:54:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:17:12.704 00:54:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:12.704 00:54:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:17:12.704 00:54:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:12.704 00:54:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:17:12.704 00:54:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:17:12.704 00:54:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:17:12.704 [2024-12-06 00:54:45.212588] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55' 00:17:12.704 Failed to write to /dev/nvme-fabrics: Input/output error 00:17:12.704 could not add new controller: failed to write to nvme-fabrics device 00:17:12.704 00:54:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:17:12.704 00:54:45 
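The refused connect above is wrapped in NOT, autotest_common.sh's expected-failure guard: the nvme connect must be rejected (no host is allowed on cnode1 yet), and the test only proceeds if the command exits non-zero without dying on a signal. The pattern in miniature (the body is a simplified stand-in for the real helper, and the connect line omits the --hostnqn/--hostid pair shown in the trace):

    NOT() {                                    # succeed only when the wrapped command fails "normally"
        local es=0
        "$@" || es=$?
        if (( es > 128 )); then return 1; fi   # killed by a signal: a real failure
        if (( es == 0 )); then return 1; fi    # unexpectedly succeeded
        return 0
    }

    # Expected to be refused until nvmf_subsystem_add_host runs:
    NOT nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420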
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:12.704 00:54:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:12.704 00:54:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:12.704 00:54:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:12.704 00:54:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.704 00:54:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:12.704 00:54:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.704 00:54:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:13.268 00:54:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:17:13.268 00:54:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:17:13.268 00:54:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:17:13.268 00:54:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:17:13.268 00:54:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:17:15.163 00:54:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:17:15.163 00:54:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:17:15.163 00:54:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:17:15.422 00:54:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:17:15.422 00:54:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:17:15.422 00:54:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:17:15.422 00:54:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:15.422 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:15.422 00:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:15.422 00:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:17:15.422 00:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:17:15.422 00:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:15.422 00:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:17:15.422 00:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:15.422 00:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:17:15.422 00:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:15.422 00:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.422 00:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:15.422 00:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.422 00:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:15.422 00:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:17:15.422 00:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:15.422 00:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:17:15.422 00:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:15.422 00:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:17:15.422 00:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:15.422 00:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:17:15.422 00:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:15.422 00:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:17:15.422 00:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:17:15.422 00:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:15.681 [2024-12-06 00:54:48.135937] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55' 00:17:15.681 Failed to write to /dev/nvme-fabrics: Input/output error 00:17:15.681 could not add new controller: failed to write to nvme-fabrics device 00:17:15.681 00:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:17:15.681 00:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:15.681 00:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:15.681 00:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:15.681 00:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:17:15.681 00:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.681 00:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:15.681 
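Taken together, the steps above cycle the same host NQN through the three access-control states this test cares about: nvmf_subsystem_add_host whitelists the single NQN (the subsequent connect and serial check succeed), nvmf_subsystem_remove_host revokes it (the NOT-wrapped connect fails again), and nvmf_subsystem_allow_any_host -e disables the allow-list entirely so the next connect goes through without any registration. The equivalent standalone RPC calls would look roughly like this (same NQNs as in the trace):

  scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
      nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55    # allow this one host
  scripts/rpc.py nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 \
      nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55    # revoke it again
  scripts/rpc.py nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1  # accept any host NQN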
00:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.681 00:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:16.247 00:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:17:16.247 00:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:17:16.247 00:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:17:16.247 00:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:17:16.247 00:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:17:18.148 00:54:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:17:18.148 00:54:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:17:18.148 00:54:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:17:18.148 00:54:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:17:18.148 00:54:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:17:18.148 00:54:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:17:18.148 00:54:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:18.407 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:18.407 00:54:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:18.407 00:54:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:17:18.407 00:54:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:17:18.407 00:54:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:18.407 00:54:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:17:18.407 00:54:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:18.407 00:54:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:17:18.407 00:54:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:18.407 00:54:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.407 00:54:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:18.407 00:54:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.407 00:54:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:17:18.407 00:54:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:17:18.407 00:54:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:18.407 
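waitforserial and waitforserial_disconnect, which bracket every connect/disconnect in this trace, are plain polling helpers: they re-run lsblk every couple of seconds until a block device whose SERIAL column matches the subsystem's serial (SPDKISFASTANDAWESOME) appears, or until it disappears again after nvme disconnect. A condensed sketch of the "appear" side — not the harness code itself, just the same idea with the limits visible above:

  wait_for_serial() {
      local serial=$1 i=0
      # up to ~16 attempts, two seconds apart, mirroring the loop in the trace
      while (( i++ <= 15 )); do
          (( $(lsblk -l -o NAME,SERIAL | grep -c "$serial") >= 1 )) && return 0
          sleep 2
      done
      return 1
  }
  wait_for_serial SPDKISFASTANDAWESOME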
00:54:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.407 00:54:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:18.407 00:54:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.407 00:54:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:18.407 00:54:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.407 00:54:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:18.407 [2024-12-06 00:54:51.005895] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:18.407 00:54:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.407 00:54:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:17:18.407 00:54:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.407 00:54:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:18.407 00:54:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.407 00:54:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:18.407 00:54:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.407 00:54:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:18.407 00:54:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.407 00:54:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:18.974 00:54:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:17:18.974 00:54:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:17:18.974 00:54:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:17:18.974 00:54:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:17:18.974 00:54:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:17:21.501 00:54:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:17:21.501 00:54:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:17:21.501 00:54:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:17:21.501 00:54:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:17:21.501 00:54:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:17:21.501 00:54:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:17:21.501 00:54:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme 
disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:21.502 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:21.502 00:54:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:21.502 00:54:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:17:21.502 00:54:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:17:21.502 00:54:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:21.502 00:54:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:17:21.502 00:54:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:21.502 00:54:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:17:21.502 00:54:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:17:21.502 00:54:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.502 00:54:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:21.502 00:54:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.502 00:54:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:21.502 00:54:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.502 00:54:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:21.502 00:54:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.502 00:54:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:17:21.502 00:54:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:21.502 00:54:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.502 00:54:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:21.502 00:54:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.502 00:54:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:21.502 00:54:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.502 00:54:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:21.502 [2024-12-06 00:54:53.811231] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:21.502 00:54:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.502 00:54:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:17:21.502 00:54:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.502 00:54:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:21.502 00:54:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.502 00:54:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:21.502 00:54:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.502 00:54:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:21.502 00:54:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.502 00:54:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:21.760 00:54:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:17:21.760 00:54:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:17:21.760 00:54:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:17:21.760 00:54:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:17:21.760 00:54:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:17:24.286 00:54:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:17:24.286 00:54:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:17:24.286 00:54:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:17:24.286 00:54:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:17:24.286 00:54:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:17:24.286 00:54:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:17:24.286 00:54:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:24.286 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:24.286 00:54:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:24.286 00:54:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:17:24.286 00:54:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:17:24.286 00:54:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:24.286 00:54:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:17:24.286 00:54:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:24.286 00:54:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:17:24.286 00:54:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:17:24.286 00:54:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.286 00:54:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:24.286 00:54:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.286 00:54:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:24.286 00:54:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.286 00:54:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:24.286 00:54:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.286 00:54:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:17:24.286 00:54:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:24.286 00:54:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.286 00:54:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:24.286 00:54:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.286 00:54:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:24.286 00:54:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.286 00:54:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:24.286 [2024-12-06 00:54:56.644013] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:24.286 00:54:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.286 00:54:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:17:24.286 00:54:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.286 00:54:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:24.286 00:54:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.286 00:54:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:24.286 00:54:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.286 00:54:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:24.286 00:54:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.286 00:54:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:24.853 00:54:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:17:24.853 00:54:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:17:24.853 00:54:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:17:24.853 00:54:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:17:24.853 00:54:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:17:26.752 
00:54:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:17:26.752 00:54:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:17:26.752 00:54:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:17:26.752 00:54:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:17:26.752 00:54:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:17:26.752 00:54:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:17:26.752 00:54:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:27.010 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:27.010 00:54:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:27.010 00:54:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:17:27.010 00:54:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:17:27.010 00:54:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:27.010 00:54:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:17:27.010 00:54:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:27.010 00:54:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:17:27.010 00:54:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:17:27.010 00:54:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.010 00:54:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:27.010 00:54:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.010 00:54:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:27.010 00:54:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.010 00:54:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:27.010 00:54:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.010 00:54:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:17:27.010 00:54:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:27.010 00:54:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.010 00:54:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:27.010 00:54:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.010 00:54:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:27.010 00:54:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 
00:17:27.010 00:54:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:27.010 [2024-12-06 00:54:59.559911] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:27.010 00:54:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.010 00:54:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:17:27.010 00:54:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.010 00:54:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:27.010 00:54:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.010 00:54:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:27.010 00:54:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.010 00:54:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:27.010 00:54:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.010 00:54:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:27.574 00:55:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:17:27.574 00:55:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:17:27.574 00:55:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:17:27.574 00:55:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:17:27.574 00:55:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:17:30.093 00:55:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:17:30.093 00:55:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:17:30.093 00:55:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:17:30.093 00:55:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:17:30.093 00:55:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:17:30.093 00:55:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:17:30.093 00:55:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:30.093 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:30.093 00:55:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:30.093 00:55:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:17:30.093 00:55:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:17:30.093 00:55:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 
00:17:30.093 00:55:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:17:30.093 00:55:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:30.093 00:55:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:17:30.094 00:55:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:17:30.094 00:55:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.094 00:55:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:30.094 00:55:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.094 00:55:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:30.094 00:55:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.094 00:55:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:30.094 00:55:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.094 00:55:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:17:30.094 00:55:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:30.094 00:55:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.094 00:55:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:30.094 00:55:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.094 00:55:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:30.094 00:55:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.094 00:55:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:30.094 [2024-12-06 00:55:02.379266] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:30.094 00:55:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.094 00:55:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:17:30.094 00:55:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.094 00:55:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:30.094 00:55:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.094 00:55:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:30.094 00:55:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.094 00:55:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:30.094 00:55:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.094 00:55:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:30.674 00:55:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:17:30.674 00:55:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:17:30.674 00:55:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:17:30.674 00:55:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:17:30.674 00:55:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:17:32.599 00:55:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:17:32.599 00:55:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:17:32.599 00:55:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:17:32.599 00:55:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:17:32.599 00:55:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:17:32.599 00:55:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:17:32.599 00:55:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:32.599 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:32.599 00:55:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:32.599 00:55:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:17:32.599 00:55:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:17:32.599 00:55:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:32.599 00:55:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:17:32.599 00:55:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:32.599 00:55:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:17:32.599 00:55:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:17:32.599 00:55:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.599 00:55:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:32.599 00:55:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.599 00:55:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:32.599 00:55:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.599 00:55:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:32.599 00:55:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.599 00:55:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:17:32.599 
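That completes the first of the two loops in this test (target/rpc.sh@81-94 in the trace): five times in a row it builds a complete target — subsystem with a fixed serial, TCP listener, Malloc1 namespace, allow-any-host — connects with nvme-cli, waits for the namespace to surface as a block device, then tears everything down. Stripped of the xtrace noise, the loop is roughly:

  for i in 1 2 3 4 5; do
      scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
      scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
      scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5
      scripts/rpc.py nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
      nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
      # ... poll lsblk for the SPDKISFASTANDAWESOME device, then unwind ...
      nvme disconnect -n nqn.2016-06.io.spdk:cnode1
      scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
      scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  done

The -n 5 / remove_ns 5 pair is the detail being exercised: the namespace is added with an explicit NSID and must be removable by that same NSID before the subsystem is deleted.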
00:55:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:17:32.599 00:55:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:32.599 00:55:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.599 00:55:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:32.857 00:55:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.857 00:55:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:32.857 00:55:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.857 00:55:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:32.857 [2024-12-06 00:55:05.316300] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:32.857 00:55:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.857 00:55:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:32.857 00:55:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.857 00:55:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:32.857 00:55:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.857 00:55:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:32.857 00:55:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.857 00:55:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:32.857 00:55:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.857 00:55:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:32.857 00:55:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.857 00:55:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:32.857 00:55:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.857 00:55:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:32.857 00:55:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.857 00:55:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:32.857 00:55:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.857 00:55:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:17:32.857 00:55:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:32.857 00:55:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.857 00:55:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:17:32.857 00:55:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.857 00:55:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:32.857 00:55:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.857 00:55:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:32.857 [2024-12-06 00:55:05.364353] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:32.857 00:55:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.857 00:55:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:32.857 00:55:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.857 00:55:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:32.857 00:55:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.857 00:55:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:32.857 00:55:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.857 00:55:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:32.857 00:55:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.857 00:55:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:32.857 00:55:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.857 00:55:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:32.857 00:55:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.857 00:55:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:32.857 00:55:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.857 00:55:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:32.857 00:55:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.857 00:55:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:17:32.857 00:55:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:32.857 00:55:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.857 00:55:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:32.857 00:55:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.857 00:55:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:32.857 00:55:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.857 
00:55:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:32.857 [2024-12-06 00:55:05.412488] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:32.857 00:55:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.857 00:55:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:32.857 00:55:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.857 00:55:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:32.857 00:55:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.857 00:55:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:32.857 00:55:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.857 00:55:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:32.857 00:55:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.857 00:55:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:32.857 00:55:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.857 00:55:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:32.857 00:55:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.857 00:55:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:32.857 00:55:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.857 00:55:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:32.857 00:55:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.857 00:55:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:17:32.857 00:55:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:32.857 00:55:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.857 00:55:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:32.857 00:55:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.857 00:55:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:32.857 00:55:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.857 00:55:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:32.857 [2024-12-06 00:55:05.460624] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:32.857 00:55:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.857 00:55:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:32.857 00:55:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.857 00:55:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:32.857 00:55:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.857 00:55:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:32.857 00:55:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.857 00:55:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:32.857 00:55:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.857 00:55:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:32.857 00:55:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.857 00:55:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:32.857 00:55:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.857 00:55:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:32.857 00:55:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.857 00:55:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:32.857 00:55:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.857 00:55:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:17:32.857 00:55:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:32.857 00:55:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.857 00:55:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:32.857 00:55:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.857 00:55:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:32.857 00:55:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.857 00:55:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:32.857 [2024-12-06 00:55:05.508824] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:32.857 00:55:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.857 00:55:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:32.857 00:55:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.857 00:55:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:32.857 00:55:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.857 00:55:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd 
nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:32.857 00:55:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.857 00:55:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:32.857 00:55:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.857 00:55:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:32.857 00:55:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.857 00:55:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:32.857 00:55:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.857 00:55:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:32.857 00:55:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.857 00:55:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:32.857 00:55:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.857 00:55:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:17:32.857 00:55:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.857 00:55:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:32.857 00:55:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.857 00:55:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:17:32.857 "tick_rate": 2700000000, 00:17:32.857 "poll_groups": [ 00:17:32.858 { 00:17:32.858 "name": "nvmf_tgt_poll_group_000", 00:17:32.858 "admin_qpairs": 2, 00:17:32.858 "io_qpairs": 84, 00:17:32.858 "current_admin_qpairs": 0, 00:17:32.858 "current_io_qpairs": 0, 00:17:32.858 "pending_bdev_io": 0, 00:17:32.858 "completed_nvme_io": 100, 00:17:32.858 "transports": [ 00:17:32.858 { 00:17:32.858 "trtype": "TCP" 00:17:32.858 } 00:17:32.858 ] 00:17:32.858 }, 00:17:32.858 { 00:17:32.858 "name": "nvmf_tgt_poll_group_001", 00:17:32.858 "admin_qpairs": 2, 00:17:32.858 "io_qpairs": 84, 00:17:32.858 "current_admin_qpairs": 0, 00:17:32.858 "current_io_qpairs": 0, 00:17:32.858 "pending_bdev_io": 0, 00:17:32.858 "completed_nvme_io": 226, 00:17:32.858 "transports": [ 00:17:32.858 { 00:17:32.858 "trtype": "TCP" 00:17:32.858 } 00:17:32.858 ] 00:17:32.858 }, 00:17:32.858 { 00:17:32.858 "name": "nvmf_tgt_poll_group_002", 00:17:32.858 "admin_qpairs": 1, 00:17:32.858 "io_qpairs": 84, 00:17:32.858 "current_admin_qpairs": 0, 00:17:32.858 "current_io_qpairs": 0, 00:17:32.858 "pending_bdev_io": 0, 00:17:32.858 "completed_nvme_io": 183, 00:17:32.858 "transports": [ 00:17:32.858 { 00:17:32.858 "trtype": "TCP" 00:17:32.858 } 00:17:32.858 ] 00:17:32.858 }, 00:17:32.858 { 00:17:32.858 "name": "nvmf_tgt_poll_group_003", 00:17:32.858 "admin_qpairs": 2, 00:17:32.858 "io_qpairs": 84, 00:17:32.858 "current_admin_qpairs": 0, 00:17:32.858 "current_io_qpairs": 0, 00:17:32.858 "pending_bdev_io": 0, 00:17:32.858 "completed_nvme_io": 177, 00:17:32.858 "transports": [ 00:17:32.858 { 00:17:32.858 "trtype": "TCP" 00:17:32.858 } 00:17:32.858 ] 00:17:32.858 } 00:17:32.858 ] 00:17:32.858 }' 00:17:32.858 00:55:05 
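The second loop (target/rpc.sh@99-107) repeats the same subsystem/namespace lifecycle five more times purely over RPC — create, listen, add namespace, allow any host, remove namespace, delete — with no host connection at all, and then nvmf_get_stats dumps the per-poll-group counters shown above: four poll groups, each having serviced a share of the admin and I/O queue pairs created by the earlier connects. The checks that follow simply sum those fields across poll groups with jq and awk; by hand that is roughly:

  # total I/O qpairs handled across all poll groups (the trace expects this to be > 0)
  scripts/rpc.py nvmf_get_stats \
      | jq '.poll_groups[].io_qpairs' \
      | awk '{s+=$1} END {print s}'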
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:17:32.858 00:55:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:17:32.858 00:55:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:17:32.858 00:55:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:17:33.115 00:55:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:17:33.115 00:55:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:17:33.115 00:55:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:17:33.115 00:55:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:17:33.115 00:55:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:17:33.115 00:55:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 336 > 0 )) 00:17:33.115 00:55:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:17:33.115 00:55:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:17:33.115 00:55:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:17:33.115 00:55:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:33.115 00:55:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # sync 00:17:33.115 00:55:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:33.115 00:55:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set +e 00:17:33.115 00:55:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:33.115 00:55:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:33.115 rmmod nvme_tcp 00:17:33.115 rmmod nvme_fabrics 00:17:33.115 rmmod nvme_keyring 00:17:33.115 00:55:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:33.115 00:55:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@128 -- # set -e 00:17:33.115 00:55:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@129 -- # return 0 00:17:33.115 00:55:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@517 -- # '[' -n 2941441 ']' 00:17:33.115 00:55:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@518 -- # killprocess 2941441 00:17:33.115 00:55:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # '[' -z 2941441 ']' 00:17:33.115 00:55:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@958 -- # kill -0 2941441 00:17:33.115 00:55:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # uname 00:17:33.115 00:55:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:33.115 00:55:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2941441 00:17:33.115 00:55:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:33.115 00:55:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:33.115 00:55:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 
2941441' 00:17:33.115 killing process with pid 2941441 00:17:33.115 00:55:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@973 -- # kill 2941441 00:17:33.115 00:55:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@978 -- # wait 2941441 00:17:34.490 00:55:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:34.490 00:55:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:34.490 00:55:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:34.490 00:55:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # iptr 00:17:34.490 00:55:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-save 00:17:34.490 00:55:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:34.490 00:55:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-restore 00:17:34.490 00:55:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:34.490 00:55:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:17:34.490 00:55:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:34.490 00:55:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:34.490 00:55:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:36.399 00:55:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:17:36.399 00:17:36.399 real 0m27.756s 00:17:36.399 user 1m29.264s 00:17:36.399 sys 0m4.510s 00:17:36.399 00:55:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:36.399 00:55:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:36.399 ************************************ 00:17:36.399 END TEST nvmf_rpc 00:17:36.399 ************************************ 00:17:36.658 00:55:09 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:17:36.658 00:55:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:36.658 00:55:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:36.658 00:55:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:36.658 ************************************ 00:17:36.658 START TEST nvmf_invalid 00:17:36.658 ************************************ 00:17:36.658 00:55:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:17:36.658 * Looking for test storage... 
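Before invalid.sh gets going above, nvmftestfini has already unwound the nvmf_rpc environment: the host-side NVMe/TCP and fabrics modules are unloaded, the nvmf_tgt process is killed, the SPDK_NVMF iptables entries are stripped out, and the test NIC address is flushed. Condensed, with the PID and interface name as captured in this particular run:

  modprobe -v -r nvme-tcp                                 # rmmod nvme_tcp / nvme_fabrics / nvme_keyring
  modprobe -v -r nvme-fabrics
  kill 2941441                                            # the nvmf_tgt started for this test
  iptables-save | grep -v SPDK_NVMF | iptables-restore    # drop the test firewall rules
  ip -4 addr flush cvl_0_1                                # clear the test interface address

The timing summary (real 0m27.756s) and the END TEST / START TEST banners come from the run_test wrapper, which times each script and prints the markers that delimit each suite in the combined log.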
00:17:36.658 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:36.658 00:55:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:17:36.658 00:55:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1711 -- # lcov --version 00:17:36.658 00:55:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:17:36.658 00:55:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:17:36.658 00:55:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:36.658 00:55:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:36.658 00:55:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:36.658 00:55:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # IFS=.-: 00:17:36.658 00:55:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # read -ra ver1 00:17:36.658 00:55:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # IFS=.-: 00:17:36.658 00:55:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # read -ra ver2 00:17:36.658 00:55:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@338 -- # local 'op=<' 00:17:36.658 00:55:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@340 -- # ver1_l=2 00:17:36.658 00:55:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@341 -- # ver2_l=1 00:17:36.658 00:55:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:36.658 00:55:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@344 -- # case "$op" in 00:17:36.658 00:55:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@345 -- # : 1 00:17:36.658 00:55:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:36.658 00:55:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:36.658 00:55:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # decimal 1 00:17:36.658 00:55:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=1 00:17:36.658 00:55:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:36.658 00:55:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 1 00:17:36.658 00:55:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # ver1[v]=1 00:17:36.658 00:55:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # decimal 2 00:17:36.658 00:55:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=2 00:17:36.658 00:55:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:36.658 00:55:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 2 00:17:36.658 00:55:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # ver2[v]=2 00:17:36.658 00:55:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:36.658 00:55:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:36.658 00:55:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # return 0 00:17:36.658 00:55:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:36.658 00:55:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:17:36.658 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:36.658 --rc genhtml_branch_coverage=1 00:17:36.658 --rc genhtml_function_coverage=1 00:17:36.658 --rc genhtml_legend=1 00:17:36.658 --rc geninfo_all_blocks=1 00:17:36.658 --rc geninfo_unexecuted_blocks=1 00:17:36.658 00:17:36.658 ' 00:17:36.658 00:55:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:17:36.658 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:36.658 --rc genhtml_branch_coverage=1 00:17:36.658 --rc genhtml_function_coverage=1 00:17:36.658 --rc genhtml_legend=1 00:17:36.658 --rc geninfo_all_blocks=1 00:17:36.658 --rc geninfo_unexecuted_blocks=1 00:17:36.658 00:17:36.658 ' 00:17:36.658 00:55:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:17:36.658 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:36.658 --rc genhtml_branch_coverage=1 00:17:36.658 --rc genhtml_function_coverage=1 00:17:36.658 --rc genhtml_legend=1 00:17:36.658 --rc geninfo_all_blocks=1 00:17:36.658 --rc geninfo_unexecuted_blocks=1 00:17:36.658 00:17:36.658 ' 00:17:36.658 00:55:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:17:36.658 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:36.658 --rc genhtml_branch_coverage=1 00:17:36.658 --rc genhtml_function_coverage=1 00:17:36.658 --rc genhtml_legend=1 00:17:36.658 --rc geninfo_all_blocks=1 00:17:36.658 --rc geninfo_unexecuted_blocks=1 00:17:36.658 00:17:36.658 ' 00:17:36.658 00:55:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:36.658 00:55:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:17:36.658 00:55:09 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:36.658 00:55:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:36.658 00:55:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:36.658 00:55:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:36.658 00:55:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:36.658 00:55:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:36.658 00:55:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:36.658 00:55:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:36.658 00:55:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:36.658 00:55:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:36.658 00:55:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:36.658 00:55:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:36.658 00:55:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:36.658 00:55:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:36.658 00:55:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:36.658 00:55:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:36.658 00:55:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:36.658 00:55:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@15 -- # shopt -s extglob 00:17:36.658 00:55:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:36.658 00:55:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:36.658 00:55:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:36.658 00:55:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:36.658 00:55:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:36.658 00:55:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:36.659 00:55:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:17:36.659 00:55:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:36.659 00:55:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # : 0 00:17:36.659 00:55:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:36.659 00:55:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:36.659 00:55:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:36.659 00:55:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:36.659 00:55:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:36.659 00:55:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:36.659 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:36.659 00:55:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:36.659 00:55:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:36.659 00:55:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:36.659 00:55:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:17:36.659 00:55:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:36.659 00:55:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:17:36.659 00:55:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:17:36.659 00:55:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:17:36.659 00:55:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:17:36.659 00:55:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:36.659 00:55:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:36.659 00:55:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:36.659 00:55:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:36.659 00:55:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:36.659 00:55:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:36.659 00:55:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:36.659 00:55:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:36.659 00:55:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:17:36.659 00:55:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:17:36.659 00:55:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@309 -- # xtrace_disable 00:17:36.659 00:55:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:17:38.565 00:55:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:38.565 00:55:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # pci_devs=() 00:17:38.565 00:55:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:38.565 00:55:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:38.565 00:55:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:38.565 00:55:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:38.565 00:55:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:38.565 00:55:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # net_devs=() 00:17:38.565 00:55:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:38.565 00:55:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # e810=() 00:17:38.565 00:55:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # local -ga e810 00:17:38.565 00:55:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # x722=() 00:17:38.565 00:55:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # local -ga x722 00:17:38.565 00:55:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # mlx=() 00:17:38.565 00:55:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # local -ga mlx 00:17:38.565 00:55:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:38.565 00:55:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:38.565 00:55:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:38.565 00:55:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:38.565 00:55:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:38.565 00:55:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:38.565 00:55:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:38.565 00:55:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:38.565 00:55:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:38.565 00:55:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:38.565 00:55:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:38.565 00:55:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:38.565 00:55:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:38.565 00:55:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:17:38.565 00:55:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:17:38.565 00:55:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:17:38.565 00:55:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:17:38.565 00:55:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:38.565 00:55:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:38.565 00:55:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:17:38.565 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:17:38.565 00:55:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:38.565 00:55:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:38.565 00:55:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:38.565 00:55:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:38.565 00:55:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:38.565 00:55:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:38.565 00:55:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:17:38.565 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:17:38.565 00:55:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:38.565 00:55:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == 
unbound ]] 00:17:38.565 00:55:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:38.565 00:55:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:38.565 00:55:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:38.565 00:55:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:38.565 00:55:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:17:38.566 00:55:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:17:38.566 00:55:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:38.566 00:55:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:38.566 00:55:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:38.566 00:55:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:38.566 00:55:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:38.566 00:55:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:38.566 00:55:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:38.566 00:55:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:17:38.566 Found net devices under 0000:0a:00.0: cvl_0_0 00:17:38.566 00:55:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:38.566 00:55:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:38.566 00:55:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:38.566 00:55:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:38.566 00:55:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:38.566 00:55:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:38.566 00:55:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:38.566 00:55:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:38.566 00:55:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:17:38.566 Found net devices under 0000:0a:00.1: cvl_0_1 00:17:38.566 00:55:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:38.566 00:55:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:17:38.566 00:55:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # is_hw=yes 00:17:38.566 00:55:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:17:38.566 00:55:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:17:38.566 00:55:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:17:38.566 00:55:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:38.566 00:55:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:38.566 00:55:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:38.566 00:55:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:38.566 00:55:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:38.566 00:55:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:38.566 00:55:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:38.566 00:55:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:38.566 00:55:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:17:38.566 00:55:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:38.566 00:55:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:38.566 00:55:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:38.566 00:55:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:38.566 00:55:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:38.566 00:55:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:38.826 00:55:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:38.826 00:55:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:38.826 00:55:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:38.826 00:55:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:38.826 00:55:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:38.826 00:55:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:38.826 00:55:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:38.826 00:55:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:17:38.826 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:38.826 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.213 ms 00:17:38.826 00:17:38.826 --- 10.0.0.2 ping statistics --- 00:17:38.826 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:38.826 rtt min/avg/max/mdev = 0.213/0.213/0.213/0.000 ms 00:17:38.826 00:55:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:38.826 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:38.826 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.095 ms 00:17:38.826 00:17:38.826 --- 10.0.0.1 ping statistics --- 00:17:38.826 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:38.826 rtt min/avg/max/mdev = 0.095/0.095/0.095/0.000 ms 00:17:38.826 00:55:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:38.826 00:55:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@450 -- # return 0 00:17:38.826 00:55:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:38.826 00:55:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:38.826 00:55:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:38.826 00:55:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:38.826 00:55:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:38.826 00:55:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:38.826 00:55:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:38.826 00:55:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:17:38.826 00:55:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:38.826 00:55:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:38.826 00:55:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:17:38.826 00:55:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@509 -- # nvmfpid=2946206 00:17:38.826 00:55:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:38.826 00:55:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@510 -- # waitforlisten 2946206 00:17:38.826 00:55:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@835 -- # '[' -z 2946206 ']' 00:17:38.826 00:55:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:38.826 00:55:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:38.826 00:55:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:38.826 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:38.826 00:55:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:38.826 00:55:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:17:38.826 [2024-12-06 00:55:11.514937] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 24.03.0 initialization... 
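The nvmf_tcp_init sequence traced above is a small amount of ip/iptables plumbing: one port of the E810 pair (cvl_0_0) is moved into a network namespace and addressed as the target side, the other port (cvl_0_1) stays in the root namespace as the initiator side, and nvmf_tgt is then launched inside the namespace. A condensed sketch of just those commands, using the interface names and addresses shown in the trace (the relative nvmf_tgt path is illustrative and error handling is omitted):

  #!/usr/bin/env bash
  set -e
  ns=cvl_0_0_ns_spdk
  ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1
  ip netns add "$ns"
  ip link set cvl_0_0 netns "$ns"                          # target-side port
  ip addr add 10.0.0.1/24 dev cvl_0_1                      # initiator address
  ip netns exec "$ns" ip addr add 10.0.0.2/24 dev cvl_0_0  # target address
  ip link set cvl_0_1 up
  ip netns exec "$ns" ip link set cvl_0_0 up
  ip netns exec "$ns" ip link set lo up
  # Allow NVMe/TCP traffic on port 4420 through the host firewall; the ipts helper above
  # adds the same rule with an SPDK_NVMF comment so the teardown can strip it later.
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  # Verify connectivity in both directions, then start the target inside the namespace.
  ping -c 1 10.0.0.2
  ip netns exec "$ns" ping -c 1 10.0.0.1
  ip netns exec "$ns" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &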
00:17:38.826 [2024-12-06 00:55:11.515092] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:39.084 [2024-12-06 00:55:11.669603] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:39.342 [2024-12-06 00:55:11.815868] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:39.342 [2024-12-06 00:55:11.815952] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:39.342 [2024-12-06 00:55:11.815978] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:39.343 [2024-12-06 00:55:11.816003] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:39.343 [2024-12-06 00:55:11.816023] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:39.343 [2024-12-06 00:55:11.818885] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:39.343 [2024-12-06 00:55:11.818958] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:39.343 [2024-12-06 00:55:11.819030] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:39.343 [2024-12-06 00:55:11.819034] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:17:39.909 00:55:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:39.909 00:55:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@868 -- # return 0 00:17:39.909 00:55:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:39.909 00:55:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:39.909 00:55:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:17:39.909 00:55:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:39.909 00:55:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:17:39.909 00:55:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode28868 00:17:40.168 [2024-12-06 00:55:12.743050] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:17:40.168 00:55:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:17:40.168 { 00:17:40.168 "nqn": "nqn.2016-06.io.spdk:cnode28868", 00:17:40.168 "tgt_name": "foobar", 00:17:40.168 "method": "nvmf_create_subsystem", 00:17:40.168 "req_id": 1 00:17:40.168 } 00:17:40.168 Got JSON-RPC error response 00:17:40.168 response: 00:17:40.168 { 00:17:40.168 "code": -32603, 00:17:40.168 "message": "Unable to find target foobar" 00:17:40.168 }' 00:17:40.168 00:55:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:17:40.168 { 00:17:40.168 "nqn": "nqn.2016-06.io.spdk:cnode28868", 00:17:40.168 "tgt_name": "foobar", 00:17:40.168 "method": "nvmf_create_subsystem", 00:17:40.168 "req_id": 1 00:17:40.168 } 00:17:40.168 Got JSON-RPC error response 00:17:40.168 
response: 00:17:40.168 { 00:17:40.168 "code": -32603, 00:17:40.168 "message": "Unable to find target foobar" 00:17:40.168 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:17:40.168 00:55:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:17:40.168 00:55:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode16240 00:17:40.426 [2024-12-06 00:55:13.028117] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode16240: invalid serial number 'SPDKISFASTANDAWESOME' 00:17:40.426 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:17:40.426 { 00:17:40.426 "nqn": "nqn.2016-06.io.spdk:cnode16240", 00:17:40.426 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:17:40.426 "method": "nvmf_create_subsystem", 00:17:40.426 "req_id": 1 00:17:40.426 } 00:17:40.426 Got JSON-RPC error response 00:17:40.426 response: 00:17:40.426 { 00:17:40.426 "code": -32602, 00:17:40.426 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:17:40.426 }' 00:17:40.426 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:17:40.426 { 00:17:40.426 "nqn": "nqn.2016-06.io.spdk:cnode16240", 00:17:40.426 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:17:40.426 "method": "nvmf_create_subsystem", 00:17:40.426 "req_id": 1 00:17:40.426 } 00:17:40.426 Got JSON-RPC error response 00:17:40.426 response: 00:17:40.426 { 00:17:40.426 "code": -32602, 00:17:40.426 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:17:40.426 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:17:40.426 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:17:40.427 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode14954 00:17:40.685 [2024-12-06 00:55:13.296984] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode14954: invalid model number 'SPDK_Controller' 00:17:40.685 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:17:40.685 { 00:17:40.685 "nqn": "nqn.2016-06.io.spdk:cnode14954", 00:17:40.685 "model_number": "SPDK_Controller\u001f", 00:17:40.685 "method": "nvmf_create_subsystem", 00:17:40.685 "req_id": 1 00:17:40.685 } 00:17:40.685 Got JSON-RPC error response 00:17:40.685 response: 00:17:40.685 { 00:17:40.685 "code": -32602, 00:17:40.685 "message": "Invalid MN SPDK_Controller\u001f" 00:17:40.685 }' 00:17:40.685 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:17:40.685 { 00:17:40.685 "nqn": "nqn.2016-06.io.spdk:cnode14954", 00:17:40.685 "model_number": "SPDK_Controller\u001f", 00:17:40.685 "method": "nvmf_create_subsystem", 00:17:40.685 "req_id": 1 00:17:40.685 } 00:17:40.685 Got JSON-RPC error response 00:17:40.685 response: 00:17:40.685 { 00:17:40.685 "code": -32602, 00:17:40.685 "message": "Invalid MN SPDK_Controller\u001f" 00:17:40.685 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:17:40.685 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:17:40.685 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:17:40.685 00:55:13 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:17:40.685 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:17:40.685 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:17:40.685 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:17:40.685 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:40.685 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63 00:17:40.685 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3f' 00:17:40.685 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='?' 00:17:40.685 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:40.685 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:40.685 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:17:40.685 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:17:40.685 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:17:40.685 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:40.685 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:40.685 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 64 00:17:40.685 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40' 00:17:40.685 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=@ 00:17:40.685 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:40.685 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:40.685 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 90 00:17:40.685 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5a' 00:17:40.686 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Z 00:17:40.686 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:40.686 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:40.686 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 101 00:17:40.686 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x65' 00:17:40.686 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=e 00:17:40.686 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:40.686 00:55:13 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:40.686 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 115 00:17:40.686 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x73' 00:17:40.686 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=s 00:17:40.686 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:40.686 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:40.686 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:17:40.686 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:17:40.686 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:17:40.686 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:40.686 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:40.686 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 100 00:17:40.686 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x64' 00:17:40.686 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=d 00:17:40.686 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:40.686 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:40.686 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39 00:17:40.686 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27' 00:17:40.686 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=\' 00:17:40.686 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:40.686 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:40.686 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 127 00:17:40.686 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f' 00:17:40.686 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177' 00:17:40.686 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:40.686 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:40.686 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 70 00:17:40.686 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x46' 00:17:40.686 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=F 00:17:40.686 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:40.686 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:40.686 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 64 00:17:40.686 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40' 00:17:40.686 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=@ 00:17:40.686 
00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:40.686 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:40.686 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 97 00:17:40.686 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x61' 00:17:40.686 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=a 00:17:40.686 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:40.686 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:40.686 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 91 00:17:40.686 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5b' 00:17:40.686 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='[' 00:17:40.686 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:40.686 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:40.686 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 78 00:17:40.686 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4e' 00:17:40.686 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=N 00:17:40.686 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:40.686 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:40.686 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 97 00:17:40.686 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x61' 00:17:40.686 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=a 00:17:40.686 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:40.686 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:40.686 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 74 00:17:40.686 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4a' 00:17:40.686 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=J 00:17:40.686 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:40.686 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:40.686 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 56 00:17:40.686 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x38' 00:17:40.686 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=8 00:17:40.686 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:40.686 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:40.686 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 101 00:17:40.686 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x65' 
00:17:40.686 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=e 00:17:40.945 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:40.945 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:40.945 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:17:40.945 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:17:40.945 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:17:40.945 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:40.945 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:40.945 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 37 00:17:40.945 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x25' 00:17:40.945 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=% 00:17:40.945 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:40.945 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:40.945 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ ? == \- ]] 00:17:40.945 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo '?9@Zes>d'\''F@a[NaJ8e;%' 00:17:40.945 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s '?9@Zes>d'\''F@a[NaJ8e;%' nqn.2016-06.io.spdk:cnode26891 00:17:41.204 [2024-12-06 00:55:13.658159] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode26891: invalid serial number '?9@Zes>d'F@a[NaJ8e;%' 00:17:41.204 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:17:41.204 { 00:17:41.204 "nqn": "nqn.2016-06.io.spdk:cnode26891", 00:17:41.204 "serial_number": "?9@Zes>d'\''\u007fF@a[NaJ8e;%", 00:17:41.204 "method": "nvmf_create_subsystem", 00:17:41.204 "req_id": 1 00:17:41.204 } 00:17:41.204 Got JSON-RPC error response 00:17:41.204 response: 00:17:41.204 { 00:17:41.204 "code": -32602, 00:17:41.204 "message": "Invalid SN ?9@Zes>d'\''\u007fF@a[NaJ8e;%" 00:17:41.204 }' 00:17:41.204 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:17:41.204 { 00:17:41.204 "nqn": "nqn.2016-06.io.spdk:cnode26891", 00:17:41.204 "serial_number": "?9@Zes>d'\u007fF@a[NaJ8e;%", 00:17:41.204 "method": "nvmf_create_subsystem", 00:17:41.204 "req_id": 1 00:17:41.204 } 00:17:41.204 Got JSON-RPC error response 00:17:41.204 response: 00:17:41.204 { 00:17:41.204 "code": -32602, 00:17:41.204 "message": "Invalid SN ?9@Zes>d'\u007fF@a[NaJ8e;%" 00:17:41.204 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:17:41.204 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:17:41.204 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:17:41.204 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' 
'68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:17:41.204 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:17:41.204 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:17:41.204 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:17:41.204 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:41.204 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 64 00:17:41.204 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40' 00:17:41.204 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=@ 00:17:41.204 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:41.204 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:41.204 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 71 00:17:41.204 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x47' 00:17:41.204 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=G 00:17:41.204 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:41.204 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:41.204 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 92 00:17:41.204 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5c' 00:17:41.204 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='\' 00:17:41.204 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:41.204 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:41.204 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:17:41.204 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:17:41.204 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:17:41.204 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:41.204 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:41.204 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:17:41.204 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:17:41.205 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:17:41.205 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:41.205 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:41.205 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 92 00:17:41.205 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 
-- # echo -e '\x5c' 00:17:41.205 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='\' 00:17:41.205 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:41.205 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:41.205 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:17:41.205 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:17:41.205 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:17:41.205 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:41.205 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:41.205 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 94 00:17:41.205 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5e' 00:17:41.205 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='^' 00:17:41.205 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:41.205 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:41.205 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:17:41.205 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:17:41.205 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:17:41.205 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:41.205 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:41.205 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 101 00:17:41.205 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x65' 00:17:41.205 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=e 00:17:41.205 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:41.205 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:41.205 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 56 00:17:41.205 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x38' 00:17:41.205 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=8 00:17:41.205 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:41.205 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:41.205 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:17:41.205 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:17:41.205 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:17:41.205 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:41.205 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:41.205 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@25 -- # printf %x 116 00:17:41.205 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x74' 00:17:41.205 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=t 00:17:41.205 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:41.205 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:41.205 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 113 00:17:41.205 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x71' 00:17:41.205 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=q 00:17:41.205 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:41.205 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:41.205 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 91 00:17:41.205 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5b' 00:17:41.205 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='[' 00:17:41.205 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:41.205 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:41.205 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:17:41.205 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:17:41.205 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:17:41.205 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:41.205 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:41.205 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 74 00:17:41.205 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4a' 00:17:41.205 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=J 00:17:41.205 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:41.205 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:41.205 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 51 00:17:41.205 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x33' 00:17:41.205 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=3 00:17:41.205 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:41.205 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:41.205 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 66 00:17:41.205 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x42' 00:17:41.205 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=B 00:17:41.205 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:41.205 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@24 -- # (( ll < length )) 00:17:41.205 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 58 00:17:41.205 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3a' 00:17:41.205 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=: 00:17:41.205 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:41.205 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:41.205 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124 00:17:41.205 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 00:17:41.205 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='|' 00:17:41.205 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:41.205 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:41.205 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 65 00:17:41.205 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x41' 00:17:41.205 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=A 00:17:41.205 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:41.205 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:41.205 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 82 00:17:41.205 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x52' 00:17:41.205 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=R 00:17:41.205 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:41.205 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:41.205 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 84 00:17:41.205 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x54' 00:17:41.205 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=T 00:17:41.205 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:41.205 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:41.205 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 101 00:17:41.205 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x65' 00:17:41.205 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=e 00:17:41.205 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:41.205 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:41.205 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 68 00:17:41.205 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x44' 00:17:41.205 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=D 00:17:41.205 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid 
-- target/invalid.sh@24 -- # (( ll++ )) 00:17:41.205 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:41.205 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 102 00:17:41.205 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x66' 00:17:41.205 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=f 00:17:41.205 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:41.205 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:41.205 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 127 00:17:41.205 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f' 00:17:41.205 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177' 00:17:41.205 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:41.205 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:41.205 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 69 00:17:41.205 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x45' 00:17:41.206 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=E 00:17:41.206 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:41.206 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:41.206 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 70 00:17:41.206 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x46' 00:17:41.206 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=F 00:17:41.206 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:41.206 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:41.206 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 58 00:17:41.206 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3a' 00:17:41.206 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=: 00:17:41.206 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:41.206 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:41.206 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:17:41.206 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:17:41.206 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:17:41.206 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:41.206 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:41.206 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:17:41.206 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:17:41.206 00:55:13 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:17:41.206 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:41.206 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:41.206 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 101 00:17:41.206 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x65' 00:17:41.206 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=e 00:17:41.206 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:41.206 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:41.206 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 111 00:17:41.206 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6f' 00:17:41.206 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=o 00:17:41.206 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:41.206 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:41.206 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 125 00:17:41.206 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7d' 00:17:41.206 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='}' 00:17:41.206 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:41.206 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:41.206 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 58 00:17:41.206 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3a' 00:17:41.206 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=: 00:17:41.206 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:41.206 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:41.206 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:17:41.206 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 00:17:41.206 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:17:41.206 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:41.206 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:41.206 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 73 00:17:41.206 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x49' 00:17:41.206 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=I 00:17:41.206 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:41.206 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:41.206 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 51 00:17:41.206 00:55:13 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x33' 00:17:41.206 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=3 00:17:41.206 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:41.206 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:41.206 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 68 00:17:41.206 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x44' 00:17:41.206 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=D 00:17:41.206 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:41.206 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:41.206 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ @ == \- ]] 00:17:41.206 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo '@G\m"\m^2e8Ctq[xJ3B:|ARTeDfEF:iveo}:hI3D' 00:17:41.206 00:55:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d '@G\m"\m^2e8Ctq[xJ3B:|ARTeDfEF:iveo}:hI3D' nqn.2016-06.io.spdk:cnode17975 00:17:41.464 [2024-12-06 00:55:14.083558] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode17975: invalid model number '@G\m"\m^2e8Ctq[xJ3B:|ARTeDfEF:iveo}:hI3D' 00:17:41.464 00:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:17:41.464 { 00:17:41.464 "nqn": "nqn.2016-06.io.spdk:cnode17975", 00:17:41.464 "model_number": "@G\\m\"\\m^2e8Ctq[xJ3B:|ARTeDf\u007fEF:iveo}:hI3D", 00:17:41.464 "method": "nvmf_create_subsystem", 00:17:41.464 "req_id": 1 00:17:41.464 } 00:17:41.464 Got JSON-RPC error response 00:17:41.464 response: 00:17:41.464 { 00:17:41.464 "code": -32602, 00:17:41.464 "message": "Invalid MN @G\\m\"\\m^2e8Ctq[xJ3B:|ARTeDf\u007fEF:iveo}:hI3D" 00:17:41.464 }' 00:17:41.464 00:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:17:41.464 { 00:17:41.464 "nqn": "nqn.2016-06.io.spdk:cnode17975", 00:17:41.464 "model_number": "@G\\m\"\\m^2e8Ctq[xJ3B:|ARTeDf\u007fEF:iveo}:hI3D", 00:17:41.464 "method": "nvmf_create_subsystem", 00:17:41.464 "req_id": 1 00:17:41.464 } 00:17:41.464 Got JSON-RPC error response 00:17:41.464 response: 00:17:41.464 { 00:17:41.464 "code": -32602, 00:17:41.464 "message": "Invalid MN @G\\m\"\\m^2e8Ctq[xJ3B:|ARTeDf\u007fEF:iveo}:hI3D" 00:17:41.464 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:17:41.464 00:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:17:41.722 [2024-12-06 00:55:14.352618] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:41.722 00:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:17:41.979 00:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:17:41.979 00:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:17:41.979 00:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
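
Note on the xtrace above: target/invalid.sh assembles a 41-character model number one random byte at a time (printf %x picks a code point, echo -e turns it into a character), then expects nvmf_create_subsystem to reject it with "Invalid MN". A minimal sketch of that idiom, assuming rpc.py is on PATH and using made-up helper names rather than the script's own:

  # hypothetical sketch: build a random printable string, then expect the RPC to reject it
  gen_random_mn() {
      local len=$1 string="" ll hex
      for (( ll = 0; ll < len; ll++ )); do
          hex=$(printf %x $(( RANDOM % 95 + 32 )))   # a printable byte, 0x20..0x7e
          string+=$(echo -e "\x${hex}")              # same printf / echo -e idiom as the trace
      done
      echo "$string"
  }

  mn=$(gen_random_mn 41)
  out=$(rpc.py nvmf_create_subsystem -d "$mn" nqn.2016-06.io.spdk:cnode17975 2>&1) || true
  [[ $out == *"Invalid MN"* ]] && echo "rejected as expected"

The glob match on *Invalid MN* at the end is the same check the trace performs at target/invalid.sh@59.
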
target/invalid.sh@67 -- # head -n 1 00:17:41.979 00:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:17:41.979 00:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:17:42.238 [2024-12-06 00:55:14.914380] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:17:42.238 00:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:17:42.238 { 00:17:42.238 "nqn": "nqn.2016-06.io.spdk:cnode", 00:17:42.238 "listen_address": { 00:17:42.238 "trtype": "tcp", 00:17:42.238 "traddr": "", 00:17:42.238 "trsvcid": "4421" 00:17:42.238 }, 00:17:42.238 "method": "nvmf_subsystem_remove_listener", 00:17:42.238 "req_id": 1 00:17:42.238 } 00:17:42.238 Got JSON-RPC error response 00:17:42.238 response: 00:17:42.238 { 00:17:42.238 "code": -32602, 00:17:42.238 "message": "Invalid parameters" 00:17:42.238 }' 00:17:42.238 00:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:17:42.238 { 00:17:42.238 "nqn": "nqn.2016-06.io.spdk:cnode", 00:17:42.238 "listen_address": { 00:17:42.238 "trtype": "tcp", 00:17:42.238 "traddr": "", 00:17:42.238 "trsvcid": "4421" 00:17:42.238 }, 00:17:42.238 "method": "nvmf_subsystem_remove_listener", 00:17:42.238 "req_id": 1 00:17:42.238 } 00:17:42.238 Got JSON-RPC error response 00:17:42.238 response: 00:17:42.238 { 00:17:42.238 "code": -32602, 00:17:42.238 "message": "Invalid parameters" 00:17:42.238 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:17:42.238 00:55:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode16337 -i 0 00:17:42.496 [2024-12-06 00:55:15.199291] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode16337: invalid cntlid range [0-65519] 00:17:42.752 00:55:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:17:42.752 { 00:17:42.752 "nqn": "nqn.2016-06.io.spdk:cnode16337", 00:17:42.752 "min_cntlid": 0, 00:17:42.752 "method": "nvmf_create_subsystem", 00:17:42.752 "req_id": 1 00:17:42.752 } 00:17:42.752 Got JSON-RPC error response 00:17:42.752 response: 00:17:42.752 { 00:17:42.752 "code": -32602, 00:17:42.752 "message": "Invalid cntlid range [0-65519]" 00:17:42.752 }' 00:17:42.752 00:55:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:17:42.752 { 00:17:42.753 "nqn": "nqn.2016-06.io.spdk:cnode16337", 00:17:42.753 "min_cntlid": 0, 00:17:42.753 "method": "nvmf_create_subsystem", 00:17:42.753 "req_id": 1 00:17:42.753 } 00:17:42.753 Got JSON-RPC error response 00:17:42.753 response: 00:17:42.753 { 00:17:42.753 "code": -32602, 00:17:42.753 "message": "Invalid cntlid range [0-65519]" 00:17:42.753 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:17:42.753 00:55:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10713 -i 65520 00:17:43.008 [2024-12-06 00:55:15.472205] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode10713: invalid cntlid range [65520-65519] 00:17:43.008 00:55:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 
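
Each of these negative checks follows the same shape: run an RPC that is supposed to fail, capture the JSON-RPC error body, and glob-match the message. A hedged sketch of that pattern, with a helper name invented for illustration and rpc.py assumed to be on PATH:

  # hypothetical helper: succeed only if the RPC fails with the expected message
  run_invalid_rpc() {
      local expect=$1; shift
      local out
      out=$("$@" 2>&1) || true
      [[ $out == *"$expect"* ]]
  }

  # e.g. removing a listener that was never added should report invalid parameters
  run_invalid_rpc "Invalid parameters" \
      rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421
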
00:17:43.008 { 00:17:43.008 "nqn": "nqn.2016-06.io.spdk:cnode10713", 00:17:43.008 "min_cntlid": 65520, 00:17:43.008 "method": "nvmf_create_subsystem", 00:17:43.008 "req_id": 1 00:17:43.008 } 00:17:43.008 Got JSON-RPC error response 00:17:43.008 response: 00:17:43.008 { 00:17:43.008 "code": -32602, 00:17:43.008 "message": "Invalid cntlid range [65520-65519]" 00:17:43.008 }' 00:17:43.009 00:55:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:17:43.009 { 00:17:43.009 "nqn": "nqn.2016-06.io.spdk:cnode10713", 00:17:43.009 "min_cntlid": 65520, 00:17:43.009 "method": "nvmf_create_subsystem", 00:17:43.009 "req_id": 1 00:17:43.009 } 00:17:43.009 Got JSON-RPC error response 00:17:43.009 response: 00:17:43.009 { 00:17:43.009 "code": -32602, 00:17:43.009 "message": "Invalid cntlid range [65520-65519]" 00:17:43.009 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:17:43.009 00:55:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4706 -I 0 00:17:43.265 [2024-12-06 00:55:15.741196] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode4706: invalid cntlid range [1-0] 00:17:43.265 00:55:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:17:43.265 { 00:17:43.265 "nqn": "nqn.2016-06.io.spdk:cnode4706", 00:17:43.265 "max_cntlid": 0, 00:17:43.265 "method": "nvmf_create_subsystem", 00:17:43.265 "req_id": 1 00:17:43.265 } 00:17:43.265 Got JSON-RPC error response 00:17:43.265 response: 00:17:43.265 { 00:17:43.265 "code": -32602, 00:17:43.265 "message": "Invalid cntlid range [1-0]" 00:17:43.265 }' 00:17:43.265 00:55:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:17:43.265 { 00:17:43.265 "nqn": "nqn.2016-06.io.spdk:cnode4706", 00:17:43.265 "max_cntlid": 0, 00:17:43.265 "method": "nvmf_create_subsystem", 00:17:43.265 "req_id": 1 00:17:43.265 } 00:17:43.265 Got JSON-RPC error response 00:17:43.265 response: 00:17:43.265 { 00:17:43.265 "code": -32602, 00:17:43.265 "message": "Invalid cntlid range [1-0]" 00:17:43.265 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:17:43.265 00:55:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1531 -I 65520 00:17:43.522 [2024-12-06 00:55:16.014080] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1531: invalid cntlid range [1-65520] 00:17:43.522 00:55:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:17:43.522 { 00:17:43.522 "nqn": "nqn.2016-06.io.spdk:cnode1531", 00:17:43.522 "max_cntlid": 65520, 00:17:43.522 "method": "nvmf_create_subsystem", 00:17:43.522 "req_id": 1 00:17:43.522 } 00:17:43.522 Got JSON-RPC error response 00:17:43.522 response: 00:17:43.522 { 00:17:43.522 "code": -32602, 00:17:43.522 "message": "Invalid cntlid range [1-65520]" 00:17:43.522 }' 00:17:43.522 00:55:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@80 -- # [[ request: 00:17:43.522 { 00:17:43.522 "nqn": "nqn.2016-06.io.spdk:cnode1531", 00:17:43.522 "max_cntlid": 65520, 00:17:43.522 "method": "nvmf_create_subsystem", 00:17:43.522 "req_id": 1 00:17:43.522 } 00:17:43.522 Got JSON-RPC error response 00:17:43.522 response: 00:17:43.522 { 00:17:43.522 "code": -32602, 00:17:43.522 "message": "Invalid 
cntlid range [1-65520]" 00:17:43.522 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:17:43.522 00:55:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode22253 -i 6 -I 5 00:17:43.780 [2024-12-06 00:55:16.299052] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode22253: invalid cntlid range [6-5] 00:17:43.780 00:55:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:17:43.780 { 00:17:43.780 "nqn": "nqn.2016-06.io.spdk:cnode22253", 00:17:43.780 "min_cntlid": 6, 00:17:43.780 "max_cntlid": 5, 00:17:43.780 "method": "nvmf_create_subsystem", 00:17:43.780 "req_id": 1 00:17:43.780 } 00:17:43.780 Got JSON-RPC error response 00:17:43.780 response: 00:17:43.780 { 00:17:43.780 "code": -32602, 00:17:43.780 "message": "Invalid cntlid range [6-5]" 00:17:43.780 }' 00:17:43.780 00:55:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:17:43.780 { 00:17:43.780 "nqn": "nqn.2016-06.io.spdk:cnode22253", 00:17:43.780 "min_cntlid": 6, 00:17:43.780 "max_cntlid": 5, 00:17:43.780 "method": "nvmf_create_subsystem", 00:17:43.780 "req_id": 1 00:17:43.780 } 00:17:43.780 Got JSON-RPC error response 00:17:43.780 response: 00:17:43.780 { 00:17:43.780 "code": -32602, 00:17:43.780 "message": "Invalid cntlid range [6-5]" 00:17:43.780 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:17:43.780 00:55:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:17:43.780 00:55:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:17:43.780 { 00:17:43.780 "name": "foobar", 00:17:43.780 "method": "nvmf_delete_target", 00:17:43.780 "req_id": 1 00:17:43.780 } 00:17:43.780 Got JSON-RPC error response 00:17:43.780 response: 00:17:43.780 { 00:17:43.780 "code": -32602, 00:17:43.780 "message": "The specified target doesn'\''t exist, cannot delete it." 00:17:43.780 }' 00:17:43.780 00:55:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:17:43.780 { 00:17:43.780 "name": "foobar", 00:17:43.780 "method": "nvmf_delete_target", 00:17:43.780 "req_id": 1 00:17:43.780 } 00:17:43.780 Got JSON-RPC error response 00:17:43.780 response: 00:17:43.780 { 00:17:43.780 "code": -32602, 00:17:43.780 "message": "The specified target doesn't exist, cannot delete it." 
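
The cntlid checks above walk the boundaries of the valid range: controller IDs must fall in 1..65519 and min_cntlid must not exceed max_cntlid, so every one of these requests is expected to come back as "Invalid cntlid range". A compact sketch of the same sweep, assuming rpc.py is on PATH (the cnode name is arbitrary):

  # hypothetical sketch: each combination should be rejected as an invalid cntlid range
  declare -a bad_ranges=(
      "-i 0"            # min below 1
      "-i 65520"        # min above 65519
      "-I 0"            # max below 1
      "-I 65520"        # max above 65519
      "-i 6 -I 5"       # min greater than max
  )
  for args in "${bad_ranges[@]}"; do
      out=$(rpc.py nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$RANDOM" $args 2>&1) || true
      [[ $out == *"Invalid cntlid range"* ]] || echo "unexpected response: $out"
  done
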
00:17:43.780 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:17:43.780 00:55:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:17:43.780 00:55:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:17:43.780 00:55:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:43.780 00:55:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@121 -- # sync 00:17:43.780 00:55:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:43.780 00:55:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@124 -- # set +e 00:17:43.780 00:55:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:43.781 00:55:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:43.781 rmmod nvme_tcp 00:17:43.781 rmmod nvme_fabrics 00:17:43.781 rmmod nvme_keyring 00:17:44.039 00:55:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:44.039 00:55:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@128 -- # set -e 00:17:44.039 00:55:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@129 -- # return 0 00:17:44.039 00:55:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@517 -- # '[' -n 2946206 ']' 00:17:44.039 00:55:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@518 -- # killprocess 2946206 00:17:44.039 00:55:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@954 -- # '[' -z 2946206 ']' 00:17:44.039 00:55:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@958 -- # kill -0 2946206 00:17:44.039 00:55:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@959 -- # uname 00:17:44.039 00:55:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:44.039 00:55:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2946206 00:17:44.039 00:55:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:44.039 00:55:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:44.039 00:55:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2946206' 00:17:44.039 killing process with pid 2946206 00:17:44.039 00:55:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@973 -- # kill 2946206 00:17:44.039 00:55:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@978 -- # wait 2946206 00:17:44.974 00:55:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:44.974 00:55:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:44.974 00:55:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:44.974 00:55:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # iptr 00:17:44.974 00:55:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # iptables-save 00:17:44.974 00:55:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:44.974 00:55:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 
-- # iptables-restore 00:17:44.974 00:55:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:44.974 00:55:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:17:44.974 00:55:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:44.974 00:55:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:44.974 00:55:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:47.508 00:55:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:17:47.508 00:17:47.508 real 0m10.568s 00:17:47.508 user 0m26.860s 00:17:47.508 sys 0m2.586s 00:17:47.508 00:55:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:47.508 00:55:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:17:47.508 ************************************ 00:17:47.508 END TEST nvmf_invalid 00:17:47.508 ************************************ 00:17:47.508 00:55:19 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:17:47.508 00:55:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:47.508 00:55:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:47.508 00:55:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:47.508 ************************************ 00:17:47.508 START TEST nvmf_connect_stress 00:17:47.508 ************************************ 00:17:47.508 00:55:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:17:47.508 * Looking for test storage... 
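
The teardown traced above unloads the NVMe/TCP kernel modules, kills the target process, restores iptables from a saved dump with the SPDK_NVMF-tagged rules filtered out, and flushes the test interfaces. A rough, hypothetical condensation of those steps into one function (the name and error handling are illustrative, not common.sh's own):

  cleanup_nvmf_tcp() {
      local pid=$1
      sync
      modprobe -v -r nvme-tcp nvme-fabrics nvme-keyring 2>/dev/null || true
      kill -0 "$pid" 2>/dev/null && kill "$pid"
      # drop only the rules the test added, keyed on the SPDK_NVMF comment tag
      iptables-save | grep -v SPDK_NVMF | iptables-restore
      ip netns del cvl_0_0_ns_spdk 2>/dev/null || true
      ip -4 addr flush cvl_0_1 2>/dev/null || true
  }
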
00:17:47.508 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:47.508 00:55:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:17:47.508 00:55:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1711 -- # lcov --version 00:17:47.508 00:55:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:17:47.508 00:55:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:17:47.508 00:55:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:47.508 00:55:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:47.508 00:55:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:47.508 00:55:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # IFS=.-: 00:17:47.508 00:55:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # read -ra ver1 00:17:47.508 00:55:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # IFS=.-: 00:17:47.508 00:55:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # read -ra ver2 00:17:47.508 00:55:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@338 -- # local 'op=<' 00:17:47.508 00:55:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@340 -- # ver1_l=2 00:17:47.508 00:55:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@341 -- # ver2_l=1 00:17:47.508 00:55:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:47.508 00:55:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@344 -- # case "$op" in 00:17:47.508 00:55:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@345 -- # : 1 00:17:47.508 00:55:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:47.509 00:55:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:47.509 00:55:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # decimal 1 00:17:47.509 00:55:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=1 00:17:47.509 00:55:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:47.509 00:55:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 1 00:17:47.509 00:55:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:17:47.509 00:55:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # decimal 2 00:17:47.509 00:55:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=2 00:17:47.509 00:55:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:47.509 00:55:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 2 00:17:47.509 00:55:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:17:47.509 00:55:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:47.509 00:55:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:47.509 00:55:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # return 0 00:17:47.509 00:55:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:47.509 00:55:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:17:47.509 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:47.509 --rc genhtml_branch_coverage=1 00:17:47.509 --rc genhtml_function_coverage=1 00:17:47.509 --rc genhtml_legend=1 00:17:47.509 --rc geninfo_all_blocks=1 00:17:47.509 --rc geninfo_unexecuted_blocks=1 00:17:47.509 00:17:47.509 ' 00:17:47.509 00:55:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:17:47.509 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:47.509 --rc genhtml_branch_coverage=1 00:17:47.509 --rc genhtml_function_coverage=1 00:17:47.509 --rc genhtml_legend=1 00:17:47.509 --rc geninfo_all_blocks=1 00:17:47.509 --rc geninfo_unexecuted_blocks=1 00:17:47.509 00:17:47.509 ' 00:17:47.509 00:55:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:17:47.509 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:47.509 --rc genhtml_branch_coverage=1 00:17:47.509 --rc genhtml_function_coverage=1 00:17:47.509 --rc genhtml_legend=1 00:17:47.509 --rc geninfo_all_blocks=1 00:17:47.509 --rc geninfo_unexecuted_blocks=1 00:17:47.509 00:17:47.509 ' 00:17:47.509 00:55:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:17:47.509 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:47.509 --rc genhtml_branch_coverage=1 00:17:47.509 --rc genhtml_function_coverage=1 00:17:47.509 --rc genhtml_legend=1 00:17:47.509 --rc geninfo_all_blocks=1 00:17:47.509 --rc geninfo_unexecuted_blocks=1 00:17:47.509 00:17:47.509 ' 00:17:47.509 00:55:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source 
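
The cmp_versions trace above splits each version string on '.', '-' and ':' and compares the fields numerically to decide whether the installed lcov predates 2.x. A small stand-in for that comparison, written from scratch rather than copied from scripts/common.sh:

  # hypothetical sketch: return success if $1 is strictly older than $2
  lt_version() {
      local IFS=.-:
      read -ra v1 <<< "$1"
      read -ra v2 <<< "$2"
      local i n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
      for (( i = 0; i < n; i++ )); do
          (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
          (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
      done
      return 1   # equal versions are not "less than"
  }

  lt_version 1.15 2 && echo "lcov older than 2.x: enable branch/function coverage flags"
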
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:47.509 00:55:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:17:47.509 00:55:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:47.509 00:55:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:47.509 00:55:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:47.509 00:55:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:47.509 00:55:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:47.509 00:55:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:47.509 00:55:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:47.509 00:55:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:47.509 00:55:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:47.509 00:55:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:47.509 00:55:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:47.509 00:55:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:47.509 00:55:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:47.509 00:55:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:47.509 00:55:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:47.509 00:55:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:47.509 00:55:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:47.509 00:55:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:17:47.509 00:55:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:47.509 00:55:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:47.509 00:55:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:47.509 00:55:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:47.509 00:55:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:47.509 00:55:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:47.509 00:55:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:17:47.509 00:55:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:47.509 00:55:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # : 0 00:17:47.509 00:55:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:47.509 00:55:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:47.509 00:55:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:47.509 00:55:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:47.509 00:55:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:47.509 00:55:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33 
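
For reference, the defaults common.sh is exporting in this stretch reduce to roughly the following. The hostnqn comes from nvme-cli and differs per host, and the HOSTID derivation shown is an assumption about intent, not the script's exact code:

  NVMF_PORT=4420
  NVMF_SECOND_PORT=4421
  NVMF_THIRD_PORT=4422
  NVMF_SERIAL=SPDKISFASTANDAWESOME
  NVME_HOSTNQN=$(nvme gen-hostnqn)          # e.g. nqn.2014-08.org.nvmexpress:uuid:...
  NVME_HOSTID=${NVME_HOSTNQN##*uuid:}       # assumed: keep only the UUID part
  NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
  NVME_CONNECT='nvme connect'
  NET_TYPE=phy
  NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
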
-- # '[' '' -eq 1 ']' 00:17:47.509 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:47.509 00:55:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:47.509 00:55:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:47.509 00:55:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:47.509 00:55:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:17:47.509 00:55:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:47.509 00:55:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:47.509 00:55:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:47.509 00:55:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:47.509 00:55:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:47.509 00:55:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:47.509 00:55:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:47.509 00:55:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:47.509 00:55:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:17:47.509 00:55:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:17:47.509 00:55:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:17:47.509 00:55:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:49.413 00:55:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:49.413 00:55:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:17:49.413 00:55:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:49.413 00:55:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:49.413 00:55:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:49.413 00:55:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:49.413 00:55:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:49.413 00:55:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # net_devs=() 00:17:49.413 00:55:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:49.413 00:55:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # e810=() 00:17:49.413 00:55:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # local -ga e810 00:17:49.413 00:55:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # x722=() 00:17:49.413 00:55:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # local -ga x722 00:17:49.413 00:55:22 
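
The "integer expression expected" complaint above is test(1) refusing to compare an empty string with -eq. A tiny illustration of the failure and one common guard (SOME_FLAG is a made-up name, not a variable from common.sh):

  SOME_FLAG=""
  if [ "$SOME_FLAG" -eq 1 ]; then echo on; fi      # prints "integer expression expected"

  # defaulting the expansion (or switching to a [[ ]] string test) keeps the trace quiet
  if [ "${SOME_FLAG:-0}" -eq 1 ]; then echo on; fi
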
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # mlx=() 00:17:49.413 00:55:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:17:49.413 00:55:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:49.413 00:55:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:49.413 00:55:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:49.413 00:55:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:49.413 00:55:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:49.413 00:55:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:49.413 00:55:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:49.413 00:55:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:49.413 00:55:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:49.413 00:55:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:49.413 00:55:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:49.413 00:55:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:49.413 00:55:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:49.413 00:55:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:17:49.413 00:55:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:17:49.413 00:55:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:17:49.413 00:55:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:17:49.413 00:55:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:49.413 00:55:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:49.413 00:55:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:17:49.413 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:17:49.413 00:55:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:49.413 00:55:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:49.413 00:55:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:49.413 00:55:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:49.413 00:55:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:49.413 00:55:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:49.413 00:55:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:17:49.413 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:17:49.413 00:55:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:49.413 00:55:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:49.413 00:55:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:49.413 00:55:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:49.413 00:55:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:49.413 00:55:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:49.413 00:55:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:17:49.413 00:55:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:17:49.413 00:55:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:49.413 00:55:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:49.413 00:55:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:49.413 00:55:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:49.413 00:55:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:49.413 00:55:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:49.413 00:55:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:49.413 00:55:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:17:49.413 Found net devices under 0000:0a:00.0: cvl_0_0 00:17:49.413 00:55:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:49.413 00:55:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:49.413 00:55:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:49.413 00:55:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:49.413 00:55:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:49.414 00:55:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:49.414 00:55:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:49.414 00:55:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:49.414 00:55:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:17:49.414 Found net devices under 0000:0a:00.1: cvl_0_1 00:17:49.414 00:55:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 
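
The discovery loop above maps each e810 PCI address to its kernel net devices by listing /sys and keeping interfaces that are up. A simplified sketch of that walk, hard-coding the two addresses the log found and checking operstate rather than whatever common.sh actually tests:

  net_devs=()
  for pci in 0000:0a:00.0 0000:0a:00.1; do          # addresses taken from the log above
      for path in /sys/bus/pci/devices/$pci/net/*; do
          [[ -e $path ]] || continue
          dev=${path##*/}
          state=$(cat "/sys/class/net/$dev/operstate" 2>/dev/null)
          [[ $state == up ]] && net_devs+=("$dev")
      done
  done
  echo "usable test interfaces: ${net_devs[*]}"     # e.g. cvl_0_0 cvl_0_1
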
-- # net_devs+=("${pci_net_devs[@]}") 00:17:49.414 00:55:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:17:49.414 00:55:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:17:49.414 00:55:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:17:49.414 00:55:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:17:49.414 00:55:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:17:49.414 00:55:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:49.414 00:55:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:49.414 00:55:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:49.414 00:55:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:49.414 00:55:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:49.414 00:55:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:49.414 00:55:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:49.414 00:55:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:49.414 00:55:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:17:49.414 00:55:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:49.414 00:55:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:49.414 00:55:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:49.414 00:55:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:49.414 00:55:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:49.414 00:55:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:49.672 00:55:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:49.672 00:55:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:49.672 00:55:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:49.672 00:55:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:49.672 00:55:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:49.672 00:55:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:49.672 00:55:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:49.672 00:55:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:17:49.672 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:49.672 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.247 ms 00:17:49.672 00:17:49.672 --- 10.0.0.2 ping statistics --- 00:17:49.672 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:49.672 rtt min/avg/max/mdev = 0.247/0.247/0.247/0.000 ms 00:17:49.672 00:55:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:49.672 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:49.672 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.076 ms 00:17:49.672 00:17:49.672 --- 10.0.0.1 ping statistics --- 00:17:49.672 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:49.672 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:17:49.672 00:55:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:49.672 00:55:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@450 -- # return 0 00:17:49.672 00:55:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:49.672 00:55:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:49.672 00:55:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:49.672 00:55:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:49.672 00:55:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:49.672 00:55:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:49.672 00:55:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:49.672 00:55:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:17:49.672 00:55:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:49.672 00:55:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:49.672 00:55:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:49.672 00:55:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@509 -- # nvmfpid=2949105 00:17:49.672 00:55:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:17:49.672 00:55:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@510 -- # waitforlisten 2949105 00:17:49.672 00:55:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@835 -- # '[' -z 2949105 ']' 00:17:49.672 00:55:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:49.672 00:55:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:49.672 00:55:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:17:49.672 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:49.672 00:55:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:49.672 00:55:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:49.672 [2024-12-06 00:55:22.326732] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 24.03.0 initialization... 00:17:49.672 [2024-12-06 00:55:22.326891] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:49.931 [2024-12-06 00:55:22.479681] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:49.931 [2024-12-06 00:55:22.620061] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:49.931 [2024-12-06 00:55:22.620145] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:49.931 [2024-12-06 00:55:22.620172] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:49.931 [2024-12-06 00:55:22.620197] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:49.931 [2024-12-06 00:55:22.620216] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:49.931 [2024-12-06 00:55:22.622830] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:49.931 [2024-12-06 00:55:22.622886] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:49.931 [2024-12-06 00:55:22.622889] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:17:50.866 00:55:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:50.866 00:55:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@868 -- # return 0 00:17:50.866 00:55:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:50.866 00:55:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:50.866 00:55:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:50.866 00:55:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:50.866 00:55:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:50.866 00:55:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.866 00:55:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:50.866 [2024-12-06 00:55:23.358620] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:50.866 00:55:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.866 00:55:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:17:50.866 00:55:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 
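(Editor's note, not part of the captured trace: the target bring-up that nvmf_tcp_init and nvmfappstart performed above reduces to the short sequence below. It is a sketch reconstructed from the commands visible in this run's xtrace; the interface names cvl_0_0/cvl_0_1, the 10.0.0.x addresses, the namespace name and the nvmf_tgt arguments are simply the values this particular run used.)

    # flush any stale addresses on the two ports discovered earlier (cvl_0_0, cvl_0_1)
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    # put the target-side port into its own namespace and address both ends
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side (host)
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side (namespace)
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # open the NVMe/TCP listener port and verify reachability in both directions
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
    # load the initiator driver and start the SPDK target inside the namespace
    modprobe nvme-tcp
    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE

(The trace that follows then configures the target over its RPC socket, creating the TCP transport with nvmf_create_transport -t tcp -o -u 8192, the subsystem nqn.2016-06.io.spdk:cnode1, a listener on 10.0.0.2:4420 and a null bdev, before launching the connect_stress workload against that listener.)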
00:17:50.866 00:55:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:50.866 00:55:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.866 00:55:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:50.866 00:55:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.866 00:55:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:50.866 [2024-12-06 00:55:23.378981] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:50.866 00:55:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.866 00:55:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:17:50.866 00:55:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.866 00:55:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:50.866 NULL1 00:17:50.866 00:55:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.866 00:55:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=2949255 00:17:50.866 00:55:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:17:50.866 00:55:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:17:50.866 00:55:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:17:50.866 00:55:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:17:50.866 00:55:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:50.866 00:55:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:50.866 00:55:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:50.866 00:55:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:50.866 00:55:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:50.866 00:55:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:50.866 00:55:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:50.866 00:55:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:50.866 00:55:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:50.866 00:55:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:50.866 00:55:23 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:50.866 00:55:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:50.866 00:55:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:50.866 00:55:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:50.866 00:55:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:50.866 00:55:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:50.866 00:55:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:50.866 00:55:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:50.866 00:55:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:50.866 00:55:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:50.866 00:55:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:50.866 00:55:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:50.866 00:55:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:50.866 00:55:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:50.866 00:55:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:50.866 00:55:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:50.866 00:55:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:50.866 00:55:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:50.866 00:55:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:50.866 00:55:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:50.866 00:55:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:50.866 00:55:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:50.866 00:55:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:50.866 00:55:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:50.866 00:55:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:50.866 00:55:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:50.866 00:55:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:50.866 00:55:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:50.866 00:55:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:50.866 00:55:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:50.866 00:55:23 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2949255 00:17:50.866 00:55:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:50.866 00:55:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.866 00:55:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:51.125 00:55:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.125 00:55:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2949255 00:17:51.125 00:55:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:51.125 00:55:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.125 00:55:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:51.693 00:55:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.693 00:55:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2949255 00:17:51.693 00:55:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:51.693 00:55:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.693 00:55:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:51.952 00:55:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.952 00:55:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2949255 00:17:51.952 00:55:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:51.952 00:55:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.952 00:55:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:52.210 00:55:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.210 00:55:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2949255 00:17:52.210 00:55:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:52.210 00:55:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.210 00:55:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:52.469 00:55:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.469 00:55:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2949255 00:17:52.469 00:55:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:52.469 00:55:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.469 00:55:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:52.726 00:55:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.726 00:55:25 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2949255 00:17:52.726 00:55:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:52.726 00:55:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.726 00:55:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:53.290 00:55:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.290 00:55:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2949255 00:17:53.290 00:55:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:53.290 00:55:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.290 00:55:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:53.547 00:55:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.547 00:55:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2949255 00:17:53.547 00:55:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:53.547 00:55:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.547 00:55:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:53.804 00:55:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.804 00:55:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2949255 00:17:53.804 00:55:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:53.804 00:55:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.804 00:55:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:54.062 00:55:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.062 00:55:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2949255 00:17:54.062 00:55:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:54.062 00:55:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.062 00:55:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:54.319 00:55:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.319 00:55:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2949255 00:17:54.319 00:55:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:54.319 00:55:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.319 00:55:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:54.884 00:55:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.884 00:55:27 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2949255 00:17:54.884 00:55:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:54.885 00:55:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.885 00:55:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:55.142 00:55:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.142 00:55:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2949255 00:17:55.142 00:55:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:55.142 00:55:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.142 00:55:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:55.401 00:55:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.401 00:55:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2949255 00:17:55.401 00:55:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:55.401 00:55:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.401 00:55:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:55.659 00:55:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.659 00:55:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2949255 00:17:55.659 00:55:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:55.660 00:55:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.660 00:55:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:56.225 00:55:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.225 00:55:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2949255 00:17:56.225 00:55:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:56.225 00:55:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.225 00:55:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:56.483 00:55:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.483 00:55:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2949255 00:17:56.483 00:55:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:56.483 00:55:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.483 00:55:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:56.741 00:55:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.741 00:55:29 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2949255 00:17:56.741 00:55:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:56.741 00:55:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.741 00:55:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:56.999 00:55:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.999 00:55:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2949255 00:17:56.999 00:55:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:56.999 00:55:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.999 00:55:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:57.256 00:55:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.256 00:55:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2949255 00:17:57.256 00:55:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:57.256 00:55:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.256 00:55:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:57.819 00:55:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.819 00:55:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2949255 00:17:57.819 00:55:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:57.820 00:55:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.820 00:55:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:58.077 00:55:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.077 00:55:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2949255 00:17:58.077 00:55:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:58.077 00:55:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.077 00:55:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:58.334 00:55:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.334 00:55:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2949255 00:17:58.334 00:55:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:58.334 00:55:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.334 00:55:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:58.592 00:55:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.592 00:55:31 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2949255 00:17:58.592 00:55:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:58.592 00:55:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.592 00:55:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:59.155 00:55:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.155 00:55:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2949255 00:17:59.155 00:55:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:59.155 00:55:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.155 00:55:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:59.412 00:55:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.412 00:55:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2949255 00:17:59.412 00:55:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:59.412 00:55:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.412 00:55:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:59.670 00:55:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.670 00:55:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2949255 00:17:59.670 00:55:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:59.670 00:55:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.670 00:55:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:59.928 00:55:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.928 00:55:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2949255 00:17:59.928 00:55:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:59.928 00:55:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.928 00:55:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:00.185 00:55:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.185 00:55:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2949255 00:18:00.185 00:55:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:00.185 00:55:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.185 00:55:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:00.750 00:55:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.750 00:55:33 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2949255 00:18:00.750 00:55:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:00.750 00:55:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.750 00:55:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:01.008 00:55:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.008 00:55:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2949255 00:18:01.008 00:55:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:01.008 00:55:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.008 00:55:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:01.008 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:18:01.266 00:55:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.266 00:55:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2949255 00:18:01.266 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (2949255) - No such process 00:18:01.266 00:55:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 2949255 00:18:01.266 00:55:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:18:01.266 00:55:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:18:01.266 00:55:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:18:01.266 00:55:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:01.266 00:55:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # sync 00:18:01.266 00:55:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:01.266 00:55:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set +e 00:18:01.266 00:55:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:01.267 00:55:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:01.267 rmmod nvme_tcp 00:18:01.267 rmmod nvme_fabrics 00:18:01.267 rmmod nvme_keyring 00:18:01.267 00:55:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:01.267 00:55:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@128 -- # set -e 00:18:01.267 00:55:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@129 -- # return 0 00:18:01.267 00:55:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@517 -- # '[' -n 2949105 ']' 00:18:01.267 00:55:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@518 -- # killprocess 2949105 00:18:01.267 00:55:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # '[' -z 2949105 ']' 00:18:01.267 00:55:33 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@958 -- # kill -0 2949105 00:18:01.267 00:55:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # uname 00:18:01.267 00:55:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:01.267 00:55:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2949105 00:18:01.267 00:55:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:01.267 00:55:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:01.267 00:55:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2949105' 00:18:01.267 killing process with pid 2949105 00:18:01.267 00:55:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@973 -- # kill 2949105 00:18:01.267 00:55:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@978 -- # wait 2949105 00:18:02.656 00:55:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:02.656 00:55:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:02.656 00:55:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:02.656 00:55:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # iptr 00:18:02.656 00:55:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-save 00:18:02.656 00:55:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:02.656 00:55:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-restore 00:18:02.656 00:55:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:02.656 00:55:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:18:02.656 00:55:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:02.656 00:55:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:02.656 00:55:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:04.581 00:55:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:18:04.581 00:18:04.581 real 0m17.328s 00:18:04.581 user 0m43.074s 00:18:04.581 sys 0m6.103s 00:18:04.581 00:55:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:04.581 00:55:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:04.581 ************************************ 00:18:04.581 END TEST nvmf_connect_stress 00:18:04.581 ************************************ 00:18:04.581 00:55:37 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:18:04.581 00:55:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:04.581 
00:55:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:04.581 00:55:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:04.581 ************************************ 00:18:04.581 START TEST nvmf_fused_ordering 00:18:04.581 ************************************ 00:18:04.581 00:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:18:04.581 * Looking for test storage... 00:18:04.581 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:04.581 00:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:18:04.581 00:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1711 -- # lcov --version 00:18:04.581 00:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:18:04.840 00:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:18:04.840 00:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:04.840 00:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:04.840 00:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:04.840 00:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # IFS=.-: 00:18:04.840 00:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # read -ra ver1 00:18:04.840 00:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # IFS=.-: 00:18:04.840 00:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # read -ra ver2 00:18:04.840 00:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@338 -- # local 'op=<' 00:18:04.840 00:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@340 -- # ver1_l=2 00:18:04.840 00:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@341 -- # ver2_l=1 00:18:04.840 00:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:04.840 00:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@344 -- # case "$op" in 00:18:04.840 00:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@345 -- # : 1 00:18:04.840 00:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:04.840 00:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:04.840 00:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # decimal 1 00:18:04.840 00:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=1 00:18:04.840 00:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:04.840 00:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 1 00:18:04.840 00:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # ver1[v]=1 00:18:04.840 00:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # decimal 2 00:18:04.840 00:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=2 00:18:04.840 00:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:04.840 00:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 2 00:18:04.840 00:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # ver2[v]=2 00:18:04.840 00:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:04.840 00:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:04.840 00:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # return 0 00:18:04.840 00:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:04.840 00:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:18:04.840 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:04.840 --rc genhtml_branch_coverage=1 00:18:04.840 --rc genhtml_function_coverage=1 00:18:04.840 --rc genhtml_legend=1 00:18:04.840 --rc geninfo_all_blocks=1 00:18:04.840 --rc geninfo_unexecuted_blocks=1 00:18:04.840 00:18:04.840 ' 00:18:04.840 00:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:18:04.840 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:04.840 --rc genhtml_branch_coverage=1 00:18:04.840 --rc genhtml_function_coverage=1 00:18:04.840 --rc genhtml_legend=1 00:18:04.840 --rc geninfo_all_blocks=1 00:18:04.840 --rc geninfo_unexecuted_blocks=1 00:18:04.840 00:18:04.840 ' 00:18:04.840 00:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:18:04.840 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:04.840 --rc genhtml_branch_coverage=1 00:18:04.840 --rc genhtml_function_coverage=1 00:18:04.840 --rc genhtml_legend=1 00:18:04.840 --rc geninfo_all_blocks=1 00:18:04.840 --rc geninfo_unexecuted_blocks=1 00:18:04.840 00:18:04.840 ' 00:18:04.840 00:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:18:04.840 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:04.840 --rc genhtml_branch_coverage=1 00:18:04.840 --rc genhtml_function_coverage=1 00:18:04.840 --rc genhtml_legend=1 00:18:04.840 --rc geninfo_all_blocks=1 00:18:04.840 --rc geninfo_unexecuted_blocks=1 00:18:04.840 00:18:04.840 ' 00:18:04.840 00:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:04.840 00:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:18:04.840 00:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:04.840 00:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:04.841 00:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:04.841 00:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:04.841 00:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:04.841 00:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:04.841 00:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:04.841 00:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:04.841 00:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:04.841 00:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:04.841 00:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:04.841 00:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:18:04.841 00:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:04.841 00:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:04.841 00:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:04.841 00:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:04.841 00:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:04.841 00:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@15 -- # shopt -s extglob 00:18:04.841 00:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:04.841 00:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:04.841 00:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:04.841 00:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:04.841 00:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:04.841 00:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:04.841 00:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:18:04.841 00:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:04.841 00:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # : 0 00:18:04.841 00:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:04.841 00:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:04.841 00:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:04.841 00:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:04.841 00:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:04.841 00:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:18:04.841 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:04.841 00:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:04.841 00:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:04.841 00:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:04.841 00:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:18:04.841 00:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:04.841 00:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:04.841 00:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:04.841 00:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:04.841 00:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:04.841 00:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:04.841 00:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:04.841 00:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:04.841 00:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:18:04.841 00:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:18:04.841 00:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@309 -- # xtrace_disable 00:18:04.841 00:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:06.758 00:55:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:06.758 00:55:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # pci_devs=() 00:18:06.758 00:55:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # local -a pci_devs 00:18:06.758 00:55:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:06.758 00:55:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:06.758 00:55:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:06.758 00:55:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:06.758 00:55:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # net_devs=() 00:18:06.758 00:55:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:06.758 00:55:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # e810=() 00:18:06.758 00:55:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # local -ga e810 00:18:06.758 00:55:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # x722=() 00:18:06.758 00:55:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # local -ga x722 00:18:06.758 00:55:39 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # mlx=() 00:18:06.758 00:55:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # local -ga mlx 00:18:06.758 00:55:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:06.758 00:55:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:06.758 00:55:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:06.758 00:55:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:06.758 00:55:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:06.758 00:55:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:06.758 00:55:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:06.758 00:55:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:18:06.758 00:55:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:06.758 00:55:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:06.758 00:55:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:06.758 00:55:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:06.758 00:55:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:18:06.758 00:55:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:18:06.758 00:55:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:18:06.758 00:55:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:18:06.758 00:55:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:18:06.758 00:55:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:18:06.758 00:55:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:06.758 00:55:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:18:06.758 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:18:06.758 00:55:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:06.758 00:55:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:06.758 00:55:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:06.758 00:55:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:06.758 00:55:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:06.758 00:55:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:06.758 00:55:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:18:06.758 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:18:06.758 00:55:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:06.758 00:55:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:06.758 00:55:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:06.758 00:55:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:06.758 00:55:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:06.758 00:55:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:18:06.758 00:55:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:18:06.758 00:55:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:18:06.758 00:55:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:07.019 00:55:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:07.019 00:55:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:07.019 00:55:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:07.019 00:55:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:07.019 00:55:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:07.019 00:55:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:07.019 00:55:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:18:07.019 Found net devices under 0000:0a:00.0: cvl_0_0 00:18:07.019 00:55:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:07.019 00:55:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:07.019 00:55:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:07.019 00:55:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:07.019 00:55:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:07.019 00:55:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:07.019 00:55:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:07.019 00:55:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:07.019 00:55:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:18:07.019 Found net devices under 0000:0a:00.1: cvl_0_1 00:18:07.019 00:55:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 
-- # net_devs+=("${pci_net_devs[@]}") 00:18:07.019 00:55:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:18:07.019 00:55:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # is_hw=yes 00:18:07.019 00:55:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:18:07.019 00:55:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:18:07.019 00:55:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:18:07.019 00:55:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:07.019 00:55:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:07.019 00:55:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:07.019 00:55:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:07.019 00:55:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:18:07.019 00:55:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:07.019 00:55:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:07.019 00:55:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:18:07.019 00:55:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:18:07.019 00:55:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:07.019 00:55:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:07.019 00:55:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:18:07.019 00:55:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:18:07.019 00:55:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:18:07.019 00:55:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:07.019 00:55:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:07.019 00:55:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:07.019 00:55:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:18:07.019 00:55:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:07.019 00:55:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:07.019 00:55:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:07.019 00:55:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:18:07.019 00:55:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:18:07.019 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:07.019 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.281 ms 00:18:07.019 00:18:07.019 --- 10.0.0.2 ping statistics --- 00:18:07.019 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:07.019 rtt min/avg/max/mdev = 0.281/0.281/0.281/0.000 ms 00:18:07.019 00:55:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:07.019 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:07.019 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.090 ms 00:18:07.019 00:18:07.019 --- 10.0.0.1 ping statistics --- 00:18:07.019 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:07.019 rtt min/avg/max/mdev = 0.090/0.090/0.090/0.000 ms 00:18:07.019 00:55:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:07.019 00:55:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@450 -- # return 0 00:18:07.019 00:55:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:07.019 00:55:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:07.019 00:55:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:07.019 00:55:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:07.019 00:55:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:07.019 00:55:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:07.019 00:55:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:07.019 00:55:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:18:07.019 00:55:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:07.019 00:55:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:07.019 00:55:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:07.019 00:55:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@509 -- # nvmfpid=2952551 00:18:07.019 00:55:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:07.019 00:55:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@510 -- # waitforlisten 2952551 00:18:07.019 00:55:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # '[' -z 2952551 ']' 00:18:07.019 00:55:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:07.019 00:55:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:07.019 00:55:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:18:07.019 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:07.019 00:55:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:07.019 00:55:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:07.020 [2024-12-06 00:55:39.720433] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 24.03.0 initialization... 00:18:07.020 [2024-12-06 00:55:39.720568] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:07.279 [2024-12-06 00:55:39.874519] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:07.537 [2024-12-06 00:55:40.015190] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:07.537 [2024-12-06 00:55:40.015277] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:07.537 [2024-12-06 00:55:40.015304] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:07.537 [2024-12-06 00:55:40.015330] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:07.537 [2024-12-06 00:55:40.015350] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:07.537 [2024-12-06 00:55:40.017577] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:08.103 00:55:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:08.103 00:55:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@868 -- # return 0 00:18:08.103 00:55:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:08.103 00:55:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:08.103 00:55:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:08.103 00:55:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:08.103 00:55:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:08.103 00:55:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.103 00:55:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:08.103 [2024-12-06 00:55:40.682723] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:08.103 00:55:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.103 00:55:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:18:08.103 00:55:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.103 00:55:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:08.103 00:55:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:18:08.103 00:55:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:08.103 00:55:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.103 00:55:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:08.103 [2024-12-06 00:55:40.699012] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:08.103 00:55:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.103 00:55:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:18:08.103 00:55:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.103 00:55:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:08.103 NULL1 00:18:08.103 00:55:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.103 00:55:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:18:08.103 00:55:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.103 00:55:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:08.103 00:55:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.103 00:55:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:18:08.103 00:55:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.103 00:55:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:08.103 00:55:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.103 00:55:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:18:08.103 [2024-12-06 00:55:40.774790] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 24.03.0 initialization... 
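The trace above shows the harness bringing up the test network, starting nvmf_tgt inside the cvl_0_0_ns_spdk namespace, and configuring the target over RPC before launching the fused-ordering tool. A minimal shell sketch of those same steps follows; it assumes two ports already named cvl_0_0/cvl_0_1 and SPDK's scripts/rpc.py, the flags and values are copied from the rpc_cmd calls in the trace, and the relative paths are illustrative rather than verified against a checkout.

  # Target-side port goes into its own namespace; the initiator side stays in the root namespace.
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

  # Start the target in the namespace, wait for its RPC socket, then configure it
  # with the same transport/subsystem/listener/bdev calls seen in the trace.
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
  # (the harness waits for /var/tmp/spdk.sock before issuing any RPCs)
  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  ./scripts/rpc.py bdev_null_create NULL1 1000 512
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1

  # Run the fused-ordering tool against the new listener.
  ./test/nvme/fused_ordering/fused_ordering \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'

The numbered fused_ordering(0) through fused_ordering(1023) lines that follow in the log are the tool's per-iteration progress output against that NULL1 namespace.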
00:18:08.103 [2024-12-06 00:55:40.774915] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2952707 ] 00:18:08.669 Attached to nqn.2016-06.io.spdk:cnode1 00:18:08.669 Namespace ID: 1 size: 1GB 00:18:08.669 fused_ordering(0) 00:18:08.669 fused_ordering(1) 00:18:08.669 fused_ordering(2) 00:18:08.669 fused_ordering(3) 00:18:08.669 fused_ordering(4) 00:18:08.669 fused_ordering(5) 00:18:08.669 fused_ordering(6) 00:18:08.669 fused_ordering(7) 00:18:08.669 fused_ordering(8) 00:18:08.669 fused_ordering(9) 00:18:08.669 fused_ordering(10) 00:18:08.669 fused_ordering(11) 00:18:08.669 fused_ordering(12) 00:18:08.669 fused_ordering(13) 00:18:08.669 fused_ordering(14) 00:18:08.669 fused_ordering(15) 00:18:08.669 fused_ordering(16) 00:18:08.669 fused_ordering(17) 00:18:08.669 fused_ordering(18) 00:18:08.669 fused_ordering(19) 00:18:08.669 fused_ordering(20) 00:18:08.669 fused_ordering(21) 00:18:08.669 fused_ordering(22) 00:18:08.669 fused_ordering(23) 00:18:08.669 fused_ordering(24) 00:18:08.669 fused_ordering(25) 00:18:08.669 fused_ordering(26) 00:18:08.669 fused_ordering(27) 00:18:08.669 fused_ordering(28) 00:18:08.669 fused_ordering(29) 00:18:08.669 fused_ordering(30) 00:18:08.669 fused_ordering(31) 00:18:08.669 fused_ordering(32) 00:18:08.669 fused_ordering(33) 00:18:08.669 fused_ordering(34) 00:18:08.669 fused_ordering(35) 00:18:08.669 fused_ordering(36) 00:18:08.669 fused_ordering(37) 00:18:08.669 fused_ordering(38) 00:18:08.669 fused_ordering(39) 00:18:08.669 fused_ordering(40) 00:18:08.669 fused_ordering(41) 00:18:08.669 fused_ordering(42) 00:18:08.669 fused_ordering(43) 00:18:08.669 fused_ordering(44) 00:18:08.669 fused_ordering(45) 00:18:08.669 fused_ordering(46) 00:18:08.669 fused_ordering(47) 00:18:08.669 fused_ordering(48) 00:18:08.669 fused_ordering(49) 00:18:08.669 fused_ordering(50) 00:18:08.669 fused_ordering(51) 00:18:08.669 fused_ordering(52) 00:18:08.669 fused_ordering(53) 00:18:08.669 fused_ordering(54) 00:18:08.669 fused_ordering(55) 00:18:08.669 fused_ordering(56) 00:18:08.669 fused_ordering(57) 00:18:08.669 fused_ordering(58) 00:18:08.669 fused_ordering(59) 00:18:08.669 fused_ordering(60) 00:18:08.669 fused_ordering(61) 00:18:08.669 fused_ordering(62) 00:18:08.669 fused_ordering(63) 00:18:08.669 fused_ordering(64) 00:18:08.669 fused_ordering(65) 00:18:08.669 fused_ordering(66) 00:18:08.669 fused_ordering(67) 00:18:08.669 fused_ordering(68) 00:18:08.669 fused_ordering(69) 00:18:08.669 fused_ordering(70) 00:18:08.669 fused_ordering(71) 00:18:08.669 fused_ordering(72) 00:18:08.669 fused_ordering(73) 00:18:08.669 fused_ordering(74) 00:18:08.669 fused_ordering(75) 00:18:08.669 fused_ordering(76) 00:18:08.669 fused_ordering(77) 00:18:08.669 fused_ordering(78) 00:18:08.669 fused_ordering(79) 00:18:08.669 fused_ordering(80) 00:18:08.669 fused_ordering(81) 00:18:08.669 fused_ordering(82) 00:18:08.669 fused_ordering(83) 00:18:08.669 fused_ordering(84) 00:18:08.669 fused_ordering(85) 00:18:08.669 fused_ordering(86) 00:18:08.669 fused_ordering(87) 00:18:08.669 fused_ordering(88) 00:18:08.669 fused_ordering(89) 00:18:08.669 fused_ordering(90) 00:18:08.669 fused_ordering(91) 00:18:08.669 fused_ordering(92) 00:18:08.669 fused_ordering(93) 00:18:08.669 fused_ordering(94) 00:18:08.669 fused_ordering(95) 00:18:08.669 fused_ordering(96) 00:18:08.669 fused_ordering(97) 00:18:08.669 fused_ordering(98) 
00:18:08.669 fused_ordering(99) 00:18:08.669 fused_ordering(100) 00:18:08.669 fused_ordering(101) 00:18:08.669 fused_ordering(102) 00:18:08.669 fused_ordering(103) 00:18:08.669 fused_ordering(104) 00:18:08.669 fused_ordering(105) 00:18:08.669 fused_ordering(106) 00:18:08.669 fused_ordering(107) 00:18:08.669 fused_ordering(108) 00:18:08.669 fused_ordering(109) 00:18:08.669 fused_ordering(110) 00:18:08.669 fused_ordering(111) 00:18:08.669 fused_ordering(112) 00:18:08.669 fused_ordering(113) 00:18:08.669 fused_ordering(114) 00:18:08.669 fused_ordering(115) 00:18:08.669 fused_ordering(116) 00:18:08.669 fused_ordering(117) 00:18:08.669 fused_ordering(118) 00:18:08.669 fused_ordering(119) 00:18:08.669 fused_ordering(120) 00:18:08.669 fused_ordering(121) 00:18:08.669 fused_ordering(122) 00:18:08.669 fused_ordering(123) 00:18:08.669 fused_ordering(124) 00:18:08.669 fused_ordering(125) 00:18:08.669 fused_ordering(126) 00:18:08.669 fused_ordering(127) 00:18:08.669 fused_ordering(128) 00:18:08.669 fused_ordering(129) 00:18:08.669 fused_ordering(130) 00:18:08.669 fused_ordering(131) 00:18:08.669 fused_ordering(132) 00:18:08.669 fused_ordering(133) 00:18:08.669 fused_ordering(134) 00:18:08.669 fused_ordering(135) 00:18:08.669 fused_ordering(136) 00:18:08.669 fused_ordering(137) 00:18:08.669 fused_ordering(138) 00:18:08.669 fused_ordering(139) 00:18:08.669 fused_ordering(140) 00:18:08.669 fused_ordering(141) 00:18:08.669 fused_ordering(142) 00:18:08.669 fused_ordering(143) 00:18:08.669 fused_ordering(144) 00:18:08.669 fused_ordering(145) 00:18:08.669 fused_ordering(146) 00:18:08.669 fused_ordering(147) 00:18:08.669 fused_ordering(148) 00:18:08.669 fused_ordering(149) 00:18:08.669 fused_ordering(150) 00:18:08.669 fused_ordering(151) 00:18:08.669 fused_ordering(152) 00:18:08.669 fused_ordering(153) 00:18:08.669 fused_ordering(154) 00:18:08.669 fused_ordering(155) 00:18:08.669 fused_ordering(156) 00:18:08.669 fused_ordering(157) 00:18:08.669 fused_ordering(158) 00:18:08.669 fused_ordering(159) 00:18:08.669 fused_ordering(160) 00:18:08.669 fused_ordering(161) 00:18:08.669 fused_ordering(162) 00:18:08.669 fused_ordering(163) 00:18:08.669 fused_ordering(164) 00:18:08.669 fused_ordering(165) 00:18:08.669 fused_ordering(166) 00:18:08.669 fused_ordering(167) 00:18:08.669 fused_ordering(168) 00:18:08.669 fused_ordering(169) 00:18:08.669 fused_ordering(170) 00:18:08.669 fused_ordering(171) 00:18:08.669 fused_ordering(172) 00:18:08.669 fused_ordering(173) 00:18:08.669 fused_ordering(174) 00:18:08.669 fused_ordering(175) 00:18:08.669 fused_ordering(176) 00:18:08.669 fused_ordering(177) 00:18:08.669 fused_ordering(178) 00:18:08.669 fused_ordering(179) 00:18:08.669 fused_ordering(180) 00:18:08.669 fused_ordering(181) 00:18:08.669 fused_ordering(182) 00:18:08.669 fused_ordering(183) 00:18:08.669 fused_ordering(184) 00:18:08.669 fused_ordering(185) 00:18:08.669 fused_ordering(186) 00:18:08.669 fused_ordering(187) 00:18:08.669 fused_ordering(188) 00:18:08.669 fused_ordering(189) 00:18:08.669 fused_ordering(190) 00:18:08.669 fused_ordering(191) 00:18:08.669 fused_ordering(192) 00:18:08.669 fused_ordering(193) 00:18:08.669 fused_ordering(194) 00:18:08.669 fused_ordering(195) 00:18:08.669 fused_ordering(196) 00:18:08.669 fused_ordering(197) 00:18:08.669 fused_ordering(198) 00:18:08.669 fused_ordering(199) 00:18:08.669 fused_ordering(200) 00:18:08.669 fused_ordering(201) 00:18:08.669 fused_ordering(202) 00:18:08.669 fused_ordering(203) 00:18:08.669 fused_ordering(204) 00:18:08.669 fused_ordering(205) 00:18:09.237 
fused_ordering(206) 00:18:09.237 fused_ordering(207) 00:18:09.237 fused_ordering(208) 00:18:09.237 fused_ordering(209) 00:18:09.237 fused_ordering(210) 00:18:09.237 fused_ordering(211) 00:18:09.237 fused_ordering(212) 00:18:09.237 fused_ordering(213) 00:18:09.237 fused_ordering(214) 00:18:09.237 fused_ordering(215) 00:18:09.237 fused_ordering(216) 00:18:09.237 fused_ordering(217) 00:18:09.237 fused_ordering(218) 00:18:09.237 fused_ordering(219) 00:18:09.237 fused_ordering(220) 00:18:09.237 fused_ordering(221) 00:18:09.237 fused_ordering(222) 00:18:09.237 fused_ordering(223) 00:18:09.237 fused_ordering(224) 00:18:09.237 fused_ordering(225) 00:18:09.237 fused_ordering(226) 00:18:09.237 fused_ordering(227) 00:18:09.237 fused_ordering(228) 00:18:09.237 fused_ordering(229) 00:18:09.237 fused_ordering(230) 00:18:09.237 fused_ordering(231) 00:18:09.237 fused_ordering(232) 00:18:09.237 fused_ordering(233) 00:18:09.237 fused_ordering(234) 00:18:09.237 fused_ordering(235) 00:18:09.237 fused_ordering(236) 00:18:09.237 fused_ordering(237) 00:18:09.237 fused_ordering(238) 00:18:09.237 fused_ordering(239) 00:18:09.237 fused_ordering(240) 00:18:09.237 fused_ordering(241) 00:18:09.237 fused_ordering(242) 00:18:09.237 fused_ordering(243) 00:18:09.237 fused_ordering(244) 00:18:09.237 fused_ordering(245) 00:18:09.237 fused_ordering(246) 00:18:09.237 fused_ordering(247) 00:18:09.237 fused_ordering(248) 00:18:09.237 fused_ordering(249) 00:18:09.237 fused_ordering(250) 00:18:09.237 fused_ordering(251) 00:18:09.237 fused_ordering(252) 00:18:09.237 fused_ordering(253) 00:18:09.237 fused_ordering(254) 00:18:09.237 fused_ordering(255) 00:18:09.237 fused_ordering(256) 00:18:09.237 fused_ordering(257) 00:18:09.237 fused_ordering(258) 00:18:09.237 fused_ordering(259) 00:18:09.237 fused_ordering(260) 00:18:09.237 fused_ordering(261) 00:18:09.237 fused_ordering(262) 00:18:09.237 fused_ordering(263) 00:18:09.237 fused_ordering(264) 00:18:09.237 fused_ordering(265) 00:18:09.237 fused_ordering(266) 00:18:09.237 fused_ordering(267) 00:18:09.237 fused_ordering(268) 00:18:09.237 fused_ordering(269) 00:18:09.237 fused_ordering(270) 00:18:09.237 fused_ordering(271) 00:18:09.237 fused_ordering(272) 00:18:09.237 fused_ordering(273) 00:18:09.237 fused_ordering(274) 00:18:09.237 fused_ordering(275) 00:18:09.237 fused_ordering(276) 00:18:09.237 fused_ordering(277) 00:18:09.237 fused_ordering(278) 00:18:09.237 fused_ordering(279) 00:18:09.237 fused_ordering(280) 00:18:09.237 fused_ordering(281) 00:18:09.237 fused_ordering(282) 00:18:09.237 fused_ordering(283) 00:18:09.237 fused_ordering(284) 00:18:09.237 fused_ordering(285) 00:18:09.237 fused_ordering(286) 00:18:09.237 fused_ordering(287) 00:18:09.237 fused_ordering(288) 00:18:09.237 fused_ordering(289) 00:18:09.237 fused_ordering(290) 00:18:09.237 fused_ordering(291) 00:18:09.237 fused_ordering(292) 00:18:09.237 fused_ordering(293) 00:18:09.237 fused_ordering(294) 00:18:09.237 fused_ordering(295) 00:18:09.237 fused_ordering(296) 00:18:09.237 fused_ordering(297) 00:18:09.237 fused_ordering(298) 00:18:09.237 fused_ordering(299) 00:18:09.237 fused_ordering(300) 00:18:09.237 fused_ordering(301) 00:18:09.237 fused_ordering(302) 00:18:09.237 fused_ordering(303) 00:18:09.237 fused_ordering(304) 00:18:09.237 fused_ordering(305) 00:18:09.237 fused_ordering(306) 00:18:09.237 fused_ordering(307) 00:18:09.237 fused_ordering(308) 00:18:09.237 fused_ordering(309) 00:18:09.237 fused_ordering(310) 00:18:09.237 fused_ordering(311) 00:18:09.237 fused_ordering(312) 00:18:09.237 fused_ordering(313) 
00:18:09.237 fused_ordering(314) 00:18:09.237 fused_ordering(315) 00:18:09.237 fused_ordering(316) 00:18:09.237 fused_ordering(317) 00:18:09.237 fused_ordering(318) 00:18:09.237 fused_ordering(319) 00:18:09.237 fused_ordering(320) 00:18:09.237 fused_ordering(321) 00:18:09.237 fused_ordering(322) 00:18:09.237 fused_ordering(323) 00:18:09.237 fused_ordering(324) 00:18:09.237 fused_ordering(325) 00:18:09.237 fused_ordering(326) 00:18:09.237 fused_ordering(327) 00:18:09.237 fused_ordering(328) 00:18:09.237 fused_ordering(329) 00:18:09.237 fused_ordering(330) 00:18:09.237 fused_ordering(331) 00:18:09.237 fused_ordering(332) 00:18:09.237 fused_ordering(333) 00:18:09.238 fused_ordering(334) 00:18:09.238 fused_ordering(335) 00:18:09.238 fused_ordering(336) 00:18:09.238 fused_ordering(337) 00:18:09.238 fused_ordering(338) 00:18:09.238 fused_ordering(339) 00:18:09.238 fused_ordering(340) 00:18:09.238 fused_ordering(341) 00:18:09.238 fused_ordering(342) 00:18:09.238 fused_ordering(343) 00:18:09.238 fused_ordering(344) 00:18:09.238 fused_ordering(345) 00:18:09.238 fused_ordering(346) 00:18:09.238 fused_ordering(347) 00:18:09.238 fused_ordering(348) 00:18:09.238 fused_ordering(349) 00:18:09.238 fused_ordering(350) 00:18:09.238 fused_ordering(351) 00:18:09.238 fused_ordering(352) 00:18:09.238 fused_ordering(353) 00:18:09.238 fused_ordering(354) 00:18:09.238 fused_ordering(355) 00:18:09.238 fused_ordering(356) 00:18:09.238 fused_ordering(357) 00:18:09.238 fused_ordering(358) 00:18:09.238 fused_ordering(359) 00:18:09.238 fused_ordering(360) 00:18:09.238 fused_ordering(361) 00:18:09.238 fused_ordering(362) 00:18:09.238 fused_ordering(363) 00:18:09.238 fused_ordering(364) 00:18:09.238 fused_ordering(365) 00:18:09.238 fused_ordering(366) 00:18:09.238 fused_ordering(367) 00:18:09.238 fused_ordering(368) 00:18:09.238 fused_ordering(369) 00:18:09.238 fused_ordering(370) 00:18:09.238 fused_ordering(371) 00:18:09.238 fused_ordering(372) 00:18:09.238 fused_ordering(373) 00:18:09.238 fused_ordering(374) 00:18:09.238 fused_ordering(375) 00:18:09.238 fused_ordering(376) 00:18:09.238 fused_ordering(377) 00:18:09.238 fused_ordering(378) 00:18:09.238 fused_ordering(379) 00:18:09.238 fused_ordering(380) 00:18:09.238 fused_ordering(381) 00:18:09.238 fused_ordering(382) 00:18:09.238 fused_ordering(383) 00:18:09.238 fused_ordering(384) 00:18:09.238 fused_ordering(385) 00:18:09.238 fused_ordering(386) 00:18:09.238 fused_ordering(387) 00:18:09.238 fused_ordering(388) 00:18:09.238 fused_ordering(389) 00:18:09.238 fused_ordering(390) 00:18:09.238 fused_ordering(391) 00:18:09.238 fused_ordering(392) 00:18:09.238 fused_ordering(393) 00:18:09.238 fused_ordering(394) 00:18:09.238 fused_ordering(395) 00:18:09.238 fused_ordering(396) 00:18:09.238 fused_ordering(397) 00:18:09.238 fused_ordering(398) 00:18:09.238 fused_ordering(399) 00:18:09.238 fused_ordering(400) 00:18:09.238 fused_ordering(401) 00:18:09.238 fused_ordering(402) 00:18:09.238 fused_ordering(403) 00:18:09.238 fused_ordering(404) 00:18:09.238 fused_ordering(405) 00:18:09.238 fused_ordering(406) 00:18:09.238 fused_ordering(407) 00:18:09.238 fused_ordering(408) 00:18:09.238 fused_ordering(409) 00:18:09.238 fused_ordering(410) 00:18:09.804 fused_ordering(411) 00:18:09.804 fused_ordering(412) 00:18:09.804 fused_ordering(413) 00:18:09.804 fused_ordering(414) 00:18:09.804 fused_ordering(415) 00:18:09.804 fused_ordering(416) 00:18:09.804 fused_ordering(417) 00:18:09.804 fused_ordering(418) 00:18:09.804 fused_ordering(419) 00:18:09.804 fused_ordering(420) 00:18:09.804 
fused_ordering(421) 00:18:09.804 fused_ordering(422) 00:18:09.804 fused_ordering(423) 00:18:09.804 fused_ordering(424) 00:18:09.804 fused_ordering(425) 00:18:09.804 fused_ordering(426) 00:18:09.804 fused_ordering(427) 00:18:09.804 fused_ordering(428) 00:18:09.804 fused_ordering(429) 00:18:09.804 fused_ordering(430) 00:18:09.804 fused_ordering(431) 00:18:09.804 fused_ordering(432) 00:18:09.804 fused_ordering(433) 00:18:09.804 fused_ordering(434) 00:18:09.804 fused_ordering(435) 00:18:09.804 fused_ordering(436) 00:18:09.804 fused_ordering(437) 00:18:09.804 fused_ordering(438) 00:18:09.804 fused_ordering(439) 00:18:09.804 fused_ordering(440) 00:18:09.804 fused_ordering(441) 00:18:09.804 fused_ordering(442) 00:18:09.804 fused_ordering(443) 00:18:09.804 fused_ordering(444) 00:18:09.804 fused_ordering(445) 00:18:09.804 fused_ordering(446) 00:18:09.804 fused_ordering(447) 00:18:09.804 fused_ordering(448) 00:18:09.804 fused_ordering(449) 00:18:09.804 fused_ordering(450) 00:18:09.804 fused_ordering(451) 00:18:09.804 fused_ordering(452) 00:18:09.804 fused_ordering(453) 00:18:09.804 fused_ordering(454) 00:18:09.804 fused_ordering(455) 00:18:09.804 fused_ordering(456) 00:18:09.804 fused_ordering(457) 00:18:09.804 fused_ordering(458) 00:18:09.804 fused_ordering(459) 00:18:09.804 fused_ordering(460) 00:18:09.804 fused_ordering(461) 00:18:09.804 fused_ordering(462) 00:18:09.804 fused_ordering(463) 00:18:09.804 fused_ordering(464) 00:18:09.804 fused_ordering(465) 00:18:09.804 fused_ordering(466) 00:18:09.804 fused_ordering(467) 00:18:09.804 fused_ordering(468) 00:18:09.804 fused_ordering(469) 00:18:09.804 fused_ordering(470) 00:18:09.804 fused_ordering(471) 00:18:09.804 fused_ordering(472) 00:18:09.804 fused_ordering(473) 00:18:09.804 fused_ordering(474) 00:18:09.804 fused_ordering(475) 00:18:09.804 fused_ordering(476) 00:18:09.804 fused_ordering(477) 00:18:09.804 fused_ordering(478) 00:18:09.804 fused_ordering(479) 00:18:09.804 fused_ordering(480) 00:18:09.805 fused_ordering(481) 00:18:09.805 fused_ordering(482) 00:18:09.805 fused_ordering(483) 00:18:09.805 fused_ordering(484) 00:18:09.805 fused_ordering(485) 00:18:09.805 fused_ordering(486) 00:18:09.805 fused_ordering(487) 00:18:09.805 fused_ordering(488) 00:18:09.805 fused_ordering(489) 00:18:09.805 fused_ordering(490) 00:18:09.805 fused_ordering(491) 00:18:09.805 fused_ordering(492) 00:18:09.805 fused_ordering(493) 00:18:09.805 fused_ordering(494) 00:18:09.805 fused_ordering(495) 00:18:09.805 fused_ordering(496) 00:18:09.805 fused_ordering(497) 00:18:09.805 fused_ordering(498) 00:18:09.805 fused_ordering(499) 00:18:09.805 fused_ordering(500) 00:18:09.805 fused_ordering(501) 00:18:09.805 fused_ordering(502) 00:18:09.805 fused_ordering(503) 00:18:09.805 fused_ordering(504) 00:18:09.805 fused_ordering(505) 00:18:09.805 fused_ordering(506) 00:18:09.805 fused_ordering(507) 00:18:09.805 fused_ordering(508) 00:18:09.805 fused_ordering(509) 00:18:09.805 fused_ordering(510) 00:18:09.805 fused_ordering(511) 00:18:09.805 fused_ordering(512) 00:18:09.805 fused_ordering(513) 00:18:09.805 fused_ordering(514) 00:18:09.805 fused_ordering(515) 00:18:09.805 fused_ordering(516) 00:18:09.805 fused_ordering(517) 00:18:09.805 fused_ordering(518) 00:18:09.805 fused_ordering(519) 00:18:09.805 fused_ordering(520) 00:18:09.805 fused_ordering(521) 00:18:09.805 fused_ordering(522) 00:18:09.805 fused_ordering(523) 00:18:09.805 fused_ordering(524) 00:18:09.805 fused_ordering(525) 00:18:09.805 fused_ordering(526) 00:18:09.805 fused_ordering(527) 00:18:09.805 fused_ordering(528) 
00:18:09.805 fused_ordering(529) 00:18:09.805 fused_ordering(530) 00:18:09.805 fused_ordering(531) 00:18:09.805 fused_ordering(532) 00:18:09.805 fused_ordering(533) 00:18:09.805 fused_ordering(534) 00:18:09.805 fused_ordering(535) 00:18:09.805 fused_ordering(536) 00:18:09.805 fused_ordering(537) 00:18:09.805 fused_ordering(538) 00:18:09.805 fused_ordering(539) 00:18:09.805 fused_ordering(540) 00:18:09.805 fused_ordering(541) 00:18:09.805 fused_ordering(542) 00:18:09.805 fused_ordering(543) 00:18:09.805 fused_ordering(544) 00:18:09.805 fused_ordering(545) 00:18:09.805 fused_ordering(546) 00:18:09.805 fused_ordering(547) 00:18:09.805 fused_ordering(548) 00:18:09.805 fused_ordering(549) 00:18:09.805 fused_ordering(550) 00:18:09.805 fused_ordering(551) 00:18:09.805 fused_ordering(552) 00:18:09.805 fused_ordering(553) 00:18:09.805 fused_ordering(554) 00:18:09.805 fused_ordering(555) 00:18:09.805 fused_ordering(556) 00:18:09.805 fused_ordering(557) 00:18:09.805 fused_ordering(558) 00:18:09.805 fused_ordering(559) 00:18:09.805 fused_ordering(560) 00:18:09.805 fused_ordering(561) 00:18:09.805 fused_ordering(562) 00:18:09.805 fused_ordering(563) 00:18:09.805 fused_ordering(564) 00:18:09.805 fused_ordering(565) 00:18:09.805 fused_ordering(566) 00:18:09.805 fused_ordering(567) 00:18:09.805 fused_ordering(568) 00:18:09.805 fused_ordering(569) 00:18:09.805 fused_ordering(570) 00:18:09.805 fused_ordering(571) 00:18:09.805 fused_ordering(572) 00:18:09.805 fused_ordering(573) 00:18:09.805 fused_ordering(574) 00:18:09.805 fused_ordering(575) 00:18:09.805 fused_ordering(576) 00:18:09.805 fused_ordering(577) 00:18:09.805 fused_ordering(578) 00:18:09.805 fused_ordering(579) 00:18:09.805 fused_ordering(580) 00:18:09.805 fused_ordering(581) 00:18:09.805 fused_ordering(582) 00:18:09.805 fused_ordering(583) 00:18:09.805 fused_ordering(584) 00:18:09.805 fused_ordering(585) 00:18:09.805 fused_ordering(586) 00:18:09.805 fused_ordering(587) 00:18:09.805 fused_ordering(588) 00:18:09.805 fused_ordering(589) 00:18:09.805 fused_ordering(590) 00:18:09.805 fused_ordering(591) 00:18:09.805 fused_ordering(592) 00:18:09.805 fused_ordering(593) 00:18:09.805 fused_ordering(594) 00:18:09.805 fused_ordering(595) 00:18:09.805 fused_ordering(596) 00:18:09.805 fused_ordering(597) 00:18:09.805 fused_ordering(598) 00:18:09.805 fused_ordering(599) 00:18:09.805 fused_ordering(600) 00:18:09.805 fused_ordering(601) 00:18:09.805 fused_ordering(602) 00:18:09.805 fused_ordering(603) 00:18:09.805 fused_ordering(604) 00:18:09.805 fused_ordering(605) 00:18:09.805 fused_ordering(606) 00:18:09.805 fused_ordering(607) 00:18:09.805 fused_ordering(608) 00:18:09.805 fused_ordering(609) 00:18:09.805 fused_ordering(610) 00:18:09.805 fused_ordering(611) 00:18:09.805 fused_ordering(612) 00:18:09.805 fused_ordering(613) 00:18:09.805 fused_ordering(614) 00:18:09.805 fused_ordering(615) 00:18:10.740 fused_ordering(616) 00:18:10.740 fused_ordering(617) 00:18:10.740 fused_ordering(618) 00:18:10.740 fused_ordering(619) 00:18:10.740 fused_ordering(620) 00:18:10.740 fused_ordering(621) 00:18:10.740 fused_ordering(622) 00:18:10.740 fused_ordering(623) 00:18:10.740 fused_ordering(624) 00:18:10.740 fused_ordering(625) 00:18:10.740 fused_ordering(626) 00:18:10.740 fused_ordering(627) 00:18:10.740 fused_ordering(628) 00:18:10.740 fused_ordering(629) 00:18:10.740 fused_ordering(630) 00:18:10.740 fused_ordering(631) 00:18:10.740 fused_ordering(632) 00:18:10.740 fused_ordering(633) 00:18:10.740 fused_ordering(634) 00:18:10.740 fused_ordering(635) 00:18:10.740 
fused_ordering(636) 00:18:10.740 fused_ordering(637) 00:18:10.740 fused_ordering(638) 00:18:10.740 fused_ordering(639) 00:18:10.740 fused_ordering(640) 00:18:10.740 fused_ordering(641) 00:18:10.740 fused_ordering(642) 00:18:10.740 fused_ordering(643) 00:18:10.740 fused_ordering(644) 00:18:10.740 fused_ordering(645) 00:18:10.740 fused_ordering(646) 00:18:10.740 fused_ordering(647) 00:18:10.740 fused_ordering(648) 00:18:10.740 fused_ordering(649) 00:18:10.740 fused_ordering(650) 00:18:10.740 fused_ordering(651) 00:18:10.740 fused_ordering(652) 00:18:10.740 fused_ordering(653) 00:18:10.740 fused_ordering(654) 00:18:10.740 fused_ordering(655) 00:18:10.740 fused_ordering(656) 00:18:10.740 fused_ordering(657) 00:18:10.740 fused_ordering(658) 00:18:10.740 fused_ordering(659) 00:18:10.740 fused_ordering(660) 00:18:10.740 fused_ordering(661) 00:18:10.740 fused_ordering(662) 00:18:10.740 fused_ordering(663) 00:18:10.740 fused_ordering(664) 00:18:10.740 fused_ordering(665) 00:18:10.740 fused_ordering(666) 00:18:10.740 fused_ordering(667) 00:18:10.740 fused_ordering(668) 00:18:10.740 fused_ordering(669) 00:18:10.740 fused_ordering(670) 00:18:10.740 fused_ordering(671) 00:18:10.740 fused_ordering(672) 00:18:10.740 fused_ordering(673) 00:18:10.740 fused_ordering(674) 00:18:10.740 fused_ordering(675) 00:18:10.740 fused_ordering(676) 00:18:10.740 fused_ordering(677) 00:18:10.740 fused_ordering(678) 00:18:10.740 fused_ordering(679) 00:18:10.740 fused_ordering(680) 00:18:10.740 fused_ordering(681) 00:18:10.740 fused_ordering(682) 00:18:10.740 fused_ordering(683) 00:18:10.740 fused_ordering(684) 00:18:10.740 fused_ordering(685) 00:18:10.740 fused_ordering(686) 00:18:10.740 fused_ordering(687) 00:18:10.740 fused_ordering(688) 00:18:10.740 fused_ordering(689) 00:18:10.740 fused_ordering(690) 00:18:10.740 fused_ordering(691) 00:18:10.740 fused_ordering(692) 00:18:10.740 fused_ordering(693) 00:18:10.740 fused_ordering(694) 00:18:10.740 fused_ordering(695) 00:18:10.740 fused_ordering(696) 00:18:10.740 fused_ordering(697) 00:18:10.740 fused_ordering(698) 00:18:10.740 fused_ordering(699) 00:18:10.740 fused_ordering(700) 00:18:10.740 fused_ordering(701) 00:18:10.740 fused_ordering(702) 00:18:10.740 fused_ordering(703) 00:18:10.740 fused_ordering(704) 00:18:10.740 fused_ordering(705) 00:18:10.740 fused_ordering(706) 00:18:10.740 fused_ordering(707) 00:18:10.740 fused_ordering(708) 00:18:10.740 fused_ordering(709) 00:18:10.740 fused_ordering(710) 00:18:10.740 fused_ordering(711) 00:18:10.740 fused_ordering(712) 00:18:10.740 fused_ordering(713) 00:18:10.740 fused_ordering(714) 00:18:10.740 fused_ordering(715) 00:18:10.740 fused_ordering(716) 00:18:10.740 fused_ordering(717) 00:18:10.740 fused_ordering(718) 00:18:10.740 fused_ordering(719) 00:18:10.740 fused_ordering(720) 00:18:10.740 fused_ordering(721) 00:18:10.740 fused_ordering(722) 00:18:10.740 fused_ordering(723) 00:18:10.740 fused_ordering(724) 00:18:10.740 fused_ordering(725) 00:18:10.740 fused_ordering(726) 00:18:10.740 fused_ordering(727) 00:18:10.740 fused_ordering(728) 00:18:10.740 fused_ordering(729) 00:18:10.740 fused_ordering(730) 00:18:10.740 fused_ordering(731) 00:18:10.740 fused_ordering(732) 00:18:10.740 fused_ordering(733) 00:18:10.740 fused_ordering(734) 00:18:10.740 fused_ordering(735) 00:18:10.740 fused_ordering(736) 00:18:10.740 fused_ordering(737) 00:18:10.740 fused_ordering(738) 00:18:10.740 fused_ordering(739) 00:18:10.740 fused_ordering(740) 00:18:10.740 fused_ordering(741) 00:18:10.740 fused_ordering(742) 00:18:10.740 fused_ordering(743) 
00:18:10.740 fused_ordering(744) 00:18:10.740 fused_ordering(745) 00:18:10.740 fused_ordering(746) 00:18:10.740 fused_ordering(747) 00:18:10.740 fused_ordering(748) 00:18:10.741 fused_ordering(749) 00:18:10.741 fused_ordering(750) 00:18:10.741 fused_ordering(751) 00:18:10.741 fused_ordering(752) 00:18:10.741 fused_ordering(753) 00:18:10.741 fused_ordering(754) 00:18:10.741 fused_ordering(755) 00:18:10.741 fused_ordering(756) 00:18:10.741 fused_ordering(757) 00:18:10.741 fused_ordering(758) 00:18:10.741 fused_ordering(759) 00:18:10.741 fused_ordering(760) 00:18:10.741 fused_ordering(761) 00:18:10.741 fused_ordering(762) 00:18:10.741 fused_ordering(763) 00:18:10.741 fused_ordering(764) 00:18:10.741 fused_ordering(765) 00:18:10.741 fused_ordering(766) 00:18:10.741 fused_ordering(767) 00:18:10.741 fused_ordering(768) 00:18:10.741 fused_ordering(769) 00:18:10.741 fused_ordering(770) 00:18:10.741 fused_ordering(771) 00:18:10.741 fused_ordering(772) 00:18:10.741 fused_ordering(773) 00:18:10.741 fused_ordering(774) 00:18:10.741 fused_ordering(775) 00:18:10.741 fused_ordering(776) 00:18:10.741 fused_ordering(777) 00:18:10.741 fused_ordering(778) 00:18:10.741 fused_ordering(779) 00:18:10.741 fused_ordering(780) 00:18:10.741 fused_ordering(781) 00:18:10.741 fused_ordering(782) 00:18:10.741 fused_ordering(783) 00:18:10.741 fused_ordering(784) 00:18:10.741 fused_ordering(785) 00:18:10.741 fused_ordering(786) 00:18:10.741 fused_ordering(787) 00:18:10.741 fused_ordering(788) 00:18:10.741 fused_ordering(789) 00:18:10.741 fused_ordering(790) 00:18:10.741 fused_ordering(791) 00:18:10.741 fused_ordering(792) 00:18:10.741 fused_ordering(793) 00:18:10.741 fused_ordering(794) 00:18:10.741 fused_ordering(795) 00:18:10.741 fused_ordering(796) 00:18:10.741 fused_ordering(797) 00:18:10.741 fused_ordering(798) 00:18:10.741 fused_ordering(799) 00:18:10.741 fused_ordering(800) 00:18:10.741 fused_ordering(801) 00:18:10.741 fused_ordering(802) 00:18:10.741 fused_ordering(803) 00:18:10.741 fused_ordering(804) 00:18:10.741 fused_ordering(805) 00:18:10.741 fused_ordering(806) 00:18:10.741 fused_ordering(807) 00:18:10.741 fused_ordering(808) 00:18:10.741 fused_ordering(809) 00:18:10.741 fused_ordering(810) 00:18:10.741 fused_ordering(811) 00:18:10.741 fused_ordering(812) 00:18:10.741 fused_ordering(813) 00:18:10.741 fused_ordering(814) 00:18:10.741 fused_ordering(815) 00:18:10.741 fused_ordering(816) 00:18:10.741 fused_ordering(817) 00:18:10.741 fused_ordering(818) 00:18:10.741 fused_ordering(819) 00:18:10.741 fused_ordering(820) 00:18:11.308 fused_ordering(821) 00:18:11.308 fused_ordering(822) 00:18:11.308 fused_ordering(823) 00:18:11.308 fused_ordering(824) 00:18:11.308 fused_ordering(825) 00:18:11.308 fused_ordering(826) 00:18:11.308 fused_ordering(827) 00:18:11.308 fused_ordering(828) 00:18:11.308 fused_ordering(829) 00:18:11.308 fused_ordering(830) 00:18:11.308 fused_ordering(831) 00:18:11.308 fused_ordering(832) 00:18:11.308 fused_ordering(833) 00:18:11.308 fused_ordering(834) 00:18:11.308 fused_ordering(835) 00:18:11.308 fused_ordering(836) 00:18:11.308 fused_ordering(837) 00:18:11.308 fused_ordering(838) 00:18:11.308 fused_ordering(839) 00:18:11.308 fused_ordering(840) 00:18:11.308 fused_ordering(841) 00:18:11.308 fused_ordering(842) 00:18:11.308 fused_ordering(843) 00:18:11.308 fused_ordering(844) 00:18:11.308 fused_ordering(845) 00:18:11.308 fused_ordering(846) 00:18:11.308 fused_ordering(847) 00:18:11.308 fused_ordering(848) 00:18:11.308 fused_ordering(849) 00:18:11.308 fused_ordering(850) 00:18:11.308 
fused_ordering(851) 00:18:11.308 fused_ordering(852) 00:18:11.308 fused_ordering(853) 00:18:11.308 fused_ordering(854) 00:18:11.308 fused_ordering(855) 00:18:11.308 fused_ordering(856) 00:18:11.308 fused_ordering(857) 00:18:11.308 fused_ordering(858) 00:18:11.308 fused_ordering(859) 00:18:11.308 fused_ordering(860) 00:18:11.308 fused_ordering(861) 00:18:11.308 fused_ordering(862) 00:18:11.308 fused_ordering(863) 00:18:11.308 fused_ordering(864) 00:18:11.308 fused_ordering(865) 00:18:11.308 fused_ordering(866) 00:18:11.308 fused_ordering(867) 00:18:11.308 fused_ordering(868) 00:18:11.308 fused_ordering(869) 00:18:11.308 fused_ordering(870) 00:18:11.308 fused_ordering(871) 00:18:11.308 fused_ordering(872) 00:18:11.308 fused_ordering(873) 00:18:11.308 fused_ordering(874) 00:18:11.308 fused_ordering(875) 00:18:11.308 fused_ordering(876) 00:18:11.308 fused_ordering(877) 00:18:11.308 fused_ordering(878) 00:18:11.308 fused_ordering(879) 00:18:11.308 fused_ordering(880) 00:18:11.308 fused_ordering(881) 00:18:11.308 fused_ordering(882) 00:18:11.308 fused_ordering(883) 00:18:11.308 fused_ordering(884) 00:18:11.308 fused_ordering(885) 00:18:11.308 fused_ordering(886) 00:18:11.308 fused_ordering(887) 00:18:11.308 fused_ordering(888) 00:18:11.308 fused_ordering(889) 00:18:11.308 fused_ordering(890) 00:18:11.308 fused_ordering(891) 00:18:11.308 fused_ordering(892) 00:18:11.308 fused_ordering(893) 00:18:11.308 fused_ordering(894) 00:18:11.308 fused_ordering(895) 00:18:11.308 fused_ordering(896) 00:18:11.308 fused_ordering(897) 00:18:11.308 fused_ordering(898) 00:18:11.308 fused_ordering(899) 00:18:11.308 fused_ordering(900) 00:18:11.308 fused_ordering(901) 00:18:11.308 fused_ordering(902) 00:18:11.308 fused_ordering(903) 00:18:11.308 fused_ordering(904) 00:18:11.308 fused_ordering(905) 00:18:11.308 fused_ordering(906) 00:18:11.308 fused_ordering(907) 00:18:11.308 fused_ordering(908) 00:18:11.308 fused_ordering(909) 00:18:11.308 fused_ordering(910) 00:18:11.308 fused_ordering(911) 00:18:11.308 fused_ordering(912) 00:18:11.308 fused_ordering(913) 00:18:11.308 fused_ordering(914) 00:18:11.308 fused_ordering(915) 00:18:11.308 fused_ordering(916) 00:18:11.308 fused_ordering(917) 00:18:11.308 fused_ordering(918) 00:18:11.308 fused_ordering(919) 00:18:11.308 fused_ordering(920) 00:18:11.308 fused_ordering(921) 00:18:11.308 fused_ordering(922) 00:18:11.308 fused_ordering(923) 00:18:11.308 fused_ordering(924) 00:18:11.308 fused_ordering(925) 00:18:11.308 fused_ordering(926) 00:18:11.308 fused_ordering(927) 00:18:11.308 fused_ordering(928) 00:18:11.308 fused_ordering(929) 00:18:11.308 fused_ordering(930) 00:18:11.308 fused_ordering(931) 00:18:11.308 fused_ordering(932) 00:18:11.308 fused_ordering(933) 00:18:11.308 fused_ordering(934) 00:18:11.308 fused_ordering(935) 00:18:11.308 fused_ordering(936) 00:18:11.308 fused_ordering(937) 00:18:11.308 fused_ordering(938) 00:18:11.308 fused_ordering(939) 00:18:11.308 fused_ordering(940) 00:18:11.308 fused_ordering(941) 00:18:11.308 fused_ordering(942) 00:18:11.309 fused_ordering(943) 00:18:11.309 fused_ordering(944) 00:18:11.309 fused_ordering(945) 00:18:11.309 fused_ordering(946) 00:18:11.309 fused_ordering(947) 00:18:11.309 fused_ordering(948) 00:18:11.309 fused_ordering(949) 00:18:11.309 fused_ordering(950) 00:18:11.309 fused_ordering(951) 00:18:11.309 fused_ordering(952) 00:18:11.309 fused_ordering(953) 00:18:11.309 fused_ordering(954) 00:18:11.309 fused_ordering(955) 00:18:11.309 fused_ordering(956) 00:18:11.309 fused_ordering(957) 00:18:11.309 fused_ordering(958) 
00:18:11.309 fused_ordering(959) 00:18:11.309 fused_ordering(960) 00:18:11.309 fused_ordering(961) 00:18:11.309 fused_ordering(962) 00:18:11.309 fused_ordering(963) 00:18:11.309 fused_ordering(964) 00:18:11.309 fused_ordering(965) 00:18:11.309 fused_ordering(966) 00:18:11.309 fused_ordering(967) 00:18:11.309 fused_ordering(968) 00:18:11.309 fused_ordering(969) 00:18:11.309 fused_ordering(970) 00:18:11.309 fused_ordering(971) 00:18:11.309 fused_ordering(972) 00:18:11.309 fused_ordering(973) 00:18:11.309 fused_ordering(974) 00:18:11.309 fused_ordering(975) 00:18:11.309 fused_ordering(976) 00:18:11.309 fused_ordering(977) 00:18:11.309 fused_ordering(978) 00:18:11.309 fused_ordering(979) 00:18:11.309 fused_ordering(980) 00:18:11.309 fused_ordering(981) 00:18:11.309 fused_ordering(982) 00:18:11.309 fused_ordering(983) 00:18:11.309 fused_ordering(984) 00:18:11.309 fused_ordering(985) 00:18:11.309 fused_ordering(986) 00:18:11.309 fused_ordering(987) 00:18:11.309 fused_ordering(988) 00:18:11.309 fused_ordering(989) 00:18:11.309 fused_ordering(990) 00:18:11.309 fused_ordering(991) 00:18:11.309 fused_ordering(992) 00:18:11.309 fused_ordering(993) 00:18:11.309 fused_ordering(994) 00:18:11.309 fused_ordering(995) 00:18:11.309 fused_ordering(996) 00:18:11.309 fused_ordering(997) 00:18:11.309 fused_ordering(998) 00:18:11.309 fused_ordering(999) 00:18:11.309 fused_ordering(1000) 00:18:11.309 fused_ordering(1001) 00:18:11.309 fused_ordering(1002) 00:18:11.309 fused_ordering(1003) 00:18:11.309 fused_ordering(1004) 00:18:11.309 fused_ordering(1005) 00:18:11.309 fused_ordering(1006) 00:18:11.309 fused_ordering(1007) 00:18:11.309 fused_ordering(1008) 00:18:11.309 fused_ordering(1009) 00:18:11.309 fused_ordering(1010) 00:18:11.309 fused_ordering(1011) 00:18:11.309 fused_ordering(1012) 00:18:11.309 fused_ordering(1013) 00:18:11.309 fused_ordering(1014) 00:18:11.309 fused_ordering(1015) 00:18:11.309 fused_ordering(1016) 00:18:11.309 fused_ordering(1017) 00:18:11.309 fused_ordering(1018) 00:18:11.309 fused_ordering(1019) 00:18:11.309 fused_ordering(1020) 00:18:11.309 fused_ordering(1021) 00:18:11.309 fused_ordering(1022) 00:18:11.309 fused_ordering(1023) 00:18:11.309 00:55:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:18:11.309 00:55:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:18:11.309 00:55:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:11.309 00:55:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # sync 00:18:11.309 00:55:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:11.309 00:55:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set +e 00:18:11.309 00:55:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:11.309 00:55:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:11.567 rmmod nvme_tcp 00:18:11.567 rmmod nvme_fabrics 00:18:11.567 rmmod nvme_keyring 00:18:11.567 00:55:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:11.567 00:55:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@128 -- # set -e 00:18:11.567 00:55:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@129 -- # return 0 00:18:11.567 00:55:44 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@517 -- # '[' -n 2952551 ']' 00:18:11.567 00:55:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@518 -- # killprocess 2952551 00:18:11.567 00:55:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # '[' -z 2952551 ']' 00:18:11.567 00:55:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # kill -0 2952551 00:18:11.567 00:55:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # uname 00:18:11.567 00:55:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:11.567 00:55:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2952551 00:18:11.567 00:55:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:11.567 00:55:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:11.567 00:55:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2952551' 00:18:11.567 killing process with pid 2952551 00:18:11.567 00:55:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@973 -- # kill 2952551 00:18:11.567 00:55:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@978 -- # wait 2952551 00:18:12.943 00:55:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:12.943 00:55:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:12.943 00:55:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:12.943 00:55:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # iptr 00:18:12.943 00:55:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-save 00:18:12.943 00:55:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:12.943 00:55:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-restore 00:18:12.943 00:55:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:12.943 00:55:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@302 -- # remove_spdk_ns 00:18:12.943 00:55:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:12.943 00:55:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:12.943 00:55:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:14.847 00:55:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:18:14.847 00:18:14.847 real 0m10.206s 00:18:14.847 user 0m8.356s 00:18:14.847 sys 0m3.706s 00:18:14.847 00:55:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:14.847 00:55:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:14.847 ************************************ 00:18:14.847 END TEST nvmf_fused_ordering 00:18:14.847 
************************************ 00:18:14.847 00:55:47 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:18:14.847 00:55:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:14.847 00:55:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:14.847 00:55:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:14.847 ************************************ 00:18:14.847 START TEST nvmf_ns_masking 00:18:14.847 ************************************ 00:18:14.847 00:55:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1129 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:18:14.847 * Looking for test storage... 00:18:14.847 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:14.847 00:55:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:18:14.847 00:55:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1711 -- # lcov --version 00:18:14.847 00:55:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:18:15.106 00:55:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:18:15.107 00:55:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:15.107 00:55:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:15.107 00:55:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:15.107 00:55:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # IFS=.-: 00:18:15.107 00:55:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # read -ra ver1 00:18:15.107 00:55:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # IFS=.-: 00:18:15.107 00:55:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # read -ra ver2 00:18:15.107 00:55:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@338 -- # local 'op=<' 00:18:15.107 00:55:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@340 -- # ver1_l=2 00:18:15.107 00:55:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@341 -- # ver2_l=1 00:18:15.107 00:55:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:15.107 00:55:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@344 -- # case "$op" in 00:18:15.107 00:55:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@345 -- # : 1 00:18:15.107 00:55:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:15.107 00:55:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:15.107 00:55:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # decimal 1 00:18:15.107 00:55:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=1 00:18:15.107 00:55:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:15.107 00:55:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 1 00:18:15.107 00:55:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # ver1[v]=1 00:18:15.107 00:55:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # decimal 2 00:18:15.107 00:55:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=2 00:18:15.107 00:55:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:15.107 00:55:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 2 00:18:15.107 00:55:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # ver2[v]=2 00:18:15.107 00:55:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:15.107 00:55:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:15.107 00:55:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # return 0 00:18:15.107 00:55:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:15.107 00:55:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:18:15.107 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:15.107 --rc genhtml_branch_coverage=1 00:18:15.107 --rc genhtml_function_coverage=1 00:18:15.107 --rc genhtml_legend=1 00:18:15.107 --rc geninfo_all_blocks=1 00:18:15.107 --rc geninfo_unexecuted_blocks=1 00:18:15.107 00:18:15.107 ' 00:18:15.107 00:55:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:18:15.107 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:15.107 --rc genhtml_branch_coverage=1 00:18:15.107 --rc genhtml_function_coverage=1 00:18:15.107 --rc genhtml_legend=1 00:18:15.107 --rc geninfo_all_blocks=1 00:18:15.107 --rc geninfo_unexecuted_blocks=1 00:18:15.107 00:18:15.107 ' 00:18:15.107 00:55:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:18:15.107 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:15.107 --rc genhtml_branch_coverage=1 00:18:15.107 --rc genhtml_function_coverage=1 00:18:15.107 --rc genhtml_legend=1 00:18:15.107 --rc geninfo_all_blocks=1 00:18:15.107 --rc geninfo_unexecuted_blocks=1 00:18:15.107 00:18:15.107 ' 00:18:15.107 00:55:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:18:15.107 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:15.107 --rc genhtml_branch_coverage=1 00:18:15.107 --rc genhtml_function_coverage=1 00:18:15.107 --rc genhtml_legend=1 00:18:15.107 --rc geninfo_all_blocks=1 00:18:15.107 --rc geninfo_unexecuted_blocks=1 00:18:15.107 00:18:15.107 ' 00:18:15.107 00:55:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:15.107 00:55:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@7 -- # uname -s 00:18:15.107 00:55:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:15.107 00:55:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:15.107 00:55:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:15.107 00:55:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:15.107 00:55:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:15.107 00:55:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:15.107 00:55:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:15.107 00:55:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:15.107 00:55:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:15.107 00:55:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:15.107 00:55:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:15.107 00:55:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:18:15.107 00:55:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:15.107 00:55:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:15.107 00:55:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:15.107 00:55:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:15.107 00:55:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:15.107 00:55:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@15 -- # shopt -s extglob 00:18:15.107 00:55:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:15.107 00:55:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:15.107 00:55:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:15.107 00:55:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:15.107 00:55:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:15.107 00:55:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:15.107 00:55:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:18:15.107 00:55:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:15.107 00:55:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # : 0 00:18:15.107 00:55:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:15.107 00:55:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:15.107 00:55:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:15.107 00:55:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:15.107 00:55:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:15.107 00:55:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:15.107 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:15.107 00:55:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:15.107 00:55:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:15.107 00:55:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:15.107 00:55:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:15.107 00:55:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
-- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:18:15.107 00:55:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:18:15.107 00:55:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:18:15.107 00:55:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=ce569dd3-85e3-40bd-b78c-7ad187cb0070 00:18:15.108 00:55:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:18:15.108 00:55:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=c4fe004d-3dc6-478b-a48f-2298a1760f12 00:18:15.108 00:55:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:18:15.108 00:55:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:18:15.108 00:55:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:18:15.108 00:55:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:18:15.108 00:55:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=c4d167b4-f95b-44d8-b047-d149c048c1b8 00:18:15.108 00:55:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:18:15.108 00:55:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:15.108 00:55:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:15.108 00:55:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:15.108 00:55:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:15.108 00:55:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:15.108 00:55:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:15.108 00:55:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:15.108 00:55:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:15.108 00:55:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:18:15.108 00:55:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:18:15.108 00:55:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@309 -- # xtrace_disable 00:18:15.108 00:55:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:18:17.012 00:55:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:17.012 00:55:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # pci_devs=() 00:18:17.012 00:55:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # local -a pci_devs 00:18:17.012 00:55:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:17.012 00:55:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:17.012 00:55:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:17.012 00:55:49 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:17.012 00:55:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # net_devs=() 00:18:17.012 00:55:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:17.012 00:55:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # e810=() 00:18:17.012 00:55:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # local -ga e810 00:18:17.012 00:55:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # x722=() 00:18:17.012 00:55:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # local -ga x722 00:18:17.012 00:55:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # mlx=() 00:18:17.012 00:55:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # local -ga mlx 00:18:17.012 00:55:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:17.012 00:55:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:17.012 00:55:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:17.012 00:55:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:17.012 00:55:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:17.012 00:55:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:17.012 00:55:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:17.012 00:55:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:18:17.012 00:55:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:17.012 00:55:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:17.012 00:55:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:17.012 00:55:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:17.012 00:55:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:18:17.012 00:55:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:18:17.012 00:55:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:18:17.012 00:55:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:18:17.012 00:55:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:18:17.012 00:55:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:18:17.012 00:55:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:17.012 00:55:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:18:17.012 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:18:17.012 00:55:49 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:17.012 00:55:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:17.012 00:55:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:17.012 00:55:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:17.012 00:55:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:17.012 00:55:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:17.012 00:55:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:18:17.012 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:18:17.012 00:55:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:17.012 00:55:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:17.012 00:55:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:17.012 00:55:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:17.012 00:55:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:17.012 00:55:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:18:17.012 00:55:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:18:17.012 00:55:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:18:17.012 00:55:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:17.012 00:55:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:17.012 00:55:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:17.012 00:55:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:17.012 00:55:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:17.012 00:55:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:17.012 00:55:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:17.012 00:55:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:18:17.012 Found net devices under 0000:0a:00.0: cvl_0_0 00:18:17.012 00:55:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:17.012 00:55:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:17.012 00:55:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:17.012 00:55:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:17.012 00:55:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:17.012 00:55:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 
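The trace above is nvmf/common.sh matching the host's Intel E810 ports (device ID 0x159b, per SPDK_TEST_NVMF_NICS=e810) and then resolving which kernel net devices sit behind each PCI function. A minimal sketch of that sysfs lookup, using the BDF reported in this run; this paraphrases the helper's pci_net_devs glob rather than reproducing it:

    # Resolve the net device(s) bound to one PCI function, as the helper does
    # with pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*).
    pci=0000:0a:00.0
    for dev in /sys/bus/pci/devices/$pci/net/*; do
        [ -e "$dev" ] || continue                     # no netdev bound to this function
        echo "Found net device under $pci: ${dev##*/}"
    done
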
00:18:17.012 00:55:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:17.012 00:55:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:17.012 00:55:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:18:17.012 Found net devices under 0000:0a:00.1: cvl_0_1 00:18:17.012 00:55:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:17.012 00:55:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:18:17.012 00:55:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # is_hw=yes 00:18:17.012 00:55:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:18:17.012 00:55:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:18:17.012 00:55:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:18:17.012 00:55:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:17.012 00:55:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:17.012 00:55:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:17.012 00:55:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:17.012 00:55:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:18:17.012 00:55:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:17.012 00:55:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:17.012 00:55:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:18:17.012 00:55:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:18:17.012 00:55:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:17.012 00:55:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:17.012 00:55:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:18:17.012 00:55:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:18:17.012 00:55:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:18:17.012 00:55:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:17.272 00:55:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:17.272 00:55:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:17.272 00:55:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:18:17.272 00:55:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:17.273 00:55:49 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:17.273 00:55:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:17.273 00:55:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:18:17.273 00:55:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:18:17.273 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:17.273 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.336 ms 00:18:17.273 00:18:17.273 --- 10.0.0.2 ping statistics --- 00:18:17.273 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:17.273 rtt min/avg/max/mdev = 0.336/0.336/0.336/0.000 ms 00:18:17.273 00:55:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:17.273 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:17.273 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.080 ms 00:18:17.273 00:18:17.273 --- 10.0.0.1 ping statistics --- 00:18:17.273 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:17.273 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 00:18:17.273 00:55:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:17.273 00:55:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@450 -- # return 0 00:18:17.273 00:55:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:17.273 00:55:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:17.273 00:55:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:17.273 00:55:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:17.273 00:55:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:17.273 00:55:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:17.273 00:55:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:17.273 00:55:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:18:17.273 00:55:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:17.273 00:55:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:17.273 00:55:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:18:17.273 00:55:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@509 -- # nvmfpid=2955173 00:18:17.273 00:55:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@510 -- # waitforlisten 2955173 00:18:17.273 00:55:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 2955173 ']' 00:18:17.273 00:55:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:18:17.273 00:55:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:17.273 00:55:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:17.273 00:55:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:17.273 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:17.273 00:55:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:17.273 00:55:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:18:17.273 [2024-12-06 00:55:49.942844] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 24.03.0 initialization... 00:18:17.273 [2024-12-06 00:55:49.942987] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:17.531 [2024-12-06 00:55:50.102714] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:17.789 [2024-12-06 00:55:50.243847] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:17.789 [2024-12-06 00:55:50.243921] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:17.789 [2024-12-06 00:55:50.243947] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:17.789 [2024-12-06 00:55:50.243979] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:17.789 [2024-12-06 00:55:50.243999] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
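At this point nvmf_tcp_init has built the test topology: cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace as the target side (10.0.0.2), cvl_0_1 stays in the root namespace as the initiator (10.0.0.1), an iptables ACCEPT rule opens TCP port 4420 on the initiator interface, and both directions are ping-verified before nvmf_tgt is launched inside the namespace. A rough equivalent of that start-up step, assuming the same workspace paths; this is not the real waitforlisten helper, only a socket poll:

    # Start the target inside the test namespace and wait for its RPC socket.
    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF &
    nvmfpid=$!
    for _ in $(seq 1 100); do
        [ -S /var/tmp/spdk.sock ] && break            # UNIX-domain RPC listener is up
        sleep 0.1
    done
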
00:18:17.789 [2024-12-06 00:55:50.245628] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:18.356 00:55:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:18.356 00:55:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:18:18.356 00:55:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:18.356 00:55:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:18.356 00:55:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:18:18.356 00:55:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:18.356 00:55:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:18:18.614 [2024-12-06 00:55:51.281621] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:18.614 00:55:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:18:18.614 00:55:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:18:18.614 00:55:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:18:19.183 Malloc1 00:18:19.183 00:55:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:18:19.442 Malloc2 00:18:19.442 00:55:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:18:19.700 00:55:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:18:20.267 00:55:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:20.526 [2024-12-06 00:55:53.015604] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:20.526 00:55:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:18:20.526 00:55:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I c4d167b4-f95b-44d8-b047-d149c048c1b8 -a 10.0.0.2 -s 4420 -i 4 00:18:20.785 00:55:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:18:20.785 00:55:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:18:20.785 00:55:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:18:20.785 00:55:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:18:20.785 
00:55:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:18:22.685 00:55:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:18:22.685 00:55:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:18:22.685 00:55:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:18:22.685 00:55:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:18:22.685 00:55:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:18:22.685 00:55:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:18:22.685 00:55:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:18:22.685 00:55:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:18:22.685 00:55:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:18:22.685 00:55:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:18:22.685 00:55:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:18:22.685 00:55:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:22.685 00:55:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:18:22.685 [ 0]:0x1 00:18:22.685 00:55:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:22.685 00:55:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:22.941 00:55:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=882a47a873de4197a66ce3f8d7f6a10e 00:18:22.941 00:55:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 882a47a873de4197a66ce3f8d7f6a10e != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:22.941 00:55:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:18:23.198 00:55:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:18:23.198 00:55:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:23.198 00:55:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:18:23.198 [ 0]:0x1 00:18:23.198 00:55:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:23.198 00:55:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:23.198 00:55:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=882a47a873de4197a66ce3f8d7f6a10e 00:18:23.198 00:55:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 882a47a873de4197a66ce3f8d7f6a10e != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:23.198 00:55:55 
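The connect step above attaches with the test host NQN (nqn.2016-06.io.spdk:host1) and the uuidgen'd host ID, then waitforserial greps lsblk for the SPDKISFASTANDAWESOME serial before any namespace checks run. The visibility probe itself (ns_is_visible in target/ns_masking.sh) boils down to two nvme-cli calls; condensed sketch with the values from this run:

    # A namespace counts as visible when list-ns reports it and id-ns returns
    # a non-zero NGUID for it (an all-zero NGUID means the controller hides it).
    ctrl=/dev/nvme0
    nsid=0x1
    nvme list-ns "$ctrl" | grep -q "$nsid" || echo "nsid $nsid not listed"
    nguid=$(nvme id-ns "$ctrl" -n "$nsid" -o json | jq -r .nguid)
    [ "$nguid" != "00000000000000000000000000000000" ] && echo "nsid $nsid visible, nguid=$nguid"
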
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:18:23.198 00:55:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:23.198 00:55:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:18:23.198 [ 1]:0x2 00:18:23.198 00:55:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:18:23.198 00:55:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:23.198 00:55:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=17f2d38453d0423c9e7486a38c3c6826 00:18:23.198 00:55:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 17f2d38453d0423c9e7486a38c3c6826 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:23.198 00:55:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:18:23.198 00:55:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:23.198 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:23.198 00:55:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:23.763 00:55:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:18:24.019 00:55:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:18:24.019 00:55:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I c4d167b4-f95b-44d8-b047-d149c048c1b8 -a 10.0.0.2 -s 4420 -i 4 00:18:24.276 00:55:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:18:24.276 00:55:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:18:24.276 00:55:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:18:24.276 00:55:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 1 ]] 00:18:24.276 00:55:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_device_counter=1 00:18:24.276 00:55:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:18:26.194 00:55:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:18:26.194 00:55:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:18:26.194 00:55:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:18:26.194 00:55:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:18:26.194 00:55:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:18:26.194 00:55:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # 
return 0 00:18:26.194 00:55:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:18:26.194 00:55:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:18:26.194 00:55:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:18:26.194 00:55:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:18:26.194 00:55:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:18:26.194 00:55:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:18:26.194 00:55:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:18:26.194 00:55:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:18:26.194 00:55:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:26.194 00:55:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:18:26.194 00:55:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:26.194 00:55:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:18:26.194 00:55:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:26.194 00:55:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:18:26.194 00:55:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:26.194 00:55:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:26.452 00:55:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:18:26.452 00:55:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:26.452 00:55:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:18:26.452 00:55:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:26.452 00:55:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:26.452 00:55:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:26.452 00:55:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:18:26.452 00:55:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:26.452 00:55:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:18:26.452 [ 0]:0x2 00:18:26.452 00:55:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:18:26.452 00:55:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:26.452 00:55:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nguid=17f2d38453d0423c9e7486a38c3c6826 00:18:26.452 00:55:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 17f2d38453d0423c9e7486a38c3c6826 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:26.452 00:55:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:18:26.710 00:55:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:18:26.710 00:55:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:26.710 00:55:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:18:26.710 [ 0]:0x1 00:18:26.710 00:55:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:26.710 00:55:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:26.966 00:55:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=882a47a873de4197a66ce3f8d7f6a10e 00:18:26.967 00:55:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 882a47a873de4197a66ce3f8d7f6a10e != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:26.967 00:55:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:18:26.967 00:55:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:26.967 00:55:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:18:26.967 [ 1]:0x2 00:18:26.967 00:55:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:18:26.967 00:55:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:26.967 00:55:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=17f2d38453d0423c9e7486a38c3c6826 00:18:26.967 00:55:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 17f2d38453d0423c9e7486a38c3c6826 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:26.967 00:55:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:18:27.223 00:55:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:18:27.223 00:55:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:18:27.223 00:55:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:18:27.223 00:55:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:18:27.223 00:55:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:27.223 00:55:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:18:27.223 00:55:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:27.223 00:55:59 
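The last few steps are the masking flow proper: Malloc1 is re-added to cnode1 with --no-auto-visible, so after reconnecting the host sees only namespace 2; nvmf_ns_add_host then exposes namespace 1 to nqn.2016-06.io.spdk:host1, and nvmf_ns_remove_host takes that visibility away again. Reduced to the bare RPC calls as they appear in the trace (responses and the intervening visibility checks elided):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible
    $rpc nvmf_ns_add_host    nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1   # nsid 1 becomes visible to host1
    $rpc nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1   # and is hidden again
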
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:18:27.223 00:55:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:27.223 00:55:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:18:27.223 00:55:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:27.223 00:55:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:27.223 00:55:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:18:27.223 00:55:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:27.223 00:55:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:18:27.223 00:55:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:27.223 00:55:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:27.223 00:55:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:27.223 00:55:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:18:27.223 00:55:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:27.223 00:55:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:18:27.223 [ 0]:0x2 00:18:27.223 00:55:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:18:27.223 00:55:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:27.480 00:55:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=17f2d38453d0423c9e7486a38c3c6826 00:18:27.480 00:55:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 17f2d38453d0423c9e7486a38c3c6826 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:27.480 00:55:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:18:27.480 00:55:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:27.480 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:27.480 00:56:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:18:27.737 00:56:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:18:27.737 00:56:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I c4d167b4-f95b-44d8-b047-d149c048c1b8 -a 10.0.0.2 -s 4420 -i 4 00:18:27.737 00:56:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:18:27.737 00:56:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:18:27.737 00:56:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:18:27.737 00:56:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:18:27.737 00:56:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:18:27.737 00:56:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:18:30.265 00:56:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:18:30.265 00:56:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:18:30.265 00:56:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:18:30.265 00:56:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:18:30.265 00:56:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:18:30.265 00:56:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:18:30.265 00:56:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:18:30.265 00:56:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:18:30.265 00:56:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:18:30.265 00:56:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:18:30.265 00:56:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:18:30.265 00:56:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:30.265 00:56:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:18:30.265 [ 0]:0x1 00:18:30.265 00:56:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:30.265 00:56:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:30.265 00:56:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=882a47a873de4197a66ce3f8d7f6a10e 00:18:30.265 00:56:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 882a47a873de4197a66ce3f8d7f6a10e != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:30.265 00:56:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:18:30.265 00:56:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:30.265 00:56:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:18:30.265 [ 1]:0x2 00:18:30.265 00:56:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:18:30.266 00:56:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:30.266 00:56:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=17f2d38453d0423c9e7486a38c3c6826 00:18:30.266 00:56:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 17f2d38453d0423c9e7486a38c3c6826 != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:30.266 00:56:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:18:30.266 00:56:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:18:30.266 00:56:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:18:30.266 00:56:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:18:30.266 00:56:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:18:30.266 00:56:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:30.266 00:56:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:18:30.266 00:56:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:30.266 00:56:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:18:30.266 00:56:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:30.266 00:56:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:18:30.266 00:56:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:30.266 00:56:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:30.266 00:56:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:18:30.266 00:56:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:30.266 00:56:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:18:30.266 00:56:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:30.266 00:56:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:30.266 00:56:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:30.266 00:56:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:18:30.266 00:56:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:30.266 00:56:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:18:30.266 [ 0]:0x2 00:18:30.266 00:56:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:18:30.266 00:56:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:30.266 00:56:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=17f2d38453d0423c9e7486a38c3c6826 00:18:30.266 00:56:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 17f2d38453d0423c9e7486a38c3c6826 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:30.266 00:56:02 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:18:30.266 00:56:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:18:30.266 00:56:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:18:30.266 00:56:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:30.266 00:56:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:30.266 00:56:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:30.266 00:56:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:30.266 00:56:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:30.524 00:56:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:30.524 00:56:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:30.524 00:56:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:18:30.524 00:56:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:18:30.524 [2024-12-06 00:56:03.212765] nvmf_rpc.c:1873:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:18:30.524 request: 00:18:30.524 { 00:18:30.524 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:30.524 "nsid": 2, 00:18:30.524 "host": "nqn.2016-06.io.spdk:host1", 00:18:30.524 "method": "nvmf_ns_remove_host", 00:18:30.524 "req_id": 1 00:18:30.524 } 00:18:30.524 Got JSON-RPC error response 00:18:30.524 response: 00:18:30.524 { 00:18:30.524 "code": -32602, 00:18:30.524 "message": "Invalid parameters" 00:18:30.524 } 00:18:30.783 00:56:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:18:30.783 00:56:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:30.783 00:56:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:30.783 00:56:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:30.783 00:56:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:18:30.783 00:56:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:18:30.783 00:56:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:18:30.783 00:56:03 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:18:30.783 00:56:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:30.783 00:56:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:18:30.783 00:56:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:30.783 00:56:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:18:30.783 00:56:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:30.783 00:56:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:18:30.783 00:56:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:30.783 00:56:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:30.783 00:56:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:18:30.783 00:56:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:30.783 00:56:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:18:30.783 00:56:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:30.783 00:56:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:30.783 00:56:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:30.783 00:56:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:18:30.783 00:56:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:30.783 00:56:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:18:30.783 [ 0]:0x2 00:18:30.783 00:56:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:18:30.783 00:56:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:30.783 00:56:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=17f2d38453d0423c9e7486a38c3c6826 00:18:30.783 00:56:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 17f2d38453d0423c9e7486a38c3c6826 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:30.783 00:56:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:18:30.783 00:56:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:30.783 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:30.783 00:56:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=2956931 00:18:30.783 00:56:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:18:30.783 00:56:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:18:30.783 00:56:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 2956931 /var/tmp/host.sock 00:18:30.783 00:56:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 2956931 ']' 00:18:30.783 00:56:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:18:30.783 00:56:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:30.783 00:56:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:18:30.783 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:18:30.783 00:56:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:30.783 00:56:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:18:30.783 [2024-12-06 00:56:03.466633] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 24.03.0 initialization... 00:18:30.783 [2024-12-06 00:56:03.466765] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2956931 ] 00:18:31.042 [2024-12-06 00:56:03.599225] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:31.042 [2024-12-06 00:56:03.721944] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:31.977 00:56:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:31.977 00:56:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:18:31.977 00:56:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:32.543 00:56:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:18:32.801 00:56:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid ce569dd3-85e3-40bd-b78c-7ad187cb0070 00:18:32.801 00:56:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:18:32.801 00:56:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g CE569DD385E340BDB78C7AD187CB0070 -i 00:18:33.058 00:56:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid c4fe004d-3dc6-478b-a48f-2298a1760f12 00:18:33.058 00:56:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:18:33.058 00:56:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g C4FE004D3DC6478BA48F2298A1760F12 -i 00:18:33.315 00:56:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:18:33.573 00:56:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:18:33.832 00:56:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:18:33.832 00:56:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:18:34.091 nvme0n1 00:18:34.091 00:56:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:18:34.091 00:56:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:18:34.659 nvme1n2 00:18:34.659 00:56:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:18:34.659 00:56:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:18:34.659 00:56:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:18:34.659 00:56:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:18:34.659 00:56:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:18:34.917 00:56:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:18:34.917 00:56:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:18:34.917 00:56:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:18:34.917 00:56:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:18:35.175 00:56:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ ce569dd3-85e3-40bd-b78c-7ad187cb0070 == \c\e\5\6\9\d\d\3\-\8\5\e\3\-\4\0\b\d\-\b\7\8\c\-\7\a\d\1\8\7\c\b\0\0\7\0 ]] 00:18:35.175 00:56:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:18:35.175 00:56:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:18:35.175 00:56:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:18:35.434 00:56:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 
c4fe004d-3dc6-478b-a48f-2298a1760f12 == \c\4\f\e\0\0\4\d\-\3\d\c\6\-\4\7\8\b\-\a\4\8\f\-\2\2\9\8\a\1\7\6\0\f\1\2 ]] 00:18:35.434 00:56:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@137 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:35.746 00:56:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:18:36.051 00:56:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # uuid2nguid ce569dd3-85e3-40bd-b78c-7ad187cb0070 00:18:36.051 00:56:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:18:36.051 00:56:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g CE569DD385E340BDB78C7AD187CB0070 00:18:36.051 00:56:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:18:36.051 00:56:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g CE569DD385E340BDB78C7AD187CB0070 00:18:36.051 00:56:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:36.051 00:56:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:36.051 00:56:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:36.051 00:56:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:36.051 00:56:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:36.051 00:56:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:36.051 00:56:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:36.051 00:56:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:18:36.051 00:56:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g CE569DD385E340BDB78C7AD187CB0070 00:18:36.051 [2024-12-06 00:56:08.739052] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: invalid 00:18:36.051 [2024-12-06 00:56:08.739120] subsystem.c:2160:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1: bdev invalid cannot be opened, error=-19 00:18:36.051 [2024-12-06 00:56:08.739155] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.321 request: 00:18:36.321 { 00:18:36.321 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:36.321 "namespace": { 00:18:36.321 "bdev_name": 
"invalid", 00:18:36.321 "nsid": 1, 00:18:36.321 "nguid": "CE569DD385E340BDB78C7AD187CB0070", 00:18:36.321 "no_auto_visible": false, 00:18:36.321 "hide_metadata": false 00:18:36.321 }, 00:18:36.321 "method": "nvmf_subsystem_add_ns", 00:18:36.321 "req_id": 1 00:18:36.321 } 00:18:36.321 Got JSON-RPC error response 00:18:36.321 response: 00:18:36.321 { 00:18:36.321 "code": -32602, 00:18:36.321 "message": "Invalid parameters" 00:18:36.321 } 00:18:36.321 00:56:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:18:36.321 00:56:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:36.321 00:56:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:36.321 00:56:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:36.321 00:56:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # uuid2nguid ce569dd3-85e3-40bd-b78c-7ad187cb0070 00:18:36.321 00:56:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:18:36.321 00:56:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g CE569DD385E340BDB78C7AD187CB0070 -i 00:18:36.321 00:56:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@143 -- # sleep 2s 00:18:38.850 00:56:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # hostrpc bdev_get_bdevs 00:18:38.850 00:56:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # jq length 00:18:38.850 00:56:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:18:38.850 00:56:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # (( 0 == 0 )) 00:18:38.850 00:56:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@146 -- # killprocess 2956931 00:18:38.850 00:56:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 2956931 ']' 00:18:38.850 00:56:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 2956931 00:18:38.850 00:56:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:18:38.850 00:56:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:38.850 00:56:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2956931 00:18:38.850 00:56:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:38.850 00:56:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:38.850 00:56:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2956931' 00:18:38.850 killing process with pid 2956931 00:18:38.850 00:56:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 2956931 00:18:38.850 00:56:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 2956931 00:18:41.381 00:56:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@147 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:41.381 00:56:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:18:41.381 00:56:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@150 -- # nvmftestfini 00:18:41.381 00:56:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:41.381 00:56:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # sync 00:18:41.381 00:56:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:41.381 00:56:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set +e 00:18:41.381 00:56:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:41.381 00:56:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:41.381 rmmod nvme_tcp 00:18:41.381 rmmod nvme_fabrics 00:18:41.381 rmmod nvme_keyring 00:18:41.381 00:56:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:41.381 00:56:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@128 -- # set -e 00:18:41.381 00:56:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@129 -- # return 0 00:18:41.381 00:56:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@517 -- # '[' -n 2955173 ']' 00:18:41.381 00:56:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@518 -- # killprocess 2955173 00:18:41.381 00:56:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 2955173 ']' 00:18:41.381 00:56:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 2955173 00:18:41.381 00:56:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:18:41.381 00:56:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:41.381 00:56:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2955173 00:18:41.381 00:56:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:41.381 00:56:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:41.381 00:56:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2955173' 00:18:41.381 killing process with pid 2955173 00:18:41.381 00:56:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 2955173 00:18:41.381 00:56:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 2955173 00:18:43.282 00:56:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:43.282 00:56:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:43.282 00:56:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:43.282 00:56:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # iptr 00:18:43.282 00:56:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-save 00:18:43.282 00:56:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 
00:18:43.282 00:56:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-restore 00:18:43.282 00:56:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:43.282 00:56:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@302 -- # remove_spdk_ns 00:18:43.282 00:56:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:43.282 00:56:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:43.282 00:56:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:45.185 00:56:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:18:45.185 00:18:45.185 real 0m30.227s 00:18:45.185 user 0m44.618s 00:18:45.185 sys 0m5.031s 00:18:45.185 00:56:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:45.185 00:56:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:18:45.185 ************************************ 00:18:45.185 END TEST nvmf_ns_masking 00:18:45.185 ************************************ 00:18:45.185 00:56:17 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 1 ]] 00:18:45.185 00:56:17 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:18:45.185 00:56:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:45.185 00:56:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:45.185 00:56:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:45.185 ************************************ 00:18:45.185 START TEST nvmf_nvme_cli 00:18:45.185 ************************************ 00:18:45.185 00:56:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:18:45.185 * Looking for test storage... 
00:18:45.185 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:45.185 00:56:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:18:45.185 00:56:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1711 -- # lcov --version 00:18:45.185 00:56:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:18:45.185 00:56:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:18:45.185 00:56:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:45.185 00:56:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:45.185 00:56:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:45.185 00:56:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # IFS=.-: 00:18:45.185 00:56:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # read -ra ver1 00:18:45.185 00:56:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # IFS=.-: 00:18:45.185 00:56:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # read -ra ver2 00:18:45.185 00:56:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@338 -- # local 'op=<' 00:18:45.185 00:56:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@340 -- # ver1_l=2 00:18:45.185 00:56:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@341 -- # ver2_l=1 00:18:45.185 00:56:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:45.185 00:56:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@344 -- # case "$op" in 00:18:45.185 00:56:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@345 -- # : 1 00:18:45.185 00:56:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:45.185 00:56:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:45.185 00:56:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # decimal 1 00:18:45.185 00:56:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=1 00:18:45.185 00:56:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:45.185 00:56:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 1 00:18:45.185 00:56:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # ver1[v]=1 00:18:45.185 00:56:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # decimal 2 00:18:45.185 00:56:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=2 00:18:45.185 00:56:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:45.185 00:56:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 2 00:18:45.185 00:56:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # ver2[v]=2 00:18:45.185 00:56:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:45.185 00:56:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:45.185 00:56:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # return 0 00:18:45.186 00:56:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:45.186 00:56:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:18:45.186 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:45.186 --rc genhtml_branch_coverage=1 00:18:45.186 --rc genhtml_function_coverage=1 00:18:45.186 --rc genhtml_legend=1 00:18:45.186 --rc geninfo_all_blocks=1 00:18:45.186 --rc geninfo_unexecuted_blocks=1 00:18:45.186 00:18:45.186 ' 00:18:45.186 00:56:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:18:45.186 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:45.186 --rc genhtml_branch_coverage=1 00:18:45.186 --rc genhtml_function_coverage=1 00:18:45.186 --rc genhtml_legend=1 00:18:45.186 --rc geninfo_all_blocks=1 00:18:45.186 --rc geninfo_unexecuted_blocks=1 00:18:45.186 00:18:45.186 ' 00:18:45.186 00:56:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:18:45.186 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:45.186 --rc genhtml_branch_coverage=1 00:18:45.186 --rc genhtml_function_coverage=1 00:18:45.186 --rc genhtml_legend=1 00:18:45.186 --rc geninfo_all_blocks=1 00:18:45.186 --rc geninfo_unexecuted_blocks=1 00:18:45.186 00:18:45.186 ' 00:18:45.186 00:56:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:18:45.186 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:45.186 --rc genhtml_branch_coverage=1 00:18:45.186 --rc genhtml_function_coverage=1 00:18:45.186 --rc genhtml_legend=1 00:18:45.186 --rc geninfo_all_blocks=1 00:18:45.186 --rc geninfo_unexecuted_blocks=1 00:18:45.186 00:18:45.186 ' 00:18:45.186 00:56:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:45.186 00:56:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 
00:18:45.186 00:56:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:45.186 00:56:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:45.186 00:56:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:45.186 00:56:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:45.186 00:56:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:45.186 00:56:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:45.186 00:56:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:45.186 00:56:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:45.186 00:56:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:45.186 00:56:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:45.186 00:56:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:45.186 00:56:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:18:45.186 00:56:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:45.186 00:56:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:45.186 00:56:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:45.186 00:56:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:45.186 00:56:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:45.186 00:56:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@15 -- # shopt -s extglob 00:18:45.186 00:56:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:45.186 00:56:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:45.186 00:56:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:45.186 00:56:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:45.186 00:56:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:45.186 00:56:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:45.186 00:56:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:18:45.186 00:56:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:45.186 00:56:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # : 0 00:18:45.186 00:56:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:45.186 00:56:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:45.186 00:56:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:45.186 00:56:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:45.186 00:56:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:45.186 00:56:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:45.186 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:45.186 00:56:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:45.186 00:56:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:45.186 00:56:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:45.186 00:56:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:45.186 00:56:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:45.186 00:56:17 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:18:45.186 00:56:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:18:45.186 00:56:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:45.186 00:56:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:45.186 00:56:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:45.186 00:56:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:45.186 00:56:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:45.186 00:56:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:45.186 00:56:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:45.186 00:56:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:45.186 00:56:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:18:45.186 00:56:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:18:45.186 00:56:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@309 -- # xtrace_disable 00:18:45.186 00:56:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:47.716 00:56:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:47.716 00:56:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # pci_devs=() 00:18:47.716 00:56:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # local -a pci_devs 00:18:47.716 00:56:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:47.716 00:56:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:47.716 00:56:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:47.716 00:56:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:47.716 00:56:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # net_devs=() 00:18:47.716 00:56:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:47.716 00:56:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # e810=() 00:18:47.716 00:56:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # local -ga e810 00:18:47.716 00:56:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # x722=() 00:18:47.716 00:56:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # local -ga x722 00:18:47.716 00:56:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # mlx=() 00:18:47.716 00:56:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # local -ga mlx 00:18:47.716 00:56:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:47.716 00:56:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:47.716 00:56:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:47.716 00:56:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:47.716 00:56:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:47.716 00:56:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:47.716 00:56:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:47.716 00:56:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:18:47.716 00:56:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:47.716 00:56:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:47.716 00:56:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:47.716 00:56:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:47.716 00:56:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:18:47.716 00:56:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:18:47.716 00:56:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:18:47.716 00:56:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:18:47.716 00:56:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:18:47.716 00:56:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:18:47.716 00:56:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:47.716 00:56:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:18:47.716 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:18:47.716 00:56:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:47.716 00:56:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:47.716 00:56:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:47.716 00:56:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:47.716 00:56:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:47.716 00:56:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:47.716 00:56:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:18:47.716 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:18:47.716 00:56:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:47.716 00:56:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:47.716 00:56:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:47.716 00:56:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:47.716 
00:56:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:47.716 00:56:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:18:47.716 00:56:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:18:47.716 00:56:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:18:47.716 00:56:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:47.716 00:56:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:47.716 00:56:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:47.716 00:56:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:47.716 00:56:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:47.716 00:56:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:47.716 00:56:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:47.716 00:56:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:18:47.716 Found net devices under 0000:0a:00.0: cvl_0_0 00:18:47.716 00:56:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:47.716 00:56:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:47.716 00:56:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:47.716 00:56:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:47.717 00:56:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:47.717 00:56:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:47.717 00:56:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:47.717 00:56:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:47.717 00:56:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:18:47.717 Found net devices under 0000:0a:00.1: cvl_0_1 00:18:47.717 00:56:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:47.717 00:56:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:18:47.717 00:56:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # is_hw=yes 00:18:47.717 00:56:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:18:47.717 00:56:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:18:47.717 00:56:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:18:47.717 00:56:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:47.717 00:56:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:47.717 00:56:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:47.717 00:56:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:47.717 00:56:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:18:47.717 00:56:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:47.717 00:56:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:47.717 00:56:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:18:47.717 00:56:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:18:47.717 00:56:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:47.717 00:56:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:47.717 00:56:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:18:47.717 00:56:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:18:47.717 00:56:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:18:47.717 00:56:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:47.717 00:56:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:47.717 00:56:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:47.717 00:56:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:18:47.717 00:56:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:47.717 00:56:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:47.717 00:56:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:47.717 00:56:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:18:47.717 00:56:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:18:47.717 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:47.717 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.270 ms 00:18:47.717 00:18:47.717 --- 10.0.0.2 ping statistics --- 00:18:47.717 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:47.717 rtt min/avg/max/mdev = 0.270/0.270/0.270/0.000 ms 00:18:47.717 00:56:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:47.717 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:47.717 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.086 ms 00:18:47.717 00:18:47.717 --- 10.0.0.1 ping statistics --- 00:18:47.717 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:47.717 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms 00:18:47.717 00:56:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:47.717 00:56:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@450 -- # return 0 00:18:47.717 00:56:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:47.717 00:56:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:47.717 00:56:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:47.717 00:56:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:47.717 00:56:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:47.717 00:56:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:47.717 00:56:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:47.717 00:56:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:18:47.717 00:56:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:47.717 00:56:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:47.717 00:56:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:47.717 00:56:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@509 -- # nvmfpid=2960355 00:18:47.717 00:56:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:47.717 00:56:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@510 -- # waitforlisten 2960355 00:18:47.717 00:56:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@835 -- # '[' -z 2960355 ']' 00:18:47.717 00:56:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:47.717 00:56:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:47.717 00:56:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:47.717 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:47.717 00:56:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:47.717 00:56:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:47.717 [2024-12-06 00:56:20.109481] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 24.03.0 initialization... 
00:18:47.717 [2024-12-06 00:56:20.109615] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:47.717 [2024-12-06 00:56:20.257687] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:47.717 [2024-12-06 00:56:20.391248] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:47.717 [2024-12-06 00:56:20.391340] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:47.717 [2024-12-06 00:56:20.391366] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:47.717 [2024-12-06 00:56:20.391391] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:47.717 [2024-12-06 00:56:20.391411] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:47.717 [2024-12-06 00:56:20.394301] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:47.717 [2024-12-06 00:56:20.394373] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:47.717 [2024-12-06 00:56:20.394461] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:47.717 [2024-12-06 00:56:20.394468] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:18:48.651 00:56:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:48.651 00:56:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@868 -- # return 0 00:18:48.651 00:56:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:48.651 00:56:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:48.651 00:56:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:48.651 00:56:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:48.651 00:56:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:48.651 00:56:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:48.651 00:56:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:48.651 [2024-12-06 00:56:21.141376] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:48.651 00:56:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:48.651 00:56:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:48.651 00:56:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:48.651 00:56:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:48.651 Malloc0 00:18:48.651 00:56:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:48.651 00:56:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:18:48.651 00:56:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 
00:18:48.651 00:56:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:48.651 Malloc1 00:18:48.651 00:56:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:48.651 00:56:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:18:48.651 00:56:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:48.651 00:56:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:48.651 00:56:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:48.651 00:56:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:48.651 00:56:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:48.651 00:56:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:48.651 00:56:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:48.651 00:56:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:18:48.651 00:56:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:48.651 00:56:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:48.651 00:56:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:48.651 00:56:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:48.651 00:56:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:48.651 00:56:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:48.651 [2024-12-06 00:56:21.341411] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:48.651 00:56:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:48.651 00:56:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:18:48.651 00:56:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:48.651 00:56:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:48.651 00:56:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:48.651 00:56:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 4420 00:18:48.909 00:18:48.909 Discovery Log Number of Records 2, Generation counter 2 00:18:48.909 =====Discovery Log Entry 0====== 00:18:48.909 trtype: tcp 00:18:48.909 adrfam: ipv4 00:18:48.909 subtype: current discovery subsystem 00:18:48.909 treq: not required 00:18:48.909 portid: 0 00:18:48.909 trsvcid: 4420 00:18:48.909 subnqn: 
nqn.2014-08.org.nvmexpress.discovery 00:18:48.909 traddr: 10.0.0.2 00:18:48.909 eflags: explicit discovery connections, duplicate discovery information 00:18:48.909 sectype: none 00:18:48.909 =====Discovery Log Entry 1====== 00:18:48.909 trtype: tcp 00:18:48.909 adrfam: ipv4 00:18:48.909 subtype: nvme subsystem 00:18:48.909 treq: not required 00:18:48.909 portid: 0 00:18:48.909 trsvcid: 4420 00:18:48.909 subnqn: nqn.2016-06.io.spdk:cnode1 00:18:48.909 traddr: 10.0.0.2 00:18:48.909 eflags: none 00:18:48.909 sectype: none 00:18:48.909 00:56:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:18:48.909 00:56:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:18:48.909 00:56:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:18:48.909 00:56:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:48.909 00:56:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:18:48.909 00:56:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:18:48.909 00:56:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:48.909 00:56:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:18:48.909 00:56:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:48.909 00:56:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:18:48.909 00:56:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:49.475 00:56:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:18:49.475 00:56:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1202 -- # local i=0 00:18:49.475 00:56:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:18:49.475 00:56:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:18:49.475 00:56:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:18:49.475 00:56:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1209 -- # sleep 2 00:18:52.003 00:56:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:18:52.003 00:56:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:18:52.003 00:56:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:18:52.003 00:56:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:18:52.003 00:56:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:18:52.003 00:56:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # return 0 00:18:52.003 00:56:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:18:52.003 00:56:24 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:18:52.003 00:56:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:52.003 00:56:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:18:52.003 00:56:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:18:52.003 00:56:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:52.003 00:56:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:18:52.003 00:56:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:52.003 00:56:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:18:52.003 00:56:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:18:52.003 00:56:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:52.003 00:56:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:18:52.003 00:56:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:18:52.003 00:56:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:52.003 00:56:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n1 00:18:52.003 /dev/nvme0n2 ]] 00:18:52.003 00:56:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:18:52.003 00:56:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:18:52.003 00:56:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:18:52.003 00:56:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:52.003 00:56:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:18:52.003 00:56:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:18:52.003 00:56:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:52.003 00:56:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:18:52.003 00:56:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:52.003 00:56:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:18:52.003 00:56:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:18:52.003 00:56:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:52.003 00:56:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:18:52.003 00:56:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:18:52.003 00:56:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:52.003 00:56:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:18:52.003 00:56:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:52.003 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:52.003 00:56:24 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:18:52.003 00:56:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1223 -- # local i=0 00:18:52.003 00:56:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:18:52.003 00:56:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:52.003 00:56:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:18:52.003 00:56:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:52.003 00:56:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1235 -- # return 0 00:18:52.003 00:56:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:18:52.003 00:56:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:52.003 00:56:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:52.003 00:56:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:52.003 00:56:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:52.003 00:56:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:18:52.003 00:56:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:18:52.003 00:56:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:52.003 00:56:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@121 -- # sync 00:18:52.003 00:56:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:52.003 00:56:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set +e 00:18:52.003 00:56:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:52.003 00:56:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:52.003 rmmod nvme_tcp 00:18:52.003 rmmod nvme_fabrics 00:18:52.003 rmmod nvme_keyring 00:18:52.003 00:56:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:52.003 00:56:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@128 -- # set -e 00:18:52.003 00:56:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@129 -- # return 0 00:18:52.003 00:56:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@517 -- # '[' -n 2960355 ']' 00:18:52.003 00:56:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@518 -- # killprocess 2960355 00:18:52.003 00:56:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # '[' -z 2960355 ']' 00:18:52.003 00:56:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # kill -0 2960355 00:18:52.003 00:56:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # uname 00:18:52.003 00:56:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:52.003 00:56:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 
2960355 00:18:52.003 00:56:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:52.003 00:56:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:52.004 00:56:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2960355' 00:18:52.004 killing process with pid 2960355 00:18:52.004 00:56:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@973 -- # kill 2960355 00:18:52.004 00:56:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@978 -- # wait 2960355 00:18:53.376 00:56:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:53.376 00:56:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:53.376 00:56:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:53.376 00:56:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # iptr 00:18:53.376 00:56:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-save 00:18:53.376 00:56:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:53.376 00:56:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-restore 00:18:53.376 00:56:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:53.376 00:56:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@302 -- # remove_spdk_ns 00:18:53.376 00:56:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:53.376 00:56:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:53.376 00:56:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:55.904 00:56:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:18:55.904 00:18:55.904 real 0m10.397s 00:18:55.904 user 0m22.124s 00:18:55.904 sys 0m2.473s 00:18:55.904 00:56:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:55.904 00:56:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:55.904 ************************************ 00:18:55.904 END TEST nvmf_nvme_cli 00:18:55.904 ************************************ 00:18:55.904 00:56:28 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 0 -eq 1 ]] 00:18:55.904 00:56:28 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:18:55.904 00:56:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:55.904 00:56:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:55.904 00:56:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:55.904 ************************************ 00:18:55.904 START TEST nvmf_auth_target 00:18:55.904 ************************************ 00:18:55.904 00:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 
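(For reference, the nvme_cli test that just finished drove the host side with stock nvme-cli. Condensed to its essentials, with the transport address, subsystem NQN and serial from the log and the --hostnqn/--hostid pair elided, the flow is roughly:)

nvme discover -t tcp -a 10.0.0.2 -s 4420                        # expect two discovery log entries
nvme connect  -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME          # expect 2, one namespace per malloc bdev
nvme disconnect -n nqn.2016-06.io.spdk:cnode1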
00:18:55.904 * Looking for test storage... 00:18:55.904 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:55.904 00:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:18:55.904 00:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # lcov --version 00:18:55.904 00:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:18:55.904 00:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:18:55.904 00:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:55.904 00:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:55.904 00:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:55.904 00:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:18:55.904 00:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:18:55.904 00:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:18:55.904 00:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:18:55.904 00:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:18:55.904 00:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:18:55.904 00:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:18:55.904 00:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:55.904 00:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:18:55.904 00:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:18:55.904 00:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:55.904 00:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:55.904 00:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:18:55.904 00:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:18:55.904 00:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:55.904 00:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:18:55.904 00:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:18:55.904 00:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:18:55.904 00:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:18:55.904 00:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:55.904 00:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:18:55.904 00:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:18:55.904 00:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:55.904 00:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:55.904 00:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:18:55.904 00:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:55.904 00:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:18:55.904 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:55.904 --rc genhtml_branch_coverage=1 00:18:55.904 --rc genhtml_function_coverage=1 00:18:55.904 --rc genhtml_legend=1 00:18:55.904 --rc geninfo_all_blocks=1 00:18:55.904 --rc geninfo_unexecuted_blocks=1 00:18:55.904 00:18:55.904 ' 00:18:55.904 00:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:18:55.904 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:55.904 --rc genhtml_branch_coverage=1 00:18:55.904 --rc genhtml_function_coverage=1 00:18:55.904 --rc genhtml_legend=1 00:18:55.904 --rc geninfo_all_blocks=1 00:18:55.904 --rc geninfo_unexecuted_blocks=1 00:18:55.904 00:18:55.904 ' 00:18:55.904 00:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:18:55.904 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:55.904 --rc genhtml_branch_coverage=1 00:18:55.904 --rc genhtml_function_coverage=1 00:18:55.904 --rc genhtml_legend=1 00:18:55.904 --rc geninfo_all_blocks=1 00:18:55.904 --rc geninfo_unexecuted_blocks=1 00:18:55.905 00:18:55.905 ' 00:18:55.905 00:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:18:55.905 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:55.905 --rc genhtml_branch_coverage=1 00:18:55.905 --rc genhtml_function_coverage=1 00:18:55.905 --rc genhtml_legend=1 00:18:55.905 --rc geninfo_all_blocks=1 00:18:55.905 --rc geninfo_unexecuted_blocks=1 00:18:55.905 00:18:55.905 ' 00:18:55.905 00:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:55.905 00:56:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:18:55.905 00:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:55.905 00:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:55.905 00:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:55.905 00:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:55.905 00:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:55.905 00:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:55.905 00:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:55.905 00:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:55.905 00:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:55.905 00:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:55.905 00:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:55.905 00:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:18:55.905 00:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:55.905 00:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:55.905 00:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:55.905 00:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:55.905 00:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:55.905 00:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:18:55.905 00:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:55.905 00:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:55.905 00:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:55.905 00:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:55.905 00:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:55.905 00:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:55.905 00:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:18:55.905 00:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:55.905 00:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:18:55.905 00:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:55.905 00:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:55.905 00:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:55.905 00:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:55.905 00:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:55.905 00:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:55.905 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:55.905 00:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:55.905 00:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:55.905 00:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:55.905 00:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:18:55.905 00:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # 
dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:18:55.905 00:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:18:55.905 00:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:55.905 00:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:18:55.905 00:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:18:55.905 00:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:18:55.905 00:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:18:55.905 00:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:55.905 00:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:55.905 00:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:55.905 00:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:55.905 00:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:55.905 00:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:55.905 00:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:55.905 00:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:55.905 00:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:18:55.905 00:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:18:55.905 00:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@309 -- # xtrace_disable 00:18:55.905 00:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:57.806 00:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:57.806 00:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # pci_devs=() 00:18:57.806 00:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:18:57.806 00:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:57.806 00:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:57.806 00:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:57.806 00:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:57.806 00:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # net_devs=() 00:18:57.806 00:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:57.806 00:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # e810=() 00:18:57.806 00:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # local -ga e810 00:18:57.806 00:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # x722=() 00:18:57.806 
00:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # local -ga x722 00:18:57.806 00:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # mlx=() 00:18:57.806 00:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # local -ga mlx 00:18:57.806 00:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:57.806 00:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:57.806 00:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:57.806 00:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:57.806 00:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:57.806 00:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:57.806 00:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:57.806 00:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:18:57.806 00:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:57.806 00:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:57.806 00:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:57.806 00:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:57.806 00:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:18:57.806 00:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:18:57.806 00:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:18:57.806 00:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:18:57.806 00:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:18:57.806 00:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:18:57.806 00:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:57.806 00:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:18:57.806 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:18:57.806 00:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:57.806 00:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:57.806 00:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:57.806 00:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:57.806 00:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:57.806 00:56:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:57.806 00:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:18:57.806 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:18:57.806 00:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:57.806 00:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:57.806 00:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:57.806 00:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:57.806 00:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:57.806 00:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:18:57.806 00:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:18:57.806 00:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:18:57.806 00:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:57.806 00:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:57.806 00:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:57.806 00:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:57.806 00:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:57.806 00:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:57.806 00:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:57.806 00:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:18:57.806 Found net devices under 0000:0a:00.0: cvl_0_0 00:18:57.806 00:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:57.806 00:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:57.806 00:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:57.806 00:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:57.806 00:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:57.806 00:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:57.806 00:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:57.806 00:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:57.806 00:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:18:57.806 Found net devices under 0000:0a:00.1: cvl_0_1 00:18:57.806 00:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # 
net_devs+=("${pci_net_devs[@]}") 00:18:57.806 00:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:18:57.806 00:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # is_hw=yes 00:18:57.806 00:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:18:57.806 00:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:18:57.806 00:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:18:57.806 00:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:57.806 00:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:57.806 00:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:57.806 00:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:57.806 00:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:18:57.806 00:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:57.806 00:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:57.806 00:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:18:57.806 00:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:18:57.806 00:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:57.806 00:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:57.806 00:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:18:57.806 00:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:18:57.806 00:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:18:57.806 00:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:57.807 00:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:57.807 00:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:57.807 00:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:18:57.807 00:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:58.065 00:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:58.065 00:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:58.065 00:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:18:58.065 00:56:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:18:58.065 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:58.065 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.223 ms 00:18:58.065 00:18:58.065 --- 10.0.0.2 ping statistics --- 00:18:58.065 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:58.065 rtt min/avg/max/mdev = 0.223/0.223/0.223/0.000 ms 00:18:58.065 00:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:58.065 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:58.065 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.113 ms 00:18:58.065 00:18:58.065 --- 10.0.0.1 ping statistics --- 00:18:58.065 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:58.065 rtt min/avg/max/mdev = 0.113/0.113/0.113/0.000 ms 00:18:58.065 00:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:58.065 00:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@450 -- # return 0 00:18:58.065 00:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:58.065 00:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:58.065 00:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:58.065 00:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:58.065 00:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:58.065 00:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:58.065 00:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:58.065 00:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:18:58.065 00:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:58.065 00:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:58.065 00:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:58.065 00:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=2963008 00:18:58.065 00:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:18:58.065 00:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 2963008 00:18:58.065 00:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 2963008 ']' 00:18:58.065 00:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:58.065 00:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:58.065 00:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
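(The target/initiator split in this run is a single host with one E810 port moved into a network namespace; the nvmf_tcp_init plumbing above, reduced to the commands that matter, with the interface names and addresses from the log:)

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                       # target-side port lives inside the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                             # initiator side stays in the root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT    # open the NVMe/TCP port toward the namespace
ping -c 1 10.0.0.2                                              # root namespace -> target namespace
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                # target namespace -> root namespace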
00:18:58.065 00:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:58.065 00:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:58.999 00:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:58.999 00:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:18:58.999 00:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:58.999 00:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:58.999 00:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:58.999 00:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:58.999 00:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=2963166 00:18:58.999 00:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:18:58.999 00:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:18:58.999 00:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:18:58.999 00:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:18:58.999 00:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:58.999 00:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:18:58.999 00:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=null 00:18:58.999 00:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:18:58.999 00:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:18:58.999 00:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=fe921a00768a67e4ac7d1aef04ff8ab7f7008e02242ee3c4 00:18:58.999 00:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:18:58.999 00:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.Fpo 00:18:58.999 00:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key fe921a00768a67e4ac7d1aef04ff8ab7f7008e02242ee3c4 0 00:18:58.999 00:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 fe921a00768a67e4ac7d1aef04ff8ab7f7008e02242ee3c4 0 00:18:58.999 00:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:18:59.000 00:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:18:59.000 00:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=fe921a00768a67e4ac7d1aef04ff8ab7f7008e02242ee3c4 00:18:59.000 00:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=0 00:18:59.000 00:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 
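(The auth test runs two SPDK processes side by side: the nvmf target above, inside the namespace with -L nvmf_auth tracing, and a second app on its own RPC socket that the test uses as the host/initiator side, with -L nvme_auth. Stripped of the harness, the pair looks roughly like this; binary paths assume an SPDK build tree:)

# Target: NVMe-oF target inside the namespace, target-side DH-HMAC-CHAP tracing enabled
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth &

# Host: a second SPDK app pinned to core 1 (-m 2) with a separate RPC socket;
# the test issues its host-side RPCs over /var/tmp/host.sock
./build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth &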
00:18:59.258 00:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.Fpo 00:18:59.258 00:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.Fpo 00:18:59.258 00:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.Fpo 00:18:59.258 00:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:18:59.258 00:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:18:59.258 00:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:59.258 00:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:18:59.258 00:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:18:59.258 00:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:18:59.258 00:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:18:59.258 00:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=c7bec7250f44646d202bee1472ac7bf6570ac501bb73197ccf8a961913c8bc0e 00:18:59.258 00:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:18:59.258 00:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.28p 00:18:59.258 00:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key c7bec7250f44646d202bee1472ac7bf6570ac501bb73197ccf8a961913c8bc0e 3 00:18:59.258 00:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 c7bec7250f44646d202bee1472ac7bf6570ac501bb73197ccf8a961913c8bc0e 3 00:18:59.258 00:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:18:59.258 00:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:18:59.258 00:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=c7bec7250f44646d202bee1472ac7bf6570ac501bb73197ccf8a961913c8bc0e 00:18:59.258 00:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:18:59.258 00:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:18:59.258 00:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.28p 00:18:59.258 00:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.28p 00:18:59.258 00:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.28p 00:18:59.258 00:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:18:59.258 00:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:18:59.258 00:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:59.259 00:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:18:59.259 00:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 
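(gen_dhchap_key, as traced above, is just random bytes turned into a DHHC-1 secret: xxd supplies the hex, format_dhchap_key, a small python helper in nvmf/common.sh that is not reproduced here, wraps it in the DHHC-1:<digest id>:...: form used for NVMe in-band authentication secrets, and the result lands in a 0600 temp file. A rough sketch of the shell half, assuming nvmf/common.sh has been sourced so the helper is available:)

len=48                                           # hex characters; the sha512 keys above use 64
key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)   # len/2 random bytes -> len hex characters
file=$(mktemp -t spdk.key-null.XXX)
format_dhchap_key "$key" 0 > "$file"             # helper from nvmf/common.sh; 0 is the null-digest id
chmod 0600 "$file"
echo "$file"                                     # the path is what ends up in keys[]/ckeys[]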
00:18:59.259 00:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:18:59.259 00:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:18:59.259 00:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=cc517a69c3e30ed9c9ba451936c57dfe 00:18:59.259 00:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:18:59.259 00:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.E9P 00:18:59.259 00:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key cc517a69c3e30ed9c9ba451936c57dfe 1 00:18:59.259 00:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 cc517a69c3e30ed9c9ba451936c57dfe 1 00:18:59.259 00:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:18:59.259 00:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:18:59.259 00:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=cc517a69c3e30ed9c9ba451936c57dfe 00:18:59.259 00:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:18:59.259 00:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:18:59.259 00:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.E9P 00:18:59.259 00:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.E9P 00:18:59.259 00:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.E9P 00:18:59.259 00:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:18:59.259 00:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:18:59.259 00:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:59.259 00:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:18:59.259 00:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:18:59.259 00:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:18:59.259 00:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:18:59.259 00:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=6d3d6a5a118098e9418677f63a23c3dafed4a71286213790 00:18:59.259 00:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:18:59.259 00:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.cuC 00:18:59.259 00:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 6d3d6a5a118098e9418677f63a23c3dafed4a71286213790 2 00:18:59.259 00:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 6d3d6a5a118098e9418677f63a23c3dafed4a71286213790 2 00:18:59.259 00:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:18:59.259 00:56:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:18:59.259 00:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=6d3d6a5a118098e9418677f63a23c3dafed4a71286213790 00:18:59.259 00:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:18:59.259 00:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:18:59.259 00:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.cuC 00:18:59.259 00:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.cuC 00:18:59.259 00:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.cuC 00:18:59.259 00:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:18:59.259 00:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:18:59.259 00:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:59.259 00:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:18:59.259 00:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:18:59.259 00:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:18:59.259 00:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:18:59.259 00:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=43f517b6cf2eb245482a7355ca63035062002d3ef946a2ee 00:18:59.259 00:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:18:59.259 00:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.qHr 00:18:59.259 00:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 43f517b6cf2eb245482a7355ca63035062002d3ef946a2ee 2 00:18:59.259 00:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 43f517b6cf2eb245482a7355ca63035062002d3ef946a2ee 2 00:18:59.259 00:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:18:59.259 00:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:18:59.259 00:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=43f517b6cf2eb245482a7355ca63035062002d3ef946a2ee 00:18:59.259 00:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:18:59.259 00:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:18:59.259 00:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.qHr 00:18:59.259 00:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.qHr 00:18:59.259 00:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # keys[2]=/tmp/spdk.key-sha384.qHr 00:18:59.259 00:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:18:59.259 00:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 
00:18:59.259 00:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:59.259 00:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:18:59.259 00:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:18:59.259 00:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:18:59.259 00:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:18:59.259 00:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=00039a54ad17ea7a3a62ab7495269fcf 00:18:59.259 00:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:18:59.259 00:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.jpR 00:18:59.259 00:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 00039a54ad17ea7a3a62ab7495269fcf 1 00:18:59.259 00:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 00039a54ad17ea7a3a62ab7495269fcf 1 00:18:59.259 00:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:18:59.259 00:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:18:59.259 00:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=00039a54ad17ea7a3a62ab7495269fcf 00:18:59.259 00:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:18:59.259 00:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:18:59.518 00:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.jpR 00:18:59.518 00:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.jpR 00:18:59.518 00:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.jpR 00:18:59.518 00:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:18:59.518 00:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:18:59.518 00:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:59.518 00:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:18:59.518 00:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:18:59.518 00:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:18:59.518 00:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:18:59.518 00:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=2302415e11fd181fe1356ce97b2fc7107aa2c7018e82e50f4fedd8e81d19362c 00:18:59.518 00:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:18:59.518 00:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.isS 00:18:59.518 00:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # 
format_dhchap_key 2302415e11fd181fe1356ce97b2fc7107aa2c7018e82e50f4fedd8e81d19362c 3 00:18:59.518 00:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 2302415e11fd181fe1356ce97b2fc7107aa2c7018e82e50f4fedd8e81d19362c 3 00:18:59.518 00:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:18:59.518 00:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:18:59.518 00:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=2302415e11fd181fe1356ce97b2fc7107aa2c7018e82e50f4fedd8e81d19362c 00:18:59.518 00:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:18:59.518 00:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:18:59.518 00:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.isS 00:18:59.518 00:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.isS 00:18:59.518 00:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.isS 00:18:59.518 00:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:18:59.518 00:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 2963008 00:18:59.518 00:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 2963008 ']' 00:18:59.518 00:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:59.518 00:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:59.518 00:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:59.518 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:59.518 00:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:59.518 00:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:59.777 00:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:59.777 00:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:18:59.777 00:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 2963166 /var/tmp/host.sock 00:18:59.777 00:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 2963166 ']' 00:18:59.777 00:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:18:59.777 00:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:59.777 00:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:18:59.777 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
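For reference, the gen_dhchap_key/format_dhchap_key flow traced above (nvmf/common.sh) draws len/2 random bytes from /dev/urandom as a hex string, wraps that ASCII string in a DHHC-1 secret, writes it to a mktemp file and restricts it to mode 0600. The sketch below condenses that flow; the ids map and the inline python3 one-liner stand in for the helper's unseen heredoc, and the exact encoding (base64 of the ASCII key followed by a little-endian CRC-32, hash ids null=0/sha256=1/sha384=2/sha512=3) is inferred from the digests map above and the DHHC-1 strings that appear later in this log, so treat it as an approximation rather than the canonical helper.

# Sketch of the key-generation flow seen in the trace above (assumptions noted inline).
gen_dhchap_key() {
    local digest=$1 len=$2 key file
    local -A ids=([null]=0 [sha256]=1 [sha384]=2 [sha512]=3)   # from the digests map above

    # len is the number of hex characters, so read len/2 random bytes (cf. "xxd -p -c0 -l 32" for len=64)
    key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)

    file=$(mktemp -t "spdk.key-${digest}.XXX")
    # DHHC-1:<hash id>:<base64(ASCII key + CRC32 LE)>:  -- assumed encoding, consistent
    # with the DHHC-1 secrets printed further down in this log
    python3 -c 'import base64, sys, zlib
k = sys.argv[1].encode(); h = int(sys.argv[2])
crc = zlib.crc32(k).to_bytes(4, "little")
print(f"DHHC-1:{h:02x}:{base64.b64encode(k + crc).decode()}:", end="")' \
        "$key" "${ids[$digest]}" > "$file"
    chmod 0600 "$file"
    echo "$file"
}

# e.g. the sha512/64 call used for ckeys[0] above:
gen_dhchap_key sha512 64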
00:18:59.777 00:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:59.777 00:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:00.343 00:56:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:00.343 00:56:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:19:00.343 00:56:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:19:00.343 00:56:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:00.343 00:56:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:00.343 00:56:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:00.343 00:56:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:19:00.343 00:56:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.Fpo 00:19:00.343 00:56:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:00.343 00:56:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:00.343 00:56:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:00.343 00:56:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.Fpo 00:19:00.343 00:56:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.Fpo 00:19:00.908 00:56:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha512.28p ]] 00:19:00.908 00:56:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.28p 00:19:00.908 00:56:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:00.908 00:56:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:00.908 00:56:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:00.908 00:56:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.28p 00:19:00.908 00:56:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.28p 00:19:00.908 00:56:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:19:00.908 00:56:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.E9P 00:19:00.908 00:56:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:00.908 00:56:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:00.908 00:56:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:00.908 00:56:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.E9P 00:19:00.908 00:56:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.E9P 00:19:01.475 00:56:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha384.cuC ]] 00:19:01.475 00:56:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.cuC 00:19:01.475 00:56:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:01.475 00:56:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:01.475 00:56:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:01.475 00:56:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.cuC 00:19:01.475 00:56:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.cuC 00:19:01.475 00:56:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:19:01.475 00:56:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.qHr 00:19:01.475 00:56:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:01.475 00:56:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:01.475 00:56:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:01.475 00:56:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.qHr 00:19:01.475 00:56:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.qHr 00:19:01.733 00:56:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.jpR ]] 00:19:01.733 00:56:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.jpR 00:19:01.733 00:56:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:01.733 00:56:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:01.990 00:56:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:01.990 00:56:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.jpR 00:19:01.990 00:56:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.jpR 00:19:02.249 00:56:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:19:02.249 00:56:34 
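Each generated secret is then registered twice: once with the target's default RPC socket (rpc_cmd, /var/tmp/spdk.sock) and once with the host application behind /var/tmp/host.sock (hostrpc), under the key0..key3/ckey0..ckey2 names that later steps refer to (key3 follows the same pattern just below). A sketch of that registration loop, using the key-file paths from this particular run; the RPC shorthand variable is introduced here for brevity:

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
keys=(/tmp/spdk.key-null.Fpo /tmp/spdk.key-sha256.E9P /tmp/spdk.key-sha384.qHr /tmp/spdk.key-sha512.isS)
ckeys=(/tmp/spdk.key-sha512.28p /tmp/spdk.key-sha384.cuC /tmp/spdk.key-sha256.jpR "")

for i in "${!keys[@]}"; do
    # target side (default /var/tmp/spdk.sock)
    "$RPC" keyring_file_add_key "key$i" "${keys[$i]}"
    # host side (the initiator app behind /var/tmp/host.sock)
    "$RPC" -s /var/tmp/host.sock keyring_file_add_key "key$i" "${keys[$i]}"
    if [[ -n ${ckeys[$i]} ]]; then
        "$RPC" keyring_file_add_key "ckey$i" "${ckeys[$i]}"
        "$RPC" -s /var/tmp/host.sock keyring_file_add_key "ckey$i" "${ckeys[$i]}"
    fi
done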
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.isS 00:19:02.249 00:56:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:02.249 00:56:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:02.249 00:56:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:02.249 00:56:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.isS 00:19:02.249 00:56:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.isS 00:19:02.507 00:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:19:02.507 00:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:19:02.507 00:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:02.507 00:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:02.507 00:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:02.507 00:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:02.764 00:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:19:02.764 00:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:02.764 00:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:02.764 00:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:19:02.764 00:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:02.764 00:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:02.764 00:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:02.764 00:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:02.764 00:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:02.764 00:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:02.764 00:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:02.764 00:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:02.764 
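Before each connection attempt the host is pinned to a single digest/dhgroup combination via bdev_nvme_set_options, and the target is told which DH-HMAC-CHAP key the host NQN must present via nvmf_subsystem_add_host; supplying --dhchap-ctrlr-key as well makes the authentication bidirectional (the target proves itself back to the host). For the first iteration above (sha256 digest, null dhgroup, key0/ckey0) that amounts to the following, reusing the $RPC shorthand from the previous sketch:

HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55

# host side: only negotiate sha256 with the "null" (no Diffie-Hellman) group
"$RPC" -s /var/tmp/host.sock bdev_nvme_set_options \
    --dhchap-digests sha256 --dhchap-dhgroups null

# target side: require key0 from this host and present ckey0 back to it
"$RPC" nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$HOSTNQN" \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0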
00:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:03.021 00:19:03.021 00:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:03.021 00:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:03.021 00:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:03.278 00:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:03.278 00:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:03.278 00:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.278 00:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:03.278 00:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.278 00:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:03.278 { 00:19:03.278 "cntlid": 1, 00:19:03.278 "qid": 0, 00:19:03.278 "state": "enabled", 00:19:03.278 "thread": "nvmf_tgt_poll_group_000", 00:19:03.278 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:03.278 "listen_address": { 00:19:03.278 "trtype": "TCP", 00:19:03.278 "adrfam": "IPv4", 00:19:03.278 "traddr": "10.0.0.2", 00:19:03.278 "trsvcid": "4420" 00:19:03.278 }, 00:19:03.278 "peer_address": { 00:19:03.278 "trtype": "TCP", 00:19:03.278 "adrfam": "IPv4", 00:19:03.278 "traddr": "10.0.0.1", 00:19:03.278 "trsvcid": "40884" 00:19:03.278 }, 00:19:03.278 "auth": { 00:19:03.278 "state": "completed", 00:19:03.278 "digest": "sha256", 00:19:03.278 "dhgroup": "null" 00:19:03.278 } 00:19:03.278 } 00:19:03.278 ]' 00:19:03.278 00:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:03.278 00:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:03.278 00:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:03.536 00:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:03.536 00:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:03.536 00:56:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:03.536 00:56:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:03.536 00:56:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:03.793 00:56:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
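The connect_authenticate helper then attaches an SPDK host controller with the matching key pair and asserts, via nvmf_subsystem_get_qpairs on the target, that the qpair's auth block reports the expected digest, dhgroup and a "completed" state, exactly as the JSON above shows for cntlid 1. A sketch of that attach-and-verify step (same $RPC/$HOSTNQN shorthand as above):

# host side: create bdev controller "nvme0" authenticated with key0/ckey0
"$RPC" -s /var/tmp/host.sock bdev_nvme_attach_controller \
    -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -q "$HOSTNQN" -n nqn.2024-03.io.spdk:cnode0 -b nvme0 \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0

# target side: the new qpair must have authenticated with the expected parameters
qpairs=$("$RPC" nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha256 ]]
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == null ]]
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]

# tear the host controller back down before the next combination
"$RPC" -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0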
DHHC-1:00:ZmU5MjFhMDA3NjhhNjdlNGFjN2QxYWVmMDRmZjhhYjdmNzAwOGUwMjI0MmVlM2M07bZ6cg==: --dhchap-ctrl-secret DHHC-1:03:YzdiZWM3MjUwZjQ0NjQ2ZDIwMmJlZTE0NzJhYzdiZjY1NzBhYzUwMWJiNzMxOTdjY2Y4YTk2MTkxM2M4YmMwZW02nTI=: 00:19:03.793 00:56:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:ZmU5MjFhMDA3NjhhNjdlNGFjN2QxYWVmMDRmZjhhYjdmNzAwOGUwMjI0MmVlM2M07bZ6cg==: --dhchap-ctrl-secret DHHC-1:03:YzdiZWM3MjUwZjQ0NjQ2ZDIwMmJlZTE0NzJhYzdiZjY1NzBhYzUwMWJiNzMxOTdjY2Y4YTk2MTkxM2M4YmMwZW02nTI=: 00:19:04.726 00:56:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:04.726 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:04.726 00:56:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:04.726 00:56:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:04.726 00:56:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:04.726 00:56:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:04.726 00:56:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:04.726 00:56:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:04.726 00:56:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:04.984 00:56:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:19:04.984 00:56:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:04.984 00:56:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:04.984 00:56:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:19:04.984 00:56:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:04.984 00:56:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:04.984 00:56:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:04.984 00:56:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:04.984 00:56:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:04.984 00:56:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:04.984 00:56:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:04.984 00:56:37 
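The same key pair is also exercised through the kernel initiator: nvme-cli is handed the DHHC-1 strings directly (the literal contents of the generated key files, as seen in the trace above), connects to cnode0, and is disconnected again before the host entry is removed from the subsystem. In the sketch below the $(cat ...) substitutions stand in for those literal strings, and HOSTID is the hostid value used throughout this run:

HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55

nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
    -q "$HOSTNQN" --hostid "$HOSTID" -l 0 \
    --dhchap-secret      "$(cat /tmp/spdk.key-null.Fpo)" \
    --dhchap-ctrl-secret "$(cat /tmp/spdk.key-sha512.28p)"

nvme disconnect -n nqn.2024-03.io.spdk:cnode0

# drop the host authorization before moving on to the next key index
"$RPC" nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 "$HOSTNQN"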
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:04.984 00:56:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:05.241 00:19:05.241 00:56:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:05.242 00:56:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:05.242 00:56:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:05.500 00:56:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:05.500 00:56:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:05.500 00:56:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:05.500 00:56:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:05.500 00:56:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:05.500 00:56:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:05.500 { 00:19:05.500 "cntlid": 3, 00:19:05.500 "qid": 0, 00:19:05.500 "state": "enabled", 00:19:05.500 "thread": "nvmf_tgt_poll_group_000", 00:19:05.500 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:05.500 "listen_address": { 00:19:05.500 "trtype": "TCP", 00:19:05.500 "adrfam": "IPv4", 00:19:05.500 "traddr": "10.0.0.2", 00:19:05.500 "trsvcid": "4420" 00:19:05.500 }, 00:19:05.500 "peer_address": { 00:19:05.500 "trtype": "TCP", 00:19:05.500 "adrfam": "IPv4", 00:19:05.500 "traddr": "10.0.0.1", 00:19:05.500 "trsvcid": "40898" 00:19:05.500 }, 00:19:05.500 "auth": { 00:19:05.500 "state": "completed", 00:19:05.500 "digest": "sha256", 00:19:05.500 "dhgroup": "null" 00:19:05.500 } 00:19:05.500 } 00:19:05.500 ]' 00:19:05.500 00:56:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:05.759 00:56:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:05.759 00:56:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:05.759 00:56:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:05.759 00:56:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:05.759 00:56:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:05.759 00:56:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:05.759 00:56:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:06.017 00:56:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Y2M1MTdhNjljM2UzMGVkOWM5YmE0NTE5MzZjNTdkZmU/iWjd: --dhchap-ctrl-secret DHHC-1:02:NmQzZDZhNWExMTgwOThlOTQxODY3N2Y2M2EyM2MzZGFmZWQ0YTcxMjg2MjEzNzkwNV1u7g==: 00:19:06.017 00:56:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:Y2M1MTdhNjljM2UzMGVkOWM5YmE0NTE5MzZjNTdkZmU/iWjd: --dhchap-ctrl-secret DHHC-1:02:NmQzZDZhNWExMTgwOThlOTQxODY3N2Y2M2EyM2MzZGFmZWQ0YTcxMjg2MjEzNzkwNV1u7g==: 00:19:06.949 00:56:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:06.949 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:06.949 00:56:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:06.949 00:56:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:06.949 00:56:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:06.949 00:56:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:06.949 00:56:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:06.949 00:56:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:06.949 00:56:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:07.205 00:56:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:19:07.205 00:56:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:07.205 00:56:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:07.205 00:56:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:19:07.205 00:56:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:07.205 00:56:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:07.205 00:56:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:07.205 00:56:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:07.205 00:56:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:07.205 00:56:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:07.205 00:56:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:07.205 00:56:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:07.206 00:56:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:07.770 00:19:07.770 00:56:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:07.770 00:56:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:07.770 00:56:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:08.029 00:56:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:08.029 00:56:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:08.029 00:56:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:08.029 00:56:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:08.029 00:56:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:08.029 00:56:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:08.029 { 00:19:08.029 "cntlid": 5, 00:19:08.029 "qid": 0, 00:19:08.029 "state": "enabled", 00:19:08.029 "thread": "nvmf_tgt_poll_group_000", 00:19:08.029 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:08.029 "listen_address": { 00:19:08.029 "trtype": "TCP", 00:19:08.029 "adrfam": "IPv4", 00:19:08.029 "traddr": "10.0.0.2", 00:19:08.029 "trsvcid": "4420" 00:19:08.029 }, 00:19:08.029 "peer_address": { 00:19:08.029 "trtype": "TCP", 00:19:08.029 "adrfam": "IPv4", 00:19:08.029 "traddr": "10.0.0.1", 00:19:08.029 "trsvcid": "38468" 00:19:08.029 }, 00:19:08.029 "auth": { 00:19:08.029 "state": "completed", 00:19:08.029 "digest": "sha256", 00:19:08.029 "dhgroup": "null" 00:19:08.029 } 00:19:08.029 } 00:19:08.029 ]' 00:19:08.029 00:56:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:08.029 00:56:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:08.029 00:56:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:08.029 00:56:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:08.029 00:56:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:08.029 00:56:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:08.029 00:56:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:08.029 00:56:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:08.288 00:56:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NDNmNTE3YjZjZjJlYjI0NTQ4MmE3MzU1Y2E2MzAzNTA2MjAwMmQzZWY5NDZhMmVlbtmSCQ==: --dhchap-ctrl-secret DHHC-1:01:MDAwMzlhNTRhZDE3ZWE3YTNhNjJhYjc0OTUyNjlmY2YFyuVj: 00:19:08.288 00:56:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:NDNmNTE3YjZjZjJlYjI0NTQ4MmE3MzU1Y2E2MzAzNTA2MjAwMmQzZWY5NDZhMmVlbtmSCQ==: --dhchap-ctrl-secret DHHC-1:01:MDAwMzlhNTRhZDE3ZWE3YTNhNjJhYjc0OTUyNjlmY2YFyuVj: 00:19:09.219 00:56:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:09.219 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:09.219 00:56:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:09.219 00:56:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:09.219 00:56:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:09.219 00:56:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:09.219 00:56:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:09.219 00:56:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:09.219 00:56:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:09.476 00:56:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:19:09.476 00:56:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:09.476 00:56:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:09.476 00:56:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:19:09.476 00:56:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:09.476 00:56:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:09.476 00:56:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:19:09.476 00:56:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:09.476 00:56:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:19:09.476 00:56:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:09.476 00:56:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:09.476 00:56:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:09.476 00:56:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:10.042 00:19:10.042 00:56:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:10.042 00:56:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:10.042 00:56:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:10.300 00:56:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:10.300 00:56:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:10.300 00:56:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:10.300 00:56:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:10.300 00:56:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:10.300 00:56:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:10.300 { 00:19:10.300 "cntlid": 7, 00:19:10.300 "qid": 0, 00:19:10.300 "state": "enabled", 00:19:10.300 "thread": "nvmf_tgt_poll_group_000", 00:19:10.300 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:10.300 "listen_address": { 00:19:10.300 "trtype": "TCP", 00:19:10.301 "adrfam": "IPv4", 00:19:10.301 "traddr": "10.0.0.2", 00:19:10.301 "trsvcid": "4420" 00:19:10.301 }, 00:19:10.301 "peer_address": { 00:19:10.301 "trtype": "TCP", 00:19:10.301 "adrfam": "IPv4", 00:19:10.301 "traddr": "10.0.0.1", 00:19:10.301 "trsvcid": "38498" 00:19:10.301 }, 00:19:10.301 "auth": { 00:19:10.301 "state": "completed", 00:19:10.301 "digest": "sha256", 00:19:10.301 "dhgroup": "null" 00:19:10.301 } 00:19:10.301 } 00:19:10.301 ]' 00:19:10.301 00:56:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:10.301 00:56:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:10.301 00:56:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:10.301 00:56:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:10.301 00:56:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:10.301 00:56:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:10.301 00:56:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:10.301 00:56:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:10.558 00:56:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MjMwMjQxNWUxMWZkMTgxZmUxMzU2Y2U5N2IyZmM3MTA3YWEyYzcwMThlODJlNTBmNGZlZGQ4ZTgxZDE5MzYyY9ngkKk=: 00:19:10.559 00:56:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:MjMwMjQxNWUxMWZkMTgxZmUxMzU2Y2U5N2IyZmM3MTA3YWEyYzcwMThlODJlNTBmNGZlZGQ4ZTgxZDE5MzYyY9ngkKk=: 00:19:11.552 00:56:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:11.552 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:11.552 00:56:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:11.552 00:56:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.552 00:56:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:11.552 00:56:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.552 00:56:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:11.552 00:56:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:11.552 00:56:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:11.552 00:56:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:11.847 00:56:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:19:11.847 00:56:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:11.847 00:56:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:11.847 00:56:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:19:11.847 00:56:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:11.847 00:56:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:11.847 00:56:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:11.847 00:56:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
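From this point the log repeats the same configure/connect/verify/disconnect cycle for every remaining combination: the dhgroup has just switched from null to ffdhe2048 while key0..key3 are exercised in turn (key3 without a controller key, since ckeys[3] is empty above). The driving structure in target/auth.sh is effectively the triple loop sketched below; only sha256 and the null/ffdhe2048 groups are visible in this excerpt, so the full contents of the digests and dhgroups arrays (defined earlier in the script) are not shown here:

for digest in "${digests[@]}"; do          # sha256 in this excerpt
    for dhgroup in "${dhgroups[@]}"; do    # null, ffdhe2048, ... per the array defined earlier
        for keyid in "${!keys[@]}"; do     # 0..3
            "$RPC" -s /var/tmp/host.sock bdev_nvme_set_options \
                --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
            connect_authenticate "$digest" "$dhgroup" "$keyid"
        done
    done
done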
common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.847 00:56:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:11.847 00:56:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.847 00:56:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:11.847 00:56:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:11.847 00:56:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:12.105 00:19:12.105 00:56:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:12.105 00:56:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:12.105 00:56:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:12.363 00:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:12.363 00:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:12.363 00:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:12.363 00:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:12.363 00:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:12.363 00:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:12.363 { 00:19:12.363 "cntlid": 9, 00:19:12.363 "qid": 0, 00:19:12.363 "state": "enabled", 00:19:12.363 "thread": "nvmf_tgt_poll_group_000", 00:19:12.363 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:12.363 "listen_address": { 00:19:12.363 "trtype": "TCP", 00:19:12.363 "adrfam": "IPv4", 00:19:12.363 "traddr": "10.0.0.2", 00:19:12.363 "trsvcid": "4420" 00:19:12.363 }, 00:19:12.363 "peer_address": { 00:19:12.363 "trtype": "TCP", 00:19:12.363 "adrfam": "IPv4", 00:19:12.363 "traddr": "10.0.0.1", 00:19:12.363 "trsvcid": "38520" 00:19:12.363 }, 00:19:12.363 "auth": { 00:19:12.363 "state": "completed", 00:19:12.363 "digest": "sha256", 00:19:12.363 "dhgroup": "ffdhe2048" 00:19:12.363 } 00:19:12.363 } 00:19:12.363 ]' 00:19:12.363 00:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:12.621 00:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:12.621 00:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:12.621 00:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == 
\f\f\d\h\e\2\0\4\8 ]] 00:19:12.621 00:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:12.621 00:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:12.621 00:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:12.621 00:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:12.879 00:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZmU5MjFhMDA3NjhhNjdlNGFjN2QxYWVmMDRmZjhhYjdmNzAwOGUwMjI0MmVlM2M07bZ6cg==: --dhchap-ctrl-secret DHHC-1:03:YzdiZWM3MjUwZjQ0NjQ2ZDIwMmJlZTE0NzJhYzdiZjY1NzBhYzUwMWJiNzMxOTdjY2Y4YTk2MTkxM2M4YmMwZW02nTI=: 00:19:12.879 00:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:ZmU5MjFhMDA3NjhhNjdlNGFjN2QxYWVmMDRmZjhhYjdmNzAwOGUwMjI0MmVlM2M07bZ6cg==: --dhchap-ctrl-secret DHHC-1:03:YzdiZWM3MjUwZjQ0NjQ2ZDIwMmJlZTE0NzJhYzdiZjY1NzBhYzUwMWJiNzMxOTdjY2Y4YTk2MTkxM2M4YmMwZW02nTI=: 00:19:13.811 00:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:13.811 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:13.811 00:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:13.811 00:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:13.811 00:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:13.811 00:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:13.811 00:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:13.811 00:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:13.811 00:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:14.070 00:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:19:14.070 00:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:14.070 00:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:14.070 00:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:19:14.070 00:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:14.070 00:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:14.070 00:56:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:14.070 00:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.070 00:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:14.070 00:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.070 00:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:14.070 00:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:14.070 00:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:14.328 00:19:14.328 00:56:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:14.328 00:56:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:14.328 00:56:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:14.585 00:56:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:14.585 00:56:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:14.585 00:56:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.585 00:56:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:14.843 00:56:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.843 00:56:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:14.843 { 00:19:14.843 "cntlid": 11, 00:19:14.843 "qid": 0, 00:19:14.843 "state": "enabled", 00:19:14.843 "thread": "nvmf_tgt_poll_group_000", 00:19:14.843 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:14.843 "listen_address": { 00:19:14.843 "trtype": "TCP", 00:19:14.843 "adrfam": "IPv4", 00:19:14.843 "traddr": "10.0.0.2", 00:19:14.843 "trsvcid": "4420" 00:19:14.843 }, 00:19:14.843 "peer_address": { 00:19:14.843 "trtype": "TCP", 00:19:14.843 "adrfam": "IPv4", 00:19:14.843 "traddr": "10.0.0.1", 00:19:14.843 "trsvcid": "38560" 00:19:14.843 }, 00:19:14.843 "auth": { 00:19:14.843 "state": "completed", 00:19:14.843 "digest": "sha256", 00:19:14.843 "dhgroup": "ffdhe2048" 00:19:14.843 } 00:19:14.843 } 00:19:14.843 ]' 00:19:14.843 00:56:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:14.843 00:56:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:14.843 00:56:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:14.843 00:56:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:14.843 00:56:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:14.843 00:56:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:14.843 00:56:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:14.843 00:56:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:15.100 00:56:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Y2M1MTdhNjljM2UzMGVkOWM5YmE0NTE5MzZjNTdkZmU/iWjd: --dhchap-ctrl-secret DHHC-1:02:NmQzZDZhNWExMTgwOThlOTQxODY3N2Y2M2EyM2MzZGFmZWQ0YTcxMjg2MjEzNzkwNV1u7g==: 00:19:15.101 00:56:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:Y2M1MTdhNjljM2UzMGVkOWM5YmE0NTE5MzZjNTdkZmU/iWjd: --dhchap-ctrl-secret DHHC-1:02:NmQzZDZhNWExMTgwOThlOTQxODY3N2Y2M2EyM2MzZGFmZWQ0YTcxMjg2MjEzNzkwNV1u7g==: 00:19:16.032 00:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:16.032 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:16.032 00:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:16.032 00:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:16.032 00:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:16.032 00:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:16.032 00:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:16.032 00:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:16.032 00:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:16.290 00:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:19:16.290 00:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:16.290 00:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:16.290 00:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:19:16.290 00:56:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:16.290 00:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:16.290 00:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:16.290 00:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:16.290 00:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:16.290 00:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:16.290 00:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:16.290 00:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:16.290 00:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:16.548 00:19:16.548 00:56:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:16.548 00:56:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:16.548 00:56:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:17.114 00:56:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:17.114 00:56:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:17.114 00:56:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.114 00:56:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:17.114 00:56:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.114 00:56:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:17.114 { 00:19:17.114 "cntlid": 13, 00:19:17.114 "qid": 0, 00:19:17.114 "state": "enabled", 00:19:17.114 "thread": "nvmf_tgt_poll_group_000", 00:19:17.114 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:17.114 "listen_address": { 00:19:17.114 "trtype": "TCP", 00:19:17.114 "adrfam": "IPv4", 00:19:17.114 "traddr": "10.0.0.2", 00:19:17.114 "trsvcid": "4420" 00:19:17.114 }, 00:19:17.114 "peer_address": { 00:19:17.114 "trtype": "TCP", 00:19:17.114 "adrfam": "IPv4", 00:19:17.114 "traddr": "10.0.0.1", 00:19:17.114 "trsvcid": "38586" 00:19:17.114 }, 00:19:17.114 "auth": { 00:19:17.114 "state": "completed", 00:19:17.114 "digest": 
"sha256", 00:19:17.114 "dhgroup": "ffdhe2048" 00:19:17.114 } 00:19:17.114 } 00:19:17.114 ]' 00:19:17.114 00:56:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:17.114 00:56:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:17.114 00:56:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:17.114 00:56:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:17.114 00:56:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:17.114 00:56:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:17.114 00:56:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:17.114 00:56:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:17.373 00:56:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NDNmNTE3YjZjZjJlYjI0NTQ4MmE3MzU1Y2E2MzAzNTA2MjAwMmQzZWY5NDZhMmVlbtmSCQ==: --dhchap-ctrl-secret DHHC-1:01:MDAwMzlhNTRhZDE3ZWE3YTNhNjJhYjc0OTUyNjlmY2YFyuVj: 00:19:17.373 00:56:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:NDNmNTE3YjZjZjJlYjI0NTQ4MmE3MzU1Y2E2MzAzNTA2MjAwMmQzZWY5NDZhMmVlbtmSCQ==: --dhchap-ctrl-secret DHHC-1:01:MDAwMzlhNTRhZDE3ZWE3YTNhNjJhYjc0OTUyNjlmY2YFyuVj: 00:19:18.307 00:56:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:18.307 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:18.307 00:56:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:18.307 00:56:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:18.307 00:56:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:18.307 00:56:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:18.307 00:56:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:18.307 00:56:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:18.307 00:56:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:18.565 00:56:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:19:18.565 00:56:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:18.565 00:56:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:18.565 00:56:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:19:18.565 00:56:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:18.565 00:56:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:18.565 00:56:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:19:18.565 00:56:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:18.565 00:56:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:18.565 00:56:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:18.565 00:56:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:18.565 00:56:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:18.565 00:56:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:19.130 00:19:19.130 00:56:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:19.130 00:56:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:19.130 00:56:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:19.388 00:56:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:19.388 00:56:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:19.388 00:56:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.388 00:56:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:19.388 00:56:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.388 00:56:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:19.388 { 00:19:19.388 "cntlid": 15, 00:19:19.388 "qid": 0, 00:19:19.388 "state": "enabled", 00:19:19.388 "thread": "nvmf_tgt_poll_group_000", 00:19:19.388 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:19.388 "listen_address": { 00:19:19.388 "trtype": "TCP", 00:19:19.388 "adrfam": "IPv4", 00:19:19.388 "traddr": "10.0.0.2", 00:19:19.388 "trsvcid": "4420" 00:19:19.388 }, 00:19:19.388 "peer_address": { 00:19:19.388 "trtype": "TCP", 00:19:19.388 "adrfam": "IPv4", 00:19:19.388 "traddr": "10.0.0.1", 00:19:19.388 
"trsvcid": "40370" 00:19:19.388 }, 00:19:19.388 "auth": { 00:19:19.388 "state": "completed", 00:19:19.388 "digest": "sha256", 00:19:19.388 "dhgroup": "ffdhe2048" 00:19:19.388 } 00:19:19.388 } 00:19:19.388 ]' 00:19:19.388 00:56:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:19.388 00:56:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:19.388 00:56:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:19.388 00:56:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:19.388 00:56:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:19.388 00:56:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:19.388 00:56:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:19.388 00:56:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:19.954 00:56:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MjMwMjQxNWUxMWZkMTgxZmUxMzU2Y2U5N2IyZmM3MTA3YWEyYzcwMThlODJlNTBmNGZlZGQ4ZTgxZDE5MzYyY9ngkKk=: 00:19:19.954 00:56:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:MjMwMjQxNWUxMWZkMTgxZmUxMzU2Y2U5N2IyZmM3MTA3YWEyYzcwMThlODJlNTBmNGZlZGQ4ZTgxZDE5MzYyY9ngkKk=: 00:19:20.887 00:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:20.887 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:20.887 00:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:20.887 00:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.887 00:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:20.887 00:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:20.887 00:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:20.887 00:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:20.887 00:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:20.887 00:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:20.888 00:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:19:20.888 00:56:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:20.888 00:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:20.888 00:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:19:20.888 00:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:20.888 00:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:20.888 00:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:20.888 00:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.888 00:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:21.145 00:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.145 00:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:21.145 00:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:21.145 00:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:21.404 00:19:21.404 00:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:21.404 00:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:21.404 00:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:21.663 00:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:21.663 00:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:21.663 00:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.663 00:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:21.663 00:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.663 00:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:21.663 { 00:19:21.663 "cntlid": 17, 00:19:21.663 "qid": 0, 00:19:21.663 "state": "enabled", 00:19:21.663 "thread": "nvmf_tgt_poll_group_000", 00:19:21.663 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:21.663 "listen_address": { 00:19:21.663 "trtype": "TCP", 00:19:21.663 "adrfam": "IPv4", 
00:19:21.663 "traddr": "10.0.0.2", 00:19:21.663 "trsvcid": "4420" 00:19:21.663 }, 00:19:21.663 "peer_address": { 00:19:21.663 "trtype": "TCP", 00:19:21.663 "adrfam": "IPv4", 00:19:21.663 "traddr": "10.0.0.1", 00:19:21.663 "trsvcid": "40400" 00:19:21.663 }, 00:19:21.663 "auth": { 00:19:21.663 "state": "completed", 00:19:21.663 "digest": "sha256", 00:19:21.663 "dhgroup": "ffdhe3072" 00:19:21.663 } 00:19:21.663 } 00:19:21.663 ]' 00:19:21.663 00:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:21.663 00:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:21.663 00:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:21.663 00:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:21.663 00:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:21.663 00:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:21.663 00:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:21.663 00:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:22.228 00:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZmU5MjFhMDA3NjhhNjdlNGFjN2QxYWVmMDRmZjhhYjdmNzAwOGUwMjI0MmVlM2M07bZ6cg==: --dhchap-ctrl-secret DHHC-1:03:YzdiZWM3MjUwZjQ0NjQ2ZDIwMmJlZTE0NzJhYzdiZjY1NzBhYzUwMWJiNzMxOTdjY2Y4YTk2MTkxM2M4YmMwZW02nTI=: 00:19:22.228 00:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:ZmU5MjFhMDA3NjhhNjdlNGFjN2QxYWVmMDRmZjhhYjdmNzAwOGUwMjI0MmVlM2M07bZ6cg==: --dhchap-ctrl-secret DHHC-1:03:YzdiZWM3MjUwZjQ0NjQ2ZDIwMmJlZTE0NzJhYzdiZjY1NzBhYzUwMWJiNzMxOTdjY2Y4YTk2MTkxM2M4YmMwZW02nTI=: 00:19:23.160 00:56:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:23.160 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:23.160 00:56:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:23.160 00:56:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:23.160 00:56:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:23.160 00:56:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:23.160 00:56:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:23.160 00:56:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:23.160 00:56:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:23.417 00:56:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:19:23.417 00:56:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:23.417 00:56:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:23.417 00:56:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:19:23.417 00:56:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:23.417 00:56:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:23.417 00:56:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:23.417 00:56:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:23.417 00:56:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:23.417 00:56:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:23.417 00:56:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:23.418 00:56:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:23.418 00:56:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:23.674 00:19:23.674 00:56:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:23.674 00:56:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:23.675 00:56:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:23.932 00:56:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:23.932 00:56:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:23.932 00:56:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:23.932 00:56:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:23.932 00:56:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:23.932 00:56:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:23.932 { 
00:19:23.932 "cntlid": 19, 00:19:23.932 "qid": 0, 00:19:23.932 "state": "enabled", 00:19:23.932 "thread": "nvmf_tgt_poll_group_000", 00:19:23.932 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:23.932 "listen_address": { 00:19:23.932 "trtype": "TCP", 00:19:23.932 "adrfam": "IPv4", 00:19:23.932 "traddr": "10.0.0.2", 00:19:23.932 "trsvcid": "4420" 00:19:23.932 }, 00:19:23.932 "peer_address": { 00:19:23.932 "trtype": "TCP", 00:19:23.932 "adrfam": "IPv4", 00:19:23.932 "traddr": "10.0.0.1", 00:19:23.932 "trsvcid": "40426" 00:19:23.932 }, 00:19:23.932 "auth": { 00:19:23.932 "state": "completed", 00:19:23.932 "digest": "sha256", 00:19:23.932 "dhgroup": "ffdhe3072" 00:19:23.932 } 00:19:23.932 } 00:19:23.932 ]' 00:19:23.932 00:56:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:23.932 00:56:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:23.932 00:56:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:24.190 00:56:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:24.190 00:56:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:24.190 00:56:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:24.190 00:56:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:24.190 00:56:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:24.448 00:56:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Y2M1MTdhNjljM2UzMGVkOWM5YmE0NTE5MzZjNTdkZmU/iWjd: --dhchap-ctrl-secret DHHC-1:02:NmQzZDZhNWExMTgwOThlOTQxODY3N2Y2M2EyM2MzZGFmZWQ0YTcxMjg2MjEzNzkwNV1u7g==: 00:19:24.448 00:56:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:Y2M1MTdhNjljM2UzMGVkOWM5YmE0NTE5MzZjNTdkZmU/iWjd: --dhchap-ctrl-secret DHHC-1:02:NmQzZDZhNWExMTgwOThlOTQxODY3N2Y2M2EyM2MzZGFmZWQ0YTcxMjg2MjEzNzkwNV1u7g==: 00:19:25.378 00:56:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:25.378 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:25.378 00:56:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:25.378 00:56:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:25.378 00:56:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:25.379 00:56:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:25.379 00:56:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:25.379 00:56:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:25.379 00:56:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:25.637 00:56:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:19:25.637 00:56:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:25.637 00:56:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:25.637 00:56:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:19:25.637 00:56:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:25.637 00:56:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:25.637 00:56:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:25.637 00:56:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:25.637 00:56:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:25.637 00:56:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:25.637 00:56:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:25.637 00:56:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:25.637 00:56:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:25.894 00:19:25.894 00:56:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:25.894 00:56:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:25.894 00:56:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:26.460 00:56:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:26.460 00:56:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:26.460 00:56:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:26.460 00:56:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:26.460 00:56:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:26.460 00:56:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:26.460 { 00:19:26.460 "cntlid": 21, 00:19:26.460 "qid": 0, 00:19:26.460 "state": "enabled", 00:19:26.460 "thread": "nvmf_tgt_poll_group_000", 00:19:26.460 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:26.460 "listen_address": { 00:19:26.460 "trtype": "TCP", 00:19:26.460 "adrfam": "IPv4", 00:19:26.460 "traddr": "10.0.0.2", 00:19:26.460 "trsvcid": "4420" 00:19:26.460 }, 00:19:26.460 "peer_address": { 00:19:26.460 "trtype": "TCP", 00:19:26.460 "adrfam": "IPv4", 00:19:26.460 "traddr": "10.0.0.1", 00:19:26.460 "trsvcid": "40454" 00:19:26.460 }, 00:19:26.460 "auth": { 00:19:26.460 "state": "completed", 00:19:26.460 "digest": "sha256", 00:19:26.460 "dhgroup": "ffdhe3072" 00:19:26.460 } 00:19:26.460 } 00:19:26.460 ]' 00:19:26.460 00:56:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:26.460 00:56:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:26.460 00:56:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:26.460 00:56:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:26.460 00:56:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:26.460 00:56:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:26.460 00:56:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:26.460 00:56:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:26.718 00:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NDNmNTE3YjZjZjJlYjI0NTQ4MmE3MzU1Y2E2MzAzNTA2MjAwMmQzZWY5NDZhMmVlbtmSCQ==: --dhchap-ctrl-secret DHHC-1:01:MDAwMzlhNTRhZDE3ZWE3YTNhNjJhYjc0OTUyNjlmY2YFyuVj: 00:19:26.718 00:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:NDNmNTE3YjZjZjJlYjI0NTQ4MmE3MzU1Y2E2MzAzNTA2MjAwMmQzZWY5NDZhMmVlbtmSCQ==: --dhchap-ctrl-secret DHHC-1:01:MDAwMzlhNTRhZDE3ZWE3YTNhNjJhYjc0OTUyNjlmY2YFyuVj: 00:19:27.651 00:57:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:27.651 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:27.651 00:57:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:27.651 00:57:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.651 00:57:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:27.651 00:57:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:19:27.651 00:57:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:27.651 00:57:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:27.651 00:57:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:27.910 00:57:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:19:27.910 00:57:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:27.910 00:57:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:27.910 00:57:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:19:27.910 00:57:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:27.910 00:57:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:27.910 00:57:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:19:27.910 00:57:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.910 00:57:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:27.910 00:57:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.910 00:57:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:27.910 00:57:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:27.910 00:57:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:28.475 00:19:28.475 00:57:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:28.475 00:57:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:28.475 00:57:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:28.733 00:57:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:28.733 00:57:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:28.733 00:57:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:28.733 00:57:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:28.733 00:57:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:28.733 00:57:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:28.733 { 00:19:28.733 "cntlid": 23, 00:19:28.733 "qid": 0, 00:19:28.733 "state": "enabled", 00:19:28.733 "thread": "nvmf_tgt_poll_group_000", 00:19:28.734 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:28.734 "listen_address": { 00:19:28.734 "trtype": "TCP", 00:19:28.734 "adrfam": "IPv4", 00:19:28.734 "traddr": "10.0.0.2", 00:19:28.734 "trsvcid": "4420" 00:19:28.734 }, 00:19:28.734 "peer_address": { 00:19:28.734 "trtype": "TCP", 00:19:28.734 "adrfam": "IPv4", 00:19:28.734 "traddr": "10.0.0.1", 00:19:28.734 "trsvcid": "43900" 00:19:28.734 }, 00:19:28.734 "auth": { 00:19:28.734 "state": "completed", 00:19:28.734 "digest": "sha256", 00:19:28.734 "dhgroup": "ffdhe3072" 00:19:28.734 } 00:19:28.734 } 00:19:28.734 ]' 00:19:28.734 00:57:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:28.734 00:57:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:28.734 00:57:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:28.734 00:57:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:28.734 00:57:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:28.734 00:57:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:28.734 00:57:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:28.734 00:57:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:28.992 00:57:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MjMwMjQxNWUxMWZkMTgxZmUxMzU2Y2U5N2IyZmM3MTA3YWEyYzcwMThlODJlNTBmNGZlZGQ4ZTgxZDE5MzYyY9ngkKk=: 00:19:28.992 00:57:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:MjMwMjQxNWUxMWZkMTgxZmUxMzU2Y2U5N2IyZmM3MTA3YWEyYzcwMThlODJlNTBmNGZlZGQ4ZTgxZDE5MzYyY9ngkKk=: 00:19:29.922 00:57:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:29.922 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:29.922 00:57:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:29.922 00:57:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:29.922 00:57:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:29.922 00:57:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:19:29.922 00:57:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:29.922 00:57:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:29.922 00:57:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:29.923 00:57:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:30.179 00:57:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:19:30.179 00:57:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:30.179 00:57:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:30.179 00:57:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:19:30.179 00:57:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:30.179 00:57:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:30.179 00:57:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:30.179 00:57:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:30.179 00:57:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:30.179 00:57:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:30.179 00:57:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:30.179 00:57:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:30.179 00:57:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:30.744 00:19:30.744 00:57:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:30.744 00:57:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:30.744 00:57:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:31.002 00:57:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:31.002 00:57:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:31.002 00:57:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:31.002 00:57:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:31.002 00:57:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:31.002 00:57:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:31.002 { 00:19:31.002 "cntlid": 25, 00:19:31.002 "qid": 0, 00:19:31.002 "state": "enabled", 00:19:31.002 "thread": "nvmf_tgt_poll_group_000", 00:19:31.002 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:31.002 "listen_address": { 00:19:31.002 "trtype": "TCP", 00:19:31.002 "adrfam": "IPv4", 00:19:31.002 "traddr": "10.0.0.2", 00:19:31.002 "trsvcid": "4420" 00:19:31.002 }, 00:19:31.002 "peer_address": { 00:19:31.002 "trtype": "TCP", 00:19:31.002 "adrfam": "IPv4", 00:19:31.002 "traddr": "10.0.0.1", 00:19:31.002 "trsvcid": "43936" 00:19:31.002 }, 00:19:31.002 "auth": { 00:19:31.002 "state": "completed", 00:19:31.002 "digest": "sha256", 00:19:31.002 "dhgroup": "ffdhe4096" 00:19:31.002 } 00:19:31.002 } 00:19:31.002 ]' 00:19:31.002 00:57:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:31.002 00:57:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:31.002 00:57:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:31.002 00:57:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:31.002 00:57:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:31.002 00:57:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:31.002 00:57:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:31.002 00:57:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:31.261 00:57:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZmU5MjFhMDA3NjhhNjdlNGFjN2QxYWVmMDRmZjhhYjdmNzAwOGUwMjI0MmVlM2M07bZ6cg==: --dhchap-ctrl-secret DHHC-1:03:YzdiZWM3MjUwZjQ0NjQ2ZDIwMmJlZTE0NzJhYzdiZjY1NzBhYzUwMWJiNzMxOTdjY2Y4YTk2MTkxM2M4YmMwZW02nTI=: 00:19:31.261 00:57:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:ZmU5MjFhMDA3NjhhNjdlNGFjN2QxYWVmMDRmZjhhYjdmNzAwOGUwMjI0MmVlM2M07bZ6cg==: --dhchap-ctrl-secret DHHC-1:03:YzdiZWM3MjUwZjQ0NjQ2ZDIwMmJlZTE0NzJhYzdiZjY1NzBhYzUwMWJiNzMxOTdjY2Y4YTk2MTkxM2M4YmMwZW02nTI=: 00:19:32.633 00:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:32.633 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:32.633 00:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:32.633 00:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:32.633 00:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:32.633 00:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:32.633 00:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:32.633 00:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:32.633 00:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:32.633 00:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:19:32.633 00:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:32.633 00:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:32.633 00:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:19:32.633 00:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:32.633 00:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:32.633 00:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:32.633 00:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:32.633 00:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:32.633 00:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:32.633 00:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:32.633 00:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:32.633 00:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:33.195 00:19:33.196 00:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:33.196 00:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:33.196 00:57:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:33.453 00:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:33.453 00:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:33.453 00:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:33.453 00:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:33.453 00:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.453 00:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:33.453 { 00:19:33.453 "cntlid": 27, 00:19:33.453 "qid": 0, 00:19:33.453 "state": "enabled", 00:19:33.453 "thread": "nvmf_tgt_poll_group_000", 00:19:33.453 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:33.453 "listen_address": { 00:19:33.453 "trtype": "TCP", 00:19:33.453 "adrfam": "IPv4", 00:19:33.453 "traddr": "10.0.0.2", 00:19:33.453 "trsvcid": "4420" 00:19:33.453 }, 00:19:33.453 "peer_address": { 00:19:33.453 "trtype": "TCP", 00:19:33.453 "adrfam": "IPv4", 00:19:33.453 "traddr": "10.0.0.1", 00:19:33.453 "trsvcid": "43964" 00:19:33.453 }, 00:19:33.453 "auth": { 00:19:33.453 "state": "completed", 00:19:33.453 "digest": "sha256", 00:19:33.453 "dhgroup": "ffdhe4096" 00:19:33.453 } 00:19:33.453 } 00:19:33.453 ]' 00:19:33.453 00:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:33.453 00:57:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:33.453 00:57:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:33.453 00:57:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:33.453 00:57:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:33.453 00:57:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:33.453 00:57:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:33.453 00:57:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:33.710 00:57:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Y2M1MTdhNjljM2UzMGVkOWM5YmE0NTE5MzZjNTdkZmU/iWjd: --dhchap-ctrl-secret DHHC-1:02:NmQzZDZhNWExMTgwOThlOTQxODY3N2Y2M2EyM2MzZGFmZWQ0YTcxMjg2MjEzNzkwNV1u7g==: 00:19:33.710 00:57:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:Y2M1MTdhNjljM2UzMGVkOWM5YmE0NTE5MzZjNTdkZmU/iWjd: --dhchap-ctrl-secret DHHC-1:02:NmQzZDZhNWExMTgwOThlOTQxODY3N2Y2M2EyM2MzZGFmZWQ0YTcxMjg2MjEzNzkwNV1u7g==: 00:19:35.084 00:57:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:35.084 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:35.084 00:57:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:35.084 00:57:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.084 00:57:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:35.084 00:57:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.084 00:57:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:35.084 00:57:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:35.084 00:57:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:35.084 00:57:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:19:35.084 00:57:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:35.084 00:57:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:35.084 00:57:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:19:35.084 00:57:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:35.084 00:57:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:35.084 00:57:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:35.084 00:57:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.084 00:57:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:35.084 00:57:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.084 00:57:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:35.084 00:57:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:35.084 00:57:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:35.650 00:19:35.650 00:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:35.650 00:57:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:35.650 00:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:35.909 00:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:35.909 00:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:35.909 00:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.909 00:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:35.909 00:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.909 00:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:35.909 { 00:19:35.909 "cntlid": 29, 00:19:35.909 "qid": 0, 00:19:35.909 "state": "enabled", 00:19:35.909 "thread": "nvmf_tgt_poll_group_000", 00:19:35.909 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:35.909 "listen_address": { 00:19:35.909 "trtype": "TCP", 00:19:35.909 "adrfam": "IPv4", 00:19:35.909 "traddr": "10.0.0.2", 00:19:35.909 "trsvcid": "4420" 00:19:35.909 }, 00:19:35.909 "peer_address": { 00:19:35.909 "trtype": "TCP", 00:19:35.909 "adrfam": "IPv4", 00:19:35.909 "traddr": "10.0.0.1", 00:19:35.909 "trsvcid": "43986" 00:19:35.909 }, 00:19:35.909 "auth": { 00:19:35.909 "state": "completed", 00:19:35.909 "digest": "sha256", 00:19:35.909 "dhgroup": "ffdhe4096" 00:19:35.909 } 00:19:35.909 } 00:19:35.909 ]' 00:19:35.909 00:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:35.909 00:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:35.909 00:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:35.909 00:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:35.909 00:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:35.909 00:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:35.909 00:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:35.909 00:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:36.167 00:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NDNmNTE3YjZjZjJlYjI0NTQ4MmE3MzU1Y2E2MzAzNTA2MjAwMmQzZWY5NDZhMmVlbtmSCQ==: --dhchap-ctrl-secret DHHC-1:01:MDAwMzlhNTRhZDE3ZWE3YTNhNjJhYjc0OTUyNjlmY2YFyuVj: 00:19:36.167 00:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:NDNmNTE3YjZjZjJlYjI0NTQ4MmE3MzU1Y2E2MzAzNTA2MjAwMmQzZWY5NDZhMmVlbtmSCQ==: --dhchap-ctrl-secret 
DHHC-1:01:MDAwMzlhNTRhZDE3ZWE3YTNhNjJhYjc0OTUyNjlmY2YFyuVj: 00:19:37.101 00:57:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:37.101 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:37.101 00:57:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:37.101 00:57:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.101 00:57:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:37.101 00:57:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.101 00:57:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:37.101 00:57:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:37.101 00:57:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:37.669 00:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:19:37.669 00:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:37.669 00:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:37.669 00:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:19:37.669 00:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:37.669 00:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:37.669 00:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:19:37.669 00:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.669 00:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:37.669 00:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.669 00:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:37.669 00:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:37.669 00:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:37.927 00:19:37.927 00:57:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:37.927 00:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:37.927 00:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:38.185 00:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:38.185 00:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:38.185 00:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:38.185 00:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:38.185 00:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:38.185 00:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:38.185 { 00:19:38.185 "cntlid": 31, 00:19:38.185 "qid": 0, 00:19:38.185 "state": "enabled", 00:19:38.185 "thread": "nvmf_tgt_poll_group_000", 00:19:38.185 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:38.185 "listen_address": { 00:19:38.185 "trtype": "TCP", 00:19:38.185 "adrfam": "IPv4", 00:19:38.185 "traddr": "10.0.0.2", 00:19:38.185 "trsvcid": "4420" 00:19:38.185 }, 00:19:38.185 "peer_address": { 00:19:38.185 "trtype": "TCP", 00:19:38.185 "adrfam": "IPv4", 00:19:38.185 "traddr": "10.0.0.1", 00:19:38.185 "trsvcid": "42602" 00:19:38.185 }, 00:19:38.185 "auth": { 00:19:38.185 "state": "completed", 00:19:38.185 "digest": "sha256", 00:19:38.185 "dhgroup": "ffdhe4096" 00:19:38.185 } 00:19:38.185 } 00:19:38.185 ]' 00:19:38.185 00:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:38.185 00:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:38.185 00:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:38.185 00:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:38.185 00:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:38.443 00:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:38.443 00:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:38.443 00:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:38.701 00:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MjMwMjQxNWUxMWZkMTgxZmUxMzU2Y2U5N2IyZmM3MTA3YWEyYzcwMThlODJlNTBmNGZlZGQ4ZTgxZDE5MzYyY9ngkKk=: 00:19:38.701 00:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret 
DHHC-1:03:MjMwMjQxNWUxMWZkMTgxZmUxMzU2Y2U5N2IyZmM3MTA3YWEyYzcwMThlODJlNTBmNGZlZGQ4ZTgxZDE5MzYyY9ngkKk=: 00:19:39.635 00:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:39.635 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:39.635 00:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:39.635 00:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:39.635 00:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:39.635 00:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:39.635 00:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:39.635 00:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:39.635 00:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:39.635 00:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:39.893 00:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:19:39.893 00:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:39.893 00:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:39.893 00:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:19:39.893 00:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:39.893 00:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:39.893 00:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:39.893 00:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:39.893 00:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:39.893 00:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:39.893 00:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:39.893 00:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:39.893 00:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:40.459 00:19:40.459 00:57:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:40.459 00:57:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:40.459 00:57:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:40.717 00:57:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:40.717 00:57:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:40.717 00:57:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.717 00:57:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:40.717 00:57:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.717 00:57:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:40.717 { 00:19:40.717 "cntlid": 33, 00:19:40.717 "qid": 0, 00:19:40.717 "state": "enabled", 00:19:40.717 "thread": "nvmf_tgt_poll_group_000", 00:19:40.717 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:40.717 "listen_address": { 00:19:40.717 "trtype": "TCP", 00:19:40.717 "adrfam": "IPv4", 00:19:40.717 "traddr": "10.0.0.2", 00:19:40.717 "trsvcid": "4420" 00:19:40.717 }, 00:19:40.717 "peer_address": { 00:19:40.717 "trtype": "TCP", 00:19:40.717 "adrfam": "IPv4", 00:19:40.717 "traddr": "10.0.0.1", 00:19:40.717 "trsvcid": "42646" 00:19:40.717 }, 00:19:40.717 "auth": { 00:19:40.717 "state": "completed", 00:19:40.717 "digest": "sha256", 00:19:40.717 "dhgroup": "ffdhe6144" 00:19:40.717 } 00:19:40.717 } 00:19:40.717 ]' 00:19:40.717 00:57:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:40.717 00:57:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:40.717 00:57:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:40.717 00:57:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:40.717 00:57:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:40.974 00:57:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:40.974 00:57:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:40.974 00:57:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:41.232 00:57:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZmU5MjFhMDA3NjhhNjdlNGFjN2QxYWVmMDRmZjhhYjdmNzAwOGUwMjI0MmVlM2M07bZ6cg==: --dhchap-ctrl-secret 
DHHC-1:03:YzdiZWM3MjUwZjQ0NjQ2ZDIwMmJlZTE0NzJhYzdiZjY1NzBhYzUwMWJiNzMxOTdjY2Y4YTk2MTkxM2M4YmMwZW02nTI=: 00:19:41.232 00:57:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:ZmU5MjFhMDA3NjhhNjdlNGFjN2QxYWVmMDRmZjhhYjdmNzAwOGUwMjI0MmVlM2M07bZ6cg==: --dhchap-ctrl-secret DHHC-1:03:YzdiZWM3MjUwZjQ0NjQ2ZDIwMmJlZTE0NzJhYzdiZjY1NzBhYzUwMWJiNzMxOTdjY2Y4YTk2MTkxM2M4YmMwZW02nTI=: 00:19:42.229 00:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:42.229 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:42.229 00:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:42.229 00:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.229 00:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:42.229 00:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.229 00:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:42.229 00:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:42.229 00:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:42.229 00:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:19:42.229 00:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:42.229 00:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:42.229 00:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:19:42.229 00:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:42.229 00:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:42.229 00:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:42.229 00:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.229 00:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:42.229 00:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.229 00:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:42.229 00:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller 
-t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:42.229 00:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:42.793 00:19:42.793 00:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:42.793 00:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:42.793 00:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:43.368 00:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:43.368 00:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:43.368 00:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.368 00:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:43.368 00:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.368 00:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:43.368 { 00:19:43.368 "cntlid": 35, 00:19:43.368 "qid": 0, 00:19:43.368 "state": "enabled", 00:19:43.368 "thread": "nvmf_tgt_poll_group_000", 00:19:43.368 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:43.368 "listen_address": { 00:19:43.368 "trtype": "TCP", 00:19:43.368 "adrfam": "IPv4", 00:19:43.368 "traddr": "10.0.0.2", 00:19:43.368 "trsvcid": "4420" 00:19:43.368 }, 00:19:43.368 "peer_address": { 00:19:43.368 "trtype": "TCP", 00:19:43.368 "adrfam": "IPv4", 00:19:43.368 "traddr": "10.0.0.1", 00:19:43.368 "trsvcid": "42678" 00:19:43.368 }, 00:19:43.368 "auth": { 00:19:43.368 "state": "completed", 00:19:43.368 "digest": "sha256", 00:19:43.368 "dhgroup": "ffdhe6144" 00:19:43.368 } 00:19:43.368 } 00:19:43.368 ]' 00:19:43.368 00:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:43.368 00:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:43.368 00:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:43.368 00:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:43.368 00:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:43.368 00:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:43.368 00:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:43.368 00:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:43.626 00:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Y2M1MTdhNjljM2UzMGVkOWM5YmE0NTE5MzZjNTdkZmU/iWjd: --dhchap-ctrl-secret DHHC-1:02:NmQzZDZhNWExMTgwOThlOTQxODY3N2Y2M2EyM2MzZGFmZWQ0YTcxMjg2MjEzNzkwNV1u7g==: 00:19:43.626 00:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:Y2M1MTdhNjljM2UzMGVkOWM5YmE0NTE5MzZjNTdkZmU/iWjd: --dhchap-ctrl-secret DHHC-1:02:NmQzZDZhNWExMTgwOThlOTQxODY3N2Y2M2EyM2MzZGFmZWQ0YTcxMjg2MjEzNzkwNV1u7g==: 00:19:44.560 00:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:44.560 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:44.560 00:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:44.560 00:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:44.560 00:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:44.560 00:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:44.560 00:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:44.560 00:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:44.560 00:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:44.817 00:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:19:44.817 00:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:44.817 00:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:44.817 00:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:19:44.817 00:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:44.817 00:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:44.817 00:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:44.817 00:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:44.817 00:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:44.818 00:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:44.818 00:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:44.818 00:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:44.818 00:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:45.384 00:19:45.384 00:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:45.384 00:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:45.384 00:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:45.643 00:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:45.643 00:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:45.643 00:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:45.643 00:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:45.643 00:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:45.643 00:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:45.643 { 00:19:45.643 "cntlid": 37, 00:19:45.643 "qid": 0, 00:19:45.643 "state": "enabled", 00:19:45.643 "thread": "nvmf_tgt_poll_group_000", 00:19:45.643 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:45.643 "listen_address": { 00:19:45.643 "trtype": "TCP", 00:19:45.643 "adrfam": "IPv4", 00:19:45.643 "traddr": "10.0.0.2", 00:19:45.643 "trsvcid": "4420" 00:19:45.643 }, 00:19:45.643 "peer_address": { 00:19:45.644 "trtype": "TCP", 00:19:45.644 "adrfam": "IPv4", 00:19:45.644 "traddr": "10.0.0.1", 00:19:45.644 "trsvcid": "42702" 00:19:45.644 }, 00:19:45.644 "auth": { 00:19:45.644 "state": "completed", 00:19:45.644 "digest": "sha256", 00:19:45.644 "dhgroup": "ffdhe6144" 00:19:45.644 } 00:19:45.644 } 00:19:45.644 ]' 00:19:45.644 00:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:45.644 00:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:45.644 00:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:45.902 00:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:45.902 00:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:45.902 00:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:45.902 00:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:19:45.902 00:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:46.161 00:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NDNmNTE3YjZjZjJlYjI0NTQ4MmE3MzU1Y2E2MzAzNTA2MjAwMmQzZWY5NDZhMmVlbtmSCQ==: --dhchap-ctrl-secret DHHC-1:01:MDAwMzlhNTRhZDE3ZWE3YTNhNjJhYjc0OTUyNjlmY2YFyuVj: 00:19:46.161 00:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:NDNmNTE3YjZjZjJlYjI0NTQ4MmE3MzU1Y2E2MzAzNTA2MjAwMmQzZWY5NDZhMmVlbtmSCQ==: --dhchap-ctrl-secret DHHC-1:01:MDAwMzlhNTRhZDE3ZWE3YTNhNjJhYjc0OTUyNjlmY2YFyuVj: 00:19:47.095 00:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:47.095 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:47.095 00:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:47.095 00:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.095 00:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:47.095 00:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.095 00:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:47.095 00:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:47.095 00:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:47.353 00:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:19:47.353 00:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:47.353 00:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:47.353 00:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:19:47.353 00:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:47.353 00:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:47.353 00:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:19:47.353 00:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.353 00:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:47.353 00:57:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.353 00:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:47.353 00:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:47.353 00:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:47.919 00:19:47.919 00:57:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:47.919 00:57:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:47.919 00:57:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:48.178 00:57:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:48.178 00:57:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:48.178 00:57:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:48.178 00:57:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:48.178 00:57:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:48.178 00:57:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:48.178 { 00:19:48.178 "cntlid": 39, 00:19:48.178 "qid": 0, 00:19:48.178 "state": "enabled", 00:19:48.178 "thread": "nvmf_tgt_poll_group_000", 00:19:48.178 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:48.178 "listen_address": { 00:19:48.178 "trtype": "TCP", 00:19:48.178 "adrfam": "IPv4", 00:19:48.178 "traddr": "10.0.0.2", 00:19:48.178 "trsvcid": "4420" 00:19:48.178 }, 00:19:48.178 "peer_address": { 00:19:48.178 "trtype": "TCP", 00:19:48.178 "adrfam": "IPv4", 00:19:48.178 "traddr": "10.0.0.1", 00:19:48.178 "trsvcid": "41454" 00:19:48.178 }, 00:19:48.178 "auth": { 00:19:48.178 "state": "completed", 00:19:48.178 "digest": "sha256", 00:19:48.178 "dhgroup": "ffdhe6144" 00:19:48.178 } 00:19:48.178 } 00:19:48.178 ]' 00:19:48.178 00:57:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:48.178 00:57:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:48.178 00:57:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:48.436 00:57:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:48.436 00:57:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:48.436 00:57:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == 
\c\o\m\p\l\e\t\e\d ]] 00:19:48.436 00:57:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:48.436 00:57:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:48.695 00:57:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MjMwMjQxNWUxMWZkMTgxZmUxMzU2Y2U5N2IyZmM3MTA3YWEyYzcwMThlODJlNTBmNGZlZGQ4ZTgxZDE5MzYyY9ngkKk=: 00:19:48.695 00:57:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:MjMwMjQxNWUxMWZkMTgxZmUxMzU2Y2U5N2IyZmM3MTA3YWEyYzcwMThlODJlNTBmNGZlZGQ4ZTgxZDE5MzYyY9ngkKk=: 00:19:49.628 00:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:49.628 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:49.628 00:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:49.628 00:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.628 00:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:49.628 00:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.628 00:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:49.628 00:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:49.628 00:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:49.628 00:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:49.886 00:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:19:49.886 00:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:49.886 00:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:49.886 00:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:49.886 00:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:49.886 00:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:49.886 00:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:49.886 00:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
00:19:49.886 00:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:49.886 00:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.886 00:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:49.886 00:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:49.886 00:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:50.846 00:19:50.846 00:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:50.846 00:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:50.846 00:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:51.104 00:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:51.104 00:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:51.104 00:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.104 00:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:51.104 00:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.104 00:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:51.104 { 00:19:51.104 "cntlid": 41, 00:19:51.104 "qid": 0, 00:19:51.104 "state": "enabled", 00:19:51.104 "thread": "nvmf_tgt_poll_group_000", 00:19:51.104 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:51.104 "listen_address": { 00:19:51.104 "trtype": "TCP", 00:19:51.104 "adrfam": "IPv4", 00:19:51.104 "traddr": "10.0.0.2", 00:19:51.104 "trsvcid": "4420" 00:19:51.104 }, 00:19:51.104 "peer_address": { 00:19:51.104 "trtype": "TCP", 00:19:51.104 "adrfam": "IPv4", 00:19:51.104 "traddr": "10.0.0.1", 00:19:51.104 "trsvcid": "41492" 00:19:51.104 }, 00:19:51.104 "auth": { 00:19:51.104 "state": "completed", 00:19:51.104 "digest": "sha256", 00:19:51.104 "dhgroup": "ffdhe8192" 00:19:51.104 } 00:19:51.104 } 00:19:51.104 ]' 00:19:51.104 00:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:51.104 00:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:51.104 00:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:51.104 00:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:51.104 00:57:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:51.360 00:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:51.360 00:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:51.361 00:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:51.618 00:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZmU5MjFhMDA3NjhhNjdlNGFjN2QxYWVmMDRmZjhhYjdmNzAwOGUwMjI0MmVlM2M07bZ6cg==: --dhchap-ctrl-secret DHHC-1:03:YzdiZWM3MjUwZjQ0NjQ2ZDIwMmJlZTE0NzJhYzdiZjY1NzBhYzUwMWJiNzMxOTdjY2Y4YTk2MTkxM2M4YmMwZW02nTI=: 00:19:51.618 00:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:ZmU5MjFhMDA3NjhhNjdlNGFjN2QxYWVmMDRmZjhhYjdmNzAwOGUwMjI0MmVlM2M07bZ6cg==: --dhchap-ctrl-secret DHHC-1:03:YzdiZWM3MjUwZjQ0NjQ2ZDIwMmJlZTE0NzJhYzdiZjY1NzBhYzUwMWJiNzMxOTdjY2Y4YTk2MTkxM2M4YmMwZW02nTI=: 00:19:52.555 00:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:52.555 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:52.555 00:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:52.555 00:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:52.555 00:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:52.555 00:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:52.555 00:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:52.555 00:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:52.555 00:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:52.812 00:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:19:52.812 00:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:52.812 00:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:52.812 00:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:52.812 00:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:52.812 00:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:52.812 00:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # 
rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:52.812 00:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:52.812 00:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:52.812 00:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:52.812 00:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:52.812 00:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:52.812 00:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:53.746 00:19:53.746 00:57:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:53.746 00:57:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:53.746 00:57:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:54.003 00:57:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:54.003 00:57:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:54.003 00:57:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:54.003 00:57:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:54.003 00:57:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:54.003 00:57:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:54.003 { 00:19:54.003 "cntlid": 43, 00:19:54.003 "qid": 0, 00:19:54.003 "state": "enabled", 00:19:54.003 "thread": "nvmf_tgt_poll_group_000", 00:19:54.003 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:54.003 "listen_address": { 00:19:54.003 "trtype": "TCP", 00:19:54.003 "adrfam": "IPv4", 00:19:54.003 "traddr": "10.0.0.2", 00:19:54.003 "trsvcid": "4420" 00:19:54.003 }, 00:19:54.003 "peer_address": { 00:19:54.003 "trtype": "TCP", 00:19:54.003 "adrfam": "IPv4", 00:19:54.003 "traddr": "10.0.0.1", 00:19:54.003 "trsvcid": "41526" 00:19:54.003 }, 00:19:54.003 "auth": { 00:19:54.003 "state": "completed", 00:19:54.003 "digest": "sha256", 00:19:54.003 "dhgroup": "ffdhe8192" 00:19:54.003 } 00:19:54.003 } 00:19:54.003 ]' 00:19:54.003 00:57:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:54.003 00:57:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == 
\s\h\a\2\5\6 ]] 00:19:54.003 00:57:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:54.260 00:57:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:54.260 00:57:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:54.260 00:57:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:54.261 00:57:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:54.261 00:57:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:54.518 00:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Y2M1MTdhNjljM2UzMGVkOWM5YmE0NTE5MzZjNTdkZmU/iWjd: --dhchap-ctrl-secret DHHC-1:02:NmQzZDZhNWExMTgwOThlOTQxODY3N2Y2M2EyM2MzZGFmZWQ0YTcxMjg2MjEzNzkwNV1u7g==: 00:19:54.518 00:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:Y2M1MTdhNjljM2UzMGVkOWM5YmE0NTE5MzZjNTdkZmU/iWjd: --dhchap-ctrl-secret DHHC-1:02:NmQzZDZhNWExMTgwOThlOTQxODY3N2Y2M2EyM2MzZGFmZWQ0YTcxMjg2MjEzNzkwNV1u7g==: 00:19:55.450 00:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:55.450 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:55.450 00:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:55.450 00:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.450 00:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:55.450 00:57:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.450 00:57:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:55.450 00:57:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:55.450 00:57:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:55.708 00:57:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:19:55.708 00:57:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:55.708 00:57:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:55.708 00:57:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:55.708 00:57:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:55.708 00:57:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:55.708 00:57:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:55.708 00:57:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.708 00:57:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:55.708 00:57:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.708 00:57:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:55.708 00:57:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:55.708 00:57:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:56.642 00:19:56.642 00:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:56.642 00:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:56.642 00:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:56.900 00:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:56.900 00:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:56.900 00:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:56.900 00:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:56.900 00:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:56.900 00:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:56.900 { 00:19:56.900 "cntlid": 45, 00:19:56.900 "qid": 0, 00:19:56.900 "state": "enabled", 00:19:56.900 "thread": "nvmf_tgt_poll_group_000", 00:19:56.900 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:56.900 "listen_address": { 00:19:56.900 "trtype": "TCP", 00:19:56.900 "adrfam": "IPv4", 00:19:56.900 "traddr": "10.0.0.2", 00:19:56.900 "trsvcid": "4420" 00:19:56.900 }, 00:19:56.900 "peer_address": { 00:19:56.900 "trtype": "TCP", 00:19:56.900 "adrfam": "IPv4", 00:19:56.900 "traddr": "10.0.0.1", 00:19:56.900 "trsvcid": "41554" 00:19:56.900 }, 00:19:56.900 "auth": { 00:19:56.900 "state": "completed", 00:19:56.900 "digest": "sha256", 00:19:56.900 "dhgroup": "ffdhe8192" 00:19:56.900 } 00:19:56.900 } 00:19:56.900 ]' 00:19:56.900 
00:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:56.900 00:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:56.900 00:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:56.900 00:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:56.900 00:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:57.157 00:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:57.157 00:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:57.157 00:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:57.416 00:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NDNmNTE3YjZjZjJlYjI0NTQ4MmE3MzU1Y2E2MzAzNTA2MjAwMmQzZWY5NDZhMmVlbtmSCQ==: --dhchap-ctrl-secret DHHC-1:01:MDAwMzlhNTRhZDE3ZWE3YTNhNjJhYjc0OTUyNjlmY2YFyuVj: 00:19:57.416 00:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:NDNmNTE3YjZjZjJlYjI0NTQ4MmE3MzU1Y2E2MzAzNTA2MjAwMmQzZWY5NDZhMmVlbtmSCQ==: --dhchap-ctrl-secret DHHC-1:01:MDAwMzlhNTRhZDE3ZWE3YTNhNjJhYjc0OTUyNjlmY2YFyuVj: 00:19:58.349 00:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:58.349 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:58.349 00:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:58.349 00:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.349 00:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:58.349 00:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.349 00:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:58.349 00:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:58.349 00:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:58.607 00:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:19:58.607 00:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:58.607 00:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:58.607 00:57:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:58.607 00:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:58.607 00:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:58.607 00:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:19:58.607 00:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.607 00:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:58.607 00:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.607 00:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:58.607 00:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:58.607 00:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:59.538 00:19:59.538 00:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:59.538 00:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:59.538 00:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:59.795 00:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:59.795 00:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:59.795 00:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:59.795 00:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:59.795 00:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:59.795 00:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:59.795 { 00:19:59.795 "cntlid": 47, 00:19:59.795 "qid": 0, 00:19:59.795 "state": "enabled", 00:19:59.795 "thread": "nvmf_tgt_poll_group_000", 00:19:59.795 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:59.795 "listen_address": { 00:19:59.795 "trtype": "TCP", 00:19:59.795 "adrfam": "IPv4", 00:19:59.795 "traddr": "10.0.0.2", 00:19:59.795 "trsvcid": "4420" 00:19:59.795 }, 00:19:59.795 "peer_address": { 00:19:59.795 "trtype": "TCP", 00:19:59.795 "adrfam": "IPv4", 00:19:59.795 "traddr": "10.0.0.1", 00:19:59.795 "trsvcid": "45140" 00:19:59.795 }, 00:19:59.795 "auth": { 00:19:59.795 "state": "completed", 00:19:59.795 
"digest": "sha256", 00:19:59.795 "dhgroup": "ffdhe8192" 00:19:59.795 } 00:19:59.795 } 00:19:59.795 ]' 00:19:59.795 00:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:59.795 00:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:59.795 00:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:59.795 00:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:59.796 00:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:00.053 00:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:00.053 00:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:00.053 00:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:00.311 00:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MjMwMjQxNWUxMWZkMTgxZmUxMzU2Y2U5N2IyZmM3MTA3YWEyYzcwMThlODJlNTBmNGZlZGQ4ZTgxZDE5MzYyY9ngkKk=: 00:20:00.311 00:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:MjMwMjQxNWUxMWZkMTgxZmUxMzU2Y2U5N2IyZmM3MTA3YWEyYzcwMThlODJlNTBmNGZlZGQ4ZTgxZDE5MzYyY9ngkKk=: 00:20:01.244 00:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:01.244 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:01.245 00:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:01.245 00:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.245 00:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:01.245 00:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.245 00:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:20:01.245 00:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:01.245 00:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:01.245 00:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:01.245 00:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:01.503 00:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:20:01.503 00:57:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:01.503 00:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:01.503 00:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:01.503 00:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:01.503 00:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:01.503 00:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:01.503 00:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.503 00:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:01.503 00:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.503 00:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:01.503 00:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:01.503 00:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:01.761 00:20:01.761 00:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:01.761 00:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:01.761 00:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:02.020 00:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:02.020 00:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:02.020 00:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.020 00:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:02.020 00:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.020 00:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:02.020 { 00:20:02.020 "cntlid": 49, 00:20:02.020 "qid": 0, 00:20:02.020 "state": "enabled", 00:20:02.020 "thread": "nvmf_tgt_poll_group_000", 00:20:02.020 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:02.020 "listen_address": { 00:20:02.020 "trtype": "TCP", 00:20:02.020 "adrfam": "IPv4", 
00:20:02.020 "traddr": "10.0.0.2", 00:20:02.020 "trsvcid": "4420" 00:20:02.020 }, 00:20:02.020 "peer_address": { 00:20:02.020 "trtype": "TCP", 00:20:02.020 "adrfam": "IPv4", 00:20:02.020 "traddr": "10.0.0.1", 00:20:02.020 "trsvcid": "45176" 00:20:02.020 }, 00:20:02.020 "auth": { 00:20:02.020 "state": "completed", 00:20:02.020 "digest": "sha384", 00:20:02.020 "dhgroup": "null" 00:20:02.020 } 00:20:02.020 } 00:20:02.020 ]' 00:20:02.020 00:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:02.020 00:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:02.020 00:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:02.278 00:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:02.278 00:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:02.278 00:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:02.278 00:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:02.278 00:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:02.540 00:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZmU5MjFhMDA3NjhhNjdlNGFjN2QxYWVmMDRmZjhhYjdmNzAwOGUwMjI0MmVlM2M07bZ6cg==: --dhchap-ctrl-secret DHHC-1:03:YzdiZWM3MjUwZjQ0NjQ2ZDIwMmJlZTE0NzJhYzdiZjY1NzBhYzUwMWJiNzMxOTdjY2Y4YTk2MTkxM2M4YmMwZW02nTI=: 00:20:02.540 00:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:ZmU5MjFhMDA3NjhhNjdlNGFjN2QxYWVmMDRmZjhhYjdmNzAwOGUwMjI0MmVlM2M07bZ6cg==: --dhchap-ctrl-secret DHHC-1:03:YzdiZWM3MjUwZjQ0NjQ2ZDIwMmJlZTE0NzJhYzdiZjY1NzBhYzUwMWJiNzMxOTdjY2Y4YTk2MTkxM2M4YmMwZW02nTI=: 00:20:03.474 00:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:03.474 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:03.474 00:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:03.474 00:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.474 00:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:03.474 00:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.474 00:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:03.474 00:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:03.474 00:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:03.731 00:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:20:03.731 00:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:03.731 00:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:03.731 00:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:03.731 00:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:03.731 00:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:03.731 00:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:03.731 00:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.731 00:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:03.731 00:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.731 00:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:03.732 00:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:03.732 00:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:04.295 00:20:04.295 00:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:04.295 00:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:04.295 00:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:04.295 00:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:04.295 00:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:04.296 00:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:04.296 00:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:04.552 00:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:04.552 00:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:04.552 { 00:20:04.552 "cntlid": 51, 00:20:04.552 "qid": 0, 00:20:04.552 "state": "enabled", 
00:20:04.552 "thread": "nvmf_tgt_poll_group_000", 00:20:04.552 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:04.552 "listen_address": { 00:20:04.552 "trtype": "TCP", 00:20:04.552 "adrfam": "IPv4", 00:20:04.552 "traddr": "10.0.0.2", 00:20:04.552 "trsvcid": "4420" 00:20:04.552 }, 00:20:04.552 "peer_address": { 00:20:04.552 "trtype": "TCP", 00:20:04.552 "adrfam": "IPv4", 00:20:04.552 "traddr": "10.0.0.1", 00:20:04.552 "trsvcid": "45212" 00:20:04.552 }, 00:20:04.552 "auth": { 00:20:04.552 "state": "completed", 00:20:04.552 "digest": "sha384", 00:20:04.552 "dhgroup": "null" 00:20:04.552 } 00:20:04.552 } 00:20:04.552 ]' 00:20:04.552 00:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:04.552 00:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:04.552 00:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:04.552 00:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:04.552 00:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:04.552 00:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:04.553 00:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:04.553 00:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:04.810 00:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Y2M1MTdhNjljM2UzMGVkOWM5YmE0NTE5MzZjNTdkZmU/iWjd: --dhchap-ctrl-secret DHHC-1:02:NmQzZDZhNWExMTgwOThlOTQxODY3N2Y2M2EyM2MzZGFmZWQ0YTcxMjg2MjEzNzkwNV1u7g==: 00:20:04.810 00:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:Y2M1MTdhNjljM2UzMGVkOWM5YmE0NTE5MzZjNTdkZmU/iWjd: --dhchap-ctrl-secret DHHC-1:02:NmQzZDZhNWExMTgwOThlOTQxODY3N2Y2M2EyM2MzZGFmZWQ0YTcxMjg2MjEzNzkwNV1u7g==: 00:20:05.742 00:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:05.742 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:05.742 00:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:05.742 00:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:05.742 00:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:05.742 00:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:05.742 00:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:05.742 00:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 
00:20:05.742 00:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:06.000 00:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 00:20:06.000 00:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:06.000 00:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:06.000 00:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:06.000 00:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:06.000 00:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:06.000 00:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:06.000 00:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:06.000 00:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:06.258 00:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:06.258 00:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:06.258 00:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:06.258 00:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:06.515 00:20:06.516 00:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:06.516 00:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:06.516 00:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:06.774 00:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:06.774 00:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:06.774 00:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:06.774 00:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:06.774 00:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:06.774 00:57:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:06.774 { 00:20:06.774 "cntlid": 53, 00:20:06.774 "qid": 0, 00:20:06.774 "state": "enabled", 00:20:06.774 "thread": "nvmf_tgt_poll_group_000", 00:20:06.774 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:06.774 "listen_address": { 00:20:06.774 "trtype": "TCP", 00:20:06.774 "adrfam": "IPv4", 00:20:06.774 "traddr": "10.0.0.2", 00:20:06.774 "trsvcid": "4420" 00:20:06.774 }, 00:20:06.774 "peer_address": { 00:20:06.774 "trtype": "TCP", 00:20:06.774 "adrfam": "IPv4", 00:20:06.774 "traddr": "10.0.0.1", 00:20:06.774 "trsvcid": "45230" 00:20:06.774 }, 00:20:06.774 "auth": { 00:20:06.774 "state": "completed", 00:20:06.774 "digest": "sha384", 00:20:06.774 "dhgroup": "null" 00:20:06.774 } 00:20:06.774 } 00:20:06.774 ]' 00:20:06.774 00:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:06.774 00:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:06.774 00:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:06.774 00:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:06.774 00:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:07.032 00:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:07.032 00:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:07.032 00:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:07.290 00:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NDNmNTE3YjZjZjJlYjI0NTQ4MmE3MzU1Y2E2MzAzNTA2MjAwMmQzZWY5NDZhMmVlbtmSCQ==: --dhchap-ctrl-secret DHHC-1:01:MDAwMzlhNTRhZDE3ZWE3YTNhNjJhYjc0OTUyNjlmY2YFyuVj: 00:20:07.290 00:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:NDNmNTE3YjZjZjJlYjI0NTQ4MmE3MzU1Y2E2MzAzNTA2MjAwMmQzZWY5NDZhMmVlbtmSCQ==: --dhchap-ctrl-secret DHHC-1:01:MDAwMzlhNTRhZDE3ZWE3YTNhNjJhYjc0OTUyNjlmY2YFyuVj: 00:20:08.224 00:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:08.224 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:08.224 00:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:08.224 00:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:08.224 00:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:08.224 00:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:08.224 00:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in 
"${!keys[@]}" 00:20:08.224 00:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:08.224 00:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:08.481 00:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:20:08.481 00:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:08.481 00:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:08.481 00:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:08.481 00:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:08.481 00:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:08.481 00:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:08.481 00:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:08.481 00:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:08.481 00:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:08.481 00:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:08.481 00:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:08.481 00:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:08.738 00:20:08.738 00:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:08.738 00:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:08.738 00:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:08.996 00:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:08.996 00:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:08.996 00:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:08.996 00:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:08.996 00:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:08.996 00:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:08.996 { 00:20:08.996 "cntlid": 55, 00:20:08.996 "qid": 0, 00:20:08.996 "state": "enabled", 00:20:08.996 "thread": "nvmf_tgt_poll_group_000", 00:20:08.996 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:08.996 "listen_address": { 00:20:08.996 "trtype": "TCP", 00:20:08.996 "adrfam": "IPv4", 00:20:08.996 "traddr": "10.0.0.2", 00:20:08.996 "trsvcid": "4420" 00:20:08.996 }, 00:20:08.996 "peer_address": { 00:20:08.996 "trtype": "TCP", 00:20:08.996 "adrfam": "IPv4", 00:20:08.996 "traddr": "10.0.0.1", 00:20:08.996 "trsvcid": "55544" 00:20:08.996 }, 00:20:08.996 "auth": { 00:20:08.996 "state": "completed", 00:20:08.996 "digest": "sha384", 00:20:08.996 "dhgroup": "null" 00:20:08.996 } 00:20:08.996 } 00:20:08.996 ]' 00:20:08.996 00:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:09.254 00:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:09.254 00:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:09.254 00:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:09.254 00:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:09.254 00:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:09.254 00:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:09.254 00:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:09.511 00:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MjMwMjQxNWUxMWZkMTgxZmUxMzU2Y2U5N2IyZmM3MTA3YWEyYzcwMThlODJlNTBmNGZlZGQ4ZTgxZDE5MzYyY9ngkKk=: 00:20:09.511 00:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:MjMwMjQxNWUxMWZkMTgxZmUxMzU2Y2U5N2IyZmM3MTA3YWEyYzcwMThlODJlNTBmNGZlZGQ4ZTgxZDE5MzYyY9ngkKk=: 00:20:10.444 00:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:10.444 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:10.444 00:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:10.444 00:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.444 00:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:10.444 00:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.444 00:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:10.444 00:57:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:10.444 00:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:10.444 00:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:10.702 00:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:20:10.702 00:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:10.702 00:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:10.702 00:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:10.702 00:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:10.702 00:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:10.702 00:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:10.702 00:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.702 00:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:10.702 00:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.702 00:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:10.702 00:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:10.702 00:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:11.268 00:20:11.268 00:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:11.268 00:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:11.268 00:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:11.526 00:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:11.526 00:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:11.526 00:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:20:11.526 00:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:11.526 00:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:11.526 00:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:11.526 { 00:20:11.526 "cntlid": 57, 00:20:11.526 "qid": 0, 00:20:11.526 "state": "enabled", 00:20:11.526 "thread": "nvmf_tgt_poll_group_000", 00:20:11.526 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:11.526 "listen_address": { 00:20:11.526 "trtype": "TCP", 00:20:11.526 "adrfam": "IPv4", 00:20:11.526 "traddr": "10.0.0.2", 00:20:11.526 "trsvcid": "4420" 00:20:11.526 }, 00:20:11.526 "peer_address": { 00:20:11.526 "trtype": "TCP", 00:20:11.526 "adrfam": "IPv4", 00:20:11.526 "traddr": "10.0.0.1", 00:20:11.526 "trsvcid": "55568" 00:20:11.526 }, 00:20:11.526 "auth": { 00:20:11.526 "state": "completed", 00:20:11.526 "digest": "sha384", 00:20:11.526 "dhgroup": "ffdhe2048" 00:20:11.526 } 00:20:11.526 } 00:20:11.526 ]' 00:20:11.526 00:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:11.526 00:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:11.526 00:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:11.526 00:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:11.526 00:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:11.526 00:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:11.526 00:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:11.526 00:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:11.784 00:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZmU5MjFhMDA3NjhhNjdlNGFjN2QxYWVmMDRmZjhhYjdmNzAwOGUwMjI0MmVlM2M07bZ6cg==: --dhchap-ctrl-secret DHHC-1:03:YzdiZWM3MjUwZjQ0NjQ2ZDIwMmJlZTE0NzJhYzdiZjY1NzBhYzUwMWJiNzMxOTdjY2Y4YTk2MTkxM2M4YmMwZW02nTI=: 00:20:11.784 00:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:ZmU5MjFhMDA3NjhhNjdlNGFjN2QxYWVmMDRmZjhhYjdmNzAwOGUwMjI0MmVlM2M07bZ6cg==: --dhchap-ctrl-secret DHHC-1:03:YzdiZWM3MjUwZjQ0NjQ2ZDIwMmJlZTE0NzJhYzdiZjY1NzBhYzUwMWJiNzMxOTdjY2Y4YTk2MTkxM2M4YmMwZW02nTI=: 00:20:12.828 00:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:12.828 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:12.828 00:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:12.828 00:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:20:12.828 00:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:12.828 00:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:12.828 00:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:12.828 00:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:12.828 00:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:13.085 00:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:20:13.085 00:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:13.085 00:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:13.085 00:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:13.085 00:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:13.085 00:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:13.085 00:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:13.085 00:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.085 00:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:13.085 00:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:13.085 00:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:13.085 00:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:13.085 00:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:13.648 00:20:13.648 00:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:13.648 00:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:13.648 00:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:13.905 00:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:13.905 00:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:13.905 00:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.905 00:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:13.905 00:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:13.905 00:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:13.905 { 00:20:13.905 "cntlid": 59, 00:20:13.905 "qid": 0, 00:20:13.905 "state": "enabled", 00:20:13.905 "thread": "nvmf_tgt_poll_group_000", 00:20:13.905 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:13.905 "listen_address": { 00:20:13.905 "trtype": "TCP", 00:20:13.905 "adrfam": "IPv4", 00:20:13.905 "traddr": "10.0.0.2", 00:20:13.905 "trsvcid": "4420" 00:20:13.905 }, 00:20:13.905 "peer_address": { 00:20:13.905 "trtype": "TCP", 00:20:13.905 "adrfam": "IPv4", 00:20:13.905 "traddr": "10.0.0.1", 00:20:13.905 "trsvcid": "55578" 00:20:13.905 }, 00:20:13.905 "auth": { 00:20:13.905 "state": "completed", 00:20:13.905 "digest": "sha384", 00:20:13.905 "dhgroup": "ffdhe2048" 00:20:13.905 } 00:20:13.905 } 00:20:13.905 ]' 00:20:13.905 00:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:13.905 00:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:13.905 00:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:13.905 00:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:13.905 00:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:13.905 00:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:13.905 00:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:13.905 00:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:14.162 00:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Y2M1MTdhNjljM2UzMGVkOWM5YmE0NTE5MzZjNTdkZmU/iWjd: --dhchap-ctrl-secret DHHC-1:02:NmQzZDZhNWExMTgwOThlOTQxODY3N2Y2M2EyM2MzZGFmZWQ0YTcxMjg2MjEzNzkwNV1u7g==: 00:20:14.162 00:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:Y2M1MTdhNjljM2UzMGVkOWM5YmE0NTE5MzZjNTdkZmU/iWjd: --dhchap-ctrl-secret DHHC-1:02:NmQzZDZhNWExMTgwOThlOTQxODY3N2Y2M2EyM2MzZGFmZWQ0YTcxMjg2MjEzNzkwNV1u7g==: 00:20:15.094 00:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:15.094 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:15.094 00:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:15.094 00:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:15.094 00:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:15.094 00:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.094 00:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:15.094 00:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:15.094 00:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:15.352 00:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:20:15.352 00:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:15.352 00:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:15.352 00:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:15.352 00:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:15.352 00:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:15.352 00:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:15.352 00:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:15.352 00:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:15.352 00:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.352 00:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:15.352 00:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:15.352 00:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:15.918 00:20:15.918 00:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:15.918 00:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:20:15.918 00:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:16.176 00:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:16.176 00:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:16.176 00:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:16.176 00:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:16.176 00:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:16.176 00:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:16.176 { 00:20:16.176 "cntlid": 61, 00:20:16.176 "qid": 0, 00:20:16.176 "state": "enabled", 00:20:16.176 "thread": "nvmf_tgt_poll_group_000", 00:20:16.176 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:16.176 "listen_address": { 00:20:16.176 "trtype": "TCP", 00:20:16.176 "adrfam": "IPv4", 00:20:16.176 "traddr": "10.0.0.2", 00:20:16.176 "trsvcid": "4420" 00:20:16.176 }, 00:20:16.176 "peer_address": { 00:20:16.176 "trtype": "TCP", 00:20:16.176 "adrfam": "IPv4", 00:20:16.176 "traddr": "10.0.0.1", 00:20:16.176 "trsvcid": "55602" 00:20:16.176 }, 00:20:16.176 "auth": { 00:20:16.176 "state": "completed", 00:20:16.176 "digest": "sha384", 00:20:16.176 "dhgroup": "ffdhe2048" 00:20:16.176 } 00:20:16.176 } 00:20:16.176 ]' 00:20:16.176 00:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:16.176 00:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:16.176 00:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:16.176 00:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:16.176 00:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:16.176 00:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:16.176 00:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:16.176 00:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:16.434 00:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NDNmNTE3YjZjZjJlYjI0NTQ4MmE3MzU1Y2E2MzAzNTA2MjAwMmQzZWY5NDZhMmVlbtmSCQ==: --dhchap-ctrl-secret DHHC-1:01:MDAwMzlhNTRhZDE3ZWE3YTNhNjJhYjc0OTUyNjlmY2YFyuVj: 00:20:16.434 00:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:NDNmNTE3YjZjZjJlYjI0NTQ4MmE3MzU1Y2E2MzAzNTA2MjAwMmQzZWY5NDZhMmVlbtmSCQ==: --dhchap-ctrl-secret DHHC-1:01:MDAwMzlhNTRhZDE3ZWE3YTNhNjJhYjc0OTUyNjlmY2YFyuVj: 00:20:17.807 00:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme 
disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:17.807 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:17.807 00:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:17.807 00:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:17.807 00:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:17.807 00:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:17.807 00:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:17.807 00:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:17.807 00:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:17.807 00:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:20:17.807 00:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:17.807 00:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:17.807 00:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:17.807 00:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:17.807 00:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:17.807 00:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:17.807 00:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:17.807 00:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:17.807 00:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:17.807 00:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:17.807 00:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:17.807 00:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:18.064 00:20:18.064 00:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:18.064 00:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:18.064 00:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:18.320 00:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:18.577 00:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:18.577 00:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.577 00:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:18.577 00:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:18.577 00:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:18.577 { 00:20:18.577 "cntlid": 63, 00:20:18.577 "qid": 0, 00:20:18.577 "state": "enabled", 00:20:18.577 "thread": "nvmf_tgt_poll_group_000", 00:20:18.577 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:18.577 "listen_address": { 00:20:18.577 "trtype": "TCP", 00:20:18.577 "adrfam": "IPv4", 00:20:18.577 "traddr": "10.0.0.2", 00:20:18.577 "trsvcid": "4420" 00:20:18.577 }, 00:20:18.577 "peer_address": { 00:20:18.577 "trtype": "TCP", 00:20:18.577 "adrfam": "IPv4", 00:20:18.577 "traddr": "10.0.0.1", 00:20:18.577 "trsvcid": "37176" 00:20:18.577 }, 00:20:18.577 "auth": { 00:20:18.577 "state": "completed", 00:20:18.577 "digest": "sha384", 00:20:18.577 "dhgroup": "ffdhe2048" 00:20:18.577 } 00:20:18.577 } 00:20:18.577 ]' 00:20:18.577 00:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:18.577 00:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:18.577 00:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:18.577 00:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:18.577 00:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:18.577 00:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:18.577 00:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:18.577 00:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:18.835 00:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MjMwMjQxNWUxMWZkMTgxZmUxMzU2Y2U5N2IyZmM3MTA3YWEyYzcwMThlODJlNTBmNGZlZGQ4ZTgxZDE5MzYyY9ngkKk=: 00:20:18.835 00:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:MjMwMjQxNWUxMWZkMTgxZmUxMzU2Y2U5N2IyZmM3MTA3YWEyYzcwMThlODJlNTBmNGZlZGQ4ZTgxZDE5MzYyY9ngkKk=: 00:20:19.768 00:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:20:19.768 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:19.768 00:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:19.768 00:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:19.768 00:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:19.768 00:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:19.768 00:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:19.768 00:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:19.768 00:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:19.768 00:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:20.335 00:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:20:20.335 00:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:20.335 00:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:20.335 00:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:20.335 00:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:20.335 00:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:20.335 00:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:20.335 00:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:20.335 00:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:20.335 00:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:20.335 00:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:20.335 00:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:20.335 00:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:20.594 
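For reference, the RPC sequence this test keeps repeating reduces to the minimal sketch below. It is assembled only from commands visible in the trace; the key names key0/ckey0 refer to DH-CHAP keys loaded into the target and host applications earlier in the script (not shown in this excerpt), and the target-side rpc_cmd helper is assumed to wrap the same scripts/rpc.py against the target's default RPC socket.

# Host-side NVMe driver: restrict negotiation to the digest/dhgroup under test
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock \
    bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072

# Target side: allow the host NQN on the subsystem and bind its DH-CHAP key (plus controller key)
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
    nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
    nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0

# Host side: attach a controller over TCP, authenticating with the same key pair
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock \
    bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
    -n nqn.2024-03.io.spdk:cnode0 -b nvme0 \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0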
00:20:20.594 00:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:20.594 00:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:20.594 00:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:20.852 00:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:20.852 00:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:20.852 00:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:20.852 00:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:20.852 00:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:20.852 00:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:20.852 { 00:20:20.852 "cntlid": 65, 00:20:20.852 "qid": 0, 00:20:20.852 "state": "enabled", 00:20:20.852 "thread": "nvmf_tgt_poll_group_000", 00:20:20.852 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:20.852 "listen_address": { 00:20:20.852 "trtype": "TCP", 00:20:20.852 "adrfam": "IPv4", 00:20:20.852 "traddr": "10.0.0.2", 00:20:20.852 "trsvcid": "4420" 00:20:20.852 }, 00:20:20.852 "peer_address": { 00:20:20.852 "trtype": "TCP", 00:20:20.852 "adrfam": "IPv4", 00:20:20.852 "traddr": "10.0.0.1", 00:20:20.852 "trsvcid": "37194" 00:20:20.852 }, 00:20:20.852 "auth": { 00:20:20.852 "state": "completed", 00:20:20.852 "digest": "sha384", 00:20:20.852 "dhgroup": "ffdhe3072" 00:20:20.852 } 00:20:20.852 } 00:20:20.852 ]' 00:20:20.852 00:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:20.852 00:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:20.852 00:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:20.852 00:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:20.852 00:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:20.852 00:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:20.852 00:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:20.852 00:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:21.111 00:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZmU5MjFhMDA3NjhhNjdlNGFjN2QxYWVmMDRmZjhhYjdmNzAwOGUwMjI0MmVlM2M07bZ6cg==: --dhchap-ctrl-secret DHHC-1:03:YzdiZWM3MjUwZjQ0NjQ2ZDIwMmJlZTE0NzJhYzdiZjY1NzBhYzUwMWJiNzMxOTdjY2Y4YTk2MTkxM2M4YmMwZW02nTI=: 00:20:21.111 00:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:ZmU5MjFhMDA3NjhhNjdlNGFjN2QxYWVmMDRmZjhhYjdmNzAwOGUwMjI0MmVlM2M07bZ6cg==: --dhchap-ctrl-secret DHHC-1:03:YzdiZWM3MjUwZjQ0NjQ2ZDIwMmJlZTE0NzJhYzdiZjY1NzBhYzUwMWJiNzMxOTdjY2Y4YTk2MTkxM2M4YmMwZW02nTI=: 00:20:22.047 00:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:22.047 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:22.047 00:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:22.047 00:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.047 00:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:22.047 00:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.047 00:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:22.047 00:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:22.047 00:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:22.614 00:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:20:22.614 00:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:22.614 00:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:22.614 00:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:22.614 00:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:22.614 00:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:22.614 00:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:22.614 00:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.614 00:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:22.614 00:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.614 00:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:22.614 00:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:22.614 00:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:22.876 00:20:22.876 00:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:22.876 00:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:22.876 00:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:23.134 00:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:23.134 00:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:23.134 00:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:23.134 00:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:23.134 00:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.134 00:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:23.134 { 00:20:23.134 "cntlid": 67, 00:20:23.134 "qid": 0, 00:20:23.134 "state": "enabled", 00:20:23.134 "thread": "nvmf_tgt_poll_group_000", 00:20:23.134 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:23.134 "listen_address": { 00:20:23.134 "trtype": "TCP", 00:20:23.134 "adrfam": "IPv4", 00:20:23.134 "traddr": "10.0.0.2", 00:20:23.134 "trsvcid": "4420" 00:20:23.134 }, 00:20:23.134 "peer_address": { 00:20:23.134 "trtype": "TCP", 00:20:23.134 "adrfam": "IPv4", 00:20:23.134 "traddr": "10.0.0.1", 00:20:23.134 "trsvcid": "37206" 00:20:23.134 }, 00:20:23.134 "auth": { 00:20:23.134 "state": "completed", 00:20:23.134 "digest": "sha384", 00:20:23.134 "dhgroup": "ffdhe3072" 00:20:23.134 } 00:20:23.134 } 00:20:23.134 ]' 00:20:23.134 00:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:23.134 00:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:23.134 00:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:23.134 00:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:23.134 00:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:23.134 00:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:23.134 00:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:23.134 00:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:23.391 00:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Y2M1MTdhNjljM2UzMGVkOWM5YmE0NTE5MzZjNTdkZmU/iWjd: --dhchap-ctrl-secret 
DHHC-1:02:NmQzZDZhNWExMTgwOThlOTQxODY3N2Y2M2EyM2MzZGFmZWQ0YTcxMjg2MjEzNzkwNV1u7g==: 00:20:23.391 00:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:Y2M1MTdhNjljM2UzMGVkOWM5YmE0NTE5MzZjNTdkZmU/iWjd: --dhchap-ctrl-secret DHHC-1:02:NmQzZDZhNWExMTgwOThlOTQxODY3N2Y2M2EyM2MzZGFmZWQ0YTcxMjg2MjEzNzkwNV1u7g==: 00:20:24.762 00:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:24.762 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:24.762 00:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:24.762 00:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.762 00:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:24.762 00:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.762 00:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:24.762 00:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:24.762 00:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:24.762 00:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:20:24.762 00:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:24.762 00:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:24.762 00:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:24.762 00:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:24.762 00:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:24.762 00:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:24.762 00:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.762 00:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:24.762 00:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.762 00:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:24.762 00:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:24.762 00:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:25.020 00:20:25.278 00:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:25.278 00:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:25.278 00:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:25.536 00:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:25.536 00:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:25.536 00:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.536 00:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:25.536 00:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.536 00:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:25.536 { 00:20:25.536 "cntlid": 69, 00:20:25.536 "qid": 0, 00:20:25.536 "state": "enabled", 00:20:25.536 "thread": "nvmf_tgt_poll_group_000", 00:20:25.536 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:25.536 "listen_address": { 00:20:25.536 "trtype": "TCP", 00:20:25.536 "adrfam": "IPv4", 00:20:25.536 "traddr": "10.0.0.2", 00:20:25.536 "trsvcid": "4420" 00:20:25.536 }, 00:20:25.536 "peer_address": { 00:20:25.536 "trtype": "TCP", 00:20:25.536 "adrfam": "IPv4", 00:20:25.536 "traddr": "10.0.0.1", 00:20:25.536 "trsvcid": "37240" 00:20:25.536 }, 00:20:25.536 "auth": { 00:20:25.536 "state": "completed", 00:20:25.536 "digest": "sha384", 00:20:25.536 "dhgroup": "ffdhe3072" 00:20:25.536 } 00:20:25.536 } 00:20:25.536 ]' 00:20:25.536 00:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:25.536 00:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:25.536 00:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:25.536 00:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:25.536 00:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:25.536 00:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:25.536 00:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:25.536 00:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_detach_controller nvme0 00:20:25.794 00:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NDNmNTE3YjZjZjJlYjI0NTQ4MmE3MzU1Y2E2MzAzNTA2MjAwMmQzZWY5NDZhMmVlbtmSCQ==: --dhchap-ctrl-secret DHHC-1:01:MDAwMzlhNTRhZDE3ZWE3YTNhNjJhYjc0OTUyNjlmY2YFyuVj: 00:20:25.794 00:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:NDNmNTE3YjZjZjJlYjI0NTQ4MmE3MzU1Y2E2MzAzNTA2MjAwMmQzZWY5NDZhMmVlbtmSCQ==: --dhchap-ctrl-secret DHHC-1:01:MDAwMzlhNTRhZDE3ZWE3YTNhNjJhYjc0OTUyNjlmY2YFyuVj: 00:20:26.727 00:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:26.727 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:26.727 00:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:26.727 00:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:26.727 00:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:26.727 00:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:26.727 00:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:26.727 00:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:26.727 00:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:26.985 00:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:20:26.985 00:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:26.985 00:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:26.985 00:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:26.985 00:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:26.985 00:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:26.985 00:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:26.985 00:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:26.985 00:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:26.985 00:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:26.985 00:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 
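The same key material is also exercised through the kernel host stack with nvme-cli, exactly as in the connect/disconnect pair traced above. A condensed form follows; the DHHC-1 secrets are abbreviated here (the trace shows them in full), and the target-side remove_host call is again assumed to go through scripts/rpc.py on the target's default socket.

# Connect with in-band DH-HMAC-CHAP secrets passed on the command line
nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
    -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
    --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 \
    --dhchap-secret 'DHHC-1:01:...' --dhchap-ctrl-secret 'DHHC-1:02:...'

# Drop the connection and de-authorize the host before the next digest/dhgroup pass
nvme disconnect -n nqn.2024-03.io.spdk:cnode0
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
    nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 \
    nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55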
00:20:26.985 00:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:26.985 00:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:27.552 00:20:27.552 00:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:27.552 00:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:27.552 00:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:27.810 00:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:27.810 00:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:27.810 00:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:27.810 00:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:27.810 00:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:27.810 00:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:27.810 { 00:20:27.810 "cntlid": 71, 00:20:27.810 "qid": 0, 00:20:27.810 "state": "enabled", 00:20:27.810 "thread": "nvmf_tgt_poll_group_000", 00:20:27.810 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:27.810 "listen_address": { 00:20:27.810 "trtype": "TCP", 00:20:27.810 "adrfam": "IPv4", 00:20:27.810 "traddr": "10.0.0.2", 00:20:27.810 "trsvcid": "4420" 00:20:27.810 }, 00:20:27.810 "peer_address": { 00:20:27.810 "trtype": "TCP", 00:20:27.810 "adrfam": "IPv4", 00:20:27.810 "traddr": "10.0.0.1", 00:20:27.810 "trsvcid": "59582" 00:20:27.810 }, 00:20:27.810 "auth": { 00:20:27.810 "state": "completed", 00:20:27.810 "digest": "sha384", 00:20:27.810 "dhgroup": "ffdhe3072" 00:20:27.810 } 00:20:27.810 } 00:20:27.810 ]' 00:20:27.810 00:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:27.810 00:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:27.810 00:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:27.810 00:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:27.810 00:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:27.810 00:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:27.810 00:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:27.810 00:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:28.069 00:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MjMwMjQxNWUxMWZkMTgxZmUxMzU2Y2U5N2IyZmM3MTA3YWEyYzcwMThlODJlNTBmNGZlZGQ4ZTgxZDE5MzYyY9ngkKk=: 00:20:28.069 00:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:MjMwMjQxNWUxMWZkMTgxZmUxMzU2Y2U5N2IyZmM3MTA3YWEyYzcwMThlODJlNTBmNGZlZGQ4ZTgxZDE5MzYyY9ngkKk=: 00:20:29.002 00:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:29.002 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:29.002 00:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:29.002 00:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:29.002 00:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:29.260 00:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:29.260 00:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:29.260 00:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:29.260 00:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:29.260 00:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:29.519 00:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:20:29.519 00:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:29.519 00:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:29.519 00:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:29.519 00:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:29.519 00:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:29.519 00:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:29.519 00:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:29.519 00:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:29.519 00:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
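Each pass ends with the same verification: list the attached controller on the host side, then inspect the negotiated authentication parameters of the target-side qpair. A stand-alone version of those checks, assuming the same qpairs variable the script uses (the here-string form is a simplification of the script's capture-then-jq pattern):

# Host side: confirm the bdev controller actually attached
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock \
    bdev_nvme_get_controllers | jq -r '.[].name'     # expect: nvme0

# Target side: the qpair must report the digest/dhgroup under test and state "completed"
qpairs=$(/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
    nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
jq -r '.[0].auth.digest'  <<<"$qpairs"               # expect: sha384
jq -r '.[0].auth.dhgroup' <<<"$qpairs"               # expect: ffdhe4096
jq -r '.[0].auth.state'   <<<"$qpairs"               # expect: completed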
00:20:29.519 00:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:29.519 00:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:29.519 00:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:29.777 00:20:29.777 00:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:29.777 00:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:29.777 00:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:30.035 00:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:30.035 00:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:30.035 00:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:30.035 00:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:30.035 00:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:30.035 00:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:30.035 { 00:20:30.035 "cntlid": 73, 00:20:30.035 "qid": 0, 00:20:30.035 "state": "enabled", 00:20:30.035 "thread": "nvmf_tgt_poll_group_000", 00:20:30.035 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:30.035 "listen_address": { 00:20:30.035 "trtype": "TCP", 00:20:30.035 "adrfam": "IPv4", 00:20:30.035 "traddr": "10.0.0.2", 00:20:30.035 "trsvcid": "4420" 00:20:30.035 }, 00:20:30.035 "peer_address": { 00:20:30.035 "trtype": "TCP", 00:20:30.035 "adrfam": "IPv4", 00:20:30.035 "traddr": "10.0.0.1", 00:20:30.035 "trsvcid": "59618" 00:20:30.035 }, 00:20:30.035 "auth": { 00:20:30.035 "state": "completed", 00:20:30.035 "digest": "sha384", 00:20:30.035 "dhgroup": "ffdhe4096" 00:20:30.035 } 00:20:30.035 } 00:20:30.035 ]' 00:20:30.035 00:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:30.035 00:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:30.293 00:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:30.293 00:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:30.293 00:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:30.293 00:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:30.293 
00:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:30.293 00:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:30.551 00:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZmU5MjFhMDA3NjhhNjdlNGFjN2QxYWVmMDRmZjhhYjdmNzAwOGUwMjI0MmVlM2M07bZ6cg==: --dhchap-ctrl-secret DHHC-1:03:YzdiZWM3MjUwZjQ0NjQ2ZDIwMmJlZTE0NzJhYzdiZjY1NzBhYzUwMWJiNzMxOTdjY2Y4YTk2MTkxM2M4YmMwZW02nTI=: 00:20:30.551 00:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:ZmU5MjFhMDA3NjhhNjdlNGFjN2QxYWVmMDRmZjhhYjdmNzAwOGUwMjI0MmVlM2M07bZ6cg==: --dhchap-ctrl-secret DHHC-1:03:YzdiZWM3MjUwZjQ0NjQ2ZDIwMmJlZTE0NzJhYzdiZjY1NzBhYzUwMWJiNzMxOTdjY2Y4YTk2MTkxM2M4YmMwZW02nTI=: 00:20:31.485 00:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:31.485 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:31.485 00:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:31.485 00:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:31.485 00:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:31.485 00:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:31.485 00:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:31.485 00:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:31.485 00:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:31.743 00:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:20:31.743 00:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:31.743 00:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:31.743 00:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:31.743 00:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:31.743 00:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:31.743 00:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:31.743 00:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:20:31.743 00:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:31.743 00:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:31.743 00:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:31.743 00:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:31.743 00:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:32.308 00:20:32.308 00:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:32.308 00:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:32.308 00:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:32.566 00:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:32.566 00:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:32.566 00:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:32.566 00:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:32.566 00:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:32.566 00:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:32.566 { 00:20:32.566 "cntlid": 75, 00:20:32.566 "qid": 0, 00:20:32.566 "state": "enabled", 00:20:32.566 "thread": "nvmf_tgt_poll_group_000", 00:20:32.566 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:32.566 "listen_address": { 00:20:32.566 "trtype": "TCP", 00:20:32.566 "adrfam": "IPv4", 00:20:32.566 "traddr": "10.0.0.2", 00:20:32.566 "trsvcid": "4420" 00:20:32.566 }, 00:20:32.566 "peer_address": { 00:20:32.566 "trtype": "TCP", 00:20:32.566 "adrfam": "IPv4", 00:20:32.566 "traddr": "10.0.0.1", 00:20:32.566 "trsvcid": "59644" 00:20:32.566 }, 00:20:32.566 "auth": { 00:20:32.566 "state": "completed", 00:20:32.566 "digest": "sha384", 00:20:32.566 "dhgroup": "ffdhe4096" 00:20:32.566 } 00:20:32.566 } 00:20:32.566 ]' 00:20:32.566 00:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:32.566 00:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:32.566 00:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:32.566 00:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == 
\f\f\d\h\e\4\0\9\6 ]] 00:20:32.566 00:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:32.566 00:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:32.566 00:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:32.566 00:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:32.824 00:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Y2M1MTdhNjljM2UzMGVkOWM5YmE0NTE5MzZjNTdkZmU/iWjd: --dhchap-ctrl-secret DHHC-1:02:NmQzZDZhNWExMTgwOThlOTQxODY3N2Y2M2EyM2MzZGFmZWQ0YTcxMjg2MjEzNzkwNV1u7g==: 00:20:32.824 00:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:Y2M1MTdhNjljM2UzMGVkOWM5YmE0NTE5MzZjNTdkZmU/iWjd: --dhchap-ctrl-secret DHHC-1:02:NmQzZDZhNWExMTgwOThlOTQxODY3N2Y2M2EyM2MzZGFmZWQ0YTcxMjg2MjEzNzkwNV1u7g==: 00:20:33.754 00:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:34.010 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:34.010 00:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:34.010 00:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:34.010 00:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:34.010 00:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:34.010 00:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:34.010 00:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:34.011 00:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:34.268 00:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:20:34.268 00:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:34.268 00:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:34.268 00:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:34.268 00:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:34.268 00:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:34.268 00:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:34.268 00:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:34.268 00:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:34.268 00:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:34.268 00:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:34.268 00:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:34.268 00:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:34.525 00:20:34.525 00:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:34.525 00:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:34.525 00:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:34.783 00:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:34.783 00:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:34.783 00:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:34.783 00:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:34.783 00:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:34.783 00:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:34.783 { 00:20:34.783 "cntlid": 77, 00:20:34.783 "qid": 0, 00:20:34.783 "state": "enabled", 00:20:34.783 "thread": "nvmf_tgt_poll_group_000", 00:20:34.783 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:34.783 "listen_address": { 00:20:34.783 "trtype": "TCP", 00:20:34.783 "adrfam": "IPv4", 00:20:34.783 "traddr": "10.0.0.2", 00:20:34.783 "trsvcid": "4420" 00:20:34.783 }, 00:20:34.783 "peer_address": { 00:20:34.783 "trtype": "TCP", 00:20:34.783 "adrfam": "IPv4", 00:20:34.783 "traddr": "10.0.0.1", 00:20:34.783 "trsvcid": "59668" 00:20:34.783 }, 00:20:34.783 "auth": { 00:20:34.783 "state": "completed", 00:20:34.783 "digest": "sha384", 00:20:34.783 "dhgroup": "ffdhe4096" 00:20:34.783 } 00:20:34.783 } 00:20:34.783 ]' 00:20:34.783 00:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:34.783 00:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:34.783 00:58:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:35.040 00:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:35.040 00:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:35.040 00:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:35.040 00:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:35.040 00:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:35.297 00:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NDNmNTE3YjZjZjJlYjI0NTQ4MmE3MzU1Y2E2MzAzNTA2MjAwMmQzZWY5NDZhMmVlbtmSCQ==: --dhchap-ctrl-secret DHHC-1:01:MDAwMzlhNTRhZDE3ZWE3YTNhNjJhYjc0OTUyNjlmY2YFyuVj: 00:20:35.297 00:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:NDNmNTE3YjZjZjJlYjI0NTQ4MmE3MzU1Y2E2MzAzNTA2MjAwMmQzZWY5NDZhMmVlbtmSCQ==: --dhchap-ctrl-secret DHHC-1:01:MDAwMzlhNTRhZDE3ZWE3YTNhNjJhYjc0OTUyNjlmY2YFyuVj: 00:20:36.229 00:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:36.229 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:36.229 00:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:36.229 00:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:36.229 00:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:36.229 00:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:36.229 00:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:36.229 00:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:36.229 00:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:36.486 00:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:20:36.486 00:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:36.486 00:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:36.486 00:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:36.486 00:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:36.486 00:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:36.486 00:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:36.486 00:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:36.486 00:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:36.486 00:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:36.486 00:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:36.486 00:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:36.486 00:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:37.051 00:20:37.051 00:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:37.051 00:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:37.051 00:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:37.308 00:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:37.308 00:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:37.308 00:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:37.308 00:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:37.308 00:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:37.308 00:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:37.308 { 00:20:37.308 "cntlid": 79, 00:20:37.308 "qid": 0, 00:20:37.308 "state": "enabled", 00:20:37.308 "thread": "nvmf_tgt_poll_group_000", 00:20:37.308 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:37.308 "listen_address": { 00:20:37.308 "trtype": "TCP", 00:20:37.308 "adrfam": "IPv4", 00:20:37.308 "traddr": "10.0.0.2", 00:20:37.308 "trsvcid": "4420" 00:20:37.308 }, 00:20:37.308 "peer_address": { 00:20:37.308 "trtype": "TCP", 00:20:37.308 "adrfam": "IPv4", 00:20:37.308 "traddr": "10.0.0.1", 00:20:37.308 "trsvcid": "59698" 00:20:37.308 }, 00:20:37.308 "auth": { 00:20:37.308 "state": "completed", 00:20:37.308 "digest": "sha384", 00:20:37.308 "dhgroup": "ffdhe4096" 00:20:37.308 } 00:20:37.308 } 00:20:37.308 ]' 00:20:37.308 00:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:37.308 00:58:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:37.308 00:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:37.308 00:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:37.308 00:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:37.308 00:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:37.308 00:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:37.308 00:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:37.566 00:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MjMwMjQxNWUxMWZkMTgxZmUxMzU2Y2U5N2IyZmM3MTA3YWEyYzcwMThlODJlNTBmNGZlZGQ4ZTgxZDE5MzYyY9ngkKk=: 00:20:37.566 00:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:MjMwMjQxNWUxMWZkMTgxZmUxMzU2Y2U5N2IyZmM3MTA3YWEyYzcwMThlODJlNTBmNGZlZGQ4ZTgxZDE5MzYyY9ngkKk=: 00:20:38.938 00:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:38.938 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:38.938 00:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:38.938 00:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:38.938 00:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:38.938 00:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:38.938 00:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:38.938 00:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:38.938 00:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:38.938 00:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:38.938 00:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:20:38.938 00:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:38.938 00:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:38.938 00:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:38.938 00:58:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:38.938 00:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:38.938 00:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:38.938 00:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:38.938 00:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:38.938 00:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:38.938 00:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:38.938 00:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:38.938 00:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:39.503 00:20:39.503 00:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:39.503 00:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:39.503 00:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:39.760 00:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:39.761 00:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:39.761 00:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:39.761 00:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:39.761 00:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:39.761 00:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:39.761 { 00:20:39.761 "cntlid": 81, 00:20:39.761 "qid": 0, 00:20:39.761 "state": "enabled", 00:20:39.761 "thread": "nvmf_tgt_poll_group_000", 00:20:39.761 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:39.761 "listen_address": { 00:20:39.761 "trtype": "TCP", 00:20:39.761 "adrfam": "IPv4", 00:20:39.761 "traddr": "10.0.0.2", 00:20:39.761 "trsvcid": "4420" 00:20:39.761 }, 00:20:39.761 "peer_address": { 00:20:39.761 "trtype": "TCP", 00:20:39.761 "adrfam": "IPv4", 00:20:39.761 "traddr": "10.0.0.1", 00:20:39.761 "trsvcid": "41822" 00:20:39.761 }, 00:20:39.761 "auth": { 00:20:39.761 "state": "completed", 00:20:39.761 "digest": 
"sha384", 00:20:39.761 "dhgroup": "ffdhe6144" 00:20:39.761 } 00:20:39.761 } 00:20:39.761 ]' 00:20:39.761 00:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:39.761 00:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:39.761 00:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:39.761 00:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:39.761 00:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:40.018 00:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:40.018 00:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:40.018 00:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:40.276 00:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZmU5MjFhMDA3NjhhNjdlNGFjN2QxYWVmMDRmZjhhYjdmNzAwOGUwMjI0MmVlM2M07bZ6cg==: --dhchap-ctrl-secret DHHC-1:03:YzdiZWM3MjUwZjQ0NjQ2ZDIwMmJlZTE0NzJhYzdiZjY1NzBhYzUwMWJiNzMxOTdjY2Y4YTk2MTkxM2M4YmMwZW02nTI=: 00:20:40.276 00:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:ZmU5MjFhMDA3NjhhNjdlNGFjN2QxYWVmMDRmZjhhYjdmNzAwOGUwMjI0MmVlM2M07bZ6cg==: --dhchap-ctrl-secret DHHC-1:03:YzdiZWM3MjUwZjQ0NjQ2ZDIwMmJlZTE0NzJhYzdiZjY1NzBhYzUwMWJiNzMxOTdjY2Y4YTk2MTkxM2M4YmMwZW02nTI=: 00:20:41.206 00:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:41.206 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:41.206 00:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:41.206 00:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:41.206 00:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:41.206 00:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:41.206 00:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:41.206 00:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:41.206 00:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:41.464 00:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:20:41.464 00:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:41.464 00:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:41.464 00:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:41.464 00:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:41.464 00:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:41.464 00:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:41.464 00:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:41.464 00:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:41.464 00:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:41.464 00:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:41.464 00:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:41.464 00:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:42.398 00:20:42.398 00:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:42.398 00:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:42.398 00:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:42.398 00:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:42.398 00:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:42.398 00:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:42.398 00:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:42.398 00:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:42.398 00:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:42.398 { 00:20:42.398 "cntlid": 83, 00:20:42.398 "qid": 0, 00:20:42.398 "state": "enabled", 00:20:42.398 "thread": "nvmf_tgt_poll_group_000", 00:20:42.398 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:42.398 "listen_address": { 00:20:42.398 "trtype": "TCP", 00:20:42.398 "adrfam": "IPv4", 00:20:42.398 "traddr": "10.0.0.2", 00:20:42.398 
"trsvcid": "4420" 00:20:42.398 }, 00:20:42.398 "peer_address": { 00:20:42.398 "trtype": "TCP", 00:20:42.398 "adrfam": "IPv4", 00:20:42.398 "traddr": "10.0.0.1", 00:20:42.398 "trsvcid": "41860" 00:20:42.398 }, 00:20:42.398 "auth": { 00:20:42.399 "state": "completed", 00:20:42.399 "digest": "sha384", 00:20:42.399 "dhgroup": "ffdhe6144" 00:20:42.399 } 00:20:42.399 } 00:20:42.399 ]' 00:20:42.399 00:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:42.743 00:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:42.743 00:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:42.743 00:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:42.743 00:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:42.743 00:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:42.743 00:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:42.743 00:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:43.014 00:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Y2M1MTdhNjljM2UzMGVkOWM5YmE0NTE5MzZjNTdkZmU/iWjd: --dhchap-ctrl-secret DHHC-1:02:NmQzZDZhNWExMTgwOThlOTQxODY3N2Y2M2EyM2MzZGFmZWQ0YTcxMjg2MjEzNzkwNV1u7g==: 00:20:43.014 00:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:Y2M1MTdhNjljM2UzMGVkOWM5YmE0NTE5MzZjNTdkZmU/iWjd: --dhchap-ctrl-secret DHHC-1:02:NmQzZDZhNWExMTgwOThlOTQxODY3N2Y2M2EyM2MzZGFmZWQ0YTcxMjg2MjEzNzkwNV1u7g==: 00:20:43.947 00:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:43.947 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:43.948 00:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:43.948 00:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:43.948 00:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:43.948 00:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:43.948 00:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:43.948 00:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:43.948 00:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:44.206 
00:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:20:44.207 00:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:44.207 00:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:44.207 00:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:44.207 00:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:44.207 00:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:44.207 00:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:44.207 00:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:44.207 00:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:44.207 00:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:44.207 00:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:44.207 00:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:44.207 00:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:44.783 00:20:44.783 00:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:44.783 00:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:44.783 00:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:45.041 00:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:45.041 00:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:45.041 00:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:45.041 00:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:45.041 00:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:45.041 00:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:45.041 { 00:20:45.041 "cntlid": 85, 00:20:45.041 "qid": 0, 00:20:45.041 "state": "enabled", 00:20:45.041 "thread": "nvmf_tgt_poll_group_000", 00:20:45.041 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:45.041 "listen_address": { 00:20:45.041 "trtype": "TCP", 00:20:45.041 "adrfam": "IPv4", 00:20:45.041 "traddr": "10.0.0.2", 00:20:45.041 "trsvcid": "4420" 00:20:45.041 }, 00:20:45.041 "peer_address": { 00:20:45.041 "trtype": "TCP", 00:20:45.041 "adrfam": "IPv4", 00:20:45.041 "traddr": "10.0.0.1", 00:20:45.041 "trsvcid": "41880" 00:20:45.041 }, 00:20:45.041 "auth": { 00:20:45.041 "state": "completed", 00:20:45.041 "digest": "sha384", 00:20:45.041 "dhgroup": "ffdhe6144" 00:20:45.041 } 00:20:45.041 } 00:20:45.041 ]' 00:20:45.041 00:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:45.041 00:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:45.041 00:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:45.041 00:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:45.041 00:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:45.041 00:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:45.041 00:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:45.041 00:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:45.607 00:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NDNmNTE3YjZjZjJlYjI0NTQ4MmE3MzU1Y2E2MzAzNTA2MjAwMmQzZWY5NDZhMmVlbtmSCQ==: --dhchap-ctrl-secret DHHC-1:01:MDAwMzlhNTRhZDE3ZWE3YTNhNjJhYjc0OTUyNjlmY2YFyuVj: 00:20:45.607 00:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:NDNmNTE3YjZjZjJlYjI0NTQ4MmE3MzU1Y2E2MzAzNTA2MjAwMmQzZWY5NDZhMmVlbtmSCQ==: --dhchap-ctrl-secret DHHC-1:01:MDAwMzlhNTRhZDE3ZWE3YTNhNjJhYjc0OTUyNjlmY2YFyuVj: 00:20:46.540 00:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:46.540 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:46.540 00:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:46.540 00:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:46.540 00:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:46.540 00:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:46.540 00:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:46.540 00:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:46.540 00:58:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:46.798 00:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:20:46.798 00:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:46.798 00:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:46.798 00:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:46.798 00:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:46.798 00:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:46.798 00:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:46.798 00:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:46.798 00:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:46.798 00:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:46.798 00:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:46.798 00:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:46.798 00:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:47.365 00:20:47.365 00:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:47.365 00:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:47.365 00:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:47.623 00:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:47.623 00:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:47.623 00:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:47.623 00:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:47.623 00:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:47.623 00:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:47.623 { 00:20:47.623 "cntlid": 87, 
00:20:47.623 "qid": 0, 00:20:47.623 "state": "enabled", 00:20:47.623 "thread": "nvmf_tgt_poll_group_000", 00:20:47.624 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:47.624 "listen_address": { 00:20:47.624 "trtype": "TCP", 00:20:47.624 "adrfam": "IPv4", 00:20:47.624 "traddr": "10.0.0.2", 00:20:47.624 "trsvcid": "4420" 00:20:47.624 }, 00:20:47.624 "peer_address": { 00:20:47.624 "trtype": "TCP", 00:20:47.624 "adrfam": "IPv4", 00:20:47.624 "traddr": "10.0.0.1", 00:20:47.624 "trsvcid": "41904" 00:20:47.624 }, 00:20:47.624 "auth": { 00:20:47.624 "state": "completed", 00:20:47.624 "digest": "sha384", 00:20:47.624 "dhgroup": "ffdhe6144" 00:20:47.624 } 00:20:47.624 } 00:20:47.624 ]' 00:20:47.624 00:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:47.624 00:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:47.624 00:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:47.624 00:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:47.624 00:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:47.882 00:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:47.882 00:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:47.882 00:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:48.139 00:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MjMwMjQxNWUxMWZkMTgxZmUxMzU2Y2U5N2IyZmM3MTA3YWEyYzcwMThlODJlNTBmNGZlZGQ4ZTgxZDE5MzYyY9ngkKk=: 00:20:48.140 00:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:MjMwMjQxNWUxMWZkMTgxZmUxMzU2Y2U5N2IyZmM3MTA3YWEyYzcwMThlODJlNTBmNGZlZGQ4ZTgxZDE5MzYyY9ngkKk=: 00:20:49.073 00:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:49.073 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:49.073 00:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:49.073 00:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:49.073 00:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:49.073 00:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:49.073 00:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:49.073 00:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:49.073 00:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # 
hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:49.073 00:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:49.331 00:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:20:49.331 00:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:49.331 00:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:49.331 00:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:49.331 00:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:49.331 00:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:49.331 00:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:49.331 00:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:49.331 00:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:49.331 00:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:49.331 00:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:49.331 00:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:49.331 00:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:50.265 00:20:50.265 00:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:50.265 00:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:50.265 00:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:50.523 00:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:50.523 00:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:50.523 00:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:50.523 00:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:50.523 00:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:50.523 00:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:50.523 { 00:20:50.523 "cntlid": 89, 00:20:50.523 "qid": 0, 00:20:50.523 "state": "enabled", 00:20:50.523 "thread": "nvmf_tgt_poll_group_000", 00:20:50.523 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:50.523 "listen_address": { 00:20:50.523 "trtype": "TCP", 00:20:50.523 "adrfam": "IPv4", 00:20:50.523 "traddr": "10.0.0.2", 00:20:50.523 "trsvcid": "4420" 00:20:50.523 }, 00:20:50.523 "peer_address": { 00:20:50.523 "trtype": "TCP", 00:20:50.523 "adrfam": "IPv4", 00:20:50.523 "traddr": "10.0.0.1", 00:20:50.523 "trsvcid": "52820" 00:20:50.523 }, 00:20:50.523 "auth": { 00:20:50.523 "state": "completed", 00:20:50.523 "digest": "sha384", 00:20:50.523 "dhgroup": "ffdhe8192" 00:20:50.523 } 00:20:50.523 } 00:20:50.523 ]' 00:20:50.523 00:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:50.523 00:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:50.523 00:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:50.523 00:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:50.523 00:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:50.523 00:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:50.523 00:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:50.523 00:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:50.781 00:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZmU5MjFhMDA3NjhhNjdlNGFjN2QxYWVmMDRmZjhhYjdmNzAwOGUwMjI0MmVlM2M07bZ6cg==: --dhchap-ctrl-secret DHHC-1:03:YzdiZWM3MjUwZjQ0NjQ2ZDIwMmJlZTE0NzJhYzdiZjY1NzBhYzUwMWJiNzMxOTdjY2Y4YTk2MTkxM2M4YmMwZW02nTI=: 00:20:50.781 00:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:ZmU5MjFhMDA3NjhhNjdlNGFjN2QxYWVmMDRmZjhhYjdmNzAwOGUwMjI0MmVlM2M07bZ6cg==: --dhchap-ctrl-secret DHHC-1:03:YzdiZWM3MjUwZjQ0NjQ2ZDIwMmJlZTE0NzJhYzdiZjY1NzBhYzUwMWJiNzMxOTdjY2Y4YTk2MTkxM2M4YmMwZW02nTI=: 00:20:52.154 00:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:52.154 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:52.155 00:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:52.155 00:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:52.155 00:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:52.155 00:58:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:52.155 00:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:52.155 00:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:52.155 00:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:52.155 00:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:20:52.155 00:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:52.155 00:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:52.155 00:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:52.155 00:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:52.155 00:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:52.155 00:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:52.155 00:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:52.155 00:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:52.155 00:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:52.155 00:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:52.155 00:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:52.155 00:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:53.085 00:20:53.085 00:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:53.085 00:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:53.085 00:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:53.342 00:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:53.342 00:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:20:53.342 00:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:53.342 00:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:53.600 00:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:53.600 00:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:53.600 { 00:20:53.600 "cntlid": 91, 00:20:53.600 "qid": 0, 00:20:53.600 "state": "enabled", 00:20:53.600 "thread": "nvmf_tgt_poll_group_000", 00:20:53.600 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:53.600 "listen_address": { 00:20:53.600 "trtype": "TCP", 00:20:53.600 "adrfam": "IPv4", 00:20:53.600 "traddr": "10.0.0.2", 00:20:53.600 "trsvcid": "4420" 00:20:53.600 }, 00:20:53.600 "peer_address": { 00:20:53.600 "trtype": "TCP", 00:20:53.600 "adrfam": "IPv4", 00:20:53.600 "traddr": "10.0.0.1", 00:20:53.600 "trsvcid": "52842" 00:20:53.600 }, 00:20:53.600 "auth": { 00:20:53.600 "state": "completed", 00:20:53.600 "digest": "sha384", 00:20:53.600 "dhgroup": "ffdhe8192" 00:20:53.600 } 00:20:53.600 } 00:20:53.600 ]' 00:20:53.600 00:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:53.600 00:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:53.600 00:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:53.600 00:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:53.600 00:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:53.600 00:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:53.600 00:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:53.600 00:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:53.859 00:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Y2M1MTdhNjljM2UzMGVkOWM5YmE0NTE5MzZjNTdkZmU/iWjd: --dhchap-ctrl-secret DHHC-1:02:NmQzZDZhNWExMTgwOThlOTQxODY3N2Y2M2EyM2MzZGFmZWQ0YTcxMjg2MjEzNzkwNV1u7g==: 00:20:53.859 00:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:Y2M1MTdhNjljM2UzMGVkOWM5YmE0NTE5MzZjNTdkZmU/iWjd: --dhchap-ctrl-secret DHHC-1:02:NmQzZDZhNWExMTgwOThlOTQxODY3N2Y2M2EyM2MzZGFmZWQ0YTcxMjg2MjEzNzkwNV1u7g==: 00:20:54.792 00:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:54.792 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:54.792 00:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:54.792 00:58:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:54.792 00:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:54.792 00:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:54.792 00:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:54.792 00:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:54.792 00:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:55.050 00:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:20:55.050 00:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:55.050 00:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:55.050 00:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:55.050 00:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:55.050 00:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:55.050 00:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:55.050 00:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:55.050 00:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:55.050 00:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:55.050 00:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:55.050 00:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:55.050 00:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:55.986 00:20:55.986 00:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:55.986 00:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:55.986 00:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:56.551 00:58:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:56.551 00:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:56.551 00:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:56.551 00:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:56.551 00:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:56.551 00:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:56.551 { 00:20:56.551 "cntlid": 93, 00:20:56.552 "qid": 0, 00:20:56.552 "state": "enabled", 00:20:56.552 "thread": "nvmf_tgt_poll_group_000", 00:20:56.552 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:56.552 "listen_address": { 00:20:56.552 "trtype": "TCP", 00:20:56.552 "adrfam": "IPv4", 00:20:56.552 "traddr": "10.0.0.2", 00:20:56.552 "trsvcid": "4420" 00:20:56.552 }, 00:20:56.552 "peer_address": { 00:20:56.552 "trtype": "TCP", 00:20:56.552 "adrfam": "IPv4", 00:20:56.552 "traddr": "10.0.0.1", 00:20:56.552 "trsvcid": "52876" 00:20:56.552 }, 00:20:56.552 "auth": { 00:20:56.552 "state": "completed", 00:20:56.552 "digest": "sha384", 00:20:56.552 "dhgroup": "ffdhe8192" 00:20:56.552 } 00:20:56.552 } 00:20:56.552 ]' 00:20:56.552 00:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:56.552 00:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:56.552 00:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:56.552 00:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:56.552 00:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:56.552 00:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:56.552 00:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:56.552 00:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:56.810 00:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NDNmNTE3YjZjZjJlYjI0NTQ4MmE3MzU1Y2E2MzAzNTA2MjAwMmQzZWY5NDZhMmVlbtmSCQ==: --dhchap-ctrl-secret DHHC-1:01:MDAwMzlhNTRhZDE3ZWE3YTNhNjJhYjc0OTUyNjlmY2YFyuVj: 00:20:56.810 00:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:NDNmNTE3YjZjZjJlYjI0NTQ4MmE3MzU1Y2E2MzAzNTA2MjAwMmQzZWY5NDZhMmVlbtmSCQ==: --dhchap-ctrl-secret DHHC-1:01:MDAwMzlhNTRhZDE3ZWE3YTNhNjJhYjc0OTUyNjlmY2YFyuVj: 00:20:57.744 00:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:57.744 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:57.744 00:58:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:57.744 00:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:57.744 00:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:57.744 00:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:57.744 00:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:57.744 00:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:57.745 00:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:58.002 00:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:20:58.002 00:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:58.002 00:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:58.002 00:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:58.002 00:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:58.002 00:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:58.002 00:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:58.002 00:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:58.002 00:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:58.002 00:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:58.002 00:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:58.002 00:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:58.003 00:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:58.938 00:20:58.938 00:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:58.938 00:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:58.938 00:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:59.196 00:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:59.196 00:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:59.196 00:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:59.196 00:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:59.455 00:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:59.455 00:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:59.455 { 00:20:59.455 "cntlid": 95, 00:20:59.455 "qid": 0, 00:20:59.455 "state": "enabled", 00:20:59.455 "thread": "nvmf_tgt_poll_group_000", 00:20:59.455 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:59.455 "listen_address": { 00:20:59.455 "trtype": "TCP", 00:20:59.455 "adrfam": "IPv4", 00:20:59.455 "traddr": "10.0.0.2", 00:20:59.455 "trsvcid": "4420" 00:20:59.455 }, 00:20:59.455 "peer_address": { 00:20:59.455 "trtype": "TCP", 00:20:59.455 "adrfam": "IPv4", 00:20:59.455 "traddr": "10.0.0.1", 00:20:59.455 "trsvcid": "48500" 00:20:59.455 }, 00:20:59.455 "auth": { 00:20:59.455 "state": "completed", 00:20:59.455 "digest": "sha384", 00:20:59.455 "dhgroup": "ffdhe8192" 00:20:59.455 } 00:20:59.455 } 00:20:59.455 ]' 00:20:59.455 00:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:59.455 00:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:59.455 00:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:59.455 00:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:59.455 00:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:59.455 00:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:59.455 00:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:59.455 00:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:59.713 00:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MjMwMjQxNWUxMWZkMTgxZmUxMzU2Y2U5N2IyZmM3MTA3YWEyYzcwMThlODJlNTBmNGZlZGQ4ZTgxZDE5MzYyY9ngkKk=: 00:20:59.713 00:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:MjMwMjQxNWUxMWZkMTgxZmUxMzU2Y2U5N2IyZmM3MTA3YWEyYzcwMThlODJlNTBmNGZlZGQ4ZTgxZDE5MzYyY9ngkKk=: 00:21:00.642 00:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:00.642 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:00.642 00:58:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:00.642 00:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:00.642 00:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:00.642 00:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:00.642 00:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:21:00.642 00:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:00.642 00:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:00.642 00:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:00.642 00:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:01.206 00:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:21:01.206 00:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:01.206 00:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:01.206 00:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:21:01.206 00:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:01.206 00:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:01.206 00:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:01.206 00:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:01.206 00:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:01.206 00:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:01.206 00:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:01.206 00:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:01.206 00:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:01.465 00:21:01.465 
00:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:01.465 00:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:01.465 00:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:01.723 00:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:01.723 00:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:01.723 00:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:01.723 00:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:01.723 00:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:01.723 00:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:01.723 { 00:21:01.723 "cntlid": 97, 00:21:01.723 "qid": 0, 00:21:01.723 "state": "enabled", 00:21:01.723 "thread": "nvmf_tgt_poll_group_000", 00:21:01.723 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:01.723 "listen_address": { 00:21:01.723 "trtype": "TCP", 00:21:01.723 "adrfam": "IPv4", 00:21:01.723 "traddr": "10.0.0.2", 00:21:01.723 "trsvcid": "4420" 00:21:01.723 }, 00:21:01.723 "peer_address": { 00:21:01.723 "trtype": "TCP", 00:21:01.723 "adrfam": "IPv4", 00:21:01.723 "traddr": "10.0.0.1", 00:21:01.723 "trsvcid": "48530" 00:21:01.723 }, 00:21:01.723 "auth": { 00:21:01.723 "state": "completed", 00:21:01.723 "digest": "sha512", 00:21:01.723 "dhgroup": "null" 00:21:01.723 } 00:21:01.723 } 00:21:01.723 ]' 00:21:01.723 00:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:01.723 00:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:01.723 00:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:01.723 00:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:01.723 00:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:01.723 00:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:01.723 00:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:01.723 00:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:01.982 00:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZmU5MjFhMDA3NjhhNjdlNGFjN2QxYWVmMDRmZjhhYjdmNzAwOGUwMjI0MmVlM2M07bZ6cg==: --dhchap-ctrl-secret DHHC-1:03:YzdiZWM3MjUwZjQ0NjQ2ZDIwMmJlZTE0NzJhYzdiZjY1NzBhYzUwMWJiNzMxOTdjY2Y4YTk2MTkxM2M4YmMwZW02nTI=: 00:21:01.982 00:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 
5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:ZmU5MjFhMDA3NjhhNjdlNGFjN2QxYWVmMDRmZjhhYjdmNzAwOGUwMjI0MmVlM2M07bZ6cg==: --dhchap-ctrl-secret DHHC-1:03:YzdiZWM3MjUwZjQ0NjQ2ZDIwMmJlZTE0NzJhYzdiZjY1NzBhYzUwMWJiNzMxOTdjY2Y4YTk2MTkxM2M4YmMwZW02nTI=: 00:21:02.913 00:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:02.913 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:02.913 00:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:02.913 00:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:02.913 00:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:02.913 00:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:02.913 00:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:02.913 00:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:02.913 00:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:03.476 00:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:21:03.476 00:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:03.476 00:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:03.476 00:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:21:03.476 00:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:03.476 00:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:03.476 00:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:03.476 00:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:03.476 00:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:03.476 00:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:03.476 00:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:03.476 00:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:03.476 00:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:03.734 00:21:03.734 00:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:03.734 00:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:03.734 00:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:03.992 00:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:03.992 00:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:03.992 00:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:03.992 00:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:03.992 00:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:03.992 00:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:03.992 { 00:21:03.992 "cntlid": 99, 00:21:03.992 "qid": 0, 00:21:03.992 "state": "enabled", 00:21:03.992 "thread": "nvmf_tgt_poll_group_000", 00:21:03.992 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:03.992 "listen_address": { 00:21:03.992 "trtype": "TCP", 00:21:03.992 "adrfam": "IPv4", 00:21:03.992 "traddr": "10.0.0.2", 00:21:03.992 "trsvcid": "4420" 00:21:03.992 }, 00:21:03.992 "peer_address": { 00:21:03.992 "trtype": "TCP", 00:21:03.992 "adrfam": "IPv4", 00:21:03.992 "traddr": "10.0.0.1", 00:21:03.992 "trsvcid": "48550" 00:21:03.992 }, 00:21:03.992 "auth": { 00:21:03.992 "state": "completed", 00:21:03.992 "digest": "sha512", 00:21:03.992 "dhgroup": "null" 00:21:03.992 } 00:21:03.992 } 00:21:03.992 ]' 00:21:03.992 00:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:03.992 00:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:03.992 00:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:03.992 00:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:03.992 00:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:03.992 00:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:03.992 00:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:03.992 00:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:04.251 00:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Y2M1MTdhNjljM2UzMGVkOWM5YmE0NTE5MzZjNTdkZmU/iWjd: --dhchap-ctrl-secret DHHC-1:02:NmQzZDZhNWExMTgwOThlOTQxODY3N2Y2M2EyM2MzZGFmZWQ0YTcxMjg2MjEzNzkwNV1u7g==: 00:21:04.251 00:58:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:Y2M1MTdhNjljM2UzMGVkOWM5YmE0NTE5MzZjNTdkZmU/iWjd: --dhchap-ctrl-secret DHHC-1:02:NmQzZDZhNWExMTgwOThlOTQxODY3N2Y2M2EyM2MzZGFmZWQ0YTcxMjg2MjEzNzkwNV1u7g==: 00:21:05.624 00:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:05.624 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:05.624 00:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:05.624 00:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.624 00:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:05.624 00:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.624 00:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:05.624 00:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:05.624 00:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:05.624 00:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 00:21:05.624 00:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:05.624 00:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:05.624 00:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:21:05.624 00:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:05.624 00:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:05.624 00:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:05.624 00:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.624 00:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:05.624 00:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.624 00:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:05.624 00:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
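[Annotation] Each connect_authenticate pass recorded here follows the same host/target RPC sequence. Below is a minimal sketch of the setup half of the pass in progress at this point (digest sha512, dhgroup null, key2), using only the wrapper names, NQNs and key names visible in the surrounding entries: hostrpc is the suite's rpc.py wrapper for the host app on /var/tmp/host.sock, rpc_cmd drives the target app, and key2/ckey2 are suite-generated test keys, not values to reuse.

    # Restrict the host to the digest/dhgroup combination under test.
    hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null
    # Register the host NQN on the subsystem and bind the DH-CHAP key (plus the controller
    # key, so authentication is bidirectional) that this pass will use.
    rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
        --dhchap-key key2 --dhchap-ctrlr-key ckey2
    # Attach a controller from the host side with the matching keys; DH-CHAP runs during
    # this connect, so a failure here fails the pass.
    hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
        -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2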
00:21:05.624 00:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:06.190 00:21:06.190 00:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:06.190 00:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:06.190 00:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:06.448 00:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:06.448 00:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:06.448 00:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.448 00:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:06.448 00:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.448 00:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:06.448 { 00:21:06.448 "cntlid": 101, 00:21:06.448 "qid": 0, 00:21:06.448 "state": "enabled", 00:21:06.448 "thread": "nvmf_tgt_poll_group_000", 00:21:06.448 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:06.448 "listen_address": { 00:21:06.448 "trtype": "TCP", 00:21:06.448 "adrfam": "IPv4", 00:21:06.448 "traddr": "10.0.0.2", 00:21:06.448 "trsvcid": "4420" 00:21:06.448 }, 00:21:06.448 "peer_address": { 00:21:06.448 "trtype": "TCP", 00:21:06.448 "adrfam": "IPv4", 00:21:06.448 "traddr": "10.0.0.1", 00:21:06.448 "trsvcid": "48576" 00:21:06.448 }, 00:21:06.448 "auth": { 00:21:06.448 "state": "completed", 00:21:06.448 "digest": "sha512", 00:21:06.448 "dhgroup": "null" 00:21:06.448 } 00:21:06.448 } 00:21:06.448 ]' 00:21:06.448 00:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:06.448 00:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:06.448 00:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:06.448 00:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:06.448 00:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:06.448 00:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:06.448 00:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:06.448 00:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:06.705 00:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:02:NDNmNTE3YjZjZjJlYjI0NTQ4MmE3MzU1Y2E2MzAzNTA2MjAwMmQzZWY5NDZhMmVlbtmSCQ==: --dhchap-ctrl-secret DHHC-1:01:MDAwMzlhNTRhZDE3ZWE3YTNhNjJhYjc0OTUyNjlmY2YFyuVj: 00:21:06.705 00:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:NDNmNTE3YjZjZjJlYjI0NTQ4MmE3MzU1Y2E2MzAzNTA2MjAwMmQzZWY5NDZhMmVlbtmSCQ==: --dhchap-ctrl-secret DHHC-1:01:MDAwMzlhNTRhZDE3ZWE3YTNhNjJhYjc0OTUyNjlmY2YFyuVj: 00:21:07.639 00:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:07.639 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:07.639 00:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:07.639 00:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:07.639 00:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:07.639 00:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:07.639 00:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:07.639 00:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:07.639 00:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:08.203 00:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:21:08.203 00:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:08.203 00:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:08.203 00:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:21:08.203 00:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:08.203 00:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:08.203 00:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:08.203 00:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:08.203 00:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:08.203 00:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:08.203 00:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:08.203 00:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:08.203 00:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:08.460 00:21:08.460 00:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:08.460 00:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:08.460 00:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:08.717 00:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:08.717 00:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:08.717 00:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:08.717 00:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:08.717 00:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:08.717 00:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:08.717 { 00:21:08.717 "cntlid": 103, 00:21:08.717 "qid": 0, 00:21:08.717 "state": "enabled", 00:21:08.717 "thread": "nvmf_tgt_poll_group_000", 00:21:08.717 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:08.717 "listen_address": { 00:21:08.717 "trtype": "TCP", 00:21:08.717 "adrfam": "IPv4", 00:21:08.717 "traddr": "10.0.0.2", 00:21:08.717 "trsvcid": "4420" 00:21:08.717 }, 00:21:08.717 "peer_address": { 00:21:08.717 "trtype": "TCP", 00:21:08.717 "adrfam": "IPv4", 00:21:08.717 "traddr": "10.0.0.1", 00:21:08.717 "trsvcid": "40934" 00:21:08.717 }, 00:21:08.717 "auth": { 00:21:08.717 "state": "completed", 00:21:08.717 "digest": "sha512", 00:21:08.717 "dhgroup": "null" 00:21:08.717 } 00:21:08.717 } 00:21:08.717 ]' 00:21:08.717 00:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:08.717 00:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:08.717 00:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:08.717 00:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:08.717 00:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:08.975 00:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:08.975 00:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:08.975 00:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:09.232 00:58:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MjMwMjQxNWUxMWZkMTgxZmUxMzU2Y2U5N2IyZmM3MTA3YWEyYzcwMThlODJlNTBmNGZlZGQ4ZTgxZDE5MzYyY9ngkKk=: 00:21:09.232 00:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:MjMwMjQxNWUxMWZkMTgxZmUxMzU2Y2U5N2IyZmM3MTA3YWEyYzcwMThlODJlNTBmNGZlZGQ4ZTgxZDE5MzYyY9ngkKk=: 00:21:10.162 00:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:10.162 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:10.162 00:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:10.162 00:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:10.162 00:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:10.162 00:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:10.162 00:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:10.162 00:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:10.162 00:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:10.162 00:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:10.419 00:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:21:10.419 00:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:10.419 00:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:10.419 00:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:10.419 00:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:10.419 00:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:10.419 00:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:10.419 00:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:10.420 00:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:10.420 00:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:10.420 00:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 
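[Annotation] Once the controller attaches, the script verifies the authentication from both ends before tearing down, which is the jq-heavy portion of the entries that follow. A sketch of that check, with $digest and $dhgroup standing in for the combination of the current pass (sha512 and ffdhe2048 at this point in the log); the jq paths and expected values are the ones the log compares against.

    # The host must report exactly the controller it just created...
    [[ $(hostrpc bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
    # ...and the target must show the qid-0 qpair as authenticated with the expected parameters.
    qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == "$digest" ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == "$dhgroup" ]]
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]
    # Detach so the next key/dhgroup/digest combination starts from a clean slate.
    hostrpc bdev_nvme_detach_controller nvme0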
00:21:10.420 00:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:10.420 00:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:10.677 00:21:10.677 00:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:10.677 00:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:10.677 00:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:10.934 00:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:10.934 00:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:10.934 00:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:10.934 00:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:10.934 00:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:10.934 00:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:10.934 { 00:21:10.934 "cntlid": 105, 00:21:10.934 "qid": 0, 00:21:10.934 "state": "enabled", 00:21:10.934 "thread": "nvmf_tgt_poll_group_000", 00:21:10.934 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:10.934 "listen_address": { 00:21:10.934 "trtype": "TCP", 00:21:10.934 "adrfam": "IPv4", 00:21:10.934 "traddr": "10.0.0.2", 00:21:10.934 "trsvcid": "4420" 00:21:10.934 }, 00:21:10.934 "peer_address": { 00:21:10.934 "trtype": "TCP", 00:21:10.934 "adrfam": "IPv4", 00:21:10.934 "traddr": "10.0.0.1", 00:21:10.934 "trsvcid": "40952" 00:21:10.934 }, 00:21:10.934 "auth": { 00:21:10.934 "state": "completed", 00:21:10.934 "digest": "sha512", 00:21:10.934 "dhgroup": "ffdhe2048" 00:21:10.934 } 00:21:10.934 } 00:21:10.934 ]' 00:21:10.934 00:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:11.192 00:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:11.192 00:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:11.192 00:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:11.192 00:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:11.192 00:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:11.192 00:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:11.192 00:58:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:11.450 00:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZmU5MjFhMDA3NjhhNjdlNGFjN2QxYWVmMDRmZjhhYjdmNzAwOGUwMjI0MmVlM2M07bZ6cg==: --dhchap-ctrl-secret DHHC-1:03:YzdiZWM3MjUwZjQ0NjQ2ZDIwMmJlZTE0NzJhYzdiZjY1NzBhYzUwMWJiNzMxOTdjY2Y4YTk2MTkxM2M4YmMwZW02nTI=: 00:21:11.450 00:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:ZmU5MjFhMDA3NjhhNjdlNGFjN2QxYWVmMDRmZjhhYjdmNzAwOGUwMjI0MmVlM2M07bZ6cg==: --dhchap-ctrl-secret DHHC-1:03:YzdiZWM3MjUwZjQ0NjQ2ZDIwMmJlZTE0NzJhYzdiZjY1NzBhYzUwMWJiNzMxOTdjY2Y4YTk2MTkxM2M4YmMwZW02nTI=: 00:21:12.383 00:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:12.383 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:12.383 00:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:12.383 00:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:12.383 00:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:12.383 00:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:12.383 00:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:12.383 00:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:12.383 00:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:12.640 00:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:21:12.640 00:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:12.640 00:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:12.640 00:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:12.640 00:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:12.640 00:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:12.640 00:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:12.640 00:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:12.640 00:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:21:12.640 00:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:12.640 00:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:12.640 00:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:12.640 00:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:13.204 00:21:13.204 00:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:13.204 00:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:13.204 00:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:13.491 00:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:13.491 00:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:13.491 00:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:13.491 00:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:13.491 00:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:13.491 00:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:13.491 { 00:21:13.491 "cntlid": 107, 00:21:13.491 "qid": 0, 00:21:13.491 "state": "enabled", 00:21:13.491 "thread": "nvmf_tgt_poll_group_000", 00:21:13.491 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:13.491 "listen_address": { 00:21:13.491 "trtype": "TCP", 00:21:13.491 "adrfam": "IPv4", 00:21:13.491 "traddr": "10.0.0.2", 00:21:13.491 "trsvcid": "4420" 00:21:13.491 }, 00:21:13.491 "peer_address": { 00:21:13.491 "trtype": "TCP", 00:21:13.491 "adrfam": "IPv4", 00:21:13.491 "traddr": "10.0.0.1", 00:21:13.491 "trsvcid": "40966" 00:21:13.491 }, 00:21:13.491 "auth": { 00:21:13.491 "state": "completed", 00:21:13.491 "digest": "sha512", 00:21:13.491 "dhgroup": "ffdhe2048" 00:21:13.491 } 00:21:13.491 } 00:21:13.491 ]' 00:21:13.491 00:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:13.491 00:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:13.491 00:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:13.491 00:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:13.491 00:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r 
'.[0].auth.state' 00:21:13.491 00:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:13.491 00:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:13.491 00:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:13.773 00:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Y2M1MTdhNjljM2UzMGVkOWM5YmE0NTE5MzZjNTdkZmU/iWjd: --dhchap-ctrl-secret DHHC-1:02:NmQzZDZhNWExMTgwOThlOTQxODY3N2Y2M2EyM2MzZGFmZWQ0YTcxMjg2MjEzNzkwNV1u7g==: 00:21:13.773 00:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:Y2M1MTdhNjljM2UzMGVkOWM5YmE0NTE5MzZjNTdkZmU/iWjd: --dhchap-ctrl-secret DHHC-1:02:NmQzZDZhNWExMTgwOThlOTQxODY3N2Y2M2EyM2MzZGFmZWQ0YTcxMjg2MjEzNzkwNV1u7g==: 00:21:14.706 00:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:14.706 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:14.706 00:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:14.706 00:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:14.706 00:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:14.706 00:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:14.706 00:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:14.706 00:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:14.706 00:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:14.963 00:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:21:14.963 00:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:14.963 00:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:14.963 00:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:14.963 00:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:15.220 00:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:15.220 00:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
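[Annotation] Each pass also exercises the kernel initiator: after the bdev-level checks, the same key material is handed to nvme-cli as DHHC-1 secrets, and the host entry is removed afterwards so the next iteration can re-add it with different keys. A sketch of that leg, with "$key" and "$ckey" standing in for the DHHC-1:xx:... strings printed in the log (suite-generated test secrets); all flags are the ones the log shows.

    # Connect through the kernel NVMe/TCP initiator, authenticating with the same secret pair.
    nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -l 0 \
        -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
        --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 \
        --dhchap-secret "$key" --dhchap-ctrl-secret "$ckey"
    # Drop the connection and unregister the host before the next digest/dhgroup/key pass.
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0
    rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55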
00:21:15.220 00:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:15.220 00:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:15.220 00:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:15.220 00:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:15.220 00:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:15.220 00:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:15.477 00:21:15.478 00:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:15.478 00:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:15.478 00:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:15.735 00:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:15.735 00:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:15.736 00:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:15.736 00:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:15.736 00:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:15.736 00:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:15.736 { 00:21:15.736 "cntlid": 109, 00:21:15.736 "qid": 0, 00:21:15.736 "state": "enabled", 00:21:15.736 "thread": "nvmf_tgt_poll_group_000", 00:21:15.736 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:15.736 "listen_address": { 00:21:15.736 "trtype": "TCP", 00:21:15.736 "adrfam": "IPv4", 00:21:15.736 "traddr": "10.0.0.2", 00:21:15.736 "trsvcid": "4420" 00:21:15.736 }, 00:21:15.736 "peer_address": { 00:21:15.736 "trtype": "TCP", 00:21:15.736 "adrfam": "IPv4", 00:21:15.736 "traddr": "10.0.0.1", 00:21:15.736 "trsvcid": "40982" 00:21:15.736 }, 00:21:15.736 "auth": { 00:21:15.736 "state": "completed", 00:21:15.736 "digest": "sha512", 00:21:15.736 "dhgroup": "ffdhe2048" 00:21:15.736 } 00:21:15.736 } 00:21:15.736 ]' 00:21:15.736 00:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:15.736 00:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:15.736 00:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:15.736 00:58:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:15.736 00:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:15.993 00:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:15.993 00:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:15.993 00:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:16.251 00:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NDNmNTE3YjZjZjJlYjI0NTQ4MmE3MzU1Y2E2MzAzNTA2MjAwMmQzZWY5NDZhMmVlbtmSCQ==: --dhchap-ctrl-secret DHHC-1:01:MDAwMzlhNTRhZDE3ZWE3YTNhNjJhYjc0OTUyNjlmY2YFyuVj: 00:21:16.251 00:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:NDNmNTE3YjZjZjJlYjI0NTQ4MmE3MzU1Y2E2MzAzNTA2MjAwMmQzZWY5NDZhMmVlbtmSCQ==: --dhchap-ctrl-secret DHHC-1:01:MDAwMzlhNTRhZDE3ZWE3YTNhNjJhYjc0OTUyNjlmY2YFyuVj: 00:21:17.184 00:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:17.184 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:17.184 00:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:17.184 00:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.184 00:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:17.184 00:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:17.184 00:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:17.184 00:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:17.184 00:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:17.443 00:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:21:17.443 00:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:17.443 00:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:17.443 00:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:17.443 00:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:17.443 00:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:17.443 00:58:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:17.443 00:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.443 00:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:17.443 00:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:17.443 00:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:17.443 00:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:17.443 00:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:17.701 00:21:17.701 00:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:17.701 00:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:17.701 00:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:18.266 00:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:18.267 00:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:18.267 00:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:18.267 00:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:18.267 00:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:18.267 00:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:18.267 { 00:21:18.267 "cntlid": 111, 00:21:18.267 "qid": 0, 00:21:18.267 "state": "enabled", 00:21:18.267 "thread": "nvmf_tgt_poll_group_000", 00:21:18.267 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:18.267 "listen_address": { 00:21:18.267 "trtype": "TCP", 00:21:18.267 "adrfam": "IPv4", 00:21:18.267 "traddr": "10.0.0.2", 00:21:18.267 "trsvcid": "4420" 00:21:18.267 }, 00:21:18.267 "peer_address": { 00:21:18.267 "trtype": "TCP", 00:21:18.267 "adrfam": "IPv4", 00:21:18.267 "traddr": "10.0.0.1", 00:21:18.267 "trsvcid": "58646" 00:21:18.267 }, 00:21:18.267 "auth": { 00:21:18.267 "state": "completed", 00:21:18.267 "digest": "sha512", 00:21:18.267 "dhgroup": "ffdhe2048" 00:21:18.267 } 00:21:18.267 } 00:21:18.267 ]' 00:21:18.267 00:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:18.267 00:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:18.267 
00:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:18.267 00:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:18.267 00:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:18.267 00:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:18.267 00:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:18.267 00:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:18.525 00:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MjMwMjQxNWUxMWZkMTgxZmUxMzU2Y2U5N2IyZmM3MTA3YWEyYzcwMThlODJlNTBmNGZlZGQ4ZTgxZDE5MzYyY9ngkKk=: 00:21:18.525 00:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:MjMwMjQxNWUxMWZkMTgxZmUxMzU2Y2U5N2IyZmM3MTA3YWEyYzcwMThlODJlNTBmNGZlZGQ4ZTgxZDE5MzYyY9ngkKk=: 00:21:19.456 00:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:19.456 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:19.456 00:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:19.456 00:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:19.456 00:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:19.456 00:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:19.456 00:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:19.456 00:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:19.456 00:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:19.456 00:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:19.716 00:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:21:19.716 00:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:19.716 00:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:19.716 00:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:19.716 00:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:19.716 00:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:19.716 00:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:19.716 00:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:19.716 00:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:19.716 00:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:19.716 00:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:19.716 00:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:19.716 00:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:20.282 00:21:20.282 00:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:20.282 00:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:20.282 00:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:20.540 00:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:20.540 00:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:20.540 00:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:20.540 00:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:20.540 00:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:20.540 00:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:20.540 { 00:21:20.540 "cntlid": 113, 00:21:20.540 "qid": 0, 00:21:20.541 "state": "enabled", 00:21:20.541 "thread": "nvmf_tgt_poll_group_000", 00:21:20.541 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:20.541 "listen_address": { 00:21:20.541 "trtype": "TCP", 00:21:20.541 "adrfam": "IPv4", 00:21:20.541 "traddr": "10.0.0.2", 00:21:20.541 "trsvcid": "4420" 00:21:20.541 }, 00:21:20.541 "peer_address": { 00:21:20.541 "trtype": "TCP", 00:21:20.541 "adrfam": "IPv4", 00:21:20.541 "traddr": "10.0.0.1", 00:21:20.541 "trsvcid": "58670" 00:21:20.541 }, 00:21:20.541 "auth": { 00:21:20.541 "state": "completed", 00:21:20.541 "digest": "sha512", 00:21:20.541 "dhgroup": "ffdhe3072" 00:21:20.541 } 00:21:20.541 } 00:21:20.541 ]' 00:21:20.541 00:58:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:20.541 00:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:20.541 00:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:20.541 00:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:20.541 00:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:20.541 00:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:20.541 00:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:20.541 00:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:20.798 00:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZmU5MjFhMDA3NjhhNjdlNGFjN2QxYWVmMDRmZjhhYjdmNzAwOGUwMjI0MmVlM2M07bZ6cg==: --dhchap-ctrl-secret DHHC-1:03:YzdiZWM3MjUwZjQ0NjQ2ZDIwMmJlZTE0NzJhYzdiZjY1NzBhYzUwMWJiNzMxOTdjY2Y4YTk2MTkxM2M4YmMwZW02nTI=: 00:21:20.798 00:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:ZmU5MjFhMDA3NjhhNjdlNGFjN2QxYWVmMDRmZjhhYjdmNzAwOGUwMjI0MmVlM2M07bZ6cg==: --dhchap-ctrl-secret DHHC-1:03:YzdiZWM3MjUwZjQ0NjQ2ZDIwMmJlZTE0NzJhYzdiZjY1NzBhYzUwMWJiNzMxOTdjY2Y4YTk2MTkxM2M4YmMwZW02nTI=: 00:21:21.732 00:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:21.732 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:21.732 00:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:21.732 00:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:21.732 00:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:21.732 00:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:21.732 00:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:21.732 00:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:21.732 00:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:21.990 00:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:21:21.990 00:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:21.990 00:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha512 00:21:21.990 00:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:21.990 00:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:21.990 00:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:21.990 00:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:21.990 00:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:21.990 00:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:21.990 00:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:21.990 00:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:21.990 00:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:21.990 00:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:22.556 00:21:22.556 00:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:22.556 00:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:22.556 00:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:22.813 00:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:22.813 00:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:22.813 00:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:22.813 00:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:22.813 00:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:22.813 00:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:22.813 { 00:21:22.813 "cntlid": 115, 00:21:22.813 "qid": 0, 00:21:22.813 "state": "enabled", 00:21:22.813 "thread": "nvmf_tgt_poll_group_000", 00:21:22.813 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:22.813 "listen_address": { 00:21:22.813 "trtype": "TCP", 00:21:22.813 "adrfam": "IPv4", 00:21:22.813 "traddr": "10.0.0.2", 00:21:22.813 "trsvcid": "4420" 00:21:22.813 }, 00:21:22.813 "peer_address": { 00:21:22.813 "trtype": "TCP", 00:21:22.813 "adrfam": "IPv4", 
00:21:22.813 "traddr": "10.0.0.1", 00:21:22.813 "trsvcid": "58698" 00:21:22.813 }, 00:21:22.813 "auth": { 00:21:22.813 "state": "completed", 00:21:22.813 "digest": "sha512", 00:21:22.813 "dhgroup": "ffdhe3072" 00:21:22.813 } 00:21:22.813 } 00:21:22.813 ]' 00:21:22.813 00:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:22.813 00:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:22.813 00:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:22.813 00:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:22.813 00:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:22.813 00:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:22.813 00:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:22.813 00:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:23.071 00:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Y2M1MTdhNjljM2UzMGVkOWM5YmE0NTE5MzZjNTdkZmU/iWjd: --dhchap-ctrl-secret DHHC-1:02:NmQzZDZhNWExMTgwOThlOTQxODY3N2Y2M2EyM2MzZGFmZWQ0YTcxMjg2MjEzNzkwNV1u7g==: 00:21:23.071 00:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:Y2M1MTdhNjljM2UzMGVkOWM5YmE0NTE5MzZjNTdkZmU/iWjd: --dhchap-ctrl-secret DHHC-1:02:NmQzZDZhNWExMTgwOThlOTQxODY3N2Y2M2EyM2MzZGFmZWQ0YTcxMjg2MjEzNzkwNV1u7g==: 00:21:24.440 00:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:24.440 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:24.440 00:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:24.440 00:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:24.440 00:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:24.440 00:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:24.440 00:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:24.440 00:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:24.440 00:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:24.440 00:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 
00:21:24.440 00:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:24.440 00:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:24.440 00:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:24.440 00:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:24.440 00:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:24.440 00:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:24.440 00:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:24.440 00:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:24.440 00:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:24.440 00:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:24.440 00:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:24.441 00:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:25.007 00:21:25.007 00:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:25.007 00:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:25.007 00:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:25.264 00:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:25.264 00:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:25.264 00:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:25.264 00:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:25.264 00:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:25.264 00:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:25.264 { 00:21:25.264 "cntlid": 117, 00:21:25.264 "qid": 0, 00:21:25.264 "state": "enabled", 00:21:25.264 "thread": "nvmf_tgt_poll_group_000", 00:21:25.264 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:25.264 "listen_address": { 00:21:25.264 "trtype": "TCP", 
00:21:25.264 "adrfam": "IPv4", 00:21:25.264 "traddr": "10.0.0.2", 00:21:25.264 "trsvcid": "4420" 00:21:25.264 }, 00:21:25.264 "peer_address": { 00:21:25.264 "trtype": "TCP", 00:21:25.264 "adrfam": "IPv4", 00:21:25.264 "traddr": "10.0.0.1", 00:21:25.264 "trsvcid": "58740" 00:21:25.264 }, 00:21:25.264 "auth": { 00:21:25.264 "state": "completed", 00:21:25.264 "digest": "sha512", 00:21:25.264 "dhgroup": "ffdhe3072" 00:21:25.264 } 00:21:25.264 } 00:21:25.264 ]' 00:21:25.264 00:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:25.264 00:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:25.264 00:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:25.264 00:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:25.264 00:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:25.264 00:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:25.264 00:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:25.264 00:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:25.522 00:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NDNmNTE3YjZjZjJlYjI0NTQ4MmE3MzU1Y2E2MzAzNTA2MjAwMmQzZWY5NDZhMmVlbtmSCQ==: --dhchap-ctrl-secret DHHC-1:01:MDAwMzlhNTRhZDE3ZWE3YTNhNjJhYjc0OTUyNjlmY2YFyuVj: 00:21:25.522 00:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:NDNmNTE3YjZjZjJlYjI0NTQ4MmE3MzU1Y2E2MzAzNTA2MjAwMmQzZWY5NDZhMmVlbtmSCQ==: --dhchap-ctrl-secret DHHC-1:01:MDAwMzlhNTRhZDE3ZWE3YTNhNjJhYjc0OTUyNjlmY2YFyuVj: 00:21:26.457 00:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:26.457 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:26.457 00:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:26.457 00:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:26.457 00:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:26.457 00:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:26.457 00:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:26.457 00:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:26.457 00:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:26.716 00:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:21:26.716 00:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:26.716 00:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:26.716 00:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:26.716 00:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:26.716 00:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:26.716 00:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:26.716 00:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:26.716 00:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:26.716 00:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:26.716 00:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:26.716 00:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:26.716 00:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:27.282 00:21:27.282 00:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:27.282 00:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:27.282 00:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:27.540 00:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:27.540 00:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:27.540 00:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:27.540 00:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:27.540 00:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:27.540 00:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:27.540 { 00:21:27.540 "cntlid": 119, 00:21:27.540 "qid": 0, 00:21:27.540 "state": "enabled", 00:21:27.540 "thread": "nvmf_tgt_poll_group_000", 00:21:27.540 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:27.540 "listen_address": { 00:21:27.540 "trtype": "TCP", 00:21:27.540 "adrfam": "IPv4", 00:21:27.540 "traddr": "10.0.0.2", 00:21:27.540 "trsvcid": "4420" 00:21:27.540 }, 00:21:27.540 "peer_address": { 00:21:27.540 "trtype": "TCP", 00:21:27.541 "adrfam": "IPv4", 00:21:27.541 "traddr": "10.0.0.1", 00:21:27.541 "trsvcid": "58776" 00:21:27.541 }, 00:21:27.541 "auth": { 00:21:27.541 "state": "completed", 00:21:27.541 "digest": "sha512", 00:21:27.541 "dhgroup": "ffdhe3072" 00:21:27.541 } 00:21:27.541 } 00:21:27.541 ]' 00:21:27.541 00:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:27.541 00:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:27.541 00:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:27.541 00:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:27.541 00:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:27.541 00:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:27.541 00:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:27.541 00:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:27.799 00:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MjMwMjQxNWUxMWZkMTgxZmUxMzU2Y2U5N2IyZmM3MTA3YWEyYzcwMThlODJlNTBmNGZlZGQ4ZTgxZDE5MzYyY9ngkKk=: 00:21:27.799 00:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:MjMwMjQxNWUxMWZkMTgxZmUxMzU2Y2U5N2IyZmM3MTA3YWEyYzcwMThlODJlNTBmNGZlZGQ4ZTgxZDE5MzYyY9ngkKk=: 00:21:28.733 00:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:28.991 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:28.991 00:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:28.991 00:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:28.991 00:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:28.991 00:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:28.991 00:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:28.991 00:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:28.991 00:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:28.991 00:59:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:29.250 00:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:21:29.250 00:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:29.250 00:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:29.250 00:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:29.250 00:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:29.250 00:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:29.250 00:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:29.250 00:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:29.250 00:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:29.250 00:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:29.250 00:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:29.250 00:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:29.250 00:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:29.508 00:21:29.508 00:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:29.508 00:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:29.508 00:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:29.766 00:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:29.766 00:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:29.766 00:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:29.766 00:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:29.766 00:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:29.766 00:59:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:29.766 { 00:21:29.766 "cntlid": 121, 00:21:29.766 "qid": 0, 00:21:29.766 "state": "enabled", 00:21:29.766 "thread": "nvmf_tgt_poll_group_000", 00:21:29.766 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:29.766 "listen_address": { 00:21:29.766 "trtype": "TCP", 00:21:29.766 "adrfam": "IPv4", 00:21:29.766 "traddr": "10.0.0.2", 00:21:29.766 "trsvcid": "4420" 00:21:29.766 }, 00:21:29.766 "peer_address": { 00:21:29.766 "trtype": "TCP", 00:21:29.766 "adrfam": "IPv4", 00:21:29.766 "traddr": "10.0.0.1", 00:21:29.766 "trsvcid": "56318" 00:21:29.766 }, 00:21:29.766 "auth": { 00:21:29.766 "state": "completed", 00:21:29.766 "digest": "sha512", 00:21:29.766 "dhgroup": "ffdhe4096" 00:21:29.766 } 00:21:29.766 } 00:21:29.766 ]' 00:21:29.766 00:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:29.766 00:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:29.766 00:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:30.023 00:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:30.023 00:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:30.023 00:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:30.024 00:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:30.024 00:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:30.281 00:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZmU5MjFhMDA3NjhhNjdlNGFjN2QxYWVmMDRmZjhhYjdmNzAwOGUwMjI0MmVlM2M07bZ6cg==: --dhchap-ctrl-secret DHHC-1:03:YzdiZWM3MjUwZjQ0NjQ2ZDIwMmJlZTE0NzJhYzdiZjY1NzBhYzUwMWJiNzMxOTdjY2Y4YTk2MTkxM2M4YmMwZW02nTI=: 00:21:30.281 00:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:ZmU5MjFhMDA3NjhhNjdlNGFjN2QxYWVmMDRmZjhhYjdmNzAwOGUwMjI0MmVlM2M07bZ6cg==: --dhchap-ctrl-secret DHHC-1:03:YzdiZWM3MjUwZjQ0NjQ2ZDIwMmJlZTE0NzJhYzdiZjY1NzBhYzUwMWJiNzMxOTdjY2Y4YTk2MTkxM2M4YmMwZW02nTI=: 00:21:31.211 00:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:31.211 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:31.211 00:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:31.211 00:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:31.211 00:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:31.211 00:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:21:31.211 00:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:31.211 00:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:31.211 00:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:31.470 00:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:21:31.470 00:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:31.470 00:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:31.470 00:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:31.470 00:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:31.470 00:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:31.470 00:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:31.470 00:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:31.470 00:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:31.470 00:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:31.470 00:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:31.470 00:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:31.470 00:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:32.036 00:21:32.036 00:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:32.036 00:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:32.036 00:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:32.293 00:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:32.293 00:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:32.293 00:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:21:32.293 00:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:32.293 00:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:32.293 00:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:32.293 { 00:21:32.293 "cntlid": 123, 00:21:32.293 "qid": 0, 00:21:32.293 "state": "enabled", 00:21:32.293 "thread": "nvmf_tgt_poll_group_000", 00:21:32.293 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:32.293 "listen_address": { 00:21:32.293 "trtype": "TCP", 00:21:32.293 "adrfam": "IPv4", 00:21:32.293 "traddr": "10.0.0.2", 00:21:32.293 "trsvcid": "4420" 00:21:32.293 }, 00:21:32.293 "peer_address": { 00:21:32.293 "trtype": "TCP", 00:21:32.293 "adrfam": "IPv4", 00:21:32.293 "traddr": "10.0.0.1", 00:21:32.293 "trsvcid": "56340" 00:21:32.293 }, 00:21:32.293 "auth": { 00:21:32.293 "state": "completed", 00:21:32.293 "digest": "sha512", 00:21:32.293 "dhgroup": "ffdhe4096" 00:21:32.293 } 00:21:32.293 } 00:21:32.293 ]' 00:21:32.293 00:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:32.293 00:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:32.293 00:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:32.293 00:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:32.293 00:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:32.293 00:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:32.293 00:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:32.293 00:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:32.550 00:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Y2M1MTdhNjljM2UzMGVkOWM5YmE0NTE5MzZjNTdkZmU/iWjd: --dhchap-ctrl-secret DHHC-1:02:NmQzZDZhNWExMTgwOThlOTQxODY3N2Y2M2EyM2MzZGFmZWQ0YTcxMjg2MjEzNzkwNV1u7g==: 00:21:32.550 00:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:Y2M1MTdhNjljM2UzMGVkOWM5YmE0NTE5MzZjNTdkZmU/iWjd: --dhchap-ctrl-secret DHHC-1:02:NmQzZDZhNWExMTgwOThlOTQxODY3N2Y2M2EyM2MzZGFmZWQ0YTcxMjg2MjEzNzkwNV1u7g==: 00:21:33.481 00:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:33.481 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:33.481 00:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:33.481 00:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:33.481 00:59:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:33.481 00:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:33.481 00:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:33.481 00:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:33.481 00:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:34.043 00:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:21:34.043 00:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:34.043 00:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:34.043 00:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:34.043 00:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:34.043 00:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:34.043 00:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:34.043 00:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:34.043 00:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:34.043 00:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:34.043 00:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:34.043 00:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:34.043 00:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:34.299 00:21:34.299 00:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:34.299 00:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:34.299 00:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:34.556 00:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:34.556 00:59:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:34.556 00:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:34.556 00:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:34.556 00:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:34.556 00:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:34.556 { 00:21:34.556 "cntlid": 125, 00:21:34.556 "qid": 0, 00:21:34.556 "state": "enabled", 00:21:34.556 "thread": "nvmf_tgt_poll_group_000", 00:21:34.556 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:34.556 "listen_address": { 00:21:34.556 "trtype": "TCP", 00:21:34.556 "adrfam": "IPv4", 00:21:34.556 "traddr": "10.0.0.2", 00:21:34.556 "trsvcid": "4420" 00:21:34.556 }, 00:21:34.556 "peer_address": { 00:21:34.556 "trtype": "TCP", 00:21:34.556 "adrfam": "IPv4", 00:21:34.556 "traddr": "10.0.0.1", 00:21:34.556 "trsvcid": "56364" 00:21:34.556 }, 00:21:34.556 "auth": { 00:21:34.556 "state": "completed", 00:21:34.556 "digest": "sha512", 00:21:34.556 "dhgroup": "ffdhe4096" 00:21:34.556 } 00:21:34.556 } 00:21:34.556 ]' 00:21:34.556 00:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:34.556 00:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:34.556 00:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:34.556 00:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:34.556 00:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:34.556 00:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:34.556 00:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:34.556 00:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:35.121 00:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NDNmNTE3YjZjZjJlYjI0NTQ4MmE3MzU1Y2E2MzAzNTA2MjAwMmQzZWY5NDZhMmVlbtmSCQ==: --dhchap-ctrl-secret DHHC-1:01:MDAwMzlhNTRhZDE3ZWE3YTNhNjJhYjc0OTUyNjlmY2YFyuVj: 00:21:35.121 00:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:NDNmNTE3YjZjZjJlYjI0NTQ4MmE3MzU1Y2E2MzAzNTA2MjAwMmQzZWY5NDZhMmVlbtmSCQ==: --dhchap-ctrl-secret DHHC-1:01:MDAwMzlhNTRhZDE3ZWE3YTNhNjJhYjc0OTUyNjlmY2YFyuVj: 00:21:36.055 00:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:36.055 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:36.055 00:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:36.055 00:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:36.055 00:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:36.055 00:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:36.055 00:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:36.055 00:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:36.056 00:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:36.314 00:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:21:36.314 00:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:36.314 00:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:36.314 00:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:36.314 00:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:36.314 00:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:36.314 00:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:36.314 00:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:36.314 00:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:36.314 00:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:36.314 00:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:36.314 00:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:36.314 00:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:36.572 00:21:36.572 00:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:36.572 00:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:36.572 00:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:36.831 00:59:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:36.831 00:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:36.831 00:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:36.831 00:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:36.831 00:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:36.831 00:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:36.831 { 00:21:36.831 "cntlid": 127, 00:21:36.831 "qid": 0, 00:21:36.831 "state": "enabled", 00:21:36.831 "thread": "nvmf_tgt_poll_group_000", 00:21:36.831 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:36.831 "listen_address": { 00:21:36.831 "trtype": "TCP", 00:21:36.831 "adrfam": "IPv4", 00:21:36.831 "traddr": "10.0.0.2", 00:21:36.831 "trsvcid": "4420" 00:21:36.831 }, 00:21:36.831 "peer_address": { 00:21:36.831 "trtype": "TCP", 00:21:36.831 "adrfam": "IPv4", 00:21:36.831 "traddr": "10.0.0.1", 00:21:36.831 "trsvcid": "56398" 00:21:36.831 }, 00:21:36.831 "auth": { 00:21:36.831 "state": "completed", 00:21:36.831 "digest": "sha512", 00:21:36.831 "dhgroup": "ffdhe4096" 00:21:36.831 } 00:21:36.831 } 00:21:36.831 ]' 00:21:36.831 00:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:37.090 00:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:37.090 00:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:37.090 00:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:37.090 00:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:37.090 00:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:37.090 00:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:37.090 00:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:37.348 00:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MjMwMjQxNWUxMWZkMTgxZmUxMzU2Y2U5N2IyZmM3MTA3YWEyYzcwMThlODJlNTBmNGZlZGQ4ZTgxZDE5MzYyY9ngkKk=: 00:21:37.348 00:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:MjMwMjQxNWUxMWZkMTgxZmUxMzU2Y2U5N2IyZmM3MTA3YWEyYzcwMThlODJlNTBmNGZlZGQ4ZTgxZDE5MzYyY9ngkKk=: 00:21:38.282 00:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:38.282 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:38.282 00:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:38.282 00:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:38.282 00:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:38.282 00:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:38.282 00:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:38.282 00:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:38.282 00:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:38.282 00:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:38.540 00:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:21:38.541 00:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:38.541 00:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:38.541 00:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:38.541 00:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:38.541 00:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:38.541 00:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:38.541 00:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:38.541 00:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:38.541 00:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:38.541 00:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:38.541 00:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:38.541 00:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:39.107 00:21:39.107 00:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:39.107 00:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:39.107 
00:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:39.366 00:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:39.366 00:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:39.366 00:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:39.366 00:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:39.366 00:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:39.366 00:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:39.366 { 00:21:39.366 "cntlid": 129, 00:21:39.366 "qid": 0, 00:21:39.366 "state": "enabled", 00:21:39.366 "thread": "nvmf_tgt_poll_group_000", 00:21:39.366 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:39.366 "listen_address": { 00:21:39.366 "trtype": "TCP", 00:21:39.366 "adrfam": "IPv4", 00:21:39.366 "traddr": "10.0.0.2", 00:21:39.366 "trsvcid": "4420" 00:21:39.366 }, 00:21:39.366 "peer_address": { 00:21:39.366 "trtype": "TCP", 00:21:39.366 "adrfam": "IPv4", 00:21:39.366 "traddr": "10.0.0.1", 00:21:39.366 "trsvcid": "44686" 00:21:39.366 }, 00:21:39.366 "auth": { 00:21:39.366 "state": "completed", 00:21:39.366 "digest": "sha512", 00:21:39.366 "dhgroup": "ffdhe6144" 00:21:39.366 } 00:21:39.366 } 00:21:39.366 ]' 00:21:39.366 00:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:39.366 00:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:39.366 00:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:39.624 00:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:39.624 00:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:39.624 00:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:39.624 00:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:39.624 00:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:39.883 00:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZmU5MjFhMDA3NjhhNjdlNGFjN2QxYWVmMDRmZjhhYjdmNzAwOGUwMjI0MmVlM2M07bZ6cg==: --dhchap-ctrl-secret DHHC-1:03:YzdiZWM3MjUwZjQ0NjQ2ZDIwMmJlZTE0NzJhYzdiZjY1NzBhYzUwMWJiNzMxOTdjY2Y4YTk2MTkxM2M4YmMwZW02nTI=: 00:21:39.883 00:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:ZmU5MjFhMDA3NjhhNjdlNGFjN2QxYWVmMDRmZjhhYjdmNzAwOGUwMjI0MmVlM2M07bZ6cg==: --dhchap-ctrl-secret 
DHHC-1:03:YzdiZWM3MjUwZjQ0NjQ2ZDIwMmJlZTE0NzJhYzdiZjY1NzBhYzUwMWJiNzMxOTdjY2Y4YTk2MTkxM2M4YmMwZW02nTI=: 00:21:40.816 00:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:40.816 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:40.816 00:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:40.817 00:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.817 00:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:40.817 00:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.817 00:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:40.817 00:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:40.817 00:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:41.380 00:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:21:41.381 00:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:41.381 00:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:41.381 00:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:41.381 00:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:41.381 00:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:41.381 00:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:41.381 00:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:41.381 00:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:41.381 00:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:41.381 00:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:41.381 00:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:41.381 00:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:41.945 00:21:41.945 00:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:41.945 00:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:41.945 00:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:42.203 00:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:42.203 00:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:42.203 00:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:42.203 00:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:42.203 00:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:42.203 00:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:42.203 { 00:21:42.203 "cntlid": 131, 00:21:42.203 "qid": 0, 00:21:42.203 "state": "enabled", 00:21:42.203 "thread": "nvmf_tgt_poll_group_000", 00:21:42.203 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:42.203 "listen_address": { 00:21:42.203 "trtype": "TCP", 00:21:42.203 "adrfam": "IPv4", 00:21:42.203 "traddr": "10.0.0.2", 00:21:42.203 "trsvcid": "4420" 00:21:42.203 }, 00:21:42.203 "peer_address": { 00:21:42.203 "trtype": "TCP", 00:21:42.203 "adrfam": "IPv4", 00:21:42.203 "traddr": "10.0.0.1", 00:21:42.203 "trsvcid": "44718" 00:21:42.203 }, 00:21:42.203 "auth": { 00:21:42.203 "state": "completed", 00:21:42.203 "digest": "sha512", 00:21:42.203 "dhgroup": "ffdhe6144" 00:21:42.203 } 00:21:42.203 } 00:21:42.203 ]' 00:21:42.203 00:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:42.203 00:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:42.203 00:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:42.203 00:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:42.203 00:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:42.203 00:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:42.203 00:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:42.203 00:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:42.461 00:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Y2M1MTdhNjljM2UzMGVkOWM5YmE0NTE5MzZjNTdkZmU/iWjd: --dhchap-ctrl-secret DHHC-1:02:NmQzZDZhNWExMTgwOThlOTQxODY3N2Y2M2EyM2MzZGFmZWQ0YTcxMjg2MjEzNzkwNV1u7g==: 00:21:42.461 00:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:Y2M1MTdhNjljM2UzMGVkOWM5YmE0NTE5MzZjNTdkZmU/iWjd: --dhchap-ctrl-secret DHHC-1:02:NmQzZDZhNWExMTgwOThlOTQxODY3N2Y2M2EyM2MzZGFmZWQ0YTcxMjg2MjEzNzkwNV1u7g==: 00:21:43.466 00:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:43.466 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:43.466 00:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:43.466 00:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:43.466 00:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:43.466 00:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:43.466 00:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:43.466 00:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:43.466 00:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:43.725 00:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:21:43.725 00:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:43.725 00:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:43.725 00:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:43.725 00:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:43.725 00:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:43.725 00:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:43.725 00:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:43.725 00:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:43.725 00:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:43.725 00:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:43.725 00:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:43.725 00:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:44.658 00:21:44.658 00:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:44.658 00:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:44.658 00:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:44.658 00:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:44.658 00:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:44.658 00:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:44.658 00:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:44.658 00:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:44.658 00:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:44.658 { 00:21:44.658 "cntlid": 133, 00:21:44.658 "qid": 0, 00:21:44.658 "state": "enabled", 00:21:44.658 "thread": "nvmf_tgt_poll_group_000", 00:21:44.658 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:44.658 "listen_address": { 00:21:44.658 "trtype": "TCP", 00:21:44.658 "adrfam": "IPv4", 00:21:44.658 "traddr": "10.0.0.2", 00:21:44.658 "trsvcid": "4420" 00:21:44.658 }, 00:21:44.658 "peer_address": { 00:21:44.658 "trtype": "TCP", 00:21:44.658 "adrfam": "IPv4", 00:21:44.658 "traddr": "10.0.0.1", 00:21:44.658 "trsvcid": "44746" 00:21:44.658 }, 00:21:44.658 "auth": { 00:21:44.658 "state": "completed", 00:21:44.658 "digest": "sha512", 00:21:44.658 "dhgroup": "ffdhe6144" 00:21:44.658 } 00:21:44.658 } 00:21:44.658 ]' 00:21:44.658 00:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:44.658 00:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:44.658 00:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:44.916 00:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:44.916 00:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:44.916 00:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:44.916 00:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:44.916 00:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:45.173 00:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NDNmNTE3YjZjZjJlYjI0NTQ4MmE3MzU1Y2E2MzAzNTA2MjAwMmQzZWY5NDZhMmVlbtmSCQ==: --dhchap-ctrl-secret 
DHHC-1:01:MDAwMzlhNTRhZDE3ZWE3YTNhNjJhYjc0OTUyNjlmY2YFyuVj: 00:21:45.173 00:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:NDNmNTE3YjZjZjJlYjI0NTQ4MmE3MzU1Y2E2MzAzNTA2MjAwMmQzZWY5NDZhMmVlbtmSCQ==: --dhchap-ctrl-secret DHHC-1:01:MDAwMzlhNTRhZDE3ZWE3YTNhNjJhYjc0OTUyNjlmY2YFyuVj: 00:21:46.106 00:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:46.106 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:46.106 00:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:46.106 00:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:46.106 00:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:46.106 00:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:46.106 00:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:46.106 00:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:46.106 00:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:46.364 00:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:21:46.364 00:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:46.364 00:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:46.364 00:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:46.364 00:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:46.364 00:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:46.364 00:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:46.364 00:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:46.364 00:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:46.364 00:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:46.364 00:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:46.364 00:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b 
nvme0 --dhchap-key key3 00:21:46.364 00:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:46.930 00:21:46.930 00:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:46.930 00:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:46.930 00:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:47.188 00:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:47.188 00:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:47.188 00:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:47.188 00:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:47.188 00:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:47.188 00:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:47.188 { 00:21:47.188 "cntlid": 135, 00:21:47.188 "qid": 0, 00:21:47.188 "state": "enabled", 00:21:47.188 "thread": "nvmf_tgt_poll_group_000", 00:21:47.188 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:47.188 "listen_address": { 00:21:47.188 "trtype": "TCP", 00:21:47.188 "adrfam": "IPv4", 00:21:47.188 "traddr": "10.0.0.2", 00:21:47.188 "trsvcid": "4420" 00:21:47.188 }, 00:21:47.188 "peer_address": { 00:21:47.188 "trtype": "TCP", 00:21:47.188 "adrfam": "IPv4", 00:21:47.188 "traddr": "10.0.0.1", 00:21:47.188 "trsvcid": "44756" 00:21:47.188 }, 00:21:47.188 "auth": { 00:21:47.188 "state": "completed", 00:21:47.188 "digest": "sha512", 00:21:47.188 "dhgroup": "ffdhe6144" 00:21:47.188 } 00:21:47.188 } 00:21:47.188 ]' 00:21:47.188 00:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:47.188 00:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:47.188 00:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:47.188 00:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:47.188 00:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:47.446 00:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:47.446 00:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:47.446 00:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:47.704 00:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:MjMwMjQxNWUxMWZkMTgxZmUxMzU2Y2U5N2IyZmM3MTA3YWEyYzcwMThlODJlNTBmNGZlZGQ4ZTgxZDE5MzYyY9ngkKk=: 00:21:47.704 00:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:MjMwMjQxNWUxMWZkMTgxZmUxMzU2Y2U5N2IyZmM3MTA3YWEyYzcwMThlODJlNTBmNGZlZGQ4ZTgxZDE5MzYyY9ngkKk=: 00:21:48.638 00:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:48.638 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:48.638 00:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:48.638 00:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:48.638 00:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:48.638 00:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:48.638 00:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:48.638 00:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:48.638 00:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:48.638 00:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:48.895 00:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:21:48.895 00:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:48.895 00:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:48.895 00:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:48.895 00:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:48.895 00:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:48.895 00:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:48.895 00:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:48.895 00:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:48.895 00:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:48.895 00:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:48.895 00:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:48.895 00:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:49.829 00:21:49.829 00:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:49.829 00:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:49.829 00:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:50.087 00:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:50.087 00:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:50.087 00:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:50.087 00:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:50.087 00:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:50.087 00:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:50.087 { 00:21:50.087 "cntlid": 137, 00:21:50.087 "qid": 0, 00:21:50.087 "state": "enabled", 00:21:50.087 "thread": "nvmf_tgt_poll_group_000", 00:21:50.087 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:50.087 "listen_address": { 00:21:50.087 "trtype": "TCP", 00:21:50.087 "adrfam": "IPv4", 00:21:50.087 "traddr": "10.0.0.2", 00:21:50.087 "trsvcid": "4420" 00:21:50.087 }, 00:21:50.087 "peer_address": { 00:21:50.087 "trtype": "TCP", 00:21:50.087 "adrfam": "IPv4", 00:21:50.087 "traddr": "10.0.0.1", 00:21:50.087 "trsvcid": "49726" 00:21:50.087 }, 00:21:50.087 "auth": { 00:21:50.087 "state": "completed", 00:21:50.087 "digest": "sha512", 00:21:50.087 "dhgroup": "ffdhe8192" 00:21:50.087 } 00:21:50.087 } 00:21:50.087 ]' 00:21:50.087 00:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:50.088 00:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:50.088 00:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:50.346 00:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:50.346 00:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:50.346 00:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:50.346 00:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:50.346 00:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:50.604 00:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZmU5MjFhMDA3NjhhNjdlNGFjN2QxYWVmMDRmZjhhYjdmNzAwOGUwMjI0MmVlM2M07bZ6cg==: --dhchap-ctrl-secret DHHC-1:03:YzdiZWM3MjUwZjQ0NjQ2ZDIwMmJlZTE0NzJhYzdiZjY1NzBhYzUwMWJiNzMxOTdjY2Y4YTk2MTkxM2M4YmMwZW02nTI=: 00:21:50.604 00:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:ZmU5MjFhMDA3NjhhNjdlNGFjN2QxYWVmMDRmZjhhYjdmNzAwOGUwMjI0MmVlM2M07bZ6cg==: --dhchap-ctrl-secret DHHC-1:03:YzdiZWM3MjUwZjQ0NjQ2ZDIwMmJlZTE0NzJhYzdiZjY1NzBhYzUwMWJiNzMxOTdjY2Y4YTk2MTkxM2M4YmMwZW02nTI=: 00:21:51.537 00:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:51.537 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:51.537 00:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:51.537 00:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:51.537 00:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:51.795 00:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:51.795 00:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:51.795 00:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:51.795 00:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:52.053 00:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:21:52.054 00:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:52.054 00:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:52.054 00:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:52.054 00:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:52.054 00:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:52.054 00:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:52.054 00:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:52.054 00:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:52.054 00:59:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:52.054 00:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:52.054 00:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:52.054 00:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:52.987 00:21:52.987 00:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:52.987 00:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:52.987 00:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:53.244 00:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:53.244 00:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:53.244 00:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:53.244 00:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:53.244 00:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:53.245 00:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:53.245 { 00:21:53.245 "cntlid": 139, 00:21:53.245 "qid": 0, 00:21:53.245 "state": "enabled", 00:21:53.245 "thread": "nvmf_tgt_poll_group_000", 00:21:53.245 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:53.245 "listen_address": { 00:21:53.245 "trtype": "TCP", 00:21:53.245 "adrfam": "IPv4", 00:21:53.245 "traddr": "10.0.0.2", 00:21:53.245 "trsvcid": "4420" 00:21:53.245 }, 00:21:53.245 "peer_address": { 00:21:53.245 "trtype": "TCP", 00:21:53.245 "adrfam": "IPv4", 00:21:53.245 "traddr": "10.0.0.1", 00:21:53.245 "trsvcid": "49738" 00:21:53.245 }, 00:21:53.245 "auth": { 00:21:53.245 "state": "completed", 00:21:53.245 "digest": "sha512", 00:21:53.245 "dhgroup": "ffdhe8192" 00:21:53.245 } 00:21:53.245 } 00:21:53.245 ]' 00:21:53.245 00:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:53.245 00:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:53.245 00:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:53.245 00:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:53.245 00:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:53.245 00:59:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:53.245 00:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:53.245 00:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:53.502 00:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Y2M1MTdhNjljM2UzMGVkOWM5YmE0NTE5MzZjNTdkZmU/iWjd: --dhchap-ctrl-secret DHHC-1:02:NmQzZDZhNWExMTgwOThlOTQxODY3N2Y2M2EyM2MzZGFmZWQ0YTcxMjg2MjEzNzkwNV1u7g==: 00:21:53.502 00:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:Y2M1MTdhNjljM2UzMGVkOWM5YmE0NTE5MzZjNTdkZmU/iWjd: --dhchap-ctrl-secret DHHC-1:02:NmQzZDZhNWExMTgwOThlOTQxODY3N2Y2M2EyM2MzZGFmZWQ0YTcxMjg2MjEzNzkwNV1u7g==: 00:21:54.433 00:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:54.433 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:54.433 00:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:54.433 00:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:54.433 00:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:54.433 00:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:54.433 00:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:54.433 00:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:54.433 00:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:54.691 00:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:21:54.691 00:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:54.691 00:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:54.691 00:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:54.691 00:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:54.691 00:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:54.691 00:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:54.691 00:59:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:54.691 00:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:54.691 00:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:54.691 00:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:54.691 00:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:54.691 00:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:55.624 00:21:55.883 00:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:55.883 00:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:55.883 00:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:56.142 00:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:56.142 00:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:56.142 00:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:56.142 00:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:56.142 00:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:56.142 00:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:56.142 { 00:21:56.142 "cntlid": 141, 00:21:56.142 "qid": 0, 00:21:56.142 "state": "enabled", 00:21:56.142 "thread": "nvmf_tgt_poll_group_000", 00:21:56.142 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:56.142 "listen_address": { 00:21:56.142 "trtype": "TCP", 00:21:56.142 "adrfam": "IPv4", 00:21:56.142 "traddr": "10.0.0.2", 00:21:56.142 "trsvcid": "4420" 00:21:56.142 }, 00:21:56.142 "peer_address": { 00:21:56.142 "trtype": "TCP", 00:21:56.142 "adrfam": "IPv4", 00:21:56.142 "traddr": "10.0.0.1", 00:21:56.142 "trsvcid": "49760" 00:21:56.142 }, 00:21:56.142 "auth": { 00:21:56.142 "state": "completed", 00:21:56.142 "digest": "sha512", 00:21:56.142 "dhgroup": "ffdhe8192" 00:21:56.142 } 00:21:56.142 } 00:21:56.142 ]' 00:21:56.142 00:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:56.142 00:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:56.142 00:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:56.142 00:59:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:56.142 00:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:56.142 00:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:56.142 00:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:56.142 00:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:56.401 00:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NDNmNTE3YjZjZjJlYjI0NTQ4MmE3MzU1Y2E2MzAzNTA2MjAwMmQzZWY5NDZhMmVlbtmSCQ==: --dhchap-ctrl-secret DHHC-1:01:MDAwMzlhNTRhZDE3ZWE3YTNhNjJhYjc0OTUyNjlmY2YFyuVj: 00:21:56.401 00:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:NDNmNTE3YjZjZjJlYjI0NTQ4MmE3MzU1Y2E2MzAzNTA2MjAwMmQzZWY5NDZhMmVlbtmSCQ==: --dhchap-ctrl-secret DHHC-1:01:MDAwMzlhNTRhZDE3ZWE3YTNhNjJhYjc0OTUyNjlmY2YFyuVj: 00:21:57.335 00:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:57.335 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:57.335 00:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:57.335 00:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:57.335 00:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:57.335 00:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:57.335 00:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:57.335 00:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:57.335 00:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:57.593 00:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:21:57.593 00:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:57.593 00:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:57.593 00:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:57.593 00:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:57.593 00:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:57.593 00:59:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:57.593 00:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:57.593 00:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:57.852 00:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:57.852 00:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:57.852 00:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:57.852 00:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:58.786 00:21:58.786 00:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:58.786 00:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:58.786 00:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:58.786 00:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:59.044 00:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:59.044 00:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:59.044 00:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:59.044 00:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:59.044 00:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:59.044 { 00:21:59.044 "cntlid": 143, 00:21:59.044 "qid": 0, 00:21:59.044 "state": "enabled", 00:21:59.044 "thread": "nvmf_tgt_poll_group_000", 00:21:59.044 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:59.044 "listen_address": { 00:21:59.044 "trtype": "TCP", 00:21:59.044 "adrfam": "IPv4", 00:21:59.044 "traddr": "10.0.0.2", 00:21:59.044 "trsvcid": "4420" 00:21:59.044 }, 00:21:59.044 "peer_address": { 00:21:59.044 "trtype": "TCP", 00:21:59.044 "adrfam": "IPv4", 00:21:59.044 "traddr": "10.0.0.1", 00:21:59.044 "trsvcid": "53346" 00:21:59.044 }, 00:21:59.044 "auth": { 00:21:59.044 "state": "completed", 00:21:59.044 "digest": "sha512", 00:21:59.044 "dhgroup": "ffdhe8192" 00:21:59.044 } 00:21:59.044 } 00:21:59.044 ]' 00:21:59.044 00:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:59.044 00:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:59.044 
00:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:59.044 00:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:59.044 00:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:59.044 00:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:59.044 00:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:59.044 00:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:59.301 00:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MjMwMjQxNWUxMWZkMTgxZmUxMzU2Y2U5N2IyZmM3MTA3YWEyYzcwMThlODJlNTBmNGZlZGQ4ZTgxZDE5MzYyY9ngkKk=: 00:21:59.301 00:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:MjMwMjQxNWUxMWZkMTgxZmUxMzU2Y2U5N2IyZmM3MTA3YWEyYzcwMThlODJlNTBmNGZlZGQ4ZTgxZDE5MzYyY9ngkKk=: 00:22:00.230 00:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:00.230 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:00.230 00:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:00.230 00:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:00.230 00:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:00.230 00:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:00.230 00:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:22:00.230 00:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:22:00.487 00:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:22:00.487 00:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:00.487 00:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:00.487 00:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:00.744 00:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:22:00.744 00:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:00.744 00:59:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:00.744 00:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:00.744 00:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:22:00.744 00:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:00.744 00:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:00.744 00:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:00.744 00:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:00.744 00:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:00.744 00:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:00.744 00:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:00.744 00:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:01.673 00:22:01.673 00:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:01.673 00:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:01.673 00:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:01.930 00:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:01.930 00:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:01.930 00:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:01.930 00:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:01.930 00:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:01.930 00:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:01.930 { 00:22:01.930 "cntlid": 145, 00:22:01.930 "qid": 0, 00:22:01.930 "state": "enabled", 00:22:01.930 "thread": "nvmf_tgt_poll_group_000", 00:22:01.930 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:01.930 "listen_address": { 00:22:01.930 "trtype": "TCP", 00:22:01.930 "adrfam": "IPv4", 00:22:01.930 "traddr": "10.0.0.2", 00:22:01.930 "trsvcid": "4420" 00:22:01.930 }, 00:22:01.930 "peer_address": { 00:22:01.931 
"trtype": "TCP", 00:22:01.931 "adrfam": "IPv4", 00:22:01.931 "traddr": "10.0.0.1", 00:22:01.931 "trsvcid": "53376" 00:22:01.931 }, 00:22:01.931 "auth": { 00:22:01.931 "state": "completed", 00:22:01.931 "digest": "sha512", 00:22:01.931 "dhgroup": "ffdhe8192" 00:22:01.931 } 00:22:01.931 } 00:22:01.931 ]' 00:22:01.931 00:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:01.931 00:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:01.931 00:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:01.931 00:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:01.931 00:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:02.188 00:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:02.188 00:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:02.188 00:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:02.447 00:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZmU5MjFhMDA3NjhhNjdlNGFjN2QxYWVmMDRmZjhhYjdmNzAwOGUwMjI0MmVlM2M07bZ6cg==: --dhchap-ctrl-secret DHHC-1:03:YzdiZWM3MjUwZjQ0NjQ2ZDIwMmJlZTE0NzJhYzdiZjY1NzBhYzUwMWJiNzMxOTdjY2Y4YTk2MTkxM2M4YmMwZW02nTI=: 00:22:02.447 00:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:ZmU5MjFhMDA3NjhhNjdlNGFjN2QxYWVmMDRmZjhhYjdmNzAwOGUwMjI0MmVlM2M07bZ6cg==: --dhchap-ctrl-secret DHHC-1:03:YzdiZWM3MjUwZjQ0NjQ2ZDIwMmJlZTE0NzJhYzdiZjY1NzBhYzUwMWJiNzMxOTdjY2Y4YTk2MTkxM2M4YmMwZW02nTI=: 00:22:03.375 00:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:03.375 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:03.375 00:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:03.375 00:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:03.375 00:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:03.375 00:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:03.376 00:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 00:22:03.376 00:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:03.376 00:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:03.376 00:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:03.376 00:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:22:03.376 00:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:22:03.376 00:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:22:03.376 00:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:22:03.376 00:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:03.376 00:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:22:03.376 00:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:03.376 00:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key2 00:22:03.376 00:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:22:03.376 00:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:22:04.308 request: 00:22:04.308 { 00:22:04.308 "name": "nvme0", 00:22:04.308 "trtype": "tcp", 00:22:04.308 "traddr": "10.0.0.2", 00:22:04.308 "adrfam": "ipv4", 00:22:04.308 "trsvcid": "4420", 00:22:04.308 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:04.308 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:04.308 "prchk_reftag": false, 00:22:04.308 "prchk_guard": false, 00:22:04.308 "hdgst": false, 00:22:04.308 "ddgst": false, 00:22:04.308 "dhchap_key": "key2", 00:22:04.308 "allow_unrecognized_csi": false, 00:22:04.308 "method": "bdev_nvme_attach_controller", 00:22:04.308 "req_id": 1 00:22:04.308 } 00:22:04.308 Got JSON-RPC error response 00:22:04.308 response: 00:22:04.308 { 00:22:04.308 "code": -5, 00:22:04.308 "message": "Input/output error" 00:22:04.308 } 00:22:04.308 00:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:22:04.308 00:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:04.308 00:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:04.308 00:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:04.308 00:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:04.308 00:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:04.308 00:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:04.308 00:59:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:04.308 00:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:04.308 00:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:04.308 00:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:04.308 00:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:04.308 00:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:04.308 00:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:22:04.308 00:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:04.308 00:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:22:04.308 00:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:04.308 00:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:22:04.308 00:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:04.309 00:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:04.309 00:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:04.309 00:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:05.241 request: 00:22:05.241 { 00:22:05.241 "name": "nvme0", 00:22:05.241 "trtype": "tcp", 00:22:05.241 "traddr": "10.0.0.2", 00:22:05.241 "adrfam": "ipv4", 00:22:05.241 "trsvcid": "4420", 00:22:05.241 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:05.241 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:05.241 "prchk_reftag": false, 00:22:05.241 "prchk_guard": false, 00:22:05.241 "hdgst": false, 00:22:05.241 "ddgst": false, 00:22:05.241 "dhchap_key": "key1", 00:22:05.241 "dhchap_ctrlr_key": "ckey2", 00:22:05.241 "allow_unrecognized_csi": false, 00:22:05.241 "method": "bdev_nvme_attach_controller", 00:22:05.241 "req_id": 1 00:22:05.241 } 00:22:05.241 Got JSON-RPC error response 00:22:05.241 response: 00:22:05.241 { 00:22:05.241 "code": -5, 00:22:05.241 "message": "Input/output error" 00:22:05.241 } 00:22:05.241 00:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:22:05.241 00:59:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:05.241 00:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:05.241 00:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:05.241 00:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:05.241 00:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:05.241 00:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:05.241 00:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:05.241 00:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 00:22:05.241 00:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:05.241 00:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:05.242 00:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:05.242 00:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:05.242 00:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:22:05.242 00:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:05.242 00:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:22:05.242 00:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:05.242 00:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:22:05.242 00:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:05.242 00:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:05.242 00:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:05.242 00:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:06.176 request: 00:22:06.176 { 00:22:06.176 "name": "nvme0", 00:22:06.176 "trtype": "tcp", 00:22:06.176 "traddr": "10.0.0.2", 00:22:06.176 "adrfam": "ipv4", 00:22:06.176 "trsvcid": "4420", 00:22:06.176 
"subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:06.176 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:06.176 "prchk_reftag": false, 00:22:06.176 "prchk_guard": false, 00:22:06.176 "hdgst": false, 00:22:06.176 "ddgst": false, 00:22:06.176 "dhchap_key": "key1", 00:22:06.176 "dhchap_ctrlr_key": "ckey1", 00:22:06.176 "allow_unrecognized_csi": false, 00:22:06.176 "method": "bdev_nvme_attach_controller", 00:22:06.176 "req_id": 1 00:22:06.176 } 00:22:06.176 Got JSON-RPC error response 00:22:06.176 response: 00:22:06.176 { 00:22:06.176 "code": -5, 00:22:06.176 "message": "Input/output error" 00:22:06.176 } 00:22:06.176 00:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:22:06.176 00:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:06.176 00:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:06.176 00:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:06.176 00:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:06.176 00:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:06.176 00:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:06.176 00:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:06.176 00:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 2963008 00:22:06.176 00:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 2963008 ']' 00:22:06.176 00:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 2963008 00:22:06.176 00:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:22:06.176 00:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:06.176 00:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2963008 00:22:06.176 00:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:06.176 00:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:06.176 00:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2963008' 00:22:06.176 killing process with pid 2963008 00:22:06.176 00:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 2963008 00:22:06.176 00:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 2963008 00:22:07.112 00:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:22:07.112 00:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:07.112 00:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:07.112 00:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:22:07.112 00:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=2986535 00:22:07.112 00:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:22:07.112 00:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 2986535 00:22:07.112 00:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 2986535 ']' 00:22:07.112 00:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:07.112 00:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:07.112 00:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:07.112 00:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:07.112 00:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:08.487 00:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:08.487 00:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:22:08.487 00:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:08.487 00:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:08.487 00:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:08.487 00:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:08.487 00:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:22:08.487 00:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # waitforlisten 2986535 00:22:08.487 00:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 2986535 ']' 00:22:08.487 00:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:08.487 00:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:08.487 00:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:08.487 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
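The block that follows registers the DH-HMAC-CHAP key files with the restarted target's keyring and then grants the host access to the subsystem with one of those keys. Condensed, the pattern looks roughly like this; a sketch only: scripts/rpc.py below stands in for the rpc_cmd wrapper seen in the trace (which runs rpc.py inside the target's network namespace against its default RPC socket), and the key-file paths are the ones generated earlier in this run.

    # target side: load host/controller DH-CHAP keys into the keyring (sketch)
    scripts/rpc.py keyring_file_add_key key0  /tmp/spdk.key-null.Fpo
    scripts/rpc.py keyring_file_add_key ckey0 /tmp/spdk.key-sha512.28p
    scripts/rpc.py keyring_file_add_key key1  /tmp/spdk.key-sha256.E9P
    scripts/rpc.py keyring_file_add_key ckey1 /tmp/spdk.key-sha384.cuC
    scripts/rpc.py keyring_file_add_key key2  /tmp/spdk.key-sha384.qHr
    scripts/rpc.py keyring_file_add_key ckey2 /tmp/spdk.key-sha256.jpR
    scripts/rpc.py keyring_file_add_key key3  /tmp/spdk.key-sha512.isS
    # target side: allow this host to authenticate to cnode0 with key3
    scripts/rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
        --dhchap-key key3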
00:22:08.487 00:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:08.487 00:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:08.745 00:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:08.745 00:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:22:08.745 00:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:22:08.745 00:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:08.745 00:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:09.004 null0 00:22:09.005 00:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:09.005 00:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:22:09.005 00:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.Fpo 00:22:09.005 00:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:09.005 00:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:09.005 00:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:09.005 00:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.28p ]] 00:22:09.005 00:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.28p 00:22:09.005 00:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:09.005 00:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:09.005 00:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:09.005 00:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:22:09.005 00:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.E9P 00:22:09.005 00:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:09.005 00:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:09.005 00:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:09.005 00:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.cuC ]] 00:22:09.005 00:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.cuC 00:22:09.005 00:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:09.005 00:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:09.005 00:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:09.005 00:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:22:09.005 00:59:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.qHr 00:22:09.005 00:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:09.005 00:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:09.005 00:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:09.005 00:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.jpR ]] 00:22:09.005 00:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.jpR 00:22:09.005 00:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:09.005 00:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:09.005 00:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:09.005 00:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:22:09.005 00:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.isS 00:22:09.005 00:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:09.005 00:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:09.005 00:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:09.005 00:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:22:09.005 00:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:22:09.005 00:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:09.005 00:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:09.005 00:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:09.005 00:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:09.005 00:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:09.005 00:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:22:09.005 00:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:09.005 00:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:09.005 00:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:09.005 00:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:09.005 00:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 
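On the host side, the matching half of the handshake is a bdev_nvme_attach_controller call against the host RPC socket, carrying the same keyring key name. Condensed from the trace (the command is issued through the hostrpc helper; only the flags relevant to DH-HMAC-CHAP are commented, everything else is as it appears in the log):

    # host side: connect to the TCP listener and authenticate with key3
    scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller \
        -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
        -n nqn.2024-03.io.spdk:cnode0 \
        -b nvme0 \
        --dhchap-key key3    # keyring name registered on the target side

When the handshake succeeds, bdev_nvme_get_controllers lists nvme0 and the qpair returned by nvmf_subsystem_get_qpairs reports auth.state "completed" along with the negotiated digest and dhgroup, which is what the [[ ... == ... ]] checks in this trace assert.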
00:22:09.005 00:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:10.906 nvme0n1 00:22:10.906 00:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:10.906 00:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:10.906 00:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:10.906 00:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:10.906 00:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:10.906 00:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:10.906 00:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:10.906 00:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:10.906 00:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:10.906 { 00:22:10.906 "cntlid": 1, 00:22:10.906 "qid": 0, 00:22:10.906 "state": "enabled", 00:22:10.906 "thread": "nvmf_tgt_poll_group_000", 00:22:10.906 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:10.906 "listen_address": { 00:22:10.906 "trtype": "TCP", 00:22:10.906 "adrfam": "IPv4", 00:22:10.906 "traddr": "10.0.0.2", 00:22:10.906 "trsvcid": "4420" 00:22:10.906 }, 00:22:10.906 "peer_address": { 00:22:10.906 "trtype": "TCP", 00:22:10.906 "adrfam": "IPv4", 00:22:10.906 "traddr": "10.0.0.1", 00:22:10.906 "trsvcid": "53688" 00:22:10.906 }, 00:22:10.906 "auth": { 00:22:10.906 "state": "completed", 00:22:10.906 "digest": "sha512", 00:22:10.906 "dhgroup": "ffdhe8192" 00:22:10.906 } 00:22:10.906 } 00:22:10.906 ]' 00:22:10.906 00:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:10.906 00:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:10.906 00:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:11.163 00:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:11.163 00:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:11.163 00:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:11.163 00:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:11.163 00:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:11.420 00:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:MjMwMjQxNWUxMWZkMTgxZmUxMzU2Y2U5N2IyZmM3MTA3YWEyYzcwMThlODJlNTBmNGZlZGQ4ZTgxZDE5MzYyY9ngkKk=: 00:22:11.421 00:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:MjMwMjQxNWUxMWZkMTgxZmUxMzU2Y2U5N2IyZmM3MTA3YWEyYzcwMThlODJlNTBmNGZlZGQ4ZTgxZDE5MzYyY9ngkKk=: 00:22:12.353 00:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:12.353 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:12.353 00:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:12.353 00:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:12.353 00:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:12.353 00:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:12.353 00:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:22:12.353 00:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:12.353 00:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:12.353 00:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:12.353 00:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:22:12.353 00:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:22:12.611 00:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:22:12.611 00:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:22:12.611 00:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:22:12.611 00:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:22:12.611 00:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:12.611 00:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:22:12.611 00:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:12.611 00:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:12.611 00:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:12.611 00:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:12.867 request: 00:22:12.867 { 00:22:12.867 "name": "nvme0", 00:22:12.867 "trtype": "tcp", 00:22:12.867 "traddr": "10.0.0.2", 00:22:12.867 "adrfam": "ipv4", 00:22:12.867 "trsvcid": "4420", 00:22:12.867 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:12.867 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:12.867 "prchk_reftag": false, 00:22:12.867 "prchk_guard": false, 00:22:12.867 "hdgst": false, 00:22:12.867 "ddgst": false, 00:22:12.867 "dhchap_key": "key3", 00:22:12.867 "allow_unrecognized_csi": false, 00:22:12.867 "method": "bdev_nvme_attach_controller", 00:22:12.867 "req_id": 1 00:22:12.867 } 00:22:12.867 Got JSON-RPC error response 00:22:12.867 response: 00:22:12.867 { 00:22:12.867 "code": -5, 00:22:12.867 "message": "Input/output error" 00:22:12.867 } 00:22:12.867 00:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:22:12.867 00:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:12.867 00:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:12.867 00:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:12.867 00:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:22:12.867 00:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:22:12.867 00:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:22:12.867 00:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:22:13.124 00:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:22:13.124 00:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:22:13.124 00:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:22:13.124 00:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:22:13.124 00:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:13.124 00:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:22:13.124 00:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:13.124 00:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:13.124 00:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:13.124 00:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:13.381 request: 00:22:13.381 { 00:22:13.381 "name": "nvme0", 00:22:13.381 "trtype": "tcp", 00:22:13.381 "traddr": "10.0.0.2", 00:22:13.381 "adrfam": "ipv4", 00:22:13.381 "trsvcid": "4420", 00:22:13.381 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:13.381 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:13.381 "prchk_reftag": false, 00:22:13.381 "prchk_guard": false, 00:22:13.381 "hdgst": false, 00:22:13.381 "ddgst": false, 00:22:13.381 "dhchap_key": "key3", 00:22:13.381 "allow_unrecognized_csi": false, 00:22:13.381 "method": "bdev_nvme_attach_controller", 00:22:13.381 "req_id": 1 00:22:13.381 } 00:22:13.381 Got JSON-RPC error response 00:22:13.381 response: 00:22:13.381 { 00:22:13.381 "code": -5, 00:22:13.381 "message": "Input/output error" 00:22:13.381 } 00:22:13.381 00:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:22:13.381 00:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:13.381 00:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:13.381 00:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:13.381 00:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:22:13.381 00:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:22:13.381 00:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:22:13.381 00:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:13.381 00:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:13.381 00:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:13.638 00:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:13.638 00:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:13.638 00:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:13.638 00:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:13.638 00:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:13.638 00:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:13.638 00:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:13.638 00:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:13.638 00:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:13.638 00:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:22:13.638 00:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:13.638 00:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:22:13.639 00:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:13.639 00:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:22:13.639 00:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:13.639 00:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:13.639 00:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:13.639 00:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:14.241 request: 00:22:14.241 { 00:22:14.241 "name": "nvme0", 00:22:14.241 "trtype": "tcp", 00:22:14.241 "traddr": "10.0.0.2", 00:22:14.241 "adrfam": "ipv4", 00:22:14.241 "trsvcid": "4420", 00:22:14.241 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:14.241 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:14.241 "prchk_reftag": false, 00:22:14.241 "prchk_guard": false, 00:22:14.241 "hdgst": false, 00:22:14.241 "ddgst": false, 00:22:14.241 "dhchap_key": "key0", 00:22:14.241 "dhchap_ctrlr_key": "key1", 00:22:14.241 "allow_unrecognized_csi": false, 00:22:14.241 "method": "bdev_nvme_attach_controller", 00:22:14.241 "req_id": 1 00:22:14.241 } 00:22:14.241 Got JSON-RPC error response 00:22:14.241 response: 00:22:14.241 { 00:22:14.241 "code": -5, 00:22:14.241 "message": "Input/output error" 00:22:14.241 } 00:22:14.241 00:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:22:14.241 00:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:14.241 00:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:14.241 00:59:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:14.241 00:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:22:14.241 00:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:22:14.241 00:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:22:14.835 nvme0n1 00:22:14.835 00:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:22:14.835 00:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:22:14.835 00:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:14.835 00:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:14.835 00:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:14.835 00:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:15.401 00:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 00:22:15.401 00:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:15.401 00:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:15.401 00:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:15.401 00:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:22:15.401 00:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:22:15.401 00:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:22:16.775 nvme0n1 00:22:16.775 00:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:22:16.775 00:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:22:16.775 00:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:22:17.033 00:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:17.033 00:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:17.033 00:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:17.033 00:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:17.033 00:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:17.033 00:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:22:17.033 00:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:22:17.033 00:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:17.291 00:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:17.291 00:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:NDNmNTE3YjZjZjJlYjI0NTQ4MmE3MzU1Y2E2MzAzNTA2MjAwMmQzZWY5NDZhMmVlbtmSCQ==: --dhchap-ctrl-secret DHHC-1:03:MjMwMjQxNWUxMWZkMTgxZmUxMzU2Y2U5N2IyZmM3MTA3YWEyYzcwMThlODJlNTBmNGZlZGQ4ZTgxZDE5MzYyY9ngkKk=: 00:22:17.291 00:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:NDNmNTE3YjZjZjJlYjI0NTQ4MmE3MzU1Y2E2MzAzNTA2MjAwMmQzZWY5NDZhMmVlbtmSCQ==: --dhchap-ctrl-secret DHHC-1:03:MjMwMjQxNWUxMWZkMTgxZmUxMzU2Y2U5N2IyZmM3MTA3YWEyYzcwMThlODJlNTBmNGZlZGQ4ZTgxZDE5MzYyY9ngkKk=: 00:22:18.226 00:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 00:22:18.226 00:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:22:18.226 00:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:22:18.226 00:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == \n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:22:18.226 00:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:22:18.226 00:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:22:18.226 00:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:22:18.226 00:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:18.226 00:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:18.484 00:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT 
bdev_connect -b nvme0 --dhchap-key key1 00:22:18.484 00:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:22:18.484 00:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:22:18.484 00:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:22:18.484 00:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:18.484 00:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:22:18.484 00:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:18.484 00:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 00:22:18.484 00:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:22:18.484 00:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:22:19.418 request: 00:22:19.418 { 00:22:19.418 "name": "nvme0", 00:22:19.418 "trtype": "tcp", 00:22:19.418 "traddr": "10.0.0.2", 00:22:19.418 "adrfam": "ipv4", 00:22:19.418 "trsvcid": "4420", 00:22:19.418 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:19.418 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:19.418 "prchk_reftag": false, 00:22:19.418 "prchk_guard": false, 00:22:19.418 "hdgst": false, 00:22:19.418 "ddgst": false, 00:22:19.418 "dhchap_key": "key1", 00:22:19.418 "allow_unrecognized_csi": false, 00:22:19.418 "method": "bdev_nvme_attach_controller", 00:22:19.418 "req_id": 1 00:22:19.418 } 00:22:19.418 Got JSON-RPC error response 00:22:19.418 response: 00:22:19.418 { 00:22:19.418 "code": -5, 00:22:19.418 "message": "Input/output error" 00:22:19.418 } 00:22:19.418 00:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:22:19.418 00:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:19.418 00:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:19.418 00:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:19.418 00:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:19.418 00:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:19.418 00:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:20.794 nvme0n1 00:22:20.794 00:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:22:20.794 00:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:22:20.794 00:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:21.051 00:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:21.051 00:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:21.051 00:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:21.309 00:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:21.309 00:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:21.309 00:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:21.309 00:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:21.309 00:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:22:21.309 00:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:22:21.309 00:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:22:21.875 nvme0n1 00:22:21.875 00:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:22:21.875 00:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:21.875 00:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:22:22.133 00:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:22.133 00:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:22.133 00:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:22.391 00:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key key3 00:22:22.391 00:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:22.391 00:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:22.391 00:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:22.391 00:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:Y2M1MTdhNjljM2UzMGVkOWM5YmE0NTE5MzZjNTdkZmU/iWjd: '' 2s 00:22:22.391 00:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:22:22.391 00:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:22:22.391 00:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:Y2M1MTdhNjljM2UzMGVkOWM5YmE0NTE5MzZjNTdkZmU/iWjd: 00:22:22.391 00:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:22:22.391 00:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:22:22.391 00:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:22:22.391 00:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:Y2M1MTdhNjljM2UzMGVkOWM5YmE0NTE5MzZjNTdkZmU/iWjd: ]] 00:22:22.391 00:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:Y2M1MTdhNjljM2UzMGVkOWM5YmE0NTE5MzZjNTdkZmU/iWjd: 00:22:22.391 00:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:22:22.391 00:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:22:22.391 00:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:22:24.288 00:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 00:22:24.288 00:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:22:24.288 00:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:22:24.288 00:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:22:24.288 00:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:22:24.288 00:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:22:24.288 00:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:22:24.288 00:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key key2 00:22:24.288 00:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:24.288 00:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:24.288 00:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:24.288 00:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # 
nvme_set_keys nvme0 '' DHHC-1:02:NDNmNTE3YjZjZjJlYjI0NTQ4MmE3MzU1Y2E2MzAzNTA2MjAwMmQzZWY5NDZhMmVlbtmSCQ==: 2s 00:22:24.288 00:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:22:24.288 00:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:22:24.288 00:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:22:24.288 00:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:NDNmNTE3YjZjZjJlYjI0NTQ4MmE3MzU1Y2E2MzAzNTA2MjAwMmQzZWY5NDZhMmVlbtmSCQ==: 00:22:24.288 00:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:22:24.288 00:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:22:24.288 00:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:22:24.288 00:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:NDNmNTE3YjZjZjJlYjI0NTQ4MmE3MzU1Y2E2MzAzNTA2MjAwMmQzZWY5NDZhMmVlbtmSCQ==: ]] 00:22:24.288 00:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:NDNmNTE3YjZjZjJlYjI0NTQ4MmE3MzU1Y2E2MzAzNTA2MjAwMmQzZWY5NDZhMmVlbtmSCQ==: 00:22:24.288 00:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:22:24.288 00:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:22:26.814 00:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:22:26.814 00:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:22:26.815 00:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:22:26.815 00:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:22:26.815 00:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:22:26.815 00:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:22:26.815 00:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:22:26.815 00:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:26.815 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:26.815 00:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:26.815 00:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:26.815 00:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:26.815 00:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:26.815 00:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:22:26.815 00:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 
4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:22:26.815 00:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:22:28.186 nvme0n1 00:22:28.186 01:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:28.186 01:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:28.186 01:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:28.186 01:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:28.186 01:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:28.186 01:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:28.752 01:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:22:28.752 01:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:22:28.752 01:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:29.009 01:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:29.010 01:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:29.010 01:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:29.010 01:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:29.010 01:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:29.010 01:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:22:29.010 01:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:22:29.575 01:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:22:29.575 01:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:29.575 01:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@258 -- # jq -r '.[].name' 00:22:29.575 01:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:29.575 01:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:29.575 01:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:29.575 01:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:29.575 01:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:29.575 01:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:22:29.575 01:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:22:29.575 01:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:22:29.575 01:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:22:29.575 01:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:29.575 01:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:22:29.575 01:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:29.575 01:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:22:29.575 01:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:22:30.508 request: 00:22:30.508 { 00:22:30.508 "name": "nvme0", 00:22:30.508 "dhchap_key": "key1", 00:22:30.508 "dhchap_ctrlr_key": "key3", 00:22:30.508 "method": "bdev_nvme_set_keys", 00:22:30.508 "req_id": 1 00:22:30.508 } 00:22:30.508 Got JSON-RPC error response 00:22:30.508 response: 00:22:30.508 { 00:22:30.508 "code": -13, 00:22:30.508 "message": "Permission denied" 00:22:30.508 } 00:22:30.508 01:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:22:30.508 01:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:30.508 01:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:30.508 01:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:30.508 01:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:22:30.508 01:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:30.508 01:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:22:31.075 01:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@262 -- # (( 1 != 0 )) 00:22:31.075 01:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:22:32.013 01:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:22:32.013 01:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:22:32.013 01:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:32.272 01:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:22:32.272 01:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:32.272 01:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:32.272 01:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:32.272 01:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:32.272 01:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:22:32.272 01:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:22:32.272 01:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:22:33.650 nvme0n1 00:22:33.650 01:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:33.650 01:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:33.650 01:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:33.650 01:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:33.650 01:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:22:33.650 01:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:22:33.650 01:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:22:33.650 01:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 
00:22:33.650 01:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:33.650 01:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:22:33.650 01:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:33.650 01:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:22:33.650 01:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:22:34.584 request: 00:22:34.584 { 00:22:34.584 "name": "nvme0", 00:22:34.584 "dhchap_key": "key2", 00:22:34.584 "dhchap_ctrlr_key": "key0", 00:22:34.584 "method": "bdev_nvme_set_keys", 00:22:34.584 "req_id": 1 00:22:34.584 } 00:22:34.584 Got JSON-RPC error response 00:22:34.584 response: 00:22:34.584 { 00:22:34.584 "code": -13, 00:22:34.584 "message": "Permission denied" 00:22:34.584 } 00:22:34.584 01:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:22:34.584 01:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:34.584 01:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:34.584 01:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:34.584 01:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:22:34.584 01:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:22:34.584 01:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:34.841 01:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:22:34.841 01:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:22:36.218 01:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:22:36.218 01:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:22:36.218 01:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:36.218 01:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:22:36.218 01:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:22:36.218 01:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:22:36.218 01:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 2963166 00:22:36.218 01:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 2963166 ']' 00:22:36.218 01:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 2963166 00:22:36.218 01:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:22:36.218 
01:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:36.218 01:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2963166 00:22:36.218 01:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:22:36.218 01:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:36.218 01:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2963166' 00:22:36.218 killing process with pid 2963166 00:22:36.218 01:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 2963166 00:22:36.218 01:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 2963166 00:22:38.757 01:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:22:38.757 01:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:38.757 01:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:22:38.757 01:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:38.757 01:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:22:38.757 01:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:38.757 01:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:38.757 rmmod nvme_tcp 00:22:38.757 rmmod nvme_fabrics 00:22:38.757 rmmod nvme_keyring 00:22:38.757 01:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:38.757 01:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:22:38.757 01:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:22:38.757 01:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@517 -- # '[' -n 2986535 ']' 00:22:38.757 01:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # killprocess 2986535 00:22:38.757 01:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 2986535 ']' 00:22:38.757 01:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 2986535 00:22:38.757 01:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:22:38.757 01:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:38.757 01:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2986535 00:22:38.757 01:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:38.757 01:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:38.757 01:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2986535' 00:22:38.757 killing process with pid 2986535 00:22:38.757 01:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 2986535 00:22:38.757 01:00:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 2986535 00:22:39.694 01:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:39.694 01:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:39.694 01:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:39.694 01:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr 00:22:39.694 01:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-save 00:22:39.694 01:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:39.694 01:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-restore 00:22:39.694 01:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:39.694 01:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:39.694 01:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:39.694 01:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:39.694 01:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:42.234 01:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:42.234 01:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.Fpo /tmp/spdk.key-sha256.E9P /tmp/spdk.key-sha384.qHr /tmp/spdk.key-sha512.isS /tmp/spdk.key-sha512.28p /tmp/spdk.key-sha384.cuC /tmp/spdk.key-sha256.jpR '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:22:42.234 00:22:42.234 real 3m46.289s 00:22:42.234 user 8m44.158s 00:22:42.234 sys 0m28.156s 00:22:42.234 01:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:42.234 01:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:42.234 ************************************ 00:22:42.234 END TEST nvmf_auth_target 00:22:42.234 ************************************ 00:22:42.234 01:00:14 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:22:42.234 01:00:14 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:22:42.234 01:00:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:22:42.234 01:00:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:42.234 01:00:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:42.234 ************************************ 00:22:42.234 START TEST nvmf_bdevio_no_huge 00:22:42.234 ************************************ 00:22:42.234 01:00:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:22:42.234 * Looking for test storage... 
00:22:42.234 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:42.234 01:00:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:22:42.234 01:00:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1711 -- # lcov --version 00:22:42.234 01:00:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:22:42.234 01:00:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:22:42.234 01:00:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:42.234 01:00:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:42.234 01:00:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:42.234 01:00:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:22:42.234 01:00:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:22:42.234 01:00:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:22:42.234 01:00:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:22:42.234 01:00:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- # local 'op=<' 00:22:42.234 01:00:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 00:22:42.234 01:00:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:22:42.234 01:00:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:42.234 01:00:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:22:42.234 01:00:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:22:42.234 01:00:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:42.234 01:00:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:42.234 01:00:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:22:42.234 01:00:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:22:42.234 01:00:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:42.234 01:00:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:22:42.234 01:00:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:22:42.234 01:00:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:22:42.234 01:00:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:22:42.234 01:00:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:42.234 01:00:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:22:42.234 01:00:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:22:42.234 01:00:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:42.235 01:00:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:42.235 01:00:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:22:42.235 01:00:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:42.235 01:00:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:22:42.235 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:42.235 --rc genhtml_branch_coverage=1 00:22:42.235 --rc genhtml_function_coverage=1 00:22:42.235 --rc genhtml_legend=1 00:22:42.235 --rc geninfo_all_blocks=1 00:22:42.235 --rc geninfo_unexecuted_blocks=1 00:22:42.235 00:22:42.235 ' 00:22:42.235 01:00:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:22:42.235 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:42.235 --rc genhtml_branch_coverage=1 00:22:42.235 --rc genhtml_function_coverage=1 00:22:42.235 --rc genhtml_legend=1 00:22:42.235 --rc geninfo_all_blocks=1 00:22:42.235 --rc geninfo_unexecuted_blocks=1 00:22:42.235 00:22:42.235 ' 00:22:42.235 01:00:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:22:42.235 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:42.235 --rc genhtml_branch_coverage=1 00:22:42.235 --rc genhtml_function_coverage=1 00:22:42.235 --rc genhtml_legend=1 00:22:42.235 --rc geninfo_all_blocks=1 00:22:42.235 --rc geninfo_unexecuted_blocks=1 00:22:42.235 00:22:42.235 ' 00:22:42.235 01:00:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:22:42.235 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:42.235 --rc genhtml_branch_coverage=1 00:22:42.235 --rc genhtml_function_coverage=1 00:22:42.235 --rc genhtml_legend=1 00:22:42.235 --rc geninfo_all_blocks=1 00:22:42.235 --rc geninfo_unexecuted_blocks=1 00:22:42.235 00:22:42.235 ' 00:22:42.235 01:00:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:42.235 01:00:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:22:42.235 01:00:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:42.235 01:00:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:42.235 01:00:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:42.235 01:00:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:42.235 01:00:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:42.235 01:00:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:42.235 01:00:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:42.235 01:00:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:42.235 01:00:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:42.235 01:00:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:42.235 01:00:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:42.235 01:00:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:22:42.235 01:00:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:42.236 01:00:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:42.236 01:00:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:42.236 01:00:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:42.236 01:00:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:42.236 01:00:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:22:42.236 01:00:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:42.236 01:00:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:42.236 01:00:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:42.236 01:00:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:42.236 01:00:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:42.236 01:00:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:42.236 01:00:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:22:42.236 01:00:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:42.236 01:00:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:22:42.236 01:00:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:42.236 01:00:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:42.236 01:00:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:42.236 01:00:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:42.236 01:00:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:42.236 01:00:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:22:42.236 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:42.236 01:00:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:42.236 01:00:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:42.236 01:00:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:42.236 01:00:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:42.236 01:00:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:42.236 01:00:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:22:42.236 01:00:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:42.236 01:00:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:42.236 01:00:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:42.236 01:00:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:42.236 01:00:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:42.236 01:00:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:42.236 01:00:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:42.236 01:00:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:42.236 01:00:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:42.236 01:00:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:42.236 01:00:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@309 -- # xtrace_disable 00:22:42.236 01:00:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:44.136 01:00:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:44.136 01:00:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # pci_devs=() 00:22:44.136 01:00:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:44.136 01:00:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:44.136 01:00:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:44.136 01:00:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:44.136 01:00:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:44.136 01:00:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # net_devs=() 00:22:44.136 01:00:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:44.136 01:00:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # e810=() 00:22:44.136 01:00:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # local -ga e810 00:22:44.136 
01:00:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # x722=() 00:22:44.136 01:00:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # local -ga x722 00:22:44.136 01:00:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # mlx=() 00:22:44.136 01:00:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # local -ga mlx 00:22:44.136 01:00:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:44.136 01:00:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:44.136 01:00:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:44.136 01:00:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:44.136 01:00:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:44.136 01:00:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:44.136 01:00:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:44.136 01:00:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:44.137 01:00:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:44.137 01:00:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:44.137 01:00:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:44.137 01:00:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:44.137 01:00:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:44.137 01:00:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:44.137 01:00:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:44.137 01:00:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:44.137 01:00:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:44.137 01:00:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:44.137 01:00:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:44.137 01:00:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:22:44.137 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:22:44.137 01:00:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:44.137 01:00:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:44.137 01:00:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:44.137 01:00:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:22:44.137 01:00:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:44.137 01:00:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:44.137 01:00:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:22:44.137 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:22:44.137 01:00:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:44.137 01:00:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:44.137 01:00:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:44.137 01:00:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:44.137 01:00:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:44.137 01:00:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:44.137 01:00:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:44.137 01:00:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:44.137 01:00:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:44.137 01:00:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:44.137 01:00:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:44.137 01:00:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:44.137 01:00:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:44.137 01:00:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:44.137 01:00:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:44.137 01:00:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:22:44.137 Found net devices under 0000:0a:00.0: cvl_0_0 00:22:44.137 01:00:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:44.137 01:00:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:44.137 01:00:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:44.137 01:00:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:44.137 01:00:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:44.137 01:00:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:44.137 01:00:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:44.137 01:00:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:44.137 01:00:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:22:44.137 Found net devices under 0000:0a:00.1: cvl_0_1 00:22:44.137 01:00:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:44.137 01:00:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:44.137 01:00:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # is_hw=yes 00:22:44.137 01:00:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:44.137 01:00:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:44.137 01:00:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:44.137 01:00:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:44.137 01:00:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:44.137 01:00:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:44.137 01:00:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:44.137 01:00:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:44.137 01:00:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:44.137 01:00:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:44.137 01:00:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:44.137 01:00:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:44.137 01:00:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:44.137 01:00:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:44.137 01:00:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:44.137 01:00:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:44.137 01:00:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:44.137 01:00:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:44.137 01:00:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:44.137 01:00:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:44.137 01:00:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:44.137 01:00:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:44.137 01:00:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:44.137 01:00:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i 
cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:44.137 01:00:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:44.137 01:00:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:44.137 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:44.137 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.302 ms 00:22:44.137 00:22:44.137 --- 10.0.0.2 ping statistics --- 00:22:44.137 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:44.137 rtt min/avg/max/mdev = 0.302/0.302/0.302/0.000 ms 00:22:44.137 01:00:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:44.137 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:44.137 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.077 ms 00:22:44.137 00:22:44.137 --- 10.0.0.1 ping statistics --- 00:22:44.137 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:44.137 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:22:44.137 01:00:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:44.137 01:00:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # return 0 00:22:44.137 01:00:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:44.137 01:00:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:44.137 01:00:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:44.137 01:00:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:44.137 01:00:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:44.137 01:00:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:44.137 01:00:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:44.396 01:00:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:22:44.396 01:00:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:44.396 01:00:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:44.396 01:00:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:44.396 01:00:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@509 -- # nvmfpid=2993023 00:22:44.396 01:00:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:22:44.396 01:00:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@510 -- # waitforlisten 2993023 00:22:44.396 01:00:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # '[' -z 2993023 ']' 00:22:44.396 01:00:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:44.396 01:00:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge 
-- common/autotest_common.sh@840 -- # local max_retries=100 00:22:44.396 01:00:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:44.396 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:44.396 01:00:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:44.396 01:00:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:44.396 [2024-12-06 01:00:16.956074] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 24.03.0 initialization... 00:22:44.396 [2024-12-06 01:00:16.956247] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:22:44.655 [2024-12-06 01:00:17.125230] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:44.655 [2024-12-06 01:00:17.279459] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:44.655 [2024-12-06 01:00:17.279550] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:44.655 [2024-12-06 01:00:17.279576] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:44.655 [2024-12-06 01:00:17.279600] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:44.655 [2024-12-06 01:00:17.279619] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
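For reference, the TCP test-bed that the bdevio target above is launched into was assembled by nvmf/common.sh in the commands echoed earlier in this run. A condensed sketch of that sequence follows; the interface names cvl_0_0/cvl_0_1, the 10.0.0.0/24 addresses, and the namespace name are taken from this particular run and may differ on other hosts, and the iptables rule is shown without the SPDK_NVMF comment tag the script adds.

    ip netns add cvl_0_0_ns_spdk                                        # private namespace for the target side
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # move the first e810 port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator-side address stays in the root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target-side address inside the namespace
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # let NVMe/TCP traffic reach port 4420
    ping -c 1 10.0.0.2                                                  # connectivity check before starting nvmf_tgt

The test then runs nvmf_tgt inside cvl_0_0_ns_spdk with -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78, which matches the DPDK EAL parameters logged above.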
00:22:44.655 [2024-12-06 01:00:17.281789] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:22:44.655 [2024-12-06 01:00:17.281858] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:22:44.655 [2024-12-06 01:00:17.281911] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:44.655 [2024-12-06 01:00:17.281916] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:22:45.221 01:00:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:45.221 01:00:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@868 -- # return 0 00:22:45.221 01:00:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:45.221 01:00:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:45.221 01:00:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:45.221 01:00:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:45.221 01:00:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:45.221 01:00:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:45.221 01:00:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:45.221 [2024-12-06 01:00:17.928844] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:45.480 01:00:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:45.480 01:00:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:22:45.480 01:00:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:45.480 01:00:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:45.480 Malloc0 00:22:45.480 01:00:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:45.480 01:00:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:45.480 01:00:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:45.480 01:00:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:45.480 01:00:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:45.480 01:00:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:45.480 01:00:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:45.480 01:00:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:45.480 01:00:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:45.480 01:00:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 
4420 00:22:45.480 01:00:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:45.480 01:00:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:45.480 [2024-12-06 01:00:18.019571] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:45.480 01:00:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:45.480 01:00:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:22:45.480 01:00:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:22:45.480 01:00:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # config=() 00:22:45.480 01:00:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # local subsystem config 00:22:45.480 01:00:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:45.480 01:00:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:45.480 { 00:22:45.480 "params": { 00:22:45.480 "name": "Nvme$subsystem", 00:22:45.480 "trtype": "$TEST_TRANSPORT", 00:22:45.480 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:45.480 "adrfam": "ipv4", 00:22:45.480 "trsvcid": "$NVMF_PORT", 00:22:45.480 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:45.480 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:45.480 "hdgst": ${hdgst:-false}, 00:22:45.480 "ddgst": ${ddgst:-false} 00:22:45.480 }, 00:22:45.480 "method": "bdev_nvme_attach_controller" 00:22:45.480 } 00:22:45.480 EOF 00:22:45.480 )") 00:22:45.480 01:00:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # cat 00:22:45.480 01:00:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@584 -- # jq . 00:22:45.480 01:00:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@585 -- # IFS=, 00:22:45.480 01:00:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:22:45.480 "params": { 00:22:45.480 "name": "Nvme1", 00:22:45.480 "trtype": "tcp", 00:22:45.480 "traddr": "10.0.0.2", 00:22:45.480 "adrfam": "ipv4", 00:22:45.480 "trsvcid": "4420", 00:22:45.480 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:45.480 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:45.480 "hdgst": false, 00:22:45.480 "ddgst": false 00:22:45.480 }, 00:22:45.480 "method": "bdev_nvme_attach_controller" 00:22:45.480 }' 00:22:45.480 [2024-12-06 01:00:18.103966] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 24.03.0 initialization... 
00:22:45.480 [2024-12-06 01:00:18.104107] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid2993178 ] 00:22:45.738 [2024-12-06 01:00:18.256214] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:22:45.738 [2024-12-06 01:00:18.399581] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:45.738 [2024-12-06 01:00:18.399626] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:45.738 [2024-12-06 01:00:18.399642] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:46.304 I/O targets: 00:22:46.304 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:22:46.304 00:22:46.304 00:22:46.304 CUnit - A unit testing framework for C - Version 2.1-3 00:22:46.304 http://cunit.sourceforge.net/ 00:22:46.304 00:22:46.304 00:22:46.304 Suite: bdevio tests on: Nvme1n1 00:22:46.304 Test: blockdev write read block ...passed 00:22:46.304 Test: blockdev write zeroes read block ...passed 00:22:46.304 Test: blockdev write zeroes read no split ...passed 00:22:46.563 Test: blockdev write zeroes read split ...passed 00:22:46.563 Test: blockdev write zeroes read split partial ...passed 00:22:46.563 Test: blockdev reset ...[2024-12-06 01:00:19.092464] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:22:46.563 [2024-12-06 01:00:19.092643] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f1100 (9): Bad file descriptor 00:22:46.563 [2024-12-06 01:00:19.153757] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:22:46.563 passed 00:22:46.563 Test: blockdev write read 8 blocks ...passed 00:22:46.563 Test: blockdev write read size > 128k ...passed 00:22:46.563 Test: blockdev write read invalid size ...passed 00:22:46.563 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:22:46.563 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:22:46.563 Test: blockdev write read max offset ...passed 00:22:46.821 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:22:46.821 Test: blockdev writev readv 8 blocks ...passed 00:22:46.821 Test: blockdev writev readv 30 x 1block ...passed 00:22:46.821 Test: blockdev writev readv block ...passed 00:22:46.821 Test: blockdev writev readv size > 128k ...passed 00:22:46.821 Test: blockdev writev readv size > 128k in two iovs ...passed 00:22:46.821 Test: blockdev comparev and writev ...[2024-12-06 01:00:19.409056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:46.821 [2024-12-06 01:00:19.409150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:46.821 [2024-12-06 01:00:19.409190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:46.821 [2024-12-06 01:00:19.409219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:46.821 [2024-12-06 01:00:19.409662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:46.821 [2024-12-06 01:00:19.409698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:22:46.821 [2024-12-06 01:00:19.409739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:46.821 [2024-12-06 01:00:19.409767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:22:46.821 [2024-12-06 01:00:19.410204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:46.822 [2024-12-06 01:00:19.410239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:22:46.822 [2024-12-06 01:00:19.410273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:46.822 [2024-12-06 01:00:19.410305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:22:46.822 [2024-12-06 01:00:19.410725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:46.822 [2024-12-06 01:00:19.410759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:22:46.822 [2024-12-06 01:00:19.410792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:46.822 [2024-12-06 01:00:19.410824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:22:46.822 passed 00:22:46.822 Test: blockdev nvme passthru rw ...passed 00:22:46.822 Test: blockdev nvme passthru vendor specific ...[2024-12-06 01:00:19.493230] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:46.822 [2024-12-06 01:00:19.493304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:46.822 [2024-12-06 01:00:19.493522] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:46.822 [2024-12-06 01:00:19.493554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:22:46.822 [2024-12-06 01:00:19.493752] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:46.822 [2024-12-06 01:00:19.493785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:22:46.822 [2024-12-06 01:00:19.493992] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:46.822 [2024-12-06 01:00:19.494026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:22:46.822 passed 00:22:46.822 Test: blockdev nvme admin passthru ...passed 00:22:47.079 Test: blockdev copy ...passed 00:22:47.079 00:22:47.079 Run Summary: Type Total Ran Passed Failed Inactive 00:22:47.079 suites 1 1 n/a 0 0 00:22:47.079 tests 23 23 23 0 0 00:22:47.079 asserts 152 152 152 0 n/a 00:22:47.079 00:22:47.079 Elapsed time = 1.398 seconds 00:22:47.645 01:00:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:47.645 01:00:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:47.645 01:00:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:47.645 01:00:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:47.645 01:00:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:22:47.645 01:00:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:22:47.645 01:00:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:47.645 01:00:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 00:22:47.645 01:00:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:47.645 01:00:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 00:22:47.645 01:00:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:47.645 01:00:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:47.645 rmmod nvme_tcp 00:22:47.645 rmmod nvme_fabrics 00:22:47.645 rmmod nvme_keyring 00:22:47.645 01:00:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:47.645 01:00:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@128 -- # set -e 00:22:47.645 01:00:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:22:47.645 01:00:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@517 -- # '[' -n 2993023 ']' 00:22:47.645 01:00:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@518 -- # killprocess 2993023 00:22:47.645 01:00:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # '[' -z 2993023 ']' 00:22:47.645 01:00:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # kill -0 2993023 00:22:47.645 01:00:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # uname 00:22:47.645 01:00:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:47.645 01:00:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2993023 00:22:47.903 01:00:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:22:47.903 01:00:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:22:47.903 01:00:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2993023' 00:22:47.903 killing process with pid 2993023 00:22:47.903 01:00:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@973 -- # kill 2993023 00:22:47.903 01:00:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@978 -- # wait 2993023 00:22:48.470 01:00:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:48.470 01:00:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:48.470 01:00:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:48.470 01:00:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:22:48.470 01:00:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-save 00:22:48.470 01:00:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:48.470 01:00:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-restore 00:22:48.470 01:00:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:48.470 01:00:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:48.470 01:00:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:48.470 01:00:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:48.470 01:00:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:51.053 01:00:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:51.053 00:22:51.053 real 0m8.735s 00:22:51.053 user 0m19.859s 00:22:51.053 sys 0m2.881s 00:22:51.053 01:00:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:51.053 01:00:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
common/autotest_common.sh@10 -- # set +x 00:22:51.053 ************************************ 00:22:51.053 END TEST nvmf_bdevio_no_huge 00:22:51.053 ************************************ 00:22:51.053 01:00:23 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:22:51.053 01:00:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:51.053 01:00:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:51.053 01:00:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:51.053 ************************************ 00:22:51.053 START TEST nvmf_tls 00:22:51.053 ************************************ 00:22:51.053 01:00:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:22:51.053 * Looking for test storage... 00:22:51.053 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:51.053 01:00:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:22:51.053 01:00:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1711 -- # lcov --version 00:22:51.053 01:00:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:22:51.053 01:00:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:22:51.053 01:00:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:51.053 01:00:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:51.053 01:00:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:51.053 01:00:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:22:51.053 01:00:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:22:51.053 01:00:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:22:51.053 01:00:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:22:51.053 01:00:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:22:51.053 01:00:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:22:51.053 01:00:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:22:51.053 01:00:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:51.053 01:00:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:22:51.053 01:00:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:22:51.053 01:00:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:51.053 01:00:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:51.053 01:00:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:22:51.053 01:00:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:22:51.053 01:00:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:51.053 01:00:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:22:51.053 01:00:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:22:51.053 01:00:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:22:51.053 01:00:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:22:51.053 01:00:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:51.053 01:00:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:22:51.053 01:00:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:22:51.053 01:00:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:51.053 01:00:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:51.053 01:00:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:22:51.053 01:00:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:51.053 01:00:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:22:51.053 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:51.053 --rc genhtml_branch_coverage=1 00:22:51.053 --rc genhtml_function_coverage=1 00:22:51.053 --rc genhtml_legend=1 00:22:51.054 --rc geninfo_all_blocks=1 00:22:51.054 --rc geninfo_unexecuted_blocks=1 00:22:51.054 00:22:51.054 ' 00:22:51.054 01:00:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:22:51.054 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:51.054 --rc genhtml_branch_coverage=1 00:22:51.054 --rc genhtml_function_coverage=1 00:22:51.054 --rc genhtml_legend=1 00:22:51.054 --rc geninfo_all_blocks=1 00:22:51.054 --rc geninfo_unexecuted_blocks=1 00:22:51.054 00:22:51.054 ' 00:22:51.054 01:00:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:22:51.054 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:51.054 --rc genhtml_branch_coverage=1 00:22:51.054 --rc genhtml_function_coverage=1 00:22:51.054 --rc genhtml_legend=1 00:22:51.054 --rc geninfo_all_blocks=1 00:22:51.054 --rc geninfo_unexecuted_blocks=1 00:22:51.054 00:22:51.054 ' 00:22:51.054 01:00:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:22:51.054 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:51.054 --rc genhtml_branch_coverage=1 00:22:51.054 --rc genhtml_function_coverage=1 00:22:51.054 --rc genhtml_legend=1 00:22:51.054 --rc geninfo_all_blocks=1 00:22:51.054 --rc geninfo_unexecuted_blocks=1 00:22:51.054 00:22:51.054 ' 00:22:51.054 01:00:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:51.054 01:00:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:22:51.054 01:00:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
00:22:51.054 01:00:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:51.054 01:00:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:51.054 01:00:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:51.054 01:00:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:51.054 01:00:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:51.054 01:00:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:51.054 01:00:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:51.054 01:00:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:51.054 01:00:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:51.054 01:00:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:51.054 01:00:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:22:51.054 01:00:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:51.054 01:00:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:51.054 01:00:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:51.054 01:00:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:51.054 01:00:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:51.054 01:00:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 00:22:51.054 01:00:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:51.054 01:00:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:51.054 01:00:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:51.054 01:00:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:51.054 01:00:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:51.054 01:00:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:51.054 01:00:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:22:51.054 01:00:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:51.054 01:00:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:22:51.054 01:00:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:51.054 01:00:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:51.054 01:00:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:51.054 01:00:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:51.054 01:00:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:51.054 01:00:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:51.054 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:51.054 01:00:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:51.054 01:00:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:51.054 01:00:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:51.054 01:00:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:51.054 01:00:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:22:51.054 01:00:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:51.054 01:00:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:51.054 01:00:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:51.054 01:00:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:51.054 01:00:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:51.054 01:00:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:51.054 01:00:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:51.054 01:00:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:51.054 01:00:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:51.054 01:00:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:51.054 01:00:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@309 -- # xtrace_disable 00:22:51.054 01:00:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:52.958 01:00:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:52.958 01:00:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # pci_devs=() 00:22:52.958 01:00:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:52.958 01:00:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:52.958 01:00:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:52.958 01:00:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:52.958 01:00:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:52.958 01:00:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # net_devs=() 00:22:52.958 01:00:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:52.958 01:00:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # e810=() 00:22:52.958 01:00:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # local -ga e810 00:22:52.958 01:00:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # x722=() 00:22:52.958 01:00:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # local -ga x722 00:22:52.958 01:00:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # mlx=() 00:22:52.958 01:00:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # local -ga mlx 00:22:52.958 01:00:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:52.958 01:00:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:52.958 01:00:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:52.958 01:00:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:52.958 01:00:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:52.958 01:00:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 
00:22:52.958 01:00:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:52.958 01:00:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:52.958 01:00:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:52.958 01:00:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:52.958 01:00:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:52.958 01:00:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:52.958 01:00:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:52.958 01:00:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:52.958 01:00:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:52.958 01:00:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:52.958 01:00:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:52.958 01:00:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:52.958 01:00:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:52.958 01:00:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:22:52.958 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:22:52.958 01:00:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:52.958 01:00:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:52.958 01:00:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:52.958 01:00:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:52.958 01:00:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:52.958 01:00:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:52.958 01:00:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:22:52.958 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:22:52.958 01:00:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:52.958 01:00:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:52.958 01:00:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:52.958 01:00:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:52.958 01:00:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:52.958 01:00:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:52.958 01:00:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:52.958 01:00:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:52.958 01:00:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:52.958 01:00:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:52.958 01:00:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:52.958 01:00:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:52.958 01:00:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:52.958 01:00:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:52.958 01:00:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:52.958 01:00:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:22:52.958 Found net devices under 0000:0a:00.0: cvl_0_0 00:22:52.958 01:00:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:52.958 01:00:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:52.958 01:00:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:52.958 01:00:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:52.958 01:00:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:52.958 01:00:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:52.958 01:00:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:52.958 01:00:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:52.958 01:00:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:22:52.958 Found net devices under 0000:0a:00.1: cvl_0_1 00:22:52.958 01:00:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:52.958 01:00:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:52.958 01:00:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # is_hw=yes 00:22:52.958 01:00:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:52.958 01:00:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:52.958 01:00:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:52.958 01:00:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:52.958 01:00:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:52.958 01:00:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:52.958 01:00:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:52.959 01:00:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:52.959 01:00:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:52.959 01:00:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:52.959 01:00:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:52.959 01:00:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@263 -- # 
NVMF_SECOND_INITIATOR_IP= 00:22:52.959 01:00:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:52.959 01:00:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:52.959 01:00:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:52.959 01:00:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:52.959 01:00:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:52.959 01:00:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:53.216 01:00:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:53.216 01:00:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:53.216 01:00:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:53.216 01:00:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:53.216 01:00:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:53.216 01:00:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:53.217 01:00:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:53.217 01:00:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:53.217 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:53.217 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.273 ms 00:22:53.217 00:22:53.217 --- 10.0.0.2 ping statistics --- 00:22:53.217 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:53.217 rtt min/avg/max/mdev = 0.273/0.273/0.273/0.000 ms 00:22:53.217 01:00:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:53.217 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:53.217 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.074 ms 00:22:53.217 00:22:53.217 --- 10.0.0.1 ping statistics --- 00:22:53.217 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:53.217 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:22:53.217 01:00:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:53.217 01:00:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@450 -- # return 0 00:22:53.217 01:00:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:53.217 01:00:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:53.217 01:00:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:53.217 01:00:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:53.217 01:00:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:53.217 01:00:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:53.217 01:00:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:53.217 01:00:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:22:53.217 01:00:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:53.217 01:00:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:53.217 01:00:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:53.217 01:00:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2995529 00:22:53.217 01:00:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:22:53.217 01:00:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2995529 00:22:53.217 01:00:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2995529 ']' 00:22:53.217 01:00:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:53.217 01:00:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:53.217 01:00:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:53.217 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:53.217 01:00:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:53.217 01:00:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:53.473 [2024-12-06 01:00:25.941623] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 24.03.0 initialization... 
00:22:53.473 [2024-12-06 01:00:25.941761] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:53.473 [2024-12-06 01:00:26.084247] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:53.730 [2024-12-06 01:00:26.213700] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:53.730 [2024-12-06 01:00:26.213772] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:53.730 [2024-12-06 01:00:26.213808] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:53.730 [2024-12-06 01:00:26.213844] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:53.730 [2024-12-06 01:00:26.213865] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:53.730 [2024-12-06 01:00:26.215478] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:54.295 01:00:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:54.295 01:00:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:22:54.295 01:00:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:54.295 01:00:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:54.295 01:00:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:54.295 01:00:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:54.295 01:00:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:22:54.295 01:00:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:22:54.551 true 00:22:54.551 01:00:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:54.551 01:00:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:22:55.113 01:00:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:22:55.113 01:00:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:22:55.113 01:00:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:22:55.113 01:00:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:55.113 01:00:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:22:55.678 01:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:22:55.679 01:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:22:55.679 01:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:22:55.679 01:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@90 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:55.679 01:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:22:56.244 01:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:22:56.244 01:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:22:56.244 01:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:56.244 01:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:22:56.501 01:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:22:56.501 01:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:22:56.501 01:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:22:56.758 01:00:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:56.758 01:00:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:22:57.016 01:00:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:22:57.016 01:00:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:22:57.016 01:00:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:22:57.273 01:00:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:57.273 01:00:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:22:57.531 01:00:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:22:57.531 01:00:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:22:57.531 01:00:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:22:57.531 01:00:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:22:57.531 01:00:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:22:57.531 01:00:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:22:57.531 01:00:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:22:57.531 01:00:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:22:57.531 01:00:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:22:57.531 01:00:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:22:57.531 01:00:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:22:57.531 01:00:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:22:57.531 01:00:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@730 -- # local prefix key digest 00:22:57.531 01:00:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:22:57.531 01:00:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=ffeeddccbbaa99887766554433221100 00:22:57.531 01:00:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:22:57.531 01:00:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:22:57.531 01:00:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:22:57.531 01:00:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:22:57.531 01:00:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.PqvDw19XYz 00:22:57.531 01:00:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:22:57.531 01:00:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.72KDbxi73C 00:22:57.531 01:00:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:22:57.531 01:00:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:22:57.531 01:00:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.PqvDw19XYz 00:22:57.531 01:00:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@129 -- # chmod 0600 /tmp/tmp.72KDbxi73C 00:22:57.531 01:00:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:22:57.788 01:00:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:22:58.723 01:00:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.PqvDw19XYz 00:22:58.723 01:00:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.PqvDw19XYz 00:22:58.723 01:00:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:58.723 [2024-12-06 01:00:31.426813] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:58.982 01:00:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:22:59.240 01:00:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:22:59.499 [2024-12-06 01:00:31.952179] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:59.499 [2024-12-06 01:00:31.952549] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:59.499 01:00:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:22:59.757 malloc0 00:22:59.757 01:00:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:00.016 01:00:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.PqvDw19XYz 00:23:00.275 01:00:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:23:00.533 01:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.PqvDw19XYz 00:23:12.731 Initializing NVMe Controllers 00:23:12.731 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:12.731 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:12.731 Initialization complete. Launching workers. 00:23:12.731 ======================================================== 00:23:12.731 Latency(us) 00:23:12.731 Device Information : IOPS MiB/s Average min max 00:23:12.731 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 5650.99 22.07 11330.78 1935.23 12783.58 00:23:12.731 ======================================================== 00:23:12.731 Total : 5650.99 22.07 11330.78 1935.23 12783.58 00:23:12.731 00:23:12.731 01:00:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.PqvDw19XYz 00:23:12.731 01:00:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:12.731 01:00:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:12.731 01:00:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:12.731 01:00:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.PqvDw19XYz 00:23:12.731 01:00:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:12.731 01:00:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2997566 00:23:12.731 01:00:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:12.731 01:00:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:12.731 01:00:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2997566 /var/tmp/bdevperf.sock 00:23:12.731 01:00:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2997566 ']' 00:23:12.731 01:00:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:12.731 01:00:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:12.731 01:00:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:23:12.731 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:12.731 01:00:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:12.731 01:00:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:12.731 [2024-12-06 01:00:43.551919] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 24.03.0 initialization... 00:23:12.731 [2024-12-06 01:00:43.552061] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2997566 ] 00:23:12.731 [2024-12-06 01:00:43.685926] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:12.731 [2024-12-06 01:00:43.809429] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:12.731 01:00:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:12.731 01:00:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:12.731 01:00:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.PqvDw19XYz 00:23:12.731 01:00:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:12.731 [2024-12-06 01:00:45.158389] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:12.731 TLSTESTn1 00:23:12.731 01:00:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:23:12.731 Running I/O for 10 seconds... 
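Editor's note on the PSK files used above: /tmp/tmp.PqvDw19XYz and /tmp/tmp.72KDbxi73C (created earlier and passed to keyring_file_add_key and --psk-path) hold keys in the NVMe/TCP TLS PSK interchange format emitted by format_interchange_psk. A minimal sketch of that computation, in the same bash-plus-inline-python style the helper uses; the CRC byte order and the helper's exact internals are assumptions, not taken from SPDK sources, but the output form matches the NVMeTLSkey-1:01:... strings echoed in this log:

# Sketch only: reproduce e.g.
#   NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:
# for key 00112233445566778899aabbccddeeff and digest 1.
format_psk_sketch() {
  local key=$1 digest=$2
  python3 - "$key" "$digest" << 'PYEOF'
import base64, sys, zlib
key = sys.argv[1].encode()                    # the hex string itself is the PSK byte string
crc = zlib.crc32(key).to_bytes(4, "little")   # CRC32 appended; byte order assumed
print("NVMeTLSkey-1:%02x:%s:" % (int(sys.argv[2]),
      base64.b64encode(key + crc).decode()), end="")
PYEOF
}

key_path=$(mktemp)
format_psk_sketch 00112233445566778899aabbccddeeff 1 > "$key_path"
chmod 0600 "$key_path"    # the test restricts its key files the same way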
00:23:15.036 2612.00 IOPS, 10.20 MiB/s [2024-12-06T00:00:48.679Z] 2642.00 IOPS, 10.32 MiB/s [2024-12-06T00:00:49.613Z] 2659.33 IOPS, 10.39 MiB/s [2024-12-06T00:00:50.547Z] 2662.75 IOPS, 10.40 MiB/s [2024-12-06T00:00:51.482Z] 2663.80 IOPS, 10.41 MiB/s [2024-12-06T00:00:52.415Z] 2666.33 IOPS, 10.42 MiB/s [2024-12-06T00:00:53.791Z] 2673.43 IOPS, 10.44 MiB/s [2024-12-06T00:00:54.727Z] 2675.75 IOPS, 10.45 MiB/s [2024-12-06T00:00:55.671Z] 2673.11 IOPS, 10.44 MiB/s [2024-12-06T00:00:55.671Z] 2674.50 IOPS, 10.45 MiB/s 00:23:22.962 Latency(us) 00:23:22.962 [2024-12-06T00:00:55.671Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:22.962 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:22.962 Verification LBA range: start 0x0 length 0x2000 00:23:22.962 TLSTESTn1 : 10.04 2676.78 10.46 0.00 0.00 47713.69 10340.12 36505.98 00:23:22.962 [2024-12-06T00:00:55.671Z] =================================================================================================================== 00:23:22.962 [2024-12-06T00:00:55.671Z] Total : 2676.78 10.46 0.00 0.00 47713.69 10340.12 36505.98 00:23:22.962 { 00:23:22.962 "results": [ 00:23:22.962 { 00:23:22.962 "job": "TLSTESTn1", 00:23:22.962 "core_mask": "0x4", 00:23:22.962 "workload": "verify", 00:23:22.962 "status": "finished", 00:23:22.962 "verify_range": { 00:23:22.962 "start": 0, 00:23:22.962 "length": 8192 00:23:22.962 }, 00:23:22.962 "queue_depth": 128, 00:23:22.962 "io_size": 4096, 00:23:22.962 "runtime": 10.038163, 00:23:22.962 "iops": 2676.7845869807056, 00:23:22.962 "mibps": 10.456189792893381, 00:23:22.962 "io_failed": 0, 00:23:22.962 "io_timeout": 0, 00:23:22.962 "avg_latency_us": 47713.68854705096, 00:23:22.962 "min_latency_us": 10340.124444444444, 00:23:22.962 "max_latency_us": 36505.97925925926 00:23:22.962 } 00:23:22.962 ], 00:23:22.962 "core_count": 1 00:23:22.963 } 00:23:22.963 01:00:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:22.963 01:00:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 2997566 00:23:22.963 01:00:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2997566 ']' 00:23:22.963 01:00:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2997566 00:23:22.963 01:00:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:22.963 01:00:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:22.963 01:00:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2997566 00:23:22.963 01:00:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:23:22.963 01:00:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:23:22.963 01:00:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2997566' 00:23:22.963 killing process with pid 2997566 00:23:22.963 01:00:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2997566 00:23:22.963 Received shutdown signal, test time was about 10.000000 seconds 00:23:22.963 00:23:22.963 Latency(us) 00:23:22.963 [2024-12-06T00:00:55.672Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:22.963 [2024-12-06T00:00:55.672Z] 
=================================================================================================================== 00:23:22.963 [2024-12-06T00:00:55.672Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:22.963 01:00:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2997566 00:23:23.965 01:00:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.72KDbxi73C 00:23:23.965 01:00:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:23:23.965 01:00:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.72KDbxi73C 00:23:23.965 01:00:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:23:23.965 01:00:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:23.965 01:00:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:23:23.965 01:00:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:23.965 01:00:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.72KDbxi73C 00:23:23.965 01:00:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:23.965 01:00:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:23.965 01:00:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:23.965 01:00:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.72KDbxi73C 00:23:23.965 01:00:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:23.965 01:00:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2999021 00:23:23.966 01:00:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:23.966 01:00:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:23.966 01:00:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2999021 /var/tmp/bdevperf.sock 00:23:23.966 01:00:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2999021 ']' 00:23:23.966 01:00:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:23.966 01:00:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:23.966 01:00:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:23.966 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:23:23.966 01:00:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:23.966 01:00:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:23.966 [2024-12-06 01:00:56.449603] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 24.03.0 initialization... 00:23:23.966 [2024-12-06 01:00:56.449738] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2999021 ] 00:23:23.966 [2024-12-06 01:00:56.595226] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:24.224 [2024-12-06 01:00:56.720173] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:24.789 01:00:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:24.789 01:00:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:24.789 01:00:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.72KDbxi73C 00:23:25.355 01:00:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:25.355 [2024-12-06 01:00:58.047214] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:25.355 [2024-12-06 01:00:58.061634] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:23:25.355 [2024-12-06 01:00:58.061975] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (107): Transport endpoint is not connected 00:23:25.355 [2024-12-06 01:00:58.062949] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:23:25.612 [2024-12-06 01:00:58.063940] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:23:25.612 [2024-12-06 01:00:58.063994] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:23:25.613 [2024-12-06 01:00:58.064019] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:23:25.613 [2024-12-06 01:00:58.064048] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
00:23:25.613 request: 00:23:25.613 { 00:23:25.613 "name": "TLSTEST", 00:23:25.613 "trtype": "tcp", 00:23:25.613 "traddr": "10.0.0.2", 00:23:25.613 "adrfam": "ipv4", 00:23:25.613 "trsvcid": "4420", 00:23:25.613 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:25.613 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:25.613 "prchk_reftag": false, 00:23:25.613 "prchk_guard": false, 00:23:25.613 "hdgst": false, 00:23:25.613 "ddgst": false, 00:23:25.613 "psk": "key0", 00:23:25.613 "allow_unrecognized_csi": false, 00:23:25.613 "method": "bdev_nvme_attach_controller", 00:23:25.613 "req_id": 1 00:23:25.613 } 00:23:25.613 Got JSON-RPC error response 00:23:25.613 response: 00:23:25.613 { 00:23:25.613 "code": -5, 00:23:25.613 "message": "Input/output error" 00:23:25.613 } 00:23:25.613 01:00:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 2999021 00:23:25.613 01:00:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2999021 ']' 00:23:25.613 01:00:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2999021 00:23:25.613 01:00:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:25.613 01:00:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:25.613 01:00:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2999021 00:23:25.613 01:00:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:23:25.613 01:00:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:23:25.613 01:00:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2999021' 00:23:25.613 killing process with pid 2999021 00:23:25.613 01:00:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2999021 00:23:25.613 Received shutdown signal, test time was about 10.000000 seconds 00:23:25.613 00:23:25.613 Latency(us) 00:23:25.613 [2024-12-06T00:00:58.322Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:25.613 [2024-12-06T00:00:58.322Z] =================================================================================================================== 00:23:25.613 [2024-12-06T00:00:58.322Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:25.613 01:00:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2999021 00:23:26.549 01:00:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:23:26.549 01:00:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:23:26.549 01:00:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:26.549 01:00:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:26.549 01:00:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:26.549 01:00:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.PqvDw19XYz 00:23:26.549 01:00:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:23:26.549 01:00:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 
/tmp/tmp.PqvDw19XYz 00:23:26.549 01:00:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:23:26.549 01:00:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:26.549 01:00:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:23:26.549 01:00:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:26.549 01:00:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.PqvDw19XYz 00:23:26.549 01:00:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:26.549 01:00:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:26.549 01:00:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:23:26.549 01:00:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.PqvDw19XYz 00:23:26.549 01:00:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:26.549 01:00:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2999417 00:23:26.549 01:00:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:26.549 01:00:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:26.549 01:00:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2999417 /var/tmp/bdevperf.sock 00:23:26.549 01:00:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2999417 ']' 00:23:26.549 01:00:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:26.549 01:00:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:26.549 01:00:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:26.549 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:26.549 01:00:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:26.549 01:00:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:26.549 [2024-12-06 01:00:58.987279] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 24.03.0 initialization... 
00:23:26.549 [2024-12-06 01:00:58.987425] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2999417 ] 00:23:26.549 [2024-12-06 01:00:59.121680] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:26.549 [2024-12-06 01:00:59.239298] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:27.486 01:00:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:27.486 01:00:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:27.487 01:00:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.PqvDw19XYz 00:23:27.744 01:01:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:23:28.004 [2024-12-06 01:01:00.486024] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:28.004 [2024-12-06 01:01:00.497588] tcp.c: 987:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:23:28.004 [2024-12-06 01:01:00.497625] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:23:28.004 [2024-12-06 01:01:00.497712] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:23:28.004 [2024-12-06 01:01:00.497765] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (107): Transport endpoint is not connected 00:23:28.004 [2024-12-06 01:01:00.498748] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:23:28.004 [2024-12-06 01:01:00.499734] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:23:28.004 [2024-12-06 01:01:00.499762] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:23:28.004 [2024-12-06 01:01:00.499788] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:23:28.004 [2024-12-06 01:01:00.499840] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
00:23:28.004 request: 00:23:28.004 { 00:23:28.004 "name": "TLSTEST", 00:23:28.004 "trtype": "tcp", 00:23:28.004 "traddr": "10.0.0.2", 00:23:28.004 "adrfam": "ipv4", 00:23:28.004 "trsvcid": "4420", 00:23:28.004 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:28.004 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:28.004 "prchk_reftag": false, 00:23:28.004 "prchk_guard": false, 00:23:28.004 "hdgst": false, 00:23:28.004 "ddgst": false, 00:23:28.004 "psk": "key0", 00:23:28.004 "allow_unrecognized_csi": false, 00:23:28.004 "method": "bdev_nvme_attach_controller", 00:23:28.004 "req_id": 1 00:23:28.004 } 00:23:28.004 Got JSON-RPC error response 00:23:28.004 response: 00:23:28.004 { 00:23:28.004 "code": -5, 00:23:28.004 "message": "Input/output error" 00:23:28.004 } 00:23:28.004 01:01:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 2999417 00:23:28.004 01:01:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2999417 ']' 00:23:28.004 01:01:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2999417 00:23:28.004 01:01:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:28.004 01:01:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:28.004 01:01:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2999417 00:23:28.005 01:01:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:23:28.005 01:01:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:23:28.005 01:01:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2999417' 00:23:28.005 killing process with pid 2999417 00:23:28.005 01:01:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2999417 00:23:28.005 Received shutdown signal, test time was about 10.000000 seconds 00:23:28.005 00:23:28.005 Latency(us) 00:23:28.005 [2024-12-06T00:01:00.714Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:28.005 [2024-12-06T00:01:00.714Z] =================================================================================================================== 00:23:28.005 [2024-12-06T00:01:00.714Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:28.005 01:01:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2999417 00:23:28.944 01:01:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:23:28.944 01:01:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:23:28.944 01:01:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:28.944 01:01:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:28.944 01:01:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:28.944 01:01:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.PqvDw19XYz 00:23:28.944 01:01:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:23:28.944 01:01:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 
/tmp/tmp.PqvDw19XYz 00:23:28.944 01:01:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:23:28.944 01:01:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:28.944 01:01:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:23:28.944 01:01:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:28.944 01:01:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.PqvDw19XYz 00:23:28.944 01:01:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:28.944 01:01:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:23:28.944 01:01:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:28.944 01:01:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.PqvDw19XYz 00:23:28.944 01:01:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:28.944 01:01:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2999691 00:23:28.944 01:01:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:28.944 01:01:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:28.944 01:01:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2999691 /var/tmp/bdevperf.sock 00:23:28.944 01:01:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2999691 ']' 00:23:28.944 01:01:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:28.944 01:01:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:28.944 01:01:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:28.944 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:28.944 01:01:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:28.944 01:01:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:28.944 [2024-12-06 01:01:01.433063] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 24.03.0 initialization... 
00:23:28.944 [2024-12-06 01:01:01.433205] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2999691 ] 00:23:28.944 [2024-12-06 01:01:01.568780] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:29.203 [2024-12-06 01:01:01.691361] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:29.769 01:01:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:29.769 01:01:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:29.769 01:01:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.PqvDw19XYz 00:23:30.026 01:01:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:30.593 [2024-12-06 01:01:02.998260] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:30.593 [2024-12-06 01:01:03.007763] tcp.c: 987:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:23:30.593 [2024-12-06 01:01:03.007802] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:23:30.593 [2024-12-06 01:01:03.007867] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:23:30.593 [2024-12-06 01:01:03.007967] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (107): Transport endpoint is not connected 00:23:30.593 [2024-12-06 01:01:03.008936] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:23:30.593 [2024-12-06 01:01:03.009936] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] Ctrlr is in error state 00:23:30.593 [2024-12-06 01:01:03.009968] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:23:30.593 [2024-12-06 01:01:03.009997] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:23:30.593 [2024-12-06 01:01:03.010040] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] in failed state. 
00:23:30.593 request: 00:23:30.593 { 00:23:30.593 "name": "TLSTEST", 00:23:30.593 "trtype": "tcp", 00:23:30.593 "traddr": "10.0.0.2", 00:23:30.593 "adrfam": "ipv4", 00:23:30.593 "trsvcid": "4420", 00:23:30.593 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:30.593 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:30.593 "prchk_reftag": false, 00:23:30.593 "prchk_guard": false, 00:23:30.593 "hdgst": false, 00:23:30.593 "ddgst": false, 00:23:30.593 "psk": "key0", 00:23:30.593 "allow_unrecognized_csi": false, 00:23:30.593 "method": "bdev_nvme_attach_controller", 00:23:30.593 "req_id": 1 00:23:30.593 } 00:23:30.593 Got JSON-RPC error response 00:23:30.593 response: 00:23:30.593 { 00:23:30.593 "code": -5, 00:23:30.593 "message": "Input/output error" 00:23:30.593 } 00:23:30.593 01:01:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 2999691 00:23:30.593 01:01:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2999691 ']' 00:23:30.593 01:01:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2999691 00:23:30.593 01:01:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:30.593 01:01:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:30.593 01:01:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2999691 00:23:30.593 01:01:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:23:30.593 01:01:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:23:30.593 01:01:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2999691' 00:23:30.593 killing process with pid 2999691 00:23:30.593 01:01:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2999691 00:23:30.593 Received shutdown signal, test time was about 10.000000 seconds 00:23:30.593 00:23:30.593 Latency(us) 00:23:30.593 [2024-12-06T00:01:03.302Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:30.593 [2024-12-06T00:01:03.302Z] =================================================================================================================== 00:23:30.593 [2024-12-06T00:01:03.302Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:30.593 01:01:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2999691 00:23:31.530 01:01:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:23:31.530 01:01:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:23:31.530 01:01:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:31.530 01:01:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:31.530 01:01:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:31.530 01:01:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:23:31.530 01:01:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:23:31.530 01:01:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:23:31.530 
01:01:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:23:31.530 01:01:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:31.530 01:01:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:23:31.530 01:01:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:31.530 01:01:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:23:31.530 01:01:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:31.530 01:01:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:31.530 01:01:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:31.530 01:01:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:23:31.530 01:01:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:31.530 01:01:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2999973 00:23:31.530 01:01:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:31.530 01:01:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:31.530 01:01:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2999973 /var/tmp/bdevperf.sock 00:23:31.530 01:01:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2999973 ']' 00:23:31.530 01:01:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:31.530 01:01:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:31.530 01:01:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:31.530 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:31.530 01:01:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:31.530 01:01:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:31.530 [2024-12-06 01:01:03.950899] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 24.03.0 initialization... 
00:23:31.530 [2024-12-06 01:01:03.951037] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2999973 ] 00:23:31.530 [2024-12-06 01:01:04.088188] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:31.530 [2024-12-06 01:01:04.210285] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:32.466 01:01:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:32.466 01:01:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:32.466 01:01:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:23:32.728 [2024-12-06 01:01:05.185289] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:23:32.728 [2024-12-06 01:01:05.185348] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:23:32.728 request: 00:23:32.728 { 00:23:32.728 "name": "key0", 00:23:32.728 "path": "", 00:23:32.728 "method": "keyring_file_add_key", 00:23:32.728 "req_id": 1 00:23:32.728 } 00:23:32.728 Got JSON-RPC error response 00:23:32.728 response: 00:23:32.728 { 00:23:32.728 "code": -1, 00:23:32.728 "message": "Operation not permitted" 00:23:32.728 } 00:23:32.728 01:01:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:32.985 [2024-12-06 01:01:05.446156] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:32.985 [2024-12-06 01:01:05.446233] bdev_nvme.c:6749:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:23:32.985 request: 00:23:32.985 { 00:23:32.985 "name": "TLSTEST", 00:23:32.985 "trtype": "tcp", 00:23:32.985 "traddr": "10.0.0.2", 00:23:32.985 "adrfam": "ipv4", 00:23:32.985 "trsvcid": "4420", 00:23:32.985 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:32.985 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:32.985 "prchk_reftag": false, 00:23:32.985 "prchk_guard": false, 00:23:32.985 "hdgst": false, 00:23:32.985 "ddgst": false, 00:23:32.985 "psk": "key0", 00:23:32.985 "allow_unrecognized_csi": false, 00:23:32.985 "method": "bdev_nvme_attach_controller", 00:23:32.985 "req_id": 1 00:23:32.985 } 00:23:32.985 Got JSON-RPC error response 00:23:32.985 response: 00:23:32.985 { 00:23:32.985 "code": -126, 00:23:32.985 "message": "Required key not available" 00:23:32.985 } 00:23:32.985 01:01:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 2999973 00:23:32.985 01:01:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2999973 ']' 00:23:32.985 01:01:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2999973 00:23:32.985 01:01:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:32.985 01:01:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:32.985 01:01:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 
2999973 00:23:32.985 01:01:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:23:32.985 01:01:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:23:32.985 01:01:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2999973' 00:23:32.985 killing process with pid 2999973 00:23:32.985 01:01:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2999973 00:23:32.985 Received shutdown signal, test time was about 10.000000 seconds 00:23:32.985 00:23:32.985 Latency(us) 00:23:32.985 [2024-12-06T00:01:05.694Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:32.985 [2024-12-06T00:01:05.694Z] =================================================================================================================== 00:23:32.985 [2024-12-06T00:01:05.694Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:32.986 01:01:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2999973 00:23:33.926 01:01:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:23:33.926 01:01:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:23:33.926 01:01:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:33.926 01:01:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:33.926 01:01:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:33.926 01:01:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 2995529 00:23:33.926 01:01:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2995529 ']' 00:23:33.926 01:01:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2995529 00:23:33.926 01:01:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:33.926 01:01:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:33.926 01:01:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2995529 00:23:33.926 01:01:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:33.926 01:01:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:33.926 01:01:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2995529' 00:23:33.926 killing process with pid 2995529 00:23:33.927 01:01:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2995529 00:23:33.927 01:01:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2995529 00:23:35.309 01:01:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:23:35.309 01:01:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:23:35.309 01:01:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:23:35.309 01:01:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:23:35.309 01:01:07 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:23:35.309 01:01:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=2 00:23:35.309 01:01:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:23:35.309 01:01:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:23:35.309 01:01:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:23:35.309 01:01:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.sdwf5y7n8J 00:23:35.309 01:01:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:23:35.309 01:01:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.sdwf5y7n8J 00:23:35.309 01:01:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:23:35.309 01:01:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:35.309 01:01:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:35.309 01:01:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:35.309 01:01:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=3000396 00:23:35.309 01:01:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:35.309 01:01:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3000396 00:23:35.309 01:01:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3000396 ']' 00:23:35.309 01:01:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:35.309 01:01:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:35.309 01:01:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:35.309 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:35.309 01:01:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:35.309 01:01:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:35.309 [2024-12-06 01:01:07.710176] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 24.03.0 initialization... 00:23:35.309 [2024-12-06 01:01:07.710312] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:35.309 [2024-12-06 01:01:07.857128] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:35.309 [2024-12-06 01:01:07.995218] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:35.309 [2024-12-06 01:01:07.995322] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
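The format_interchange_psk call above hands the raw configured key and a hash identifier to the format_key helper, which shells into an inline `python -` to build the `NVMeTLSkey-1:02:...:` string stored in key_long and then written to the 0600 temp file. A minimal Python sketch of that wrapping follows; the helper names appear in the trace, but the body below is a reconstruction, and the assumption that the four extra base64-decoded bytes are a little-endian CRC32 of the key bytes is hedged rather than re-derived here.

```python
import base64
import zlib


def format_interchange_psk(configured_key: str, hash_id: int,
                           prefix: str = "NVMeTLSkey-1") -> str:
    """Wrap a configured PSK in the TLS PSK interchange format used by the test:
    <prefix>:<hash id>:<base64(key bytes + 4-byte checksum)>:
    hash_id 2 matches the 'digest=2' seen in the trace."""
    raw = configured_key.encode()
    # Assumption: the trailing four bytes visible in the base64 payload are a
    # little-endian CRC32 of the key bytes.
    crc = zlib.crc32(raw).to_bytes(4, byteorder="little")
    payload = base64.b64encode(raw + crc).decode()
    return f"{prefix}:{hash_id:02x}:{payload}:"


# Should reproduce the key_long value captured above for digest 2.
print(format_interchange_psk("00112233445566778899aabbccddeeff0011223344556677", 2))
```

The test then writes this string to the mktemp file (/tmp/tmp.sdwf5y7n8J) and chmods it to 0600, which matters later because the keyring refuses key files that are readable by group or other.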
00:23:35.309 [2024-12-06 01:01:07.995348] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:35.309 [2024-12-06 01:01:07.995372] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:35.309 [2024-12-06 01:01:07.995391] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:35.309 [2024-12-06 01:01:07.997089] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:36.251 01:01:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:36.251 01:01:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:36.251 01:01:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:36.251 01:01:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:36.251 01:01:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:36.251 01:01:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:36.251 01:01:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.sdwf5y7n8J 00:23:36.251 01:01:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.sdwf5y7n8J 00:23:36.251 01:01:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:36.509 [2024-12-06 01:01:09.046016] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:36.509 01:01:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:36.767 01:01:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:37.027 [2024-12-06 01:01:09.687876] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:37.027 [2024-12-06 01:01:09.688302] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:37.027 01:01:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:37.602 malloc0 00:23:37.602 01:01:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:37.864 01:01:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.sdwf5y7n8J 00:23:38.124 01:01:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:23:38.382 01:01:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.sdwf5y7n8J 00:23:38.382 01:01:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local 
subnqn hostnqn psk 00:23:38.382 01:01:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:38.382 01:01:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:38.382 01:01:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.sdwf5y7n8J 00:23:38.382 01:01:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:38.382 01:01:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3000815 00:23:38.382 01:01:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:38.382 01:01:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:38.382 01:01:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3000815 /var/tmp/bdevperf.sock 00:23:38.382 01:01:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3000815 ']' 00:23:38.382 01:01:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:38.382 01:01:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:38.382 01:01:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:38.382 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:38.382 01:01:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:38.382 01:01:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:38.382 [2024-12-06 01:01:10.959198] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 24.03.0 initialization... 
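bdevperf is launched here with -z and -r /var/tmp/bdevperf.sock, and waitforlisten blocks until that UNIX-domain JSON-RPC socket accepts connections before any `rpc.py -s /var/tmp/bdevperf.sock` call is issued. A rough, self-contained equivalent of that wait loop; the retry count and delay are illustrative, not the values autotest_common.sh actually uses.

```python
import socket
import time


def wait_for_rpc_socket(path: str, retries: int = 100, delay_s: float = 0.1) -> None:
    """Poll a UNIX-domain JSON-RPC socket until the server process accepts connections."""
    for _ in range(retries):
        s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        try:
            s.connect(path)      # succeeds once the app has bound and started listening
            return
        except OSError:
            time.sleep(delay_s)  # still starting up (EAL init, reactors, RPC server)
        finally:
            s.close()
    raise TimeoutError(f"no RPC listener on {path}")


wait_for_rpc_socket("/var/tmp/bdevperf.sock")
```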
00:23:38.382 [2024-12-06 01:01:10.959332] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3000815 ] 00:23:38.640 [2024-12-06 01:01:11.092856] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:38.640 [2024-12-06 01:01:11.210603] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:39.576 01:01:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:39.576 01:01:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:39.576 01:01:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.sdwf5y7n8J 00:23:39.833 01:01:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:40.090 [2024-12-06 01:01:12.560951] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:40.090 TLSTESTn1 00:23:40.091 01:01:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:23:40.091 Running I/O for 10 seconds... 00:23:42.409 2370.00 IOPS, 9.26 MiB/s [2024-12-06T00:01:16.053Z] 2409.00 IOPS, 9.41 MiB/s [2024-12-06T00:01:16.990Z] 2440.00 IOPS, 9.53 MiB/s [2024-12-06T00:01:17.927Z] 2440.50 IOPS, 9.53 MiB/s [2024-12-06T00:01:18.865Z] 2445.60 IOPS, 9.55 MiB/s [2024-12-06T00:01:19.807Z] 2446.00 IOPS, 9.55 MiB/s [2024-12-06T00:01:21.186Z] 2457.86 IOPS, 9.60 MiB/s [2024-12-06T00:01:22.123Z] 2443.12 IOPS, 9.54 MiB/s [2024-12-06T00:01:23.062Z] 2452.22 IOPS, 9.58 MiB/s [2024-12-06T00:01:23.062Z] 2459.20 IOPS, 9.61 MiB/s 00:23:50.353 Latency(us) 00:23:50.353 [2024-12-06T00:01:23.062Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:50.353 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:50.353 Verification LBA range: start 0x0 length 0x2000 00:23:50.353 TLSTESTn1 : 10.04 2463.15 9.62 0.00 0.00 51851.19 13883.92 49127.73 00:23:50.353 [2024-12-06T00:01:23.062Z] =================================================================================================================== 00:23:50.353 [2024-12-06T00:01:23.062Z] Total : 2463.15 9.62 0.00 0.00 51851.19 13883.92 49127.73 00:23:50.353 { 00:23:50.353 "results": [ 00:23:50.353 { 00:23:50.353 "job": "TLSTESTn1", 00:23:50.353 "core_mask": "0x4", 00:23:50.353 "workload": "verify", 00:23:50.353 "status": "finished", 00:23:50.353 "verify_range": { 00:23:50.353 "start": 0, 00:23:50.353 "length": 8192 00:23:50.353 }, 00:23:50.353 "queue_depth": 128, 00:23:50.353 "io_size": 4096, 00:23:50.353 "runtime": 10.035523, 00:23:50.353 "iops": 2463.150151716059, 00:23:50.353 "mibps": 9.621680280140856, 00:23:50.353 "io_failed": 0, 00:23:50.353 "io_timeout": 0, 00:23:50.353 "avg_latency_us": 51851.18515629753, 00:23:50.353 "min_latency_us": 13883.922962962963, 00:23:50.353 "max_latency_us": 49127.72740740741 00:23:50.353 } 00:23:50.353 ], 00:23:50.353 "core_count": 1 
00:23:50.353 } 00:23:50.353 01:01:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:50.353 01:01:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 3000815 00:23:50.353 01:01:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3000815 ']' 00:23:50.353 01:01:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3000815 00:23:50.353 01:01:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:50.353 01:01:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:50.353 01:01:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3000815 00:23:50.353 01:01:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:23:50.353 01:01:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:23:50.353 01:01:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3000815' 00:23:50.353 killing process with pid 3000815 00:23:50.353 01:01:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3000815 00:23:50.353 Received shutdown signal, test time was about 10.000000 seconds 00:23:50.353 00:23:50.353 Latency(us) 00:23:50.353 [2024-12-06T00:01:23.062Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:50.353 [2024-12-06T00:01:23.062Z] =================================================================================================================== 00:23:50.353 [2024-12-06T00:01:23.062Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:50.353 01:01:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3000815 00:23:51.368 01:01:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.sdwf5y7n8J 00:23:51.368 01:01:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@172 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.sdwf5y7n8J 00:23:51.368 01:01:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:23:51.368 01:01:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.sdwf5y7n8J 00:23:51.368 01:01:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:23:51.368 01:01:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:51.368 01:01:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:23:51.368 01:01:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:51.368 01:01:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.sdwf5y7n8J 00:23:51.368 01:01:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:51.368 01:01:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:51.368 01:01:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:51.368 01:01:23 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.sdwf5y7n8J 00:23:51.368 01:01:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:51.368 01:01:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3002268 00:23:51.368 01:01:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:51.368 01:01:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:51.368 01:01:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3002268 /var/tmp/bdevperf.sock 00:23:51.368 01:01:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3002268 ']' 00:23:51.368 01:01:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:51.368 01:01:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:51.368 01:01:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:51.368 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:51.368 01:01:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:51.368 01:01:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:51.368 [2024-12-06 01:01:23.819282] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 24.03.0 initialization... 
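The keyring_file_add_key and bdev_nvme_attach_controller calls that follow go through rpc.py, a thin JSON-RPC client for the bdevperf socket; the request and error-response dumps in the trace show the fields exchanged. Below is a bare-bones sketch of one such call without rpc.py. Treating the socket protocol as plain JSON-RPC 2.0 is an assumption based on those dumps, and the single-recv read is a simplification (a real client keeps reading until the JSON document is complete).

```python
import json
import socket


def rpc_call(sock_path: str, method: str, params: dict, req_id: int = 1) -> dict:
    """Send one JSON-RPC 2.0 request over an SPDK-style UNIX RPC socket and read the reply."""
    request = {"jsonrpc": "2.0", "id": req_id, "method": method, "params": params}
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
        s.connect(sock_path)
        s.sendall(json.dumps(request).encode())
        # Simplification: assume the whole reply fits in one read.
        return json.loads(s.recv(65536).decode())


reply = rpc_call("/var/tmp/bdevperf.sock", "keyring_file_add_key",
                 {"name": "key0", "path": "/tmp/tmp.sdwf5y7n8J"})
print(reply)  # with the key file at mode 0666 this comes back as an error response
```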
00:23:51.368 [2024-12-06 01:01:23.819423] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3002268 ] 00:23:51.368 [2024-12-06 01:01:23.961306] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:51.627 [2024-12-06 01:01:24.085157] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:52.191 01:01:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:52.191 01:01:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:52.191 01:01:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.sdwf5y7n8J 00:23:52.450 [2024-12-06 01:01:25.139226] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.sdwf5y7n8J': 0100666 00:23:52.450 [2024-12-06 01:01:25.139284] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:23:52.450 request: 00:23:52.450 { 00:23:52.450 "name": "key0", 00:23:52.450 "path": "/tmp/tmp.sdwf5y7n8J", 00:23:52.450 "method": "keyring_file_add_key", 00:23:52.450 "req_id": 1 00:23:52.450 } 00:23:52.450 Got JSON-RPC error response 00:23:52.450 response: 00:23:52.450 { 00:23:52.450 "code": -1, 00:23:52.450 "message": "Operation not permitted" 00:23:52.450 } 00:23:52.708 01:01:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:52.964 [2024-12-06 01:01:25.436265] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:52.964 [2024-12-06 01:01:25.436352] bdev_nvme.c:6749:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:23:52.964 request: 00:23:52.964 { 00:23:52.964 "name": "TLSTEST", 00:23:52.964 "trtype": "tcp", 00:23:52.964 "traddr": "10.0.0.2", 00:23:52.964 "adrfam": "ipv4", 00:23:52.964 "trsvcid": "4420", 00:23:52.964 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:52.964 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:52.964 "prchk_reftag": false, 00:23:52.964 "prchk_guard": false, 00:23:52.964 "hdgst": false, 00:23:52.964 "ddgst": false, 00:23:52.964 "psk": "key0", 00:23:52.964 "allow_unrecognized_csi": false, 00:23:52.964 "method": "bdev_nvme_attach_controller", 00:23:52.964 "req_id": 1 00:23:52.964 } 00:23:52.964 Got JSON-RPC error response 00:23:52.964 response: 00:23:52.964 { 00:23:52.964 "code": -126, 00:23:52.964 "message": "Required key not available" 00:23:52.964 } 00:23:52.964 01:01:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 3002268 00:23:52.964 01:01:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3002268 ']' 00:23:52.964 01:01:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3002268 00:23:52.964 01:01:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:52.964 01:01:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:52.964 01:01:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3002268 00:23:52.964 01:01:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:23:52.965 01:01:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:23:52.965 01:01:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3002268' 00:23:52.965 killing process with pid 3002268 00:23:52.965 01:01:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3002268 00:23:52.965 Received shutdown signal, test time was about 10.000000 seconds 00:23:52.965 00:23:52.965 Latency(us) 00:23:52.965 [2024-12-06T00:01:25.674Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:52.965 [2024-12-06T00:01:25.674Z] =================================================================================================================== 00:23:52.965 [2024-12-06T00:01:25.674Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:52.965 01:01:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3002268 00:23:53.898 01:01:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:23:53.898 01:01:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:23:53.898 01:01:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:53.898 01:01:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:53.899 01:01:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:53.899 01:01:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 3000396 00:23:53.899 01:01:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3000396 ']' 00:23:53.899 01:01:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3000396 00:23:53.899 01:01:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:53.899 01:01:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:53.899 01:01:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3000396 00:23:53.899 01:01:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:53.899 01:01:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:53.899 01:01:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3000396' 00:23:53.899 killing process with pid 3000396 00:23:53.899 01:01:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3000396 00:23:53.899 01:01:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3000396 00:23:55.274 01:01:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:23:55.274 01:01:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:55.274 01:01:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:55.274 01:01:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:55.274 01:01:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # 
nvmfpid=3002801 00:23:55.274 01:01:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:55.274 01:01:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3002801 00:23:55.274 01:01:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3002801 ']' 00:23:55.274 01:01:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:55.274 01:01:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:55.274 01:01:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:55.274 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:55.274 01:01:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:55.274 01:01:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:55.274 [2024-12-06 01:01:27.697418] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 24.03.0 initialization... 00:23:55.274 [2024-12-06 01:01:27.697565] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:55.274 [2024-12-06 01:01:27.847346] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:55.274 [2024-12-06 01:01:27.981996] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:55.274 [2024-12-06 01:01:27.982094] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:55.274 [2024-12-06 01:01:27.982129] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:55.274 [2024-12-06 01:01:27.982153] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:55.274 [2024-12-06 01:01:27.982174] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
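The target started here is then pointed at the same key file, which is still mode 0666 from the earlier chmod, so keyring_file_check_path rejects it again ("Invalid permissions for key file ... 0100666") and the later nvmf_subsystem_add_host fails with "Key 'key0' does not exist". A hedged sketch of the two path checks exercised in this run; the exact mode bits SPDK tolerates are an assumption, since the trace only shows that 0600 passes and 0666 does not.

```python
import os
import stat


def check_key_path(path: str) -> None:
    """Mirror the two keyring_file_check_path rejections seen in this run."""
    if not os.path.isabs(path):
        raise ValueError("Non-absolute paths are not allowed")
    mode = os.stat(path).st_mode
    # Assumption: any group/other access bits disqualify the key file, which is
    # consistent with 0600 being accepted and 0666 being rejected in the trace.
    if stat.S_IMODE(mode) & (stat.S_IRWXG | stat.S_IRWXO):
        raise PermissionError(f"Invalid permissions for key file '{path}': {mode:07o}")


check_key_path("/tmp/tmp.sdwf5y7n8J")  # raises while the file is still mode 0666
```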
00:23:55.535 [2024-12-06 01:01:27.983787] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:56.103 01:01:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:56.103 01:01:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:56.103 01:01:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:56.103 01:01:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:56.103 01:01:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:56.103 01:01:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:56.103 01:01:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.sdwf5y7n8J 00:23:56.103 01:01:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:23:56.103 01:01:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.sdwf5y7n8J 00:23:56.103 01:01:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=setup_nvmf_tgt 00:23:56.103 01:01:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:56.103 01:01:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t setup_nvmf_tgt 00:23:56.103 01:01:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:56.103 01:01:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # setup_nvmf_tgt /tmp/tmp.sdwf5y7n8J 00:23:56.103 01:01:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.sdwf5y7n8J 00:23:56.103 01:01:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:56.361 [2024-12-06 01:01:29.039193] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:56.361 01:01:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:56.929 01:01:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:57.187 [2024-12-06 01:01:29.664903] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:57.187 [2024-12-06 01:01:29.665343] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:57.187 01:01:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:57.447 malloc0 00:23:57.447 01:01:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:57.708 01:01:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.sdwf5y7n8J 00:23:57.966 [2024-12-06 
01:01:30.624867] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.sdwf5y7n8J': 0100666 00:23:57.966 [2024-12-06 01:01:30.624938] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:23:57.966 request: 00:23:57.966 { 00:23:57.966 "name": "key0", 00:23:57.966 "path": "/tmp/tmp.sdwf5y7n8J", 00:23:57.966 "method": "keyring_file_add_key", 00:23:57.966 "req_id": 1 00:23:57.966 } 00:23:57.966 Got JSON-RPC error response 00:23:57.966 response: 00:23:57.966 { 00:23:57.966 "code": -1, 00:23:57.966 "message": "Operation not permitted" 00:23:57.966 } 00:23:57.967 01:01:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:23:58.225 [2024-12-06 01:01:30.897651] tcp.c:3777:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:23:58.225 [2024-12-06 01:01:30.897745] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:23:58.225 request: 00:23:58.225 { 00:23:58.225 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:58.225 "host": "nqn.2016-06.io.spdk:host1", 00:23:58.225 "psk": "key0", 00:23:58.225 "method": "nvmf_subsystem_add_host", 00:23:58.225 "req_id": 1 00:23:58.225 } 00:23:58.225 Got JSON-RPC error response 00:23:58.225 response: 00:23:58.225 { 00:23:58.225 "code": -32603, 00:23:58.225 "message": "Internal error" 00:23:58.225 } 00:23:58.225 01:01:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:23:58.225 01:01:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:58.225 01:01:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:58.225 01:01:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:58.225 01:01:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 3002801 00:23:58.225 01:01:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3002801 ']' 00:23:58.225 01:01:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3002801 00:23:58.225 01:01:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:58.225 01:01:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:58.225 01:01:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3002801 00:23:58.482 01:01:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:58.483 01:01:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:58.483 01:01:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3002801' 00:23:58.483 killing process with pid 3002801 00:23:58.483 01:01:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3002801 00:23:58.483 01:01:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3002801 00:23:59.860 01:01:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.sdwf5y7n8J 00:23:59.860 01:01:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:23:59.860 01:01:32 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:59.860 01:01:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:59.860 01:01:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:59.860 01:01:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=3003365 00:23:59.860 01:01:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:59.860 01:01:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3003365 00:23:59.860 01:01:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3003365 ']' 00:23:59.860 01:01:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:59.860 01:01:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:59.860 01:01:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:59.860 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:59.860 01:01:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:59.860 01:01:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:59.860 [2024-12-06 01:01:32.332033] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 24.03.0 initialization... 00:23:59.860 [2024-12-06 01:01:32.332199] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:59.860 [2024-12-06 01:01:32.497564] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:00.120 [2024-12-06 01:01:32.633550] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:00.120 [2024-12-06 01:01:32.633645] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:00.120 [2024-12-06 01:01:32.633671] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:00.120 [2024-12-06 01:01:32.633700] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:00.120 [2024-12-06 01:01:32.633720] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
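After the key file is chmod'ed back to 0600, the next segment repeats setup_nvmf_tgt /tmp/tmp.sdwf5y7n8J successfully; the helper is a fixed chain of rpc.py calls against the target's default RPC socket, all of which appear verbatim in the trace. A compressed replay of that chain as a sketch; subprocess.run with check=True stands in for the shell's error handling, and the rpc.py path is simply the workspace path used by this job.

```python
import subprocess

RPC = "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py"


def rpc(*args: str) -> None:
    subprocess.run([RPC, *args], check=True)


def setup_nvmf_tgt(key_path: str) -> None:
    """Replay the target-side TLS setup performed by target/tls.sh setup_nvmf_tgt."""
    rpc("nvmf_create_transport", "-t", "tcp", "-o")
    rpc("nvmf_create_subsystem", "nqn.2016-06.io.spdk:cnode1",
        "-s", "SPDK00000000000001", "-m", "10")
    # -k marks the listener as TLS ("TLS support is considered experimental" in the log)
    rpc("nvmf_subsystem_add_listener", "nqn.2016-06.io.spdk:cnode1",
        "-t", "tcp", "-a", "10.0.0.2", "-s", "4420", "-k")
    rpc("bdev_malloc_create", "32", "4096", "-b", "malloc0")
    rpc("nvmf_subsystem_add_ns", "nqn.2016-06.io.spdk:cnode1", "malloc0", "-n", "1")
    rpc("keyring_file_add_key", "key0", key_path)
    rpc("nvmf_subsystem_add_host", "nqn.2016-06.io.spdk:cnode1",
        "nqn.2016-06.io.spdk:host1", "--psk", "key0")


setup_nvmf_tgt("/tmp/tmp.sdwf5y7n8J")
```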
00:24:00.120 [2024-12-06 01:01:32.635406] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:00.687 01:01:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:00.687 01:01:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:24:00.687 01:01:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:00.687 01:01:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:00.687 01:01:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:00.687 01:01:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:00.687 01:01:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.sdwf5y7n8J 00:24:00.687 01:01:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.sdwf5y7n8J 00:24:00.687 01:01:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:24:00.947 [2024-12-06 01:01:33.629164] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:00.947 01:01:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:24:01.514 01:01:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:24:01.772 [2024-12-06 01:01:34.262893] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:01.772 [2024-12-06 01:01:34.263268] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:01.772 01:01:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:24:02.030 malloc0 00:24:02.030 01:01:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:24:02.289 01:01:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.sdwf5y7n8J 00:24:02.545 01:01:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:24:02.802 01:01:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=3003779 00:24:02.802 01:01:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:02.802 01:01:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:02.802 01:01:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 3003779 /var/tmp/bdevperf.sock 00:24:02.802 01:01:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@835 -- # '[' -z 3003779 ']' 00:24:02.802 01:01:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:02.802 01:01:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:02.802 01:01:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:02.802 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:02.802 01:01:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:02.802 01:01:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:03.061 [2024-12-06 01:01:35.534593] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 24.03.0 initialization... 00:24:03.061 [2024-12-06 01:01:35.534732] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3003779 ] 00:24:03.061 [2024-12-06 01:01:35.667856] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:03.320 [2024-12-06 01:01:35.787502] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:03.885 01:01:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:03.885 01:01:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:24:03.885 01:01:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.sdwf5y7n8J 00:24:04.143 01:01:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:24:04.402 [2024-12-06 01:01:37.008715] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:04.402 TLSTESTn1 00:24:04.402 01:01:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:24:04.971 01:01:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:24:04.971 "subsystems": [ 00:24:04.971 { 00:24:04.971 "subsystem": "keyring", 00:24:04.971 "config": [ 00:24:04.971 { 00:24:04.971 "method": "keyring_file_add_key", 00:24:04.971 "params": { 00:24:04.971 "name": "key0", 00:24:04.971 "path": "/tmp/tmp.sdwf5y7n8J" 00:24:04.971 } 00:24:04.971 } 00:24:04.971 ] 00:24:04.971 }, 00:24:04.971 { 00:24:04.971 "subsystem": "iobuf", 00:24:04.972 "config": [ 00:24:04.972 { 00:24:04.972 "method": "iobuf_set_options", 00:24:04.972 "params": { 00:24:04.972 "small_pool_count": 8192, 00:24:04.972 "large_pool_count": 1024, 00:24:04.972 "small_bufsize": 8192, 00:24:04.972 "large_bufsize": 135168, 00:24:04.972 "enable_numa": false 00:24:04.972 } 00:24:04.972 } 00:24:04.972 ] 00:24:04.972 }, 00:24:04.972 { 00:24:04.972 "subsystem": "sock", 00:24:04.972 "config": [ 00:24:04.972 { 00:24:04.972 "method": "sock_set_default_impl", 00:24:04.972 "params": { 00:24:04.972 "impl_name": "posix" 
00:24:04.972 } 00:24:04.972 }, 00:24:04.972 { 00:24:04.972 "method": "sock_impl_set_options", 00:24:04.972 "params": { 00:24:04.972 "impl_name": "ssl", 00:24:04.972 "recv_buf_size": 4096, 00:24:04.972 "send_buf_size": 4096, 00:24:04.972 "enable_recv_pipe": true, 00:24:04.972 "enable_quickack": false, 00:24:04.972 "enable_placement_id": 0, 00:24:04.972 "enable_zerocopy_send_server": true, 00:24:04.972 "enable_zerocopy_send_client": false, 00:24:04.972 "zerocopy_threshold": 0, 00:24:04.972 "tls_version": 0, 00:24:04.972 "enable_ktls": false 00:24:04.972 } 00:24:04.972 }, 00:24:04.972 { 00:24:04.972 "method": "sock_impl_set_options", 00:24:04.972 "params": { 00:24:04.972 "impl_name": "posix", 00:24:04.972 "recv_buf_size": 2097152, 00:24:04.972 "send_buf_size": 2097152, 00:24:04.972 "enable_recv_pipe": true, 00:24:04.972 "enable_quickack": false, 00:24:04.972 "enable_placement_id": 0, 00:24:04.972 "enable_zerocopy_send_server": true, 00:24:04.972 "enable_zerocopy_send_client": false, 00:24:04.972 "zerocopy_threshold": 0, 00:24:04.972 "tls_version": 0, 00:24:04.972 "enable_ktls": false 00:24:04.972 } 00:24:04.972 } 00:24:04.972 ] 00:24:04.972 }, 00:24:04.972 { 00:24:04.972 "subsystem": "vmd", 00:24:04.972 "config": [] 00:24:04.972 }, 00:24:04.972 { 00:24:04.972 "subsystem": "accel", 00:24:04.972 "config": [ 00:24:04.972 { 00:24:04.972 "method": "accel_set_options", 00:24:04.972 "params": { 00:24:04.972 "small_cache_size": 128, 00:24:04.972 "large_cache_size": 16, 00:24:04.972 "task_count": 2048, 00:24:04.972 "sequence_count": 2048, 00:24:04.972 "buf_count": 2048 00:24:04.972 } 00:24:04.972 } 00:24:04.972 ] 00:24:04.972 }, 00:24:04.972 { 00:24:04.972 "subsystem": "bdev", 00:24:04.972 "config": [ 00:24:04.972 { 00:24:04.972 "method": "bdev_set_options", 00:24:04.972 "params": { 00:24:04.972 "bdev_io_pool_size": 65535, 00:24:04.972 "bdev_io_cache_size": 256, 00:24:04.972 "bdev_auto_examine": true, 00:24:04.972 "iobuf_small_cache_size": 128, 00:24:04.972 "iobuf_large_cache_size": 16 00:24:04.972 } 00:24:04.972 }, 00:24:04.972 { 00:24:04.972 "method": "bdev_raid_set_options", 00:24:04.972 "params": { 00:24:04.972 "process_window_size_kb": 1024, 00:24:04.972 "process_max_bandwidth_mb_sec": 0 00:24:04.972 } 00:24:04.972 }, 00:24:04.972 { 00:24:04.972 "method": "bdev_iscsi_set_options", 00:24:04.972 "params": { 00:24:04.972 "timeout_sec": 30 00:24:04.972 } 00:24:04.972 }, 00:24:04.972 { 00:24:04.972 "method": "bdev_nvme_set_options", 00:24:04.972 "params": { 00:24:04.972 "action_on_timeout": "none", 00:24:04.972 "timeout_us": 0, 00:24:04.972 "timeout_admin_us": 0, 00:24:04.972 "keep_alive_timeout_ms": 10000, 00:24:04.972 "arbitration_burst": 0, 00:24:04.972 "low_priority_weight": 0, 00:24:04.972 "medium_priority_weight": 0, 00:24:04.972 "high_priority_weight": 0, 00:24:04.972 "nvme_adminq_poll_period_us": 10000, 00:24:04.972 "nvme_ioq_poll_period_us": 0, 00:24:04.972 "io_queue_requests": 0, 00:24:04.972 "delay_cmd_submit": true, 00:24:04.972 "transport_retry_count": 4, 00:24:04.972 "bdev_retry_count": 3, 00:24:04.972 "transport_ack_timeout": 0, 00:24:04.972 "ctrlr_loss_timeout_sec": 0, 00:24:04.972 "reconnect_delay_sec": 0, 00:24:04.972 "fast_io_fail_timeout_sec": 0, 00:24:04.972 "disable_auto_failback": false, 00:24:04.972 "generate_uuids": false, 00:24:04.972 "transport_tos": 0, 00:24:04.972 "nvme_error_stat": false, 00:24:04.972 "rdma_srq_size": 0, 00:24:04.972 "io_path_stat": false, 00:24:04.972 "allow_accel_sequence": false, 00:24:04.972 "rdma_max_cq_size": 0, 00:24:04.972 
"rdma_cm_event_timeout_ms": 0, 00:24:04.972 "dhchap_digests": [ 00:24:04.972 "sha256", 00:24:04.972 "sha384", 00:24:04.972 "sha512" 00:24:04.972 ], 00:24:04.972 "dhchap_dhgroups": [ 00:24:04.972 "null", 00:24:04.972 "ffdhe2048", 00:24:04.972 "ffdhe3072", 00:24:04.972 "ffdhe4096", 00:24:04.972 "ffdhe6144", 00:24:04.972 "ffdhe8192" 00:24:04.972 ] 00:24:04.972 } 00:24:04.972 }, 00:24:04.972 { 00:24:04.972 "method": "bdev_nvme_set_hotplug", 00:24:04.972 "params": { 00:24:04.972 "period_us": 100000, 00:24:04.972 "enable": false 00:24:04.972 } 00:24:04.972 }, 00:24:04.972 { 00:24:04.972 "method": "bdev_malloc_create", 00:24:04.972 "params": { 00:24:04.972 "name": "malloc0", 00:24:04.972 "num_blocks": 8192, 00:24:04.972 "block_size": 4096, 00:24:04.972 "physical_block_size": 4096, 00:24:04.972 "uuid": "3557e3ef-d274-4cbc-a43b-d2b3f4a52875", 00:24:04.972 "optimal_io_boundary": 0, 00:24:04.972 "md_size": 0, 00:24:04.972 "dif_type": 0, 00:24:04.972 "dif_is_head_of_md": false, 00:24:04.972 "dif_pi_format": 0 00:24:04.972 } 00:24:04.972 }, 00:24:04.972 { 00:24:04.972 "method": "bdev_wait_for_examine" 00:24:04.972 } 00:24:04.972 ] 00:24:04.972 }, 00:24:04.972 { 00:24:04.972 "subsystem": "nbd", 00:24:04.972 "config": [] 00:24:04.972 }, 00:24:04.972 { 00:24:04.972 "subsystem": "scheduler", 00:24:04.972 "config": [ 00:24:04.972 { 00:24:04.972 "method": "framework_set_scheduler", 00:24:04.972 "params": { 00:24:04.972 "name": "static" 00:24:04.972 } 00:24:04.972 } 00:24:04.972 ] 00:24:04.972 }, 00:24:04.972 { 00:24:04.972 "subsystem": "nvmf", 00:24:04.972 "config": [ 00:24:04.972 { 00:24:04.972 "method": "nvmf_set_config", 00:24:04.972 "params": { 00:24:04.972 "discovery_filter": "match_any", 00:24:04.972 "admin_cmd_passthru": { 00:24:04.972 "identify_ctrlr": false 00:24:04.972 }, 00:24:04.972 "dhchap_digests": [ 00:24:04.972 "sha256", 00:24:04.972 "sha384", 00:24:04.972 "sha512" 00:24:04.972 ], 00:24:04.972 "dhchap_dhgroups": [ 00:24:04.972 "null", 00:24:04.972 "ffdhe2048", 00:24:04.972 "ffdhe3072", 00:24:04.972 "ffdhe4096", 00:24:04.972 "ffdhe6144", 00:24:04.972 "ffdhe8192" 00:24:04.972 ] 00:24:04.972 } 00:24:04.972 }, 00:24:04.972 { 00:24:04.972 "method": "nvmf_set_max_subsystems", 00:24:04.972 "params": { 00:24:04.972 "max_subsystems": 1024 00:24:04.972 } 00:24:04.972 }, 00:24:04.972 { 00:24:04.972 "method": "nvmf_set_crdt", 00:24:04.972 "params": { 00:24:04.972 "crdt1": 0, 00:24:04.972 "crdt2": 0, 00:24:04.972 "crdt3": 0 00:24:04.972 } 00:24:04.972 }, 00:24:04.972 { 00:24:04.972 "method": "nvmf_create_transport", 00:24:04.972 "params": { 00:24:04.972 "trtype": "TCP", 00:24:04.972 "max_queue_depth": 128, 00:24:04.972 "max_io_qpairs_per_ctrlr": 127, 00:24:04.972 "in_capsule_data_size": 4096, 00:24:04.972 "max_io_size": 131072, 00:24:04.972 "io_unit_size": 131072, 00:24:04.972 "max_aq_depth": 128, 00:24:04.973 "num_shared_buffers": 511, 00:24:04.973 "buf_cache_size": 4294967295, 00:24:04.973 "dif_insert_or_strip": false, 00:24:04.973 "zcopy": false, 00:24:04.973 "c2h_success": false, 00:24:04.973 "sock_priority": 0, 00:24:04.973 "abort_timeout_sec": 1, 00:24:04.973 "ack_timeout": 0, 00:24:04.973 "data_wr_pool_size": 0 00:24:04.973 } 00:24:04.973 }, 00:24:04.973 { 00:24:04.973 "method": "nvmf_create_subsystem", 00:24:04.973 "params": { 00:24:04.973 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:04.973 "allow_any_host": false, 00:24:04.973 "serial_number": "SPDK00000000000001", 00:24:04.973 "model_number": "SPDK bdev Controller", 00:24:04.973 "max_namespaces": 10, 00:24:04.973 "min_cntlid": 1, 00:24:04.973 
"max_cntlid": 65519, 00:24:04.973 "ana_reporting": false 00:24:04.973 } 00:24:04.973 }, 00:24:04.973 { 00:24:04.973 "method": "nvmf_subsystem_add_host", 00:24:04.973 "params": { 00:24:04.973 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:04.973 "host": "nqn.2016-06.io.spdk:host1", 00:24:04.973 "psk": "key0" 00:24:04.973 } 00:24:04.973 }, 00:24:04.973 { 00:24:04.973 "method": "nvmf_subsystem_add_ns", 00:24:04.973 "params": { 00:24:04.973 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:04.973 "namespace": { 00:24:04.973 "nsid": 1, 00:24:04.973 "bdev_name": "malloc0", 00:24:04.973 "nguid": "3557E3EFD2744CBCA43BD2B3F4A52875", 00:24:04.973 "uuid": "3557e3ef-d274-4cbc-a43b-d2b3f4a52875", 00:24:04.973 "no_auto_visible": false 00:24:04.973 } 00:24:04.973 } 00:24:04.973 }, 00:24:04.973 { 00:24:04.973 "method": "nvmf_subsystem_add_listener", 00:24:04.973 "params": { 00:24:04.973 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:04.973 "listen_address": { 00:24:04.973 "trtype": "TCP", 00:24:04.973 "adrfam": "IPv4", 00:24:04.973 "traddr": "10.0.0.2", 00:24:04.973 "trsvcid": "4420" 00:24:04.973 }, 00:24:04.973 "secure_channel": true 00:24:04.973 } 00:24:04.973 } 00:24:04.973 ] 00:24:04.973 } 00:24:04.973 ] 00:24:04.973 }' 00:24:04.973 01:01:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:24:05.233 01:01:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:24:05.233 "subsystems": [ 00:24:05.233 { 00:24:05.233 "subsystem": "keyring", 00:24:05.233 "config": [ 00:24:05.233 { 00:24:05.233 "method": "keyring_file_add_key", 00:24:05.233 "params": { 00:24:05.233 "name": "key0", 00:24:05.233 "path": "/tmp/tmp.sdwf5y7n8J" 00:24:05.233 } 00:24:05.233 } 00:24:05.233 ] 00:24:05.233 }, 00:24:05.233 { 00:24:05.233 "subsystem": "iobuf", 00:24:05.233 "config": [ 00:24:05.233 { 00:24:05.233 "method": "iobuf_set_options", 00:24:05.233 "params": { 00:24:05.233 "small_pool_count": 8192, 00:24:05.233 "large_pool_count": 1024, 00:24:05.233 "small_bufsize": 8192, 00:24:05.233 "large_bufsize": 135168, 00:24:05.233 "enable_numa": false 00:24:05.233 } 00:24:05.233 } 00:24:05.233 ] 00:24:05.233 }, 00:24:05.233 { 00:24:05.233 "subsystem": "sock", 00:24:05.233 "config": [ 00:24:05.233 { 00:24:05.233 "method": "sock_set_default_impl", 00:24:05.233 "params": { 00:24:05.233 "impl_name": "posix" 00:24:05.233 } 00:24:05.233 }, 00:24:05.233 { 00:24:05.233 "method": "sock_impl_set_options", 00:24:05.233 "params": { 00:24:05.233 "impl_name": "ssl", 00:24:05.233 "recv_buf_size": 4096, 00:24:05.233 "send_buf_size": 4096, 00:24:05.233 "enable_recv_pipe": true, 00:24:05.233 "enable_quickack": false, 00:24:05.233 "enable_placement_id": 0, 00:24:05.233 "enable_zerocopy_send_server": true, 00:24:05.233 "enable_zerocopy_send_client": false, 00:24:05.233 "zerocopy_threshold": 0, 00:24:05.233 "tls_version": 0, 00:24:05.233 "enable_ktls": false 00:24:05.233 } 00:24:05.233 }, 00:24:05.233 { 00:24:05.233 "method": "sock_impl_set_options", 00:24:05.233 "params": { 00:24:05.233 "impl_name": "posix", 00:24:05.233 "recv_buf_size": 2097152, 00:24:05.233 "send_buf_size": 2097152, 00:24:05.233 "enable_recv_pipe": true, 00:24:05.233 "enable_quickack": false, 00:24:05.233 "enable_placement_id": 0, 00:24:05.233 "enable_zerocopy_send_server": true, 00:24:05.233 "enable_zerocopy_send_client": false, 00:24:05.233 "zerocopy_threshold": 0, 00:24:05.233 "tls_version": 0, 00:24:05.233 "enable_ktls": false 00:24:05.233 } 00:24:05.233 
} 00:24:05.233 ] 00:24:05.233 }, 00:24:05.233 { 00:24:05.233 "subsystem": "vmd", 00:24:05.233 "config": [] 00:24:05.233 }, 00:24:05.233 { 00:24:05.233 "subsystem": "accel", 00:24:05.233 "config": [ 00:24:05.233 { 00:24:05.233 "method": "accel_set_options", 00:24:05.233 "params": { 00:24:05.233 "small_cache_size": 128, 00:24:05.233 "large_cache_size": 16, 00:24:05.233 "task_count": 2048, 00:24:05.233 "sequence_count": 2048, 00:24:05.233 "buf_count": 2048 00:24:05.233 } 00:24:05.233 } 00:24:05.233 ] 00:24:05.233 }, 00:24:05.233 { 00:24:05.233 "subsystem": "bdev", 00:24:05.233 "config": [ 00:24:05.233 { 00:24:05.233 "method": "bdev_set_options", 00:24:05.233 "params": { 00:24:05.233 "bdev_io_pool_size": 65535, 00:24:05.233 "bdev_io_cache_size": 256, 00:24:05.233 "bdev_auto_examine": true, 00:24:05.233 "iobuf_small_cache_size": 128, 00:24:05.233 "iobuf_large_cache_size": 16 00:24:05.233 } 00:24:05.233 }, 00:24:05.233 { 00:24:05.233 "method": "bdev_raid_set_options", 00:24:05.233 "params": { 00:24:05.233 "process_window_size_kb": 1024, 00:24:05.233 "process_max_bandwidth_mb_sec": 0 00:24:05.233 } 00:24:05.233 }, 00:24:05.233 { 00:24:05.233 "method": "bdev_iscsi_set_options", 00:24:05.233 "params": { 00:24:05.233 "timeout_sec": 30 00:24:05.233 } 00:24:05.233 }, 00:24:05.233 { 00:24:05.233 "method": "bdev_nvme_set_options", 00:24:05.233 "params": { 00:24:05.233 "action_on_timeout": "none", 00:24:05.233 "timeout_us": 0, 00:24:05.233 "timeout_admin_us": 0, 00:24:05.233 "keep_alive_timeout_ms": 10000, 00:24:05.233 "arbitration_burst": 0, 00:24:05.233 "low_priority_weight": 0, 00:24:05.233 "medium_priority_weight": 0, 00:24:05.233 "high_priority_weight": 0, 00:24:05.233 "nvme_adminq_poll_period_us": 10000, 00:24:05.233 "nvme_ioq_poll_period_us": 0, 00:24:05.233 "io_queue_requests": 512, 00:24:05.233 "delay_cmd_submit": true, 00:24:05.233 "transport_retry_count": 4, 00:24:05.233 "bdev_retry_count": 3, 00:24:05.233 "transport_ack_timeout": 0, 00:24:05.233 "ctrlr_loss_timeout_sec": 0, 00:24:05.233 "reconnect_delay_sec": 0, 00:24:05.233 "fast_io_fail_timeout_sec": 0, 00:24:05.233 "disable_auto_failback": false, 00:24:05.233 "generate_uuids": false, 00:24:05.233 "transport_tos": 0, 00:24:05.233 "nvme_error_stat": false, 00:24:05.233 "rdma_srq_size": 0, 00:24:05.233 "io_path_stat": false, 00:24:05.233 "allow_accel_sequence": false, 00:24:05.233 "rdma_max_cq_size": 0, 00:24:05.233 "rdma_cm_event_timeout_ms": 0, 00:24:05.233 "dhchap_digests": [ 00:24:05.233 "sha256", 00:24:05.233 "sha384", 00:24:05.233 "sha512" 00:24:05.233 ], 00:24:05.233 "dhchap_dhgroups": [ 00:24:05.233 "null", 00:24:05.233 "ffdhe2048", 00:24:05.233 "ffdhe3072", 00:24:05.233 "ffdhe4096", 00:24:05.233 "ffdhe6144", 00:24:05.233 "ffdhe8192" 00:24:05.233 ] 00:24:05.233 } 00:24:05.233 }, 00:24:05.233 { 00:24:05.233 "method": "bdev_nvme_attach_controller", 00:24:05.233 "params": { 00:24:05.233 "name": "TLSTEST", 00:24:05.233 "trtype": "TCP", 00:24:05.233 "adrfam": "IPv4", 00:24:05.233 "traddr": "10.0.0.2", 00:24:05.233 "trsvcid": "4420", 00:24:05.233 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:05.233 "prchk_reftag": false, 00:24:05.233 "prchk_guard": false, 00:24:05.233 "ctrlr_loss_timeout_sec": 0, 00:24:05.233 "reconnect_delay_sec": 0, 00:24:05.233 "fast_io_fail_timeout_sec": 0, 00:24:05.233 "psk": "key0", 00:24:05.233 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:05.233 "hdgst": false, 00:24:05.233 "ddgst": false, 00:24:05.233 "multipath": "multipath" 00:24:05.233 } 00:24:05.233 }, 00:24:05.234 { 00:24:05.234 "method": 
"bdev_nvme_set_hotplug", 00:24:05.234 "params": { 00:24:05.234 "period_us": 100000, 00:24:05.234 "enable": false 00:24:05.234 } 00:24:05.234 }, 00:24:05.234 { 00:24:05.234 "method": "bdev_wait_for_examine" 00:24:05.234 } 00:24:05.234 ] 00:24:05.234 }, 00:24:05.234 { 00:24:05.234 "subsystem": "nbd", 00:24:05.234 "config": [] 00:24:05.234 } 00:24:05.234 ] 00:24:05.234 }' 00:24:05.234 01:01:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 3003779 00:24:05.234 01:01:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3003779 ']' 00:24:05.234 01:01:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3003779 00:24:05.234 01:01:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:24:05.234 01:01:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:05.234 01:01:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3003779 00:24:05.234 01:01:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:24:05.234 01:01:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:24:05.234 01:01:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3003779' 00:24:05.234 killing process with pid 3003779 00:24:05.234 01:01:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3003779 00:24:05.234 Received shutdown signal, test time was about 10.000000 seconds 00:24:05.234 00:24:05.234 Latency(us) 00:24:05.234 [2024-12-06T00:01:37.943Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:05.234 [2024-12-06T00:01:37.943Z] =================================================================================================================== 00:24:05.234 [2024-12-06T00:01:37.943Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:24:05.234 01:01:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3003779 00:24:06.175 01:01:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 3003365 00:24:06.175 01:01:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3003365 ']' 00:24:06.175 01:01:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3003365 00:24:06.175 01:01:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:24:06.175 01:01:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:06.175 01:01:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3003365 00:24:06.175 01:01:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:06.175 01:01:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:06.175 01:01:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3003365' 00:24:06.175 killing process with pid 3003365 00:24:06.175 01:01:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3003365 00:24:06.175 01:01:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3003365 00:24:07.553 01:01:39 nvmf_tcp.nvmf_target_extra.nvmf_tls 
-- target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:24:07.553 01:01:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:07.553 01:01:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:24:07.553 "subsystems": [ 00:24:07.553 { 00:24:07.553 "subsystem": "keyring", 00:24:07.553 "config": [ 00:24:07.553 { 00:24:07.553 "method": "keyring_file_add_key", 00:24:07.553 "params": { 00:24:07.553 "name": "key0", 00:24:07.553 "path": "/tmp/tmp.sdwf5y7n8J" 00:24:07.553 } 00:24:07.553 } 00:24:07.553 ] 00:24:07.553 }, 00:24:07.553 { 00:24:07.553 "subsystem": "iobuf", 00:24:07.553 "config": [ 00:24:07.553 { 00:24:07.553 "method": "iobuf_set_options", 00:24:07.553 "params": { 00:24:07.553 "small_pool_count": 8192, 00:24:07.553 "large_pool_count": 1024, 00:24:07.553 "small_bufsize": 8192, 00:24:07.553 "large_bufsize": 135168, 00:24:07.553 "enable_numa": false 00:24:07.553 } 00:24:07.553 } 00:24:07.553 ] 00:24:07.553 }, 00:24:07.553 { 00:24:07.553 "subsystem": "sock", 00:24:07.553 "config": [ 00:24:07.553 { 00:24:07.553 "method": "sock_set_default_impl", 00:24:07.553 "params": { 00:24:07.553 "impl_name": "posix" 00:24:07.553 } 00:24:07.553 }, 00:24:07.553 { 00:24:07.553 "method": "sock_impl_set_options", 00:24:07.553 "params": { 00:24:07.553 "impl_name": "ssl", 00:24:07.553 "recv_buf_size": 4096, 00:24:07.553 "send_buf_size": 4096, 00:24:07.553 "enable_recv_pipe": true, 00:24:07.553 "enable_quickack": false, 00:24:07.553 "enable_placement_id": 0, 00:24:07.553 "enable_zerocopy_send_server": true, 00:24:07.553 "enable_zerocopy_send_client": false, 00:24:07.553 "zerocopy_threshold": 0, 00:24:07.553 "tls_version": 0, 00:24:07.553 "enable_ktls": false 00:24:07.553 } 00:24:07.553 }, 00:24:07.553 { 00:24:07.553 "method": "sock_impl_set_options", 00:24:07.553 "params": { 00:24:07.553 "impl_name": "posix", 00:24:07.553 "recv_buf_size": 2097152, 00:24:07.553 "send_buf_size": 2097152, 00:24:07.553 "enable_recv_pipe": true, 00:24:07.553 "enable_quickack": false, 00:24:07.553 "enable_placement_id": 0, 00:24:07.553 "enable_zerocopy_send_server": true, 00:24:07.553 "enable_zerocopy_send_client": false, 00:24:07.553 "zerocopy_threshold": 0, 00:24:07.553 "tls_version": 0, 00:24:07.553 "enable_ktls": false 00:24:07.553 } 00:24:07.553 } 00:24:07.553 ] 00:24:07.553 }, 00:24:07.553 { 00:24:07.553 "subsystem": "vmd", 00:24:07.553 "config": [] 00:24:07.553 }, 00:24:07.553 { 00:24:07.553 "subsystem": "accel", 00:24:07.553 "config": [ 00:24:07.553 { 00:24:07.553 "method": "accel_set_options", 00:24:07.553 "params": { 00:24:07.553 "small_cache_size": 128, 00:24:07.553 "large_cache_size": 16, 00:24:07.553 "task_count": 2048, 00:24:07.553 "sequence_count": 2048, 00:24:07.553 "buf_count": 2048 00:24:07.553 } 00:24:07.553 } 00:24:07.553 ] 00:24:07.553 }, 00:24:07.553 { 00:24:07.553 "subsystem": "bdev", 00:24:07.553 "config": [ 00:24:07.553 { 00:24:07.553 "method": "bdev_set_options", 00:24:07.553 "params": { 00:24:07.553 "bdev_io_pool_size": 65535, 00:24:07.553 "bdev_io_cache_size": 256, 00:24:07.553 "bdev_auto_examine": true, 00:24:07.553 "iobuf_small_cache_size": 128, 00:24:07.553 "iobuf_large_cache_size": 16 00:24:07.553 } 00:24:07.553 }, 00:24:07.553 { 00:24:07.553 "method": "bdev_raid_set_options", 00:24:07.553 "params": { 00:24:07.553 "process_window_size_kb": 1024, 00:24:07.553 "process_max_bandwidth_mb_sec": 0 00:24:07.553 } 00:24:07.553 }, 00:24:07.553 { 00:24:07.553 "method": "bdev_iscsi_set_options", 00:24:07.553 "params": { 00:24:07.553 
"timeout_sec": 30 00:24:07.553 } 00:24:07.553 }, 00:24:07.553 { 00:24:07.553 "method": "bdev_nvme_set_options", 00:24:07.553 "params": { 00:24:07.553 "action_on_timeout": "none", 00:24:07.553 "timeout_us": 0, 00:24:07.553 "timeout_admin_us": 0, 00:24:07.553 "keep_alive_timeout_ms": 10000, 00:24:07.553 "arbitration_burst": 0, 00:24:07.553 "low_priority_weight": 0, 00:24:07.553 "medium_priority_weight": 0, 00:24:07.553 "high_priority_weight": 0, 00:24:07.553 "nvme_adminq_poll_period_us": 10000, 00:24:07.553 "nvme_ioq_poll_period_us": 0, 00:24:07.553 "io_queue_requests": 0, 00:24:07.553 "delay_cmd_submit": true, 00:24:07.553 "transport_retry_count": 4, 00:24:07.553 "bdev_retry_count": 3, 00:24:07.553 "transport_ack_timeout": 0, 00:24:07.553 "ctrlr_loss_timeout_sec": 0, 00:24:07.553 "reconnect_delay_sec": 0, 00:24:07.553 "fast_io_fail_timeout_sec": 0, 00:24:07.553 "disable_auto_failback": false, 00:24:07.553 "generate_uuids": false, 00:24:07.553 "transport_tos": 0, 00:24:07.553 "nvme_error_stat": false, 00:24:07.553 "rdma_srq_size": 0, 00:24:07.553 "io_path_stat": false, 00:24:07.553 "allow_accel_sequence": false, 00:24:07.553 "rdma_max_cq_size": 0, 00:24:07.553 "rdma_cm_event_timeout_ms": 0, 00:24:07.553 01:01:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:07.553 "dhchap_digests": [ 00:24:07.553 "sha256", 00:24:07.553 "sha384", 00:24:07.553 "sha512" 00:24:07.553 ], 00:24:07.553 "dhchap_dhgroups": [ 00:24:07.553 "null", 00:24:07.553 "ffdhe2048", 00:24:07.553 "ffdhe3072", 00:24:07.553 "ffdhe4096", 00:24:07.553 "ffdhe6144", 00:24:07.554 "ffdhe8192" 00:24:07.554 ] 00:24:07.554 } 00:24:07.554 }, 00:24:07.554 { 00:24:07.554 "method": "bdev_nvme_set_hotplug", 00:24:07.554 "params": { 00:24:07.554 "period_us": 100000, 00:24:07.554 "enable": false 00:24:07.554 } 00:24:07.554 }, 00:24:07.554 { 00:24:07.554 "method": "bdev_malloc_create", 00:24:07.554 "params": { 00:24:07.554 "name": "malloc0", 00:24:07.554 "num_blocks": 8192, 00:24:07.554 "block_size": 4096, 00:24:07.554 "physical_block_size": 4096, 00:24:07.554 "uuid": "3557e3ef-d274-4cbc-a43b-d2b3f4a52875", 00:24:07.554 "optimal_io_boundary": 0, 00:24:07.554 "md_size": 0, 00:24:07.554 "dif_type": 0, 00:24:07.554 "dif_is_head_of_md": false, 00:24:07.554 "dif_pi_format": 0 00:24:07.554 } 00:24:07.554 }, 00:24:07.554 { 00:24:07.554 "method": "bdev_wait_for_examine" 00:24:07.554 } 00:24:07.554 ] 00:24:07.554 }, 00:24:07.554 { 00:24:07.554 "subsystem": "nbd", 00:24:07.554 "config": [] 00:24:07.554 }, 00:24:07.554 { 00:24:07.554 "subsystem": "scheduler", 00:24:07.554 "config": [ 00:24:07.554 { 00:24:07.554 "method": "framework_set_scheduler", 00:24:07.554 "params": { 00:24:07.554 "name": "static" 00:24:07.554 } 00:24:07.554 } 00:24:07.554 ] 00:24:07.554 }, 00:24:07.554 { 00:24:07.554 "subsystem": "nvmf", 00:24:07.554 "config": [ 00:24:07.554 { 00:24:07.554 "method": "nvmf_set_config", 00:24:07.554 "params": { 00:24:07.554 "discovery_filter": "match_any", 00:24:07.554 "admin_cmd_passthru": { 00:24:07.554 "identify_ctrlr": false 00:24:07.554 }, 00:24:07.554 "dhchap_digests": [ 00:24:07.554 "sha256", 00:24:07.554 "sha384", 00:24:07.554 "sha512" 00:24:07.554 ], 00:24:07.554 "dhchap_dhgroups": [ 00:24:07.554 "null", 00:24:07.554 "ffdhe2048", 00:24:07.554 "ffdhe3072", 00:24:07.554 "ffdhe4096", 00:24:07.554 "ffdhe6144", 00:24:07.554 "ffdhe8192" 00:24:07.554 ] 00:24:07.554 } 00:24:07.554 }, 00:24:07.554 { 00:24:07.554 "method": "nvmf_set_max_subsystems", 00:24:07.554 "params": { 00:24:07.554 "max_subsystems": 1024 
00:24:07.554 } 00:24:07.554 }, 00:24:07.554 { 00:24:07.554 "method": "nvmf_set_crdt", 00:24:07.554 "params": { 00:24:07.554 "crdt1": 0, 00:24:07.554 "crdt2": 0, 00:24:07.554 "crdt3": 0 00:24:07.554 } 00:24:07.554 }, 00:24:07.554 { 00:24:07.554 "method": "nvmf_create_transport", 00:24:07.554 "params": { 00:24:07.554 "trtype": "TCP", 00:24:07.554 "max_queue_depth": 128, 00:24:07.554 "max_io_qpairs_per_ctrlr": 127, 00:24:07.554 "in_capsule_data_size": 4096, 00:24:07.554 "max_io_size": 131072, 00:24:07.554 "io_unit_size": 131072, 00:24:07.554 "max_aq_depth": 128, 00:24:07.554 "num_shared_buffers": 511, 00:24:07.554 "buf_cache_size": 4294967295, 00:24:07.554 "dif_insert_or_strip": false, 00:24:07.554 "zcopy": false, 00:24:07.554 "c2h_success": false, 00:24:07.554 "sock_priority": 0, 00:24:07.554 "abort_timeout_sec": 1, 00:24:07.554 "ack_timeout": 0, 00:24:07.554 "data_wr_pool_size": 0 00:24:07.554 } 00:24:07.554 }, 00:24:07.554 { 00:24:07.554 "method": "nvmf_create_subsystem", 00:24:07.554 "params": { 00:24:07.554 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:07.554 "allow_any_host": false, 00:24:07.554 "serial_number": "SPDK00000000000001", 00:24:07.554 "model_number": "SPDK bdev Controller", 00:24:07.554 "max_namespaces": 10, 00:24:07.554 "min_cntlid": 1, 00:24:07.554 "max_cntlid": 65519, 00:24:07.554 "ana_reporting": false 00:24:07.554 } 00:24:07.554 }, 00:24:07.554 { 00:24:07.554 "method": "nvmf_subsystem_add_host", 00:24:07.554 "params": { 00:24:07.554 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:07.554 "host": "nqn.2016-06.io.spdk:host1", 00:24:07.554 "psk": "key0" 00:24:07.554 } 00:24:07.554 }, 00:24:07.554 { 00:24:07.554 "method": "nvmf_subsystem_add_ns", 00:24:07.554 "params": { 00:24:07.554 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:07.554 "namespace": { 00:24:07.554 "nsid": 1, 00:24:07.554 "bdev_name": "malloc0", 00:24:07.554 "nguid": "3557E3EFD2744CBCA43BD2B3F4A52875", 00:24:07.554 "uuid": "3557e3ef-d274-4cbc-a43b-d2b3f4a52875", 00:24:07.554 "no_auto_visible": false 00:24:07.554 } 00:24:07.554 } 00:24:07.554 }, 00:24:07.554 { 00:24:07.554 "method": "nvmf_subsystem_add_listener", 00:24:07.554 "params": { 00:24:07.554 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:07.554 "listen_address": { 00:24:07.554 "trtype": "TCP", 00:24:07.554 "adrfam": "IPv4", 00:24:07.554 "traddr": "10.0.0.2", 00:24:07.554 "trsvcid": "4420" 00:24:07.554 }, 00:24:07.554 "secure_channel": true 00:24:07.554 } 00:24:07.554 } 00:24:07.554 ] 00:24:07.554 } 00:24:07.554 ] 00:24:07.554 }' 00:24:07.554 01:01:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:07.554 01:01:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=3004320 00:24:07.554 01:01:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:24:07.554 01:01:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3004320 00:24:07.554 01:01:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3004320 ']' 00:24:07.554 01:01:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:07.554 01:01:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:07.554 01:01:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/spdk.sock...' 00:24:07.554 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:07.554 01:01:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:07.554 01:01:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:07.554 [2024-12-06 01:01:40.080015] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 24.03.0 initialization... 00:24:07.554 [2024-12-06 01:01:40.080200] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:07.554 [2024-12-06 01:01:40.236764] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:07.813 [2024-12-06 01:01:40.375233] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:07.813 [2024-12-06 01:01:40.375318] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:07.813 [2024-12-06 01:01:40.375352] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:07.813 [2024-12-06 01:01:40.375377] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:07.813 [2024-12-06 01:01:40.375397] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:07.813 [2024-12-06 01:01:40.377164] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:08.380 [2024-12-06 01:01:40.929237] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:08.380 [2024-12-06 01:01:40.961260] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:08.380 [2024-12-06 01:01:40.961696] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:08.380 01:01:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:08.380 01:01:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:24:08.380 01:01:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:08.380 01:01:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:08.380 01:01:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:08.380 01:01:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:08.380 01:01:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=3004479 00:24:08.380 01:01:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 3004479 /var/tmp/bdevperf.sock 00:24:08.380 01:01:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3004479 ']' 00:24:08.380 01:01:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:08.380 01:01:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:24:08.380 01:01:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:08.380 01:01:41 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:08.380 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:08.380 01:01:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:24:08.380 "subsystems": [ 00:24:08.380 { 00:24:08.380 "subsystem": "keyring", 00:24:08.380 "config": [ 00:24:08.380 { 00:24:08.380 "method": "keyring_file_add_key", 00:24:08.380 "params": { 00:24:08.380 "name": "key0", 00:24:08.380 "path": "/tmp/tmp.sdwf5y7n8J" 00:24:08.380 } 00:24:08.380 } 00:24:08.380 ] 00:24:08.380 }, 00:24:08.380 { 00:24:08.380 "subsystem": "iobuf", 00:24:08.380 "config": [ 00:24:08.380 { 00:24:08.380 "method": "iobuf_set_options", 00:24:08.380 "params": { 00:24:08.380 "small_pool_count": 8192, 00:24:08.380 "large_pool_count": 1024, 00:24:08.380 "small_bufsize": 8192, 00:24:08.380 "large_bufsize": 135168, 00:24:08.380 "enable_numa": false 00:24:08.380 } 00:24:08.380 } 00:24:08.380 ] 00:24:08.380 }, 00:24:08.380 { 00:24:08.380 "subsystem": "sock", 00:24:08.380 "config": [ 00:24:08.380 { 00:24:08.380 "method": "sock_set_default_impl", 00:24:08.380 "params": { 00:24:08.380 "impl_name": "posix" 00:24:08.380 } 00:24:08.380 }, 00:24:08.380 { 00:24:08.380 "method": "sock_impl_set_options", 00:24:08.380 "params": { 00:24:08.380 "impl_name": "ssl", 00:24:08.380 "recv_buf_size": 4096, 00:24:08.380 "send_buf_size": 4096, 00:24:08.380 "enable_recv_pipe": true, 00:24:08.380 "enable_quickack": false, 00:24:08.380 "enable_placement_id": 0, 00:24:08.380 "enable_zerocopy_send_server": true, 00:24:08.380 "enable_zerocopy_send_client": false, 00:24:08.380 "zerocopy_threshold": 0, 00:24:08.380 "tls_version": 0, 00:24:08.380 "enable_ktls": false 00:24:08.380 } 00:24:08.380 }, 00:24:08.380 { 00:24:08.380 "method": "sock_impl_set_options", 00:24:08.380 "params": { 00:24:08.380 "impl_name": "posix", 00:24:08.380 "recv_buf_size": 2097152, 00:24:08.380 "send_buf_size": 2097152, 00:24:08.380 "enable_recv_pipe": true, 00:24:08.380 "enable_quickack": false, 00:24:08.380 "enable_placement_id": 0, 00:24:08.380 "enable_zerocopy_send_server": true, 00:24:08.380 "enable_zerocopy_send_client": false, 00:24:08.380 "zerocopy_threshold": 0, 00:24:08.380 "tls_version": 0, 00:24:08.380 "enable_ktls": false 00:24:08.380 } 00:24:08.380 } 00:24:08.380 ] 00:24:08.380 }, 00:24:08.380 { 00:24:08.380 "subsystem": "vmd", 00:24:08.380 "config": [] 00:24:08.380 }, 00:24:08.380 { 00:24:08.380 "subsystem": "accel", 00:24:08.380 "config": [ 00:24:08.380 { 00:24:08.380 "method": "accel_set_options", 00:24:08.380 "params": { 00:24:08.380 "small_cache_size": 128, 00:24:08.380 "large_cache_size": 16, 00:24:08.380 "task_count": 2048, 00:24:08.380 "sequence_count": 2048, 00:24:08.380 "buf_count": 2048 00:24:08.380 } 00:24:08.380 } 00:24:08.380 ] 00:24:08.380 }, 00:24:08.380 { 00:24:08.380 "subsystem": "bdev", 00:24:08.380 "config": [ 00:24:08.380 { 00:24:08.380 "method": "bdev_set_options", 00:24:08.380 "params": { 00:24:08.380 "bdev_io_pool_size": 65535, 00:24:08.381 "bdev_io_cache_size": 256, 00:24:08.381 "bdev_auto_examine": true, 00:24:08.381 "iobuf_small_cache_size": 128, 00:24:08.381 "iobuf_large_cache_size": 16 00:24:08.381 } 00:24:08.381 }, 00:24:08.381 { 00:24:08.381 "method": "bdev_raid_set_options", 00:24:08.381 "params": { 00:24:08.381 "process_window_size_kb": 1024, 00:24:08.381 "process_max_bandwidth_mb_sec": 0 00:24:08.381 } 00:24:08.381 }, 
00:24:08.381 { 00:24:08.381 "method": "bdev_iscsi_set_options", 00:24:08.381 "params": { 00:24:08.381 "timeout_sec": 30 00:24:08.381 } 00:24:08.381 }, 00:24:08.381 { 00:24:08.381 "method": "bdev_nvme_set_options", 00:24:08.381 "params": { 00:24:08.381 "action_on_timeout": "none", 00:24:08.381 "timeout_us": 0, 00:24:08.381 "timeout_admin_us": 0, 00:24:08.381 "keep_alive_timeout_ms": 10000, 00:24:08.381 "arbitration_burst": 0, 00:24:08.381 "low_priority_weight": 0, 00:24:08.381 "medium_priority_weight": 0, 00:24:08.381 "high_priority_weight": 0, 00:24:08.381 "nvme_adminq_poll_period_us": 10000, 00:24:08.381 "nvme_ioq_poll_period_us": 0, 00:24:08.381 "io_queue_requests": 512, 00:24:08.381 "delay_cmd_submit": true, 00:24:08.381 "transport_retry_count": 4, 00:24:08.381 "bdev_retry_count": 3, 00:24:08.381 "transport_ack_timeout": 0, 00:24:08.381 "ctrlr_loss_timeout_sec": 0, 00:24:08.381 "reconnect_delay_sec": 0, 00:24:08.381 "fast_io_fail_timeout_sec": 0, 00:24:08.381 "disable_auto_failback": false, 00:24:08.381 "generate_uuids": false, 00:24:08.381 "transport_tos": 0, 00:24:08.381 "nvme_error_stat": false, 00:24:08.381 "rdma_srq_size": 0, 00:24:08.381 "io_path_stat": false, 00:24:08.381 "allow_accel_sequence": false, 00:24:08.381 "rdma_max_cq_size": 0, 00:24:08.381 "rdma_cm_event_timeout_ms": 0, 00:24:08.381 "dhchap_digests": [ 00:24:08.381 "sha256", 00:24:08.381 "sha384", 00:24:08.381 "sha512" 00:24:08.381 ], 00:24:08.381 "dhchap_dhgroups": [ 00:24:08.381 "null", 00:24:08.381 "ffdhe2048", 00:24:08.381 "ffdhe3072", 00:24:08.381 "ffdhe4096", 00:24:08.381 "ffdhe6144", 00:24:08.381 "ffdhe8192" 00:24:08.381 ] 00:24:08.381 } 00:24:08.381 }, 00:24:08.381 { 00:24:08.381 "method": "bdev_nvme_attach_controller", 00:24:08.381 "params": { 00:24:08.381 "name": "TLSTEST", 00:24:08.381 "trtype": "TCP", 00:24:08.381 "adrfam": "IPv4", 00:24:08.381 "traddr": "10.0.0.2", 00:24:08.381 "trsvcid": "4420", 00:24:08.381 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:08.381 "prchk_reftag": false, 00:24:08.381 "prchk_guard": false, 00:24:08.381 "ctrlr_loss_timeout_sec": 0, 00:24:08.381 "reconnect_delay_sec": 0, 00:24:08.381 "fast_io_fail_timeout_sec": 0, 00:24:08.381 "psk": "key0", 00:24:08.381 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:08.381 "hdgst": false, 00:24:08.381 "ddgst": false, 00:24:08.381 "multipath": "multipath" 00:24:08.381 } 00:24:08.381 }, 00:24:08.381 { 00:24:08.381 "method": "bdev_nvme_set_hotplug", 00:24:08.381 "params": { 00:24:08.381 "period_us": 100000, 00:24:08.381 "enable": false 00:24:08.381 } 00:24:08.381 }, 00:24:08.381 { 00:24:08.381 "method": "bdev_wait_for_examine" 00:24:08.381 } 00:24:08.381 ] 00:24:08.381 }, 00:24:08.381 { 00:24:08.381 "subsystem": "nbd", 00:24:08.381 "config": [] 00:24:08.381 } 00:24:08.381 ] 00:24:08.381 }' 00:24:08.381 01:01:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:08.381 01:01:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:08.641 [2024-12-06 01:01:41.143525] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 24.03.0 initialization... 
00:24:08.641 [2024-12-06 01:01:41.143668] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3004479 ] 00:24:08.641 [2024-12-06 01:01:41.275292] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:08.900 [2024-12-06 01:01:41.395050] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:09.159 [2024-12-06 01:01:41.805807] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:09.418 01:01:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:09.418 01:01:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:24:09.418 01:01:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:24:09.675 Running I/O for 10 seconds... 00:24:11.550 2546.00 IOPS, 9.95 MiB/s [2024-12-06T00:01:45.641Z] 2607.00 IOPS, 10.18 MiB/s [2024-12-06T00:01:46.577Z] 2616.00 IOPS, 10.22 MiB/s [2024-12-06T00:01:47.517Z] 2618.75 IOPS, 10.23 MiB/s [2024-12-06T00:01:48.452Z] 2624.20 IOPS, 10.25 MiB/s [2024-12-06T00:01:49.458Z] 2627.83 IOPS, 10.26 MiB/s [2024-12-06T00:01:50.398Z] 2628.43 IOPS, 10.27 MiB/s [2024-12-06T00:01:51.338Z] 2623.12 IOPS, 10.25 MiB/s [2024-12-06T00:01:52.276Z] 2622.44 IOPS, 10.24 MiB/s [2024-12-06T00:01:52.276Z] 2619.40 IOPS, 10.23 MiB/s 00:24:19.567 Latency(us) 00:24:19.567 [2024-12-06T00:01:52.276Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:19.567 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:24:19.567 Verification LBA range: start 0x0 length 0x2000 00:24:19.567 TLSTESTn1 : 10.03 2625.27 10.25 0.00 0.00 48665.98 10048.85 44661.57 00:24:19.567 [2024-12-06T00:01:52.276Z] =================================================================================================================== 00:24:19.567 [2024-12-06T00:01:52.276Z] Total : 2625.27 10.25 0.00 0.00 48665.98 10048.85 44661.57 00:24:19.567 { 00:24:19.567 "results": [ 00:24:19.567 { 00:24:19.567 "job": "TLSTESTn1", 00:24:19.567 "core_mask": "0x4", 00:24:19.567 "workload": "verify", 00:24:19.567 "status": "finished", 00:24:19.567 "verify_range": { 00:24:19.567 "start": 0, 00:24:19.567 "length": 8192 00:24:19.567 }, 00:24:19.567 "queue_depth": 128, 00:24:19.567 "io_size": 4096, 00:24:19.567 "runtime": 10.026385, 00:24:19.567 "iops": 2625.2732166179535, 00:24:19.567 "mibps": 10.25497350241388, 00:24:19.567 "io_failed": 0, 00:24:19.567 "io_timeout": 0, 00:24:19.567 "avg_latency_us": 48665.98195363968, 00:24:19.567 "min_latency_us": 10048.853333333333, 00:24:19.567 "max_latency_us": 44661.57037037037 00:24:19.567 } 00:24:19.567 ], 00:24:19.567 "core_count": 1 00:24:19.567 } 00:24:19.826 01:01:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:19.827 01:01:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 3004479 00:24:19.827 01:01:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3004479 ']' 00:24:19.827 01:01:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3004479 00:24:19.827 01:01:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@959 -- # uname 00:24:19.827 01:01:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:19.827 01:01:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3004479 00:24:19.827 01:01:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:24:19.827 01:01:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:24:19.827 01:01:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3004479' 00:24:19.827 killing process with pid 3004479 00:24:19.827 01:01:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3004479 00:24:19.827 Received shutdown signal, test time was about 10.000000 seconds 00:24:19.827 00:24:19.827 Latency(us) 00:24:19.827 [2024-12-06T00:01:52.536Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:19.827 [2024-12-06T00:01:52.536Z] =================================================================================================================== 00:24:19.827 [2024-12-06T00:01:52.536Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:19.827 01:01:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3004479 00:24:20.766 01:01:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 3004320 00:24:20.766 01:01:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3004320 ']' 00:24:20.766 01:01:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3004320 00:24:20.766 01:01:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:24:20.766 01:01:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:20.766 01:01:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3004320 00:24:20.766 01:01:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:20.767 01:01:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:20.767 01:01:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3004320' 00:24:20.767 killing process with pid 3004320 00:24:20.767 01:01:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3004320 00:24:20.767 01:01:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3004320 00:24:22.143 01:01:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:24:22.143 01:01:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:22.143 01:01:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:22.143 01:01:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:22.144 01:01:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=3005949 00:24:22.144 01:01:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:24:22.144 01:01:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3005949 
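Note on the start-up handshake traced above: nvmfappstart launches nvmf_tgt inside the cvl_0_0_ns_spdk namespace and waitforlisten then blocks until the target's UNIX-domain RPC socket (/var/tmp/spdk.sock) answers, so the configuration RPCs that follow cannot race the application start. The real helper lives in autotest_common.sh and is not reproduced in this log; a minimal equivalent wait loop, offered only as a sketch, would be:

    # Sketch only: poll the RPC socket until the target responds; the actual
    # waitforlisten helper in autotest_common.sh adds more error handling.
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    for _ in $(seq 1 100); do
        "$rpc" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1 && break
        kill -0 "$nvmfpid" || exit 1   # bail out if nvmf_tgt died during start-up
        sleep 0.1
    done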
00:24:22.144 01:01:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3005949 ']' 00:24:22.144 01:01:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:22.144 01:01:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:22.144 01:01:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:22.144 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:22.144 01:01:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:22.144 01:01:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:22.144 [2024-12-06 01:01:54.636127] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 24.03.0 initialization... 00:24:22.144 [2024-12-06 01:01:54.636281] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:22.144 [2024-12-06 01:01:54.789310] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:22.403 [2024-12-06 01:01:54.926174] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:22.403 [2024-12-06 01:01:54.926285] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:22.403 [2024-12-06 01:01:54.926311] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:22.403 [2024-12-06 01:01:54.926336] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:22.403 [2024-12-06 01:01:54.926361] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
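The configuration that setup_nvmf_tgt applies next is driven entirely through rpc.py against that socket, and each call is echoed in the xtrace that follows. Condensed into one sequence, purely as a summary of the traced commands, with the NQNs, the PSK interchange file /tmp/tmp.sdwf5y7n8J and the key name key0 taken from this run:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    # Target side (default RPC socket /var/tmp/spdk.sock)
    "$rpc" nvmf_create_transport -t tcp -o
    "$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    "$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k   # -k enables the TLS listener
    "$rpc" bdev_malloc_create 32 4096 -b malloc0
    "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    "$rpc" keyring_file_add_key key0 /tmp/tmp.sdwf5y7n8J
    "$rpc" nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0
    # Initiator side (bdevperf exposes its own RPC socket)
    "$rpc" -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.sdwf5y7n8J
    "$rpc" -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1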
00:24:22.403 [2024-12-06 01:01:54.928153] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:22.967 01:01:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:22.967 01:01:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:24:22.967 01:01:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:22.967 01:01:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:22.967 01:01:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:22.967 01:01:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:22.967 01:01:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.sdwf5y7n8J 00:24:22.967 01:01:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.sdwf5y7n8J 00:24:22.967 01:01:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:24:23.531 [2024-12-06 01:01:55.940990] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:23.531 01:01:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:24:23.788 01:01:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:24:24.046 [2024-12-06 01:01:56.514554] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:24.046 [2024-12-06 01:01:56.514962] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:24.046 01:01:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:24:24.305 malloc0 00:24:24.305 01:01:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:24:24.563 01:01:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.sdwf5y7n8J 00:24:24.821 01:01:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:24:25.080 01:01:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=3006367 00:24:25.080 01:01:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:24:25.080 01:01:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:25.080 01:01:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 3006367 /var/tmp/bdevperf.sock 00:24:25.080 01:01:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@835 -- # '[' -z 3006367 ']' 00:24:25.080 01:01:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:25.080 01:01:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:25.080 01:01:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:25.080 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:25.080 01:01:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:25.080 01:01:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:25.341 [2024-12-06 01:01:57.796506] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 24.03.0 initialization... 00:24:25.341 [2024-12-06 01:01:57.796646] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3006367 ] 00:24:25.341 [2024-12-06 01:01:57.931383] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:25.602 [2024-12-06 01:01:58.053937] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:26.169 01:01:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:26.169 01:01:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:24:26.170 01:01:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.sdwf5y7n8J 00:24:26.428 01:01:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:24:26.687 [2024-12-06 01:01:59.364993] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:26.946 nvme0n1 00:24:26.946 01:01:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:26.946 Running I/O for 1 seconds... 
00:24:28.142 2026.00 IOPS, 7.91 MiB/s 00:24:28.142 Latency(us) 00:24:28.142 [2024-12-06T00:02:00.851Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:28.142 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:24:28.142 Verification LBA range: start 0x0 length 0x2000 00:24:28.142 nvme0n1 : 1.03 2096.23 8.19 0.00 0.00 60402.95 10874.12 54370.61 00:24:28.142 [2024-12-06T00:02:00.851Z] =================================================================================================================== 00:24:28.142 [2024-12-06T00:02:00.851Z] Total : 2096.23 8.19 0.00 0.00 60402.95 10874.12 54370.61 00:24:28.142 { 00:24:28.142 "results": [ 00:24:28.142 { 00:24:28.142 "job": "nvme0n1", 00:24:28.142 "core_mask": "0x2", 00:24:28.142 "workload": "verify", 00:24:28.142 "status": "finished", 00:24:28.142 "verify_range": { 00:24:28.142 "start": 0, 00:24:28.142 "length": 8192 00:24:28.142 }, 00:24:28.142 "queue_depth": 128, 00:24:28.142 "io_size": 4096, 00:24:28.142 "runtime": 1.028037, 00:24:28.142 "iops": 2096.228054048638, 00:24:28.142 "mibps": 8.188390836127493, 00:24:28.142 "io_failed": 0, 00:24:28.142 "io_timeout": 0, 00:24:28.142 "avg_latency_us": 60402.94925771246, 00:24:28.142 "min_latency_us": 10874.121481481481, 00:24:28.142 "max_latency_us": 54370.607407407406 00:24:28.142 } 00:24:28.142 ], 00:24:28.142 "core_count": 1 00:24:28.142 } 00:24:28.142 01:02:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 3006367 00:24:28.142 01:02:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3006367 ']' 00:24:28.142 01:02:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3006367 00:24:28.142 01:02:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:24:28.142 01:02:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:28.142 01:02:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3006367 00:24:28.142 01:02:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:28.142 01:02:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:28.142 01:02:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3006367' 00:24:28.142 killing process with pid 3006367 00:24:28.142 01:02:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3006367 00:24:28.142 Received shutdown signal, test time was about 1.000000 seconds 00:24:28.142 00:24:28.142 Latency(us) 00:24:28.142 [2024-12-06T00:02:00.851Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:28.142 [2024-12-06T00:02:00.851Z] =================================================================================================================== 00:24:28.142 [2024-12-06T00:02:00.851Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:28.142 01:02:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3006367 00:24:29.083 01:02:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 3005949 00:24:29.083 01:02:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3005949 ']' 00:24:29.083 01:02:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3005949 00:24:29.083 01:02:01 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:24:29.083 01:02:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:29.083 01:02:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3005949 00:24:29.083 01:02:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:29.083 01:02:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:29.083 01:02:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3005949' 00:24:29.083 killing process with pid 3005949 00:24:29.083 01:02:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3005949 00:24:29.083 01:02:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3005949 00:24:30.460 01:02:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:24:30.460 01:02:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:30.460 01:02:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:30.460 01:02:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:30.460 01:02:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=3007023 00:24:30.460 01:02:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:24:30.460 01:02:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3007023 00:24:30.460 01:02:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3007023 ']' 00:24:30.460 01:02:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:30.460 01:02:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:30.460 01:02:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:30.460 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:30.460 01:02:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:30.460 01:02:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:30.460 [2024-12-06 01:02:02.913358] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 24.03.0 initialization... 00:24:30.460 [2024-12-06 01:02:02.913507] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:30.460 [2024-12-06 01:02:03.068773] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:30.719 [2024-12-06 01:02:03.199448] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:30.719 [2024-12-06 01:02:03.199537] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:24:30.719 [2024-12-06 01:02:03.199562] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:30.719 [2024-12-06 01:02:03.199587] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:30.719 [2024-12-06 01:02:03.199606] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:30.719 [2024-12-06 01:02:03.201237] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:31.285 01:02:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:31.285 01:02:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:24:31.285 01:02:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:31.285 01:02:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:31.285 01:02:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:31.285 01:02:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:31.285 01:02:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:24:31.285 01:02:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:31.285 01:02:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:31.285 [2024-12-06 01:02:03.888289] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:31.285 malloc0 00:24:31.285 [2024-12-06 01:02:03.950808] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:31.285 [2024-12-06 01:02:03.951231] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:31.285 01:02:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:31.285 01:02:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=3007179 00:24:31.285 01:02:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # waitforlisten 3007179 /var/tmp/bdevperf.sock 00:24:31.285 01:02:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:24:31.285 01:02:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3007179 ']' 00:24:31.285 01:02:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:31.285 01:02:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:31.285 01:02:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:31.285 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:31.285 01:02:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:31.286 01:02:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:31.545 [2024-12-06 01:02:04.058075] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 24.03.0 initialization... 
00:24:31.545 [2024-12-06 01:02:04.058219] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3007179 ] 00:24:31.545 [2024-12-06 01:02:04.199131] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:31.804 [2024-12-06 01:02:04.335897] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:32.370 01:02:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:32.370 01:02:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:24:32.370 01:02:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.sdwf5y7n8J 00:24:32.628 01:02:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:24:32.887 [2024-12-06 01:02:05.554612] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:33.147 nvme0n1 00:24:33.147 01:02:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:33.147 Running I/O for 1 seconds... 00:24:34.345 2446.00 IOPS, 9.55 MiB/s 00:24:34.345 Latency(us) 00:24:34.345 [2024-12-06T00:02:07.054Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:34.345 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:24:34.345 Verification LBA range: start 0x0 length 0x2000 00:24:34.345 nvme0n1 : 1.03 2500.84 9.77 0.00 0.00 50452.32 8932.31 38836.15 00:24:34.345 [2024-12-06T00:02:07.054Z] =================================================================================================================== 00:24:34.345 [2024-12-06T00:02:07.054Z] Total : 2500.84 9.77 0.00 0.00 50452.32 8932.31 38836.15 00:24:34.345 { 00:24:34.345 "results": [ 00:24:34.345 { 00:24:34.345 "job": "nvme0n1", 00:24:34.345 "core_mask": "0x2", 00:24:34.345 "workload": "verify", 00:24:34.345 "status": "finished", 00:24:34.345 "verify_range": { 00:24:34.345 "start": 0, 00:24:34.345 "length": 8192 00:24:34.345 }, 00:24:34.345 "queue_depth": 128, 00:24:34.345 "io_size": 4096, 00:24:34.345 "runtime": 1.029652, 00:24:34.345 "iops": 2500.8449456709645, 00:24:34.345 "mibps": 9.768925569027205, 00:24:34.345 "io_failed": 0, 00:24:34.345 "io_timeout": 0, 00:24:34.345 "avg_latency_us": 50452.32284674577, 00:24:34.345 "min_latency_us": 8932.314074074075, 00:24:34.345 "max_latency_us": 38836.148148148146 00:24:34.345 } 00:24:34.345 ], 00:24:34.345 "core_count": 1 00:24:34.345 } 00:24:34.345 01:02:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:24:34.345 01:02:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:34.345 01:02:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:34.345 01:02:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:34.345 01:02:06 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{ 00:24:34.345 "subsystems": [ 00:24:34.345 { 00:24:34.345 "subsystem": "keyring", 00:24:34.345 "config": [ 00:24:34.345 { 00:24:34.345 "method": "keyring_file_add_key", 00:24:34.345 "params": { 00:24:34.345 "name": "key0", 00:24:34.345 "path": "/tmp/tmp.sdwf5y7n8J" 00:24:34.345 } 00:24:34.345 } 00:24:34.345 ] 00:24:34.345 }, 00:24:34.345 { 00:24:34.345 "subsystem": "iobuf", 00:24:34.345 "config": [ 00:24:34.345 { 00:24:34.345 "method": "iobuf_set_options", 00:24:34.345 "params": { 00:24:34.345 "small_pool_count": 8192, 00:24:34.345 "large_pool_count": 1024, 00:24:34.345 "small_bufsize": 8192, 00:24:34.345 "large_bufsize": 135168, 00:24:34.345 "enable_numa": false 00:24:34.346 } 00:24:34.346 } 00:24:34.346 ] 00:24:34.346 }, 00:24:34.346 { 00:24:34.346 "subsystem": "sock", 00:24:34.346 "config": [ 00:24:34.346 { 00:24:34.346 "method": "sock_set_default_impl", 00:24:34.346 "params": { 00:24:34.346 "impl_name": "posix" 00:24:34.346 } 00:24:34.346 }, 00:24:34.346 { 00:24:34.346 "method": "sock_impl_set_options", 00:24:34.346 "params": { 00:24:34.346 "impl_name": "ssl", 00:24:34.346 "recv_buf_size": 4096, 00:24:34.346 "send_buf_size": 4096, 00:24:34.346 "enable_recv_pipe": true, 00:24:34.346 "enable_quickack": false, 00:24:34.346 "enable_placement_id": 0, 00:24:34.346 "enable_zerocopy_send_server": true, 00:24:34.346 "enable_zerocopy_send_client": false, 00:24:34.346 "zerocopy_threshold": 0, 00:24:34.346 "tls_version": 0, 00:24:34.346 "enable_ktls": false 00:24:34.346 } 00:24:34.346 }, 00:24:34.346 { 00:24:34.346 "method": "sock_impl_set_options", 00:24:34.346 "params": { 00:24:34.346 "impl_name": "posix", 00:24:34.346 "recv_buf_size": 2097152, 00:24:34.346 "send_buf_size": 2097152, 00:24:34.346 "enable_recv_pipe": true, 00:24:34.346 "enable_quickack": false, 00:24:34.346 "enable_placement_id": 0, 00:24:34.346 "enable_zerocopy_send_server": true, 00:24:34.346 "enable_zerocopy_send_client": false, 00:24:34.346 "zerocopy_threshold": 0, 00:24:34.346 "tls_version": 0, 00:24:34.346 "enable_ktls": false 00:24:34.346 } 00:24:34.346 } 00:24:34.346 ] 00:24:34.346 }, 00:24:34.346 { 00:24:34.346 "subsystem": "vmd", 00:24:34.346 "config": [] 00:24:34.346 }, 00:24:34.346 { 00:24:34.346 "subsystem": "accel", 00:24:34.346 "config": [ 00:24:34.346 { 00:24:34.346 "method": "accel_set_options", 00:24:34.346 "params": { 00:24:34.346 "small_cache_size": 128, 00:24:34.346 "large_cache_size": 16, 00:24:34.346 "task_count": 2048, 00:24:34.346 "sequence_count": 2048, 00:24:34.346 "buf_count": 2048 00:24:34.346 } 00:24:34.346 } 00:24:34.346 ] 00:24:34.346 }, 00:24:34.346 { 00:24:34.346 "subsystem": "bdev", 00:24:34.346 "config": [ 00:24:34.346 { 00:24:34.346 "method": "bdev_set_options", 00:24:34.346 "params": { 00:24:34.346 "bdev_io_pool_size": 65535, 00:24:34.346 "bdev_io_cache_size": 256, 00:24:34.346 "bdev_auto_examine": true, 00:24:34.346 "iobuf_small_cache_size": 128, 00:24:34.346 "iobuf_large_cache_size": 16 00:24:34.346 } 00:24:34.346 }, 00:24:34.346 { 00:24:34.346 "method": "bdev_raid_set_options", 00:24:34.346 "params": { 00:24:34.346 "process_window_size_kb": 1024, 00:24:34.346 "process_max_bandwidth_mb_sec": 0 00:24:34.346 } 00:24:34.346 }, 00:24:34.346 { 00:24:34.346 "method": "bdev_iscsi_set_options", 00:24:34.346 "params": { 00:24:34.346 "timeout_sec": 30 00:24:34.346 } 00:24:34.346 }, 00:24:34.346 { 00:24:34.346 "method": "bdev_nvme_set_options", 00:24:34.346 "params": { 00:24:34.346 "action_on_timeout": "none", 00:24:34.346 
"timeout_us": 0, 00:24:34.346 "timeout_admin_us": 0, 00:24:34.346 "keep_alive_timeout_ms": 10000, 00:24:34.346 "arbitration_burst": 0, 00:24:34.346 "low_priority_weight": 0, 00:24:34.346 "medium_priority_weight": 0, 00:24:34.346 "high_priority_weight": 0, 00:24:34.346 "nvme_adminq_poll_period_us": 10000, 00:24:34.346 "nvme_ioq_poll_period_us": 0, 00:24:34.346 "io_queue_requests": 0, 00:24:34.346 "delay_cmd_submit": true, 00:24:34.346 "transport_retry_count": 4, 00:24:34.346 "bdev_retry_count": 3, 00:24:34.346 "transport_ack_timeout": 0, 00:24:34.346 "ctrlr_loss_timeout_sec": 0, 00:24:34.346 "reconnect_delay_sec": 0, 00:24:34.346 "fast_io_fail_timeout_sec": 0, 00:24:34.346 "disable_auto_failback": false, 00:24:34.346 "generate_uuids": false, 00:24:34.346 "transport_tos": 0, 00:24:34.346 "nvme_error_stat": false, 00:24:34.346 "rdma_srq_size": 0, 00:24:34.346 "io_path_stat": false, 00:24:34.346 "allow_accel_sequence": false, 00:24:34.346 "rdma_max_cq_size": 0, 00:24:34.346 "rdma_cm_event_timeout_ms": 0, 00:24:34.346 "dhchap_digests": [ 00:24:34.346 "sha256", 00:24:34.346 "sha384", 00:24:34.346 "sha512" 00:24:34.346 ], 00:24:34.346 "dhchap_dhgroups": [ 00:24:34.346 "null", 00:24:34.346 "ffdhe2048", 00:24:34.346 "ffdhe3072", 00:24:34.346 "ffdhe4096", 00:24:34.346 "ffdhe6144", 00:24:34.346 "ffdhe8192" 00:24:34.346 ] 00:24:34.346 } 00:24:34.346 }, 00:24:34.346 { 00:24:34.346 "method": "bdev_nvme_set_hotplug", 00:24:34.346 "params": { 00:24:34.346 "period_us": 100000, 00:24:34.346 "enable": false 00:24:34.346 } 00:24:34.346 }, 00:24:34.346 { 00:24:34.346 "method": "bdev_malloc_create", 00:24:34.346 "params": { 00:24:34.346 "name": "malloc0", 00:24:34.346 "num_blocks": 8192, 00:24:34.346 "block_size": 4096, 00:24:34.346 "physical_block_size": 4096, 00:24:34.346 "uuid": "de33a107-9999-4c27-9788-bab8deeb2b71", 00:24:34.346 "optimal_io_boundary": 0, 00:24:34.346 "md_size": 0, 00:24:34.346 "dif_type": 0, 00:24:34.346 "dif_is_head_of_md": false, 00:24:34.346 "dif_pi_format": 0 00:24:34.346 } 00:24:34.346 }, 00:24:34.346 { 00:24:34.346 "method": "bdev_wait_for_examine" 00:24:34.346 } 00:24:34.346 ] 00:24:34.346 }, 00:24:34.346 { 00:24:34.346 "subsystem": "nbd", 00:24:34.346 "config": [] 00:24:34.346 }, 00:24:34.346 { 00:24:34.346 "subsystem": "scheduler", 00:24:34.346 "config": [ 00:24:34.346 { 00:24:34.346 "method": "framework_set_scheduler", 00:24:34.346 "params": { 00:24:34.346 "name": "static" 00:24:34.346 } 00:24:34.346 } 00:24:34.346 ] 00:24:34.346 }, 00:24:34.346 { 00:24:34.346 "subsystem": "nvmf", 00:24:34.346 "config": [ 00:24:34.346 { 00:24:34.346 "method": "nvmf_set_config", 00:24:34.346 "params": { 00:24:34.346 "discovery_filter": "match_any", 00:24:34.346 "admin_cmd_passthru": { 00:24:34.346 "identify_ctrlr": false 00:24:34.346 }, 00:24:34.346 "dhchap_digests": [ 00:24:34.346 "sha256", 00:24:34.346 "sha384", 00:24:34.346 "sha512" 00:24:34.346 ], 00:24:34.346 "dhchap_dhgroups": [ 00:24:34.346 "null", 00:24:34.346 "ffdhe2048", 00:24:34.346 "ffdhe3072", 00:24:34.346 "ffdhe4096", 00:24:34.346 "ffdhe6144", 00:24:34.346 "ffdhe8192" 00:24:34.346 ] 00:24:34.346 } 00:24:34.346 }, 00:24:34.346 { 00:24:34.346 "method": "nvmf_set_max_subsystems", 00:24:34.346 "params": { 00:24:34.346 "max_subsystems": 1024 00:24:34.346 } 00:24:34.346 }, 00:24:34.346 { 00:24:34.346 "method": "nvmf_set_crdt", 00:24:34.346 "params": { 00:24:34.346 "crdt1": 0, 00:24:34.346 "crdt2": 0, 00:24:34.346 "crdt3": 0 00:24:34.346 } 00:24:34.346 }, 00:24:34.346 { 00:24:34.346 "method": "nvmf_create_transport", 00:24:34.346 "params": 
{ 00:24:34.346 "trtype": "TCP", 00:24:34.346 "max_queue_depth": 128, 00:24:34.346 "max_io_qpairs_per_ctrlr": 127, 00:24:34.346 "in_capsule_data_size": 4096, 00:24:34.346 "max_io_size": 131072, 00:24:34.346 "io_unit_size": 131072, 00:24:34.346 "max_aq_depth": 128, 00:24:34.346 "num_shared_buffers": 511, 00:24:34.346 "buf_cache_size": 4294967295, 00:24:34.346 "dif_insert_or_strip": false, 00:24:34.346 "zcopy": false, 00:24:34.346 "c2h_success": false, 00:24:34.346 "sock_priority": 0, 00:24:34.346 "abort_timeout_sec": 1, 00:24:34.346 "ack_timeout": 0, 00:24:34.346 "data_wr_pool_size": 0 00:24:34.346 } 00:24:34.346 }, 00:24:34.346 { 00:24:34.346 "method": "nvmf_create_subsystem", 00:24:34.346 "params": { 00:24:34.346 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:34.346 "allow_any_host": false, 00:24:34.346 "serial_number": "00000000000000000000", 00:24:34.346 "model_number": "SPDK bdev Controller", 00:24:34.346 "max_namespaces": 32, 00:24:34.346 "min_cntlid": 1, 00:24:34.346 "max_cntlid": 65519, 00:24:34.346 "ana_reporting": false 00:24:34.346 } 00:24:34.346 }, 00:24:34.346 { 00:24:34.346 "method": "nvmf_subsystem_add_host", 00:24:34.346 "params": { 00:24:34.346 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:34.346 "host": "nqn.2016-06.io.spdk:host1", 00:24:34.346 "psk": "key0" 00:24:34.346 } 00:24:34.346 }, 00:24:34.346 { 00:24:34.346 "method": "nvmf_subsystem_add_ns", 00:24:34.346 "params": { 00:24:34.346 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:34.346 "namespace": { 00:24:34.346 "nsid": 1, 00:24:34.346 "bdev_name": "malloc0", 00:24:34.346 "nguid": "DE33A10799994C279788BAB8DEEB2B71", 00:24:34.346 "uuid": "de33a107-9999-4c27-9788-bab8deeb2b71", 00:24:34.346 "no_auto_visible": false 00:24:34.346 } 00:24:34.346 } 00:24:34.346 }, 00:24:34.346 { 00:24:34.346 "method": "nvmf_subsystem_add_listener", 00:24:34.346 "params": { 00:24:34.346 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:34.346 "listen_address": { 00:24:34.346 "trtype": "TCP", 00:24:34.346 "adrfam": "IPv4", 00:24:34.346 "traddr": "10.0.0.2", 00:24:34.346 "trsvcid": "4420" 00:24:34.346 }, 00:24:34.346 "secure_channel": false, 00:24:34.346 "sock_impl": "ssl" 00:24:34.346 } 00:24:34.346 } 00:24:34.346 ] 00:24:34.346 } 00:24:34.346 ] 00:24:34.346 }' 00:24:34.347 01:02:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:24:34.605 01:02:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:24:34.605 "subsystems": [ 00:24:34.605 { 00:24:34.605 "subsystem": "keyring", 00:24:34.605 "config": [ 00:24:34.605 { 00:24:34.606 "method": "keyring_file_add_key", 00:24:34.606 "params": { 00:24:34.606 "name": "key0", 00:24:34.606 "path": "/tmp/tmp.sdwf5y7n8J" 00:24:34.606 } 00:24:34.606 } 00:24:34.606 ] 00:24:34.606 }, 00:24:34.606 { 00:24:34.606 "subsystem": "iobuf", 00:24:34.606 "config": [ 00:24:34.606 { 00:24:34.606 "method": "iobuf_set_options", 00:24:34.606 "params": { 00:24:34.606 "small_pool_count": 8192, 00:24:34.606 "large_pool_count": 1024, 00:24:34.606 "small_bufsize": 8192, 00:24:34.606 "large_bufsize": 135168, 00:24:34.606 "enable_numa": false 00:24:34.606 } 00:24:34.606 } 00:24:34.606 ] 00:24:34.606 }, 00:24:34.606 { 00:24:34.606 "subsystem": "sock", 00:24:34.606 "config": [ 00:24:34.606 { 00:24:34.606 "method": "sock_set_default_impl", 00:24:34.606 "params": { 00:24:34.606 "impl_name": "posix" 00:24:34.606 } 00:24:34.606 }, 00:24:34.606 { 00:24:34.606 "method": "sock_impl_set_options", 00:24:34.606 
"params": { 00:24:34.606 "impl_name": "ssl", 00:24:34.606 "recv_buf_size": 4096, 00:24:34.606 "send_buf_size": 4096, 00:24:34.606 "enable_recv_pipe": true, 00:24:34.606 "enable_quickack": false, 00:24:34.606 "enable_placement_id": 0, 00:24:34.606 "enable_zerocopy_send_server": true, 00:24:34.606 "enable_zerocopy_send_client": false, 00:24:34.606 "zerocopy_threshold": 0, 00:24:34.606 "tls_version": 0, 00:24:34.606 "enable_ktls": false 00:24:34.606 } 00:24:34.606 }, 00:24:34.606 { 00:24:34.606 "method": "sock_impl_set_options", 00:24:34.606 "params": { 00:24:34.606 "impl_name": "posix", 00:24:34.606 "recv_buf_size": 2097152, 00:24:34.606 "send_buf_size": 2097152, 00:24:34.606 "enable_recv_pipe": true, 00:24:34.606 "enable_quickack": false, 00:24:34.606 "enable_placement_id": 0, 00:24:34.606 "enable_zerocopy_send_server": true, 00:24:34.606 "enable_zerocopy_send_client": false, 00:24:34.606 "zerocopy_threshold": 0, 00:24:34.606 "tls_version": 0, 00:24:34.606 "enable_ktls": false 00:24:34.606 } 00:24:34.606 } 00:24:34.606 ] 00:24:34.606 }, 00:24:34.606 { 00:24:34.606 "subsystem": "vmd", 00:24:34.606 "config": [] 00:24:34.606 }, 00:24:34.606 { 00:24:34.606 "subsystem": "accel", 00:24:34.606 "config": [ 00:24:34.606 { 00:24:34.606 "method": "accel_set_options", 00:24:34.606 "params": { 00:24:34.606 "small_cache_size": 128, 00:24:34.606 "large_cache_size": 16, 00:24:34.606 "task_count": 2048, 00:24:34.606 "sequence_count": 2048, 00:24:34.606 "buf_count": 2048 00:24:34.606 } 00:24:34.606 } 00:24:34.606 ] 00:24:34.606 }, 00:24:34.606 { 00:24:34.606 "subsystem": "bdev", 00:24:34.606 "config": [ 00:24:34.606 { 00:24:34.606 "method": "bdev_set_options", 00:24:34.606 "params": { 00:24:34.606 "bdev_io_pool_size": 65535, 00:24:34.606 "bdev_io_cache_size": 256, 00:24:34.606 "bdev_auto_examine": true, 00:24:34.606 "iobuf_small_cache_size": 128, 00:24:34.606 "iobuf_large_cache_size": 16 00:24:34.606 } 00:24:34.606 }, 00:24:34.606 { 00:24:34.606 "method": "bdev_raid_set_options", 00:24:34.606 "params": { 00:24:34.606 "process_window_size_kb": 1024, 00:24:34.606 "process_max_bandwidth_mb_sec": 0 00:24:34.606 } 00:24:34.606 }, 00:24:34.606 { 00:24:34.606 "method": "bdev_iscsi_set_options", 00:24:34.606 "params": { 00:24:34.606 "timeout_sec": 30 00:24:34.606 } 00:24:34.606 }, 00:24:34.606 { 00:24:34.606 "method": "bdev_nvme_set_options", 00:24:34.606 "params": { 00:24:34.606 "action_on_timeout": "none", 00:24:34.606 "timeout_us": 0, 00:24:34.606 "timeout_admin_us": 0, 00:24:34.606 "keep_alive_timeout_ms": 10000, 00:24:34.606 "arbitration_burst": 0, 00:24:34.606 "low_priority_weight": 0, 00:24:34.606 "medium_priority_weight": 0, 00:24:34.606 "high_priority_weight": 0, 00:24:34.606 "nvme_adminq_poll_period_us": 10000, 00:24:34.606 "nvme_ioq_poll_period_us": 0, 00:24:34.606 "io_queue_requests": 512, 00:24:34.606 "delay_cmd_submit": true, 00:24:34.606 "transport_retry_count": 4, 00:24:34.606 "bdev_retry_count": 3, 00:24:34.606 "transport_ack_timeout": 0, 00:24:34.606 "ctrlr_loss_timeout_sec": 0, 00:24:34.606 "reconnect_delay_sec": 0, 00:24:34.606 "fast_io_fail_timeout_sec": 0, 00:24:34.606 "disable_auto_failback": false, 00:24:34.606 "generate_uuids": false, 00:24:34.606 "transport_tos": 0, 00:24:34.606 "nvme_error_stat": false, 00:24:34.606 "rdma_srq_size": 0, 00:24:34.606 "io_path_stat": false, 00:24:34.606 "allow_accel_sequence": false, 00:24:34.606 "rdma_max_cq_size": 0, 00:24:34.606 "rdma_cm_event_timeout_ms": 0, 00:24:34.606 "dhchap_digests": [ 00:24:34.606 "sha256", 00:24:34.606 "sha384", 00:24:34.606 
"sha512" 00:24:34.606 ], 00:24:34.606 "dhchap_dhgroups": [ 00:24:34.606 "null", 00:24:34.606 "ffdhe2048", 00:24:34.606 "ffdhe3072", 00:24:34.606 "ffdhe4096", 00:24:34.606 "ffdhe6144", 00:24:34.606 "ffdhe8192" 00:24:34.606 ] 00:24:34.606 } 00:24:34.606 }, 00:24:34.606 { 00:24:34.606 "method": "bdev_nvme_attach_controller", 00:24:34.606 "params": { 00:24:34.606 "name": "nvme0", 00:24:34.606 "trtype": "TCP", 00:24:34.606 "adrfam": "IPv4", 00:24:34.606 "traddr": "10.0.0.2", 00:24:34.606 "trsvcid": "4420", 00:24:34.606 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:34.606 "prchk_reftag": false, 00:24:34.606 "prchk_guard": false, 00:24:34.606 "ctrlr_loss_timeout_sec": 0, 00:24:34.606 "reconnect_delay_sec": 0, 00:24:34.606 "fast_io_fail_timeout_sec": 0, 00:24:34.606 "psk": "key0", 00:24:34.606 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:34.606 "hdgst": false, 00:24:34.606 "ddgst": false, 00:24:34.606 "multipath": "multipath" 00:24:34.606 } 00:24:34.606 }, 00:24:34.606 { 00:24:34.606 "method": "bdev_nvme_set_hotplug", 00:24:34.606 "params": { 00:24:34.606 "period_us": 100000, 00:24:34.606 "enable": false 00:24:34.606 } 00:24:34.606 }, 00:24:34.606 { 00:24:34.606 "method": "bdev_enable_histogram", 00:24:34.606 "params": { 00:24:34.606 "name": "nvme0n1", 00:24:34.606 "enable": true 00:24:34.606 } 00:24:34.606 }, 00:24:34.606 { 00:24:34.606 "method": "bdev_wait_for_examine" 00:24:34.606 } 00:24:34.606 ] 00:24:34.606 }, 00:24:34.606 { 00:24:34.606 "subsystem": "nbd", 00:24:34.606 "config": [] 00:24:34.606 } 00:24:34.606 ] 00:24:34.606 }' 00:24:34.606 01:02:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 3007179 00:24:34.606 01:02:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3007179 ']' 00:24:34.606 01:02:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3007179 00:24:34.606 01:02:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:24:34.606 01:02:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:34.606 01:02:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3007179 00:24:34.606 01:02:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:34.606 01:02:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:34.606 01:02:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3007179' 00:24:34.606 killing process with pid 3007179 00:24:34.606 01:02:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3007179 00:24:34.606 Received shutdown signal, test time was about 1.000000 seconds 00:24:34.606 00:24:34.606 Latency(us) 00:24:34.606 [2024-12-06T00:02:07.315Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:34.606 [2024-12-06T00:02:07.315Z] =================================================================================================================== 00:24:34.606 [2024-12-06T00:02:07.316Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:34.607 01:02:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3007179 00:24:35.546 01:02:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 3007023 00:24:35.546 01:02:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3007023 
']' 00:24:35.546 01:02:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3007023 00:24:35.546 01:02:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:24:35.546 01:02:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:35.546 01:02:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3007023 00:24:35.805 01:02:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:35.805 01:02:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:35.805 01:02:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3007023' 00:24:35.805 killing process with pid 3007023 00:24:35.805 01:02:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3007023 00:24:35.805 01:02:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3007023 00:24:36.742 01:02:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:24:36.742 01:02:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:24:36.742 "subsystems": [ 00:24:36.742 { 00:24:36.742 "subsystem": "keyring", 00:24:36.742 "config": [ 00:24:36.742 { 00:24:36.742 "method": "keyring_file_add_key", 00:24:36.742 "params": { 00:24:36.742 "name": "key0", 00:24:36.742 "path": "/tmp/tmp.sdwf5y7n8J" 00:24:36.742 } 00:24:36.742 } 00:24:36.742 ] 00:24:36.742 }, 00:24:36.742 { 00:24:36.742 "subsystem": "iobuf", 00:24:36.742 "config": [ 00:24:36.742 { 00:24:36.742 "method": "iobuf_set_options", 00:24:36.742 "params": { 00:24:36.742 "small_pool_count": 8192, 00:24:36.742 "large_pool_count": 1024, 00:24:36.742 "small_bufsize": 8192, 00:24:36.742 "large_bufsize": 135168, 00:24:36.742 "enable_numa": false 00:24:36.742 } 00:24:36.742 } 00:24:36.742 ] 00:24:36.742 }, 00:24:36.742 { 00:24:36.742 "subsystem": "sock", 00:24:36.742 "config": [ 00:24:36.742 { 00:24:36.742 "method": "sock_set_default_impl", 00:24:36.742 "params": { 00:24:36.742 "impl_name": "posix" 00:24:36.742 } 00:24:36.742 }, 00:24:36.742 { 00:24:36.742 "method": "sock_impl_set_options", 00:24:36.742 "params": { 00:24:36.742 "impl_name": "ssl", 00:24:36.742 "recv_buf_size": 4096, 00:24:36.742 "send_buf_size": 4096, 00:24:36.742 "enable_recv_pipe": true, 00:24:36.742 "enable_quickack": false, 00:24:36.742 "enable_placement_id": 0, 00:24:36.742 "enable_zerocopy_send_server": true, 00:24:36.742 "enable_zerocopy_send_client": false, 00:24:36.742 "zerocopy_threshold": 0, 00:24:36.742 "tls_version": 0, 00:24:36.742 "enable_ktls": false 00:24:36.742 } 00:24:36.742 }, 00:24:36.742 { 00:24:36.742 "method": "sock_impl_set_options", 00:24:36.742 "params": { 00:24:36.742 "impl_name": "posix", 00:24:36.742 "recv_buf_size": 2097152, 00:24:36.742 "send_buf_size": 2097152, 00:24:36.742 "enable_recv_pipe": true, 00:24:36.742 "enable_quickack": false, 00:24:36.742 "enable_placement_id": 0, 00:24:36.742 "enable_zerocopy_send_server": true, 00:24:36.742 "enable_zerocopy_send_client": false, 00:24:36.742 "zerocopy_threshold": 0, 00:24:36.742 "tls_version": 0, 00:24:36.742 "enable_ktls": false 00:24:36.742 } 00:24:36.742 } 00:24:36.742 ] 00:24:36.742 }, 00:24:36.742 { 00:24:36.742 "subsystem": "vmd", 00:24:36.742 "config": [] 00:24:36.742 }, 00:24:36.742 { 00:24:36.742 "subsystem": "accel", 00:24:36.742 
"config": [ 00:24:36.742 { 00:24:36.742 "method": "accel_set_options", 00:24:36.742 "params": { 00:24:36.742 "small_cache_size": 128, 00:24:36.742 "large_cache_size": 16, 00:24:36.742 "task_count": 2048, 00:24:36.742 "sequence_count": 2048, 00:24:36.742 "buf_count": 2048 00:24:36.742 } 00:24:36.742 } 00:24:36.742 ] 00:24:36.742 }, 00:24:36.742 { 00:24:36.742 "subsystem": "bdev", 00:24:36.742 "config": [ 00:24:36.742 { 00:24:36.742 "method": "bdev_set_options", 00:24:36.742 "params": { 00:24:36.742 "bdev_io_pool_size": 65535, 00:24:36.742 "bdev_io_cache_size": 256, 00:24:36.742 "bdev_auto_examine": true, 00:24:36.742 "iobuf_small_cache_size": 128, 00:24:36.742 "iobuf_large_cache_size": 16 00:24:36.742 } 00:24:36.742 }, 00:24:36.743 { 00:24:36.743 "method": "bdev_raid_set_options", 00:24:36.743 "params": { 00:24:36.743 "process_window_size_kb": 1024, 00:24:36.743 "process_max_bandwidth_mb_sec": 0 00:24:36.743 } 00:24:36.743 }, 00:24:36.743 { 00:24:36.743 "method": "bdev_iscsi_set_options", 00:24:36.743 "params": { 00:24:36.743 "timeout_sec": 30 00:24:36.743 } 00:24:36.743 }, 00:24:36.743 { 00:24:36.743 "method": "bdev_nvme_set_options", 00:24:36.743 "params": { 00:24:36.743 "action_on_timeout": "none", 00:24:36.743 "timeout_us": 0, 00:24:36.743 "timeout_admin_us": 0, 00:24:36.743 "keep_alive_timeout_ms": 10000, 00:24:36.743 "arbitration_burst": 0, 00:24:36.743 "low_priority_weight": 0, 00:24:36.743 "medium_priority_weight": 0, 00:24:36.743 "high_priority_weight": 0, 00:24:36.743 "nvme_adminq_poll_period_us": 10000, 00:24:36.743 "nvme_ioq_poll_period_us": 0, 00:24:36.743 "io_queue_requests": 0, 00:24:36.743 "delay_cmd_submit": true, 00:24:36.743 "transport_retry_count": 4, 00:24:36.743 "bdev_retry_count": 3, 00:24:36.743 "transport_ack_timeout": 0, 00:24:36.743 "ctrlr_loss_timeout_sec": 0, 00:24:36.743 "reconnect_delay_sec": 0, 00:24:36.743 "fast_io_fail_timeout_sec": 0, 00:24:36.743 "disable_auto_failback": false, 00:24:36.743 "generate_uuids": false, 00:24:36.743 "transport_tos": 0, 00:24:36.743 "nvme_error_stat": false, 00:24:36.743 "rdma_srq_size": 0, 00:24:36.743 "io_path_stat": false, 00:24:36.743 "allow_accel_sequence": false, 00:24:36.743 "rdma_max_cq_size": 0, 00:24:36.743 "rdma_cm_event_timeout_ms": 0, 00:24:36.743 "dhchap_digests": [ 00:24:36.743 "sha256", 00:24:36.743 "sha384", 00:24:36.743 "sha512" 00:24:36.743 ], 00:24:36.743 "dhchap_dhgroups": [ 00:24:36.743 "null", 00:24:36.743 "ffdhe2048", 00:24:36.743 "ffdhe3072", 00:24:36.743 "ffdhe4096", 00:24:36.743 "ffdhe6144", 00:24:36.743 "ffdhe8192" 00:24:36.743 ] 00:24:36.743 } 00:24:36.743 }, 00:24:36.743 { 00:24:36.743 "method": "bdev_nvme_set_hotplug", 00:24:36.743 "params": { 00:24:36.743 "period_us": 100000, 00:24:36.743 "enable": false 00:24:36.743 } 00:24:36.743 }, 00:24:36.743 { 00:24:36.743 "method": "bdev_malloc_create", 00:24:36.743 "params": { 00:24:36.743 "name": "malloc0", 00:24:36.743 "num_blocks": 8192, 00:24:36.743 "block_size": 4096, 00:24:36.743 "physical_block_size": 4096, 00:24:36.743 "uuid": "de33a107-9999-4c27-9788-bab8deeb2b71", 00:24:36.743 "optimal_io_boundary": 0, 00:24:36.743 "md_size": 0, 00:24:36.743 "dif_type": 0, 00:24:36.743 "dif_is_head_of_md": false, 00:24:36.743 "dif_pi_format": 0 00:24:36.743 } 00:24:36.743 }, 00:24:36.743 { 00:24:36.743 "method": "bdev_wait_for_examine" 00:24:36.743 } 00:24:36.743 ] 00:24:36.743 }, 00:24:36.743 { 00:24:36.743 "subsystem": "nbd", 00:24:36.743 "config": [] 00:24:36.743 }, 00:24:36.743 { 00:24:36.743 "subsystem": "scheduler", 00:24:36.743 "config": [ 00:24:36.743 { 
00:24:36.743 "method": "framework_set_scheduler", 00:24:36.743 "params": { 00:24:36.743 "name": "static" 00:24:36.743 } 00:24:36.743 } 00:24:36.743 ] 00:24:36.743 }, 00:24:36.743 { 00:24:36.743 "subsystem": "nvmf", 00:24:36.743 "config": [ 00:24:36.743 { 00:24:36.743 "method": "nvmf_set_config", 00:24:36.743 "params": { 00:24:36.743 "discovery_filter": "match_any", 00:24:36.743 "admin_cmd_passthru": { 00:24:36.743 "identify_ctrlr": false 00:24:36.743 }, 00:24:36.743 "dhchap_digests": [ 00:24:36.743 "sha256", 00:24:36.743 "sha384", 00:24:36.743 "sha512" 00:24:36.743 ], 00:24:36.743 "dhchap_dhgroups": [ 00:24:36.743 "null", 00:24:36.743 "ffdhe2048", 00:24:36.743 "ffdhe3072", 00:24:36.743 "ffdhe4096", 00:24:36.743 "ffdhe6144", 00:24:36.743 "ffdhe8192" 00:24:36.743 ] 00:24:36.743 } 00:24:36.743 }, 00:24:36.743 { 00:24:36.743 "method": "nvmf_set_max_subsystems", 00:24:36.743 "params": { 00:24:36.743 "max_subsystems": 1024 00:24:36.743 } 00:24:36.743 }, 00:24:36.743 { 00:24:36.743 "method": "nvmf_set_crdt", 00:24:36.743 "params": { 00:24:36.743 "crdt1": 0, 00:24:36.743 "crdt2": 0, 00:24:36.743 "crdt3": 0 00:24:36.743 } 00:24:36.743 }, 00:24:36.743 { 00:24:36.743 "method": "nvmf_create_transport", 00:24:36.743 "params": { 00:24:36.743 "trtype": "TCP", 00:24:36.743 "max_queue_depth": 128, 00:24:36.743 "max_io_qpairs_per_ctrlr": 127, 00:24:36.743 "in_capsule_data_size": 4096, 00:24:36.743 "max_io_size": 131072, 00:24:36.743 "io_unit_size": 131072, 00:24:36.743 "max_aq_depth": 128, 00:24:36.743 "num_shared_buffers": 511, 00:24:36.743 "buf_cache_size": 4294967295, 00:24:36.743 "dif_insert_or_strip": false, 00:24:36.743 "zcopy": false, 00:24:36.743 "c2h_success": false, 00:24:36.743 "sock_priority": 0, 00:24:36.743 "abort_timeout_sec": 1, 00:24:36.743 "ack_timeout": 0, 00:24:36.743 "data_wr_pool_size": 0 00:24:36.743 } 00:24:36.743 }, 00:24:36.743 { 00:24:36.743 "method": "nvmf_create_subsystem", 00:24:36.743 "params": { 00:24:36.743 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:36.743 "allow_any_host": false, 00:24:36.743 "serial_number": "00000000000000000000", 00:24:36.743 "model_number": "SPDK bdev Controller", 00:24:36.743 "max_namespaces": 32, 00:24:36.743 "min_cntlid": 1, 00:24:36.743 "max_cntlid": 65519, 00:24:36.743 "ana_reporting": false 00:24:36.743 } 00:24:36.743 }, 00:24:36.743 { 00:24:36.743 "method": "nvmf_subsystem_add_host", 00:24:36.743 "params": { 00:24:36.743 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:36.743 "host": "nqn.2016-06.io.spdk:host1", 00:24:36.743 "psk": "key0" 00:24:36.743 } 00:24:36.743 }, 00:24:36.743 { 00:24:36.743 "method": "nvmf_subsystem_add_ns", 00:24:36.743 "params": { 00:24:36.743 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:36.743 "namespace": { 00:24:36.743 "nsid": 1, 00:24:36.743 "bdev_name": "malloc0", 00:24:36.743 "nguid": "DE33A10799994C279788BAB8DEEB2B71", 00:24:36.743 "uuid": "de33a107-9999-4c27-9788-bab8deeb2b71", 00:24:36.743 "no_auto_visible": false 00:24:36.743 } 00:24:36.743 } 00:24:36.743 }, 00:24:36.743 { 00:24:36.743 "method": "nvmf_subsystem_add_listener", 00:24:36.743 "params": { 00:24:36.743 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:36.743 "listen_address": { 00:24:36.743 "trtype": "TCP", 00:24:36.743 "adrfam": "IPv4", 00:24:36.743 "traddr": "10.0.0.2", 00:24:36.743 "trsvcid": "4420" 00:24:36.743 }, 00:24:36.743 "secure_channel": false, 00:24:36.743 "sock_impl": "ssl" 00:24:36.743 } 00:24:36.743 } 00:24:36.743 ] 00:24:36.743 } 00:24:36.743 ] 00:24:36.743 }' 00:24:36.743 01:02:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # 
timing_enter start_nvmf_tgt 00:24:36.743 01:02:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:36.743 01:02:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:36.743 01:02:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=3007738 00:24:36.743 01:02:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:24:36.743 01:02:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3007738 00:24:36.743 01:02:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3007738 ']' 00:24:36.743 01:02:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:36.743 01:02:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:36.743 01:02:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:36.743 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:36.743 01:02:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:36.743 01:02:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:37.002 [2024-12-06 01:02:09.524367] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 24.03.0 initialization... 00:24:37.002 [2024-12-06 01:02:09.524522] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:37.002 [2024-12-06 01:02:09.694383] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:37.261 [2024-12-06 01:02:09.833885] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:37.261 [2024-12-06 01:02:09.833996] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:37.261 [2024-12-06 01:02:09.834023] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:37.261 [2024-12-06 01:02:09.834059] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:37.261 [2024-12-06 01:02:09.834078] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:37.261 [2024-12-06 01:02:09.835876] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:37.831 [2024-12-06 01:02:10.399883] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:37.831 [2024-12-06 01:02:10.431916] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:37.831 [2024-12-06 01:02:10.432275] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:38.090 01:02:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:38.090 01:02:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:24:38.090 01:02:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:38.090 01:02:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:38.090 01:02:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:38.090 01:02:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:38.090 01:02:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=3007894 00:24:38.090 01:02:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 3007894 /var/tmp/bdevperf.sock 00:24:38.090 01:02:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3007894 ']' 00:24:38.090 01:02:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:38.090 01:02:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:24:38.090 01:02:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:38.090 01:02:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:24:38.090 "subsystems": [ 00:24:38.090 { 00:24:38.090 "subsystem": "keyring", 00:24:38.090 "config": [ 00:24:38.090 { 00:24:38.090 "method": "keyring_file_add_key", 00:24:38.090 "params": { 00:24:38.090 "name": "key0", 00:24:38.090 "path": "/tmp/tmp.sdwf5y7n8J" 00:24:38.090 } 00:24:38.090 } 00:24:38.090 ] 00:24:38.090 }, 00:24:38.090 { 00:24:38.090 "subsystem": "iobuf", 00:24:38.090 "config": [ 00:24:38.090 { 00:24:38.090 "method": "iobuf_set_options", 00:24:38.090 "params": { 00:24:38.090 "small_pool_count": 8192, 00:24:38.090 "large_pool_count": 1024, 00:24:38.090 "small_bufsize": 8192, 00:24:38.090 "large_bufsize": 135168, 00:24:38.090 "enable_numa": false 00:24:38.090 } 00:24:38.090 } 00:24:38.090 ] 00:24:38.090 }, 00:24:38.090 { 00:24:38.090 "subsystem": "sock", 00:24:38.090 "config": [ 00:24:38.090 { 00:24:38.090 "method": "sock_set_default_impl", 00:24:38.090 "params": { 00:24:38.090 "impl_name": "posix" 00:24:38.090 } 00:24:38.090 }, 00:24:38.090 { 00:24:38.090 "method": "sock_impl_set_options", 00:24:38.090 "params": { 00:24:38.090 "impl_name": "ssl", 00:24:38.090 "recv_buf_size": 4096, 00:24:38.090 "send_buf_size": 4096, 00:24:38.090 "enable_recv_pipe": true, 00:24:38.090 "enable_quickack": false, 00:24:38.090 "enable_placement_id": 0, 00:24:38.090 "enable_zerocopy_send_server": true, 00:24:38.090 "enable_zerocopy_send_client": false, 00:24:38.090 "zerocopy_threshold": 0, 00:24:38.090 "tls_version": 0, 00:24:38.090 
"enable_ktls": false 00:24:38.090 } 00:24:38.090 }, 00:24:38.090 { 00:24:38.090 "method": "sock_impl_set_options", 00:24:38.090 "params": { 00:24:38.090 "impl_name": "posix", 00:24:38.090 "recv_buf_size": 2097152, 00:24:38.090 "send_buf_size": 2097152, 00:24:38.090 "enable_recv_pipe": true, 00:24:38.090 "enable_quickack": false, 00:24:38.090 "enable_placement_id": 0, 00:24:38.090 "enable_zerocopy_send_server": true, 00:24:38.090 "enable_zerocopy_send_client": false, 00:24:38.090 "zerocopy_threshold": 0, 00:24:38.090 "tls_version": 0, 00:24:38.090 "enable_ktls": false 00:24:38.090 } 00:24:38.090 } 00:24:38.090 ] 00:24:38.090 }, 00:24:38.090 { 00:24:38.090 "subsystem": "vmd", 00:24:38.090 "config": [] 00:24:38.090 }, 00:24:38.090 { 00:24:38.090 "subsystem": "accel", 00:24:38.090 "config": [ 00:24:38.090 { 00:24:38.090 "method": "accel_set_options", 00:24:38.090 "params": { 00:24:38.090 "small_cache_size": 128, 00:24:38.090 "large_cache_size": 16, 00:24:38.090 "task_count": 2048, 00:24:38.090 "sequence_count": 2048, 00:24:38.090 "buf_count": 2048 00:24:38.090 } 00:24:38.090 } 00:24:38.090 ] 00:24:38.090 }, 00:24:38.090 { 00:24:38.090 "subsystem": "bdev", 00:24:38.090 "config": [ 00:24:38.090 { 00:24:38.090 "method": "bdev_set_options", 00:24:38.090 "params": { 00:24:38.090 "bdev_io_pool_size": 65535, 00:24:38.090 "bdev_io_cache_size": 256, 00:24:38.090 "bdev_auto_examine": true, 00:24:38.090 "iobuf_small_cache_size": 128, 00:24:38.090 "iobuf_large_cache_size": 16 00:24:38.090 } 00:24:38.090 }, 00:24:38.090 { 00:24:38.090 "method": "bdev_raid_set_options", 00:24:38.090 "params": { 00:24:38.090 "process_window_size_kb": 1024, 00:24:38.090 "process_max_bandwidth_mb_sec": 0 00:24:38.090 } 00:24:38.090 }, 00:24:38.090 { 00:24:38.090 "method": "bdev_iscsi_set_options", 00:24:38.090 "params": { 00:24:38.090 "timeout_sec": 30 00:24:38.090 } 00:24:38.090 }, 00:24:38.090 { 00:24:38.090 "method": "bdev_nvme_set_options", 00:24:38.090 "params": { 00:24:38.090 "action_on_timeout": "none", 00:24:38.090 "timeout_us": 0, 00:24:38.090 "timeout_admin_us": 0, 00:24:38.090 "keep_alive_timeout_ms": 10000, 00:24:38.090 "arbitration_burst": 0, 00:24:38.090 "low_priority_weight": 0, 00:24:38.090 "medium_priority_weight": 0, 00:24:38.090 "high_priority_weight": 0, 00:24:38.090 "nvme_adminq_poll_period_us": 10000, 00:24:38.090 "nvme_ioq_poll_period_us": 0, 00:24:38.090 "io_queue_requests": 512, 00:24:38.090 "delay_cmd_submit": true, 00:24:38.090 "transport_retry_count": 4, 00:24:38.090 "bdev_retry_count": 3, 00:24:38.090 "transport_ack_timeout": 0, 00:24:38.090 "ctrlr_loss_timeout_sec": 0, 00:24:38.090 "reconnect_delay_sec": 0, 00:24:38.090 "fast_io_fail_timeout_sec": 0, 00:24:38.090 "disable_auto_failback": false, 00:24:38.090 "generate_uuids": false, 00:24:38.090 "transport_tos": 0, 00:24:38.090 "nvme_error_stat": false, 00:24:38.090 "rdma_srq_size": 0, 00:24:38.090 "io_path_stat": false, 00:24:38.090 "allow_accel_sequence": false, 00:24:38.090 "rdma_max_cq_size": 0, 00:24:38.090 "rdma_cm_event_timeout_ms": 0, 00:24:38.090 "dhchap_digests": [ 00:24:38.090 "sha256", 00:24:38.090 "sha384", 00:24:38.090 "sha512" 00:24:38.090 ], 00:24:38.090 "dhchap_dhgroups": [ 00:24:38.090 "null", 00:24:38.090 "ffdhe2048", 00:24:38.090 "ffdhe3072", 00:24:38.090 "ffdhe4096", 00:24:38.090 "ffdhe6144", 00:24:38.090 "ffdhe8192" 00:24:38.090 ] 00:24:38.090 } 00:24:38.090 }, 00:24:38.090 { 00:24:38.090 "method": "bdev_nvme_attach_controller", 00:24:38.090 "params": { 00:24:38.090 "name": "nvme0", 00:24:38.090 "trtype": "TCP", 00:24:38.090 
"adrfam": "IPv4", 00:24:38.090 "traddr": "10.0.0.2", 00:24:38.090 "trsvcid": "4420", 00:24:38.090 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:38.090 "prchk_reftag": false, 00:24:38.090 "prchk_guard": false, 00:24:38.090 "ctrlr_loss_timeout_sec": 0, 00:24:38.090 "reconnect_delay_sec": 0, 00:24:38.090 "fast_io_fail_timeout_sec": 0, 00:24:38.090 "psk": "key0", 00:24:38.090 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:38.090 "hdgst": false, 00:24:38.090 "ddgst": false, 00:24:38.090 "multipath": "multipath" 00:24:38.090 } 00:24:38.090 }, 00:24:38.090 { 00:24:38.090 "method": "bdev_nvme_set_hotplug", 00:24:38.090 "params": { 00:24:38.090 "period_us": 100000, 00:24:38.090 "enable": false 00:24:38.090 } 00:24:38.090 }, 00:24:38.090 { 00:24:38.090 "method": "bdev_enable_histogram", 00:24:38.090 "params": { 00:24:38.090 "name": "nvme0n1", 00:24:38.090 "enable": true 00:24:38.090 } 00:24:38.090 }, 00:24:38.090 { 00:24:38.091 "method": "bdev_wait_for_examine" 00:24:38.091 } 00:24:38.091 ] 00:24:38.091 }, 00:24:38.091 { 00:24:38.091 "subsystem": "nbd", 00:24:38.091 "config": [] 00:24:38.091 } 00:24:38.091 ] 00:24:38.091 }' 00:24:38.091 01:02:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:38.091 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:38.091 01:02:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:38.091 01:02:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:38.091 [2024-12-06 01:02:10.664700] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 24.03.0 initialization... 00:24:38.091 [2024-12-06 01:02:10.664875] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3007894 ] 00:24:38.348 [2024-12-06 01:02:10.820157] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:38.348 [2024-12-06 01:02:10.957480] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:38.916 [2024-12-06 01:02:11.410160] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:38.916 01:02:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:38.916 01:02:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:24:38.916 01:02:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:38.916 01:02:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name' 00:24:39.173 01:02:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:39.173 01:02:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:39.431 Running I/O for 1 seconds... 
00:24:40.369 2622.00 IOPS, 10.24 MiB/s 00:24:40.369 Latency(us) 00:24:40.369 [2024-12-06T00:02:13.078Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:40.369 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:24:40.369 Verification LBA range: start 0x0 length 0x2000 00:24:40.369 nvme0n1 : 1.03 2660.27 10.39 0.00 0.00 47369.80 8932.31 41360.50 00:24:40.369 [2024-12-06T00:02:13.078Z] =================================================================================================================== 00:24:40.369 [2024-12-06T00:02:13.078Z] Total : 2660.27 10.39 0.00 0.00 47369.80 8932.31 41360.50 00:24:40.369 { 00:24:40.369 "results": [ 00:24:40.369 { 00:24:40.369 "job": "nvme0n1", 00:24:40.369 "core_mask": "0x2", 00:24:40.369 "workload": "verify", 00:24:40.369 "status": "finished", 00:24:40.369 "verify_range": { 00:24:40.369 "start": 0, 00:24:40.369 "length": 8192 00:24:40.369 }, 00:24:40.369 "queue_depth": 128, 00:24:40.369 "io_size": 4096, 00:24:40.369 "runtime": 1.034104, 00:24:40.369 "iops": 2660.2740149926894, 00:24:40.369 "mibps": 10.391695371065193, 00:24:40.369 "io_failed": 0, 00:24:40.369 "io_timeout": 0, 00:24:40.369 "avg_latency_us": 47369.80407932469, 00:24:40.369 "min_latency_us": 8932.314074074075, 00:24:40.369 "max_latency_us": 41360.497777777775 00:24:40.369 } 00:24:40.369 ], 00:24:40.369 "core_count": 1 00:24:40.369 } 00:24:40.369 01:02:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT 00:24:40.369 01:02:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup 00:24:40.369 01:02:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:24:40.369 01:02:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@812 -- # type=--id 00:24:40.369 01:02:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@813 -- # id=0 00:24:40.369 01:02:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:24:40.369 01:02:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:24:40.369 01:02:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:24:40.369 01:02:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:24:40.369 01:02:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@824 -- # for n in $shm_files 00:24:40.369 01:02:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:24:40.369 nvmf_trace.0 00:24:40.629 01:02:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@827 -- # return 0 00:24:40.629 01:02:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 3007894 00:24:40.629 01:02:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3007894 ']' 00:24:40.629 01:02:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3007894 00:24:40.629 01:02:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:24:40.629 01:02:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:40.629 01:02:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3007894 
00:24:40.629 01:02:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:40.629 01:02:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:40.629 01:02:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3007894' 00:24:40.629 killing process with pid 3007894 00:24:40.629 01:02:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3007894 00:24:40.629 Received shutdown signal, test time was about 1.000000 seconds 00:24:40.629 00:24:40.629 Latency(us) 00:24:40.629 [2024-12-06T00:02:13.338Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:40.629 [2024-12-06T00:02:13.338Z] =================================================================================================================== 00:24:40.629 [2024-12-06T00:02:13.338Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:40.629 01:02:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3007894 00:24:41.568 01:02:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:24:41.568 01:02:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:41.568 01:02:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 00:24:41.568 01:02:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:41.568 01:02:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 00:24:41.568 01:02:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:41.568 01:02:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:41.568 rmmod nvme_tcp 00:24:41.568 rmmod nvme_fabrics 00:24:41.568 rmmod nvme_keyring 00:24:41.568 01:02:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:41.568 01:02:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set -e 00:24:41.568 01:02:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0 00:24:41.568 01:02:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@517 -- # '[' -n 3007738 ']' 00:24:41.568 01:02:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@518 -- # killprocess 3007738 00:24:41.568 01:02:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3007738 ']' 00:24:41.568 01:02:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3007738 00:24:41.568 01:02:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:24:41.568 01:02:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:41.568 01:02:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3007738 00:24:41.568 01:02:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:41.568 01:02:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:41.568 01:02:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3007738' 00:24:41.568 killing process with pid 3007738 00:24:41.568 01:02:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3007738 00:24:41.568 01:02:14 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3007738 00:24:43.034 01:02:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:43.034 01:02:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:43.034 01:02:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:43.034 01:02:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # iptr 00:24:43.034 01:02:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-save 00:24:43.034 01:02:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:43.034 01:02:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-restore 00:24:43.034 01:02:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:43.034 01:02:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:43.034 01:02:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:43.034 01:02:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:43.034 01:02:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:44.940 01:02:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:44.940 01:02:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.PqvDw19XYz /tmp/tmp.72KDbxi73C /tmp/tmp.sdwf5y7n8J 00:24:44.940 00:24:44.940 real 1m54.288s 00:24:44.940 user 3m8.855s 00:24:44.940 sys 0m27.685s 00:24:44.940 01:02:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:44.940 01:02:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:44.940 ************************************ 00:24:44.940 END TEST nvmf_tls 00:24:44.940 ************************************ 00:24:44.940 01:02:17 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:24:44.940 01:02:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:44.940 01:02:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:44.940 01:02:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:24:44.940 ************************************ 00:24:44.940 START TEST nvmf_fips 00:24:44.940 ************************************ 00:24:44.940 01:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:24:45.200 * Looking for test storage... 
00:24:45.200 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:24:45.200 01:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:24:45.200 01:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1711 -- # lcov --version 00:24:45.200 01:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:24:45.200 01:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:24:45.200 01:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:45.200 01:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:45.200 01:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:45.200 01:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:24:45.200 01:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:24:45.200 01:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:24:45.200 01:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:24:45.200 01:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:24:45.200 01:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2 00:24:45.200 01:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:24:45.200 01:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:45.200 01:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:24:45.200 01:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:24:45.200 01:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:45.200 01:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:45.200 01:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:24:45.200 01:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:24:45.200 01:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:45.200 01:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:24:45.200 01:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:24:45.200 01:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:24:45.200 01:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:24:45.200 01:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:45.200 01:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:24:45.200 01:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:24:45.200 01:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:45.200 01:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:45.200 01:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0 00:24:45.200 01:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:45.200 01:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:24:45.200 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:45.200 --rc genhtml_branch_coverage=1 00:24:45.200 --rc genhtml_function_coverage=1 00:24:45.200 --rc genhtml_legend=1 00:24:45.200 --rc geninfo_all_blocks=1 00:24:45.200 --rc geninfo_unexecuted_blocks=1 00:24:45.200 00:24:45.200 ' 00:24:45.200 01:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:24:45.200 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:45.200 --rc genhtml_branch_coverage=1 00:24:45.200 --rc genhtml_function_coverage=1 00:24:45.200 --rc genhtml_legend=1 00:24:45.200 --rc geninfo_all_blocks=1 00:24:45.201 --rc geninfo_unexecuted_blocks=1 00:24:45.201 00:24:45.201 ' 00:24:45.201 01:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:24:45.201 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:45.201 --rc genhtml_branch_coverage=1 00:24:45.201 --rc genhtml_function_coverage=1 00:24:45.201 --rc genhtml_legend=1 00:24:45.201 --rc geninfo_all_blocks=1 00:24:45.201 --rc geninfo_unexecuted_blocks=1 00:24:45.201 00:24:45.201 ' 00:24:45.201 01:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:24:45.201 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:45.201 --rc genhtml_branch_coverage=1 00:24:45.201 --rc genhtml_function_coverage=1 00:24:45.201 --rc genhtml_legend=1 00:24:45.201 --rc geninfo_all_blocks=1 00:24:45.201 --rc geninfo_unexecuted_blocks=1 00:24:45.201 00:24:45.201 ' 00:24:45.201 01:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:45.201 01:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:24:45.201 01:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == 
FreeBSD ]] 00:24:45.201 01:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:45.201 01:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:45.201 01:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:45.201 01:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:45.201 01:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:45.201 01:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:45.201 01:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:45.201 01:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:45.201 01:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:45.201 01:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:24:45.201 01:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:24:45.201 01:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:45.201 01:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:45.201 01:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:45.201 01:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:45.201 01:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:45.201 01:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:24:45.201 01:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:45.201 01:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:45.201 01:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:45.201 01:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:45.201 01:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:45.201 01:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:45.201 01:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:24:45.201 01:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:45.201 01:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:24:45.201 01:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:45.201 01:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:45.201 01:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:45.201 01:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:45.201 01:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:45.201 01:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:45.201 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:45.201 01:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:45.201 01:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:45.201 01:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:45.201 01:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:45.201 01:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@90 -- # check_openssl_version 00:24:45.201 01:02:17 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local target=3.0.0 00:24:45.201 01:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:24:45.201 01:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:24:45.201 01:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:24:45.201 01:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:24:45.201 01:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:45.201 01:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:45.201 01:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:24:45.201 01:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:24:45.201 01:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:24:45.201 01:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:24:45.201 01:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:24:45.201 01:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:24:45.201 01:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:24:45.201 01:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:45.201 01:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:24:45.201 01:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:24:45.201 01:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:45.201 01:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:45.201 01:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:24:45.201 01:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:24:45.201 01:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:24:45.201 01:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:24:45.201 01:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:24:45.201 01:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:24:45.201 01:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:24:45.201 01:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:24:45.201 01:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:24:45.201 01:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:24:45.201 01:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:45.201 01:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:45.201 01:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:24:45.201 01:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:45.201 01:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:24:45.201 01:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:24:45.201 01:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:45.201 01:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:24:45.201 01:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:24:45.201 01:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:24:45.201 01:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:24:45.201 01:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:24:45.201 01:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:24:45.201 01:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:24:45.201 01:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:45.201 01:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:24:45.201 01:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:24:45.201 01:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:24:45.202 01:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:24:45.202 01:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:24:45.202 01:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:24:45.202 01:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:24:45.202 01:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:24:45.202 01:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:24:45.202 01:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:24:45.202 01:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:24:45.202 01:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:24:45.202 01:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:24:45.202 01:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:24:45.202 01:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:24:45.202 01:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:24:45.202 01:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:24:45.202 01:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:24:45.202 01:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:24:45.202 01:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:24:45.202 01:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:24:45.202 01:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:24:45.202 01:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # local es=0 00:24:45.202 01:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@654 -- # valid_exec_arg openssl md5 /dev/fd/62 00:24:45.202 01:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@640 -- # local arg=openssl 00:24:45.202 01:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:45.202 01:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -t openssl 00:24:45.202 01:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:45.202 01:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # type -P openssl 00:24:45.202 01:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:45.202 01:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # arg=/usr/bin/openssl 00:24:45.202 01:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # [[ -x /usr/bin/openssl ]] 00:24:45.202 01:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # openssl md5 /dev/fd/62 00:24:45.460 Error setting digest 00:24:45.460 4002E27AD77F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:24:45.460 4002E27AD77F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:24:45.460 01:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # es=1 00:24:45.460 01:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:45.460 01:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:45.460 01:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:45.460 01:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:24:45.460 01:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:45.460 
01:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:45.460 01:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:45.460 01:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:45.460 01:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:45.460 01:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:45.460 01:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:45.460 01:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:45.460 01:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:45.460 01:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:45.460 01:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@309 -- # xtrace_disable 00:24:45.460 01:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:47.366 01:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:47.366 01:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # pci_devs=() 00:24:47.366 01:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:47.366 01:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:47.366 01:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:47.366 01:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:47.366 01:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:47.366 01:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # net_devs=() 00:24:47.366 01:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:47.366 01:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # e810=() 00:24:47.366 01:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # local -ga e810 00:24:47.366 01:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # x722=() 00:24:47.366 01:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # local -ga x722 00:24:47.366 01:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # mlx=() 00:24:47.366 01:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # local -ga mlx 00:24:47.366 01:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:47.366 01:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:47.366 01:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:47.366 01:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:47.366 01:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:47.366 01:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:47.366 01:02:19 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:47.366 01:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:47.366 01:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:47.366 01:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:47.366 01:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:47.366 01:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:47.366 01:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:47.366 01:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:47.366 01:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:47.366 01:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:47.366 01:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:47.366 01:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:47.366 01:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:47.366 01:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:24:47.366 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:24:47.366 01:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:47.366 01:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:47.366 01:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:47.366 01:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:47.366 01:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:47.366 01:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:47.366 01:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:24:47.366 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:24:47.366 01:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:47.366 01:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:47.366 01:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:47.366 01:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:47.366 01:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:47.366 01:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:47.366 01:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:47.366 01:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:47.366 01:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:47.366 01:02:19 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:47.366 01:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:47.366 01:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:47.366 01:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:47.366 01:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:47.366 01:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:47.366 01:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:24:47.366 Found net devices under 0000:0a:00.0: cvl_0_0 00:24:47.366 01:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:47.366 01:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:47.366 01:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:47.366 01:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:47.366 01:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:47.366 01:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:47.366 01:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:47.366 01:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:47.366 01:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:24:47.366 Found net devices under 0000:0a:00.1: cvl_0_1 00:24:47.366 01:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:47.366 01:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:47.366 01:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # is_hw=yes 00:24:47.366 01:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:47.366 01:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:47.366 01:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:47.366 01:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:47.366 01:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:47.366 01:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:47.366 01:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:47.366 01:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:47.366 01:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:47.366 01:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:47.366 01:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:47.366 01:02:19 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:47.366 01:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:47.366 01:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:47.366 01:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:47.366 01:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:47.366 01:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:47.366 01:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:47.366 01:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:47.366 01:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:47.366 01:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:47.366 01:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:47.366 01:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:47.366 01:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:47.366 01:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:47.366 01:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:47.366 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:47.366 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.262 ms 00:24:47.366 00:24:47.366 --- 10.0.0.2 ping statistics --- 00:24:47.366 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:47.367 rtt min/avg/max/mdev = 0.262/0.262/0.262/0.000 ms 00:24:47.367 01:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:47.367 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:47.367 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.089 ms 00:24:47.367 00:24:47.367 --- 10.0.0.1 ping statistics --- 00:24:47.367 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:47.367 rtt min/avg/max/mdev = 0.089/0.089/0.089/0.000 ms 00:24:47.367 01:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:47.367 01:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@450 -- # return 0 00:24:47.367 01:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:47.367 01:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:47.367 01:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:47.367 01:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:47.367 01:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:47.367 01:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:47.367 01:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:47.625 01:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:24:47.625 01:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:47.625 01:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:47.625 01:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:47.625 01:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@509 -- # nvmfpid=3010516 00:24:47.625 01:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:24:47.625 01:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@510 -- # waitforlisten 3010516 00:24:47.625 01:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 3010516 ']' 00:24:47.625 01:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:47.625 01:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:47.625 01:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:47.625 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:47.625 01:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:47.625 01:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:47.625 [2024-12-06 01:02:20.229309] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 24.03.0 initialization... 
00:24:47.625 [2024-12-06 01:02:20.229477] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:47.884 [2024-12-06 01:02:20.379059] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:47.884 [2024-12-06 01:02:20.504359] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:47.884 [2024-12-06 01:02:20.504448] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:47.884 [2024-12-06 01:02:20.504469] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:47.884 [2024-12-06 01:02:20.504491] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:47.884 [2024-12-06 01:02:20.504508] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:47.884 [2024-12-06 01:02:20.506013] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:48.824 01:02:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:48.824 01:02:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:24:48.824 01:02:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:48.824 01:02:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:48.824 01:02:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:48.824 01:02:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:48.824 01:02:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:24:48.824 01:02:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:24:48.824 01:02:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:24:48.824 01:02:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.VdV 00:24:48.824 01:02:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:24:48.824 01:02:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.VdV 00:24:48.824 01:02:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.VdV 00:24:48.824 01:02:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.VdV 00:24:48.824 01:02:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:48.824 [2024-12-06 01:02:21.511233] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:48.824 [2024-12-06 01:02:21.527152] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:48.824 [2024-12-06 01:02:21.527546] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:49.083 malloc0 00:24:49.083 01:02:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:49.083 01:02:21 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=3010680 00:24:49.083 01:02:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:49.083 01:02:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 3010680 /var/tmp/bdevperf.sock 00:24:49.083 01:02:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 3010680 ']' 00:24:49.083 01:02:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:49.083 01:02:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:49.083 01:02:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:49.083 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:49.083 01:02:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:49.083 01:02:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:49.083 [2024-12-06 01:02:21.741521] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 24.03.0 initialization... 00:24:49.083 [2024-12-06 01:02:21.741678] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3010680 ] 00:24:49.343 [2024-12-06 01:02:21.876481] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:49.343 [2024-12-06 01:02:22.001541] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:50.281 01:02:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:50.281 01:02:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:24:50.281 01:02:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.VdV 00:24:50.281 01:02:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:24:50.540 [2024-12-06 01:02:23.176645] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:50.799 TLSTESTn1 00:24:50.799 01:02:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:50.799 Running I/O for 10 seconds... 
00:24:53.112 2337.00 IOPS, 9.13 MiB/s [2024-12-06T00:02:26.752Z] 2420.50 IOPS, 9.46 MiB/s [2024-12-06T00:02:27.687Z] 2455.67 IOPS, 9.59 MiB/s [2024-12-06T00:02:28.621Z] 2475.50 IOPS, 9.67 MiB/s [2024-12-06T00:02:29.562Z] 2483.60 IOPS, 9.70 MiB/s [2024-12-06T00:02:30.501Z] 2488.67 IOPS, 9.72 MiB/s [2024-12-06T00:02:31.439Z] 2498.00 IOPS, 9.76 MiB/s [2024-12-06T00:02:32.815Z] 2503.12 IOPS, 9.78 MiB/s [2024-12-06T00:02:33.755Z] 2507.33 IOPS, 9.79 MiB/s [2024-12-06T00:02:33.755Z] 2509.80 IOPS, 9.80 MiB/s 00:25:01.046 Latency(us) 00:25:01.046 [2024-12-06T00:02:33.755Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:01.046 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:25:01.046 Verification LBA range: start 0x0 length 0x2000 00:25:01.046 TLSTESTn1 : 10.02 2516.65 9.83 0.00 0.00 50771.43 9126.49 49127.73 00:25:01.046 [2024-12-06T00:02:33.755Z] =================================================================================================================== 00:25:01.046 [2024-12-06T00:02:33.755Z] Total : 2516.65 9.83 0.00 0.00 50771.43 9126.49 49127.73 00:25:01.046 { 00:25:01.046 "results": [ 00:25:01.046 { 00:25:01.046 "job": "TLSTESTn1", 00:25:01.046 "core_mask": "0x4", 00:25:01.046 "workload": "verify", 00:25:01.046 "status": "finished", 00:25:01.046 "verify_range": { 00:25:01.046 "start": 0, 00:25:01.046 "length": 8192 00:25:01.046 }, 00:25:01.046 "queue_depth": 128, 00:25:01.046 "io_size": 4096, 00:25:01.046 "runtime": 10.022863, 00:25:01.046 "iops": 2516.646191811661, 00:25:01.046 "mibps": 9.830649186764301, 00:25:01.046 "io_failed": 0, 00:25:01.046 "io_timeout": 0, 00:25:01.046 "avg_latency_us": 50771.429579354175, 00:25:01.046 "min_latency_us": 9126.494814814814, 00:25:01.046 "max_latency_us": 49127.72740740741 00:25:01.046 } 00:25:01.046 ], 00:25:01.046 "core_count": 1 00:25:01.046 } 00:25:01.046 01:02:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:25:01.046 01:02:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:25:01.046 01:02:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@812 -- # type=--id 00:25:01.046 01:02:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@813 -- # id=0 00:25:01.046 01:02:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:25:01.046 01:02:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:25:01.046 01:02:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:25:01.046 01:02:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:25:01.046 01:02:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@824 -- # for n in $shm_files 00:25:01.046 01:02:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:25:01.046 nvmf_trace.0 00:25:01.046 01:02:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@827 -- # return 0 00:25:01.046 01:02:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 3010680 00:25:01.046 01:02:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 3010680 ']' 00:25:01.046 01:02:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 
-- # kill -0 3010680 00:25:01.046 01:02:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:25:01.046 01:02:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:01.046 01:02:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3010680 00:25:01.046 01:02:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:25:01.046 01:02:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:25:01.046 01:02:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3010680' 00:25:01.046 killing process with pid 3010680 00:25:01.046 01:02:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 3010680 00:25:01.046 Received shutdown signal, test time was about 10.000000 seconds 00:25:01.046 00:25:01.046 Latency(us) 00:25:01.046 [2024-12-06T00:02:33.755Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:01.046 [2024-12-06T00:02:33.755Z] =================================================================================================================== 00:25:01.046 [2024-12-06T00:02:33.755Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:01.046 01:02:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 3010680 00:25:01.985 01:02:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:25:01.985 01:02:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:01.985 01:02:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync 00:25:01.985 01:02:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:01.985 01:02:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e 00:25:01.985 01:02:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:01.985 01:02:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:01.985 rmmod nvme_tcp 00:25:01.985 rmmod nvme_fabrics 00:25:01.985 rmmod nvme_keyring 00:25:01.985 01:02:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:01.985 01:02:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e 00:25:01.985 01:02:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0 00:25:01.985 01:02:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@517 -- # '[' -n 3010516 ']' 00:25:01.985 01:02:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@518 -- # killprocess 3010516 00:25:01.985 01:02:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 3010516 ']' 00:25:01.985 01:02:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 3010516 00:25:01.985 01:02:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:25:01.985 01:02:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:01.986 01:02:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3010516 00:25:01.986 01:02:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:25:01.986 01:02:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:25:01.986 01:02:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3010516' 00:25:01.986 killing process with pid 3010516 00:25:01.986 01:02:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 3010516 00:25:01.986 01:02:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 3010516 00:25:03.361 01:02:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:03.361 01:02:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:03.361 01:02:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:03.361 01:02:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # iptr 00:25:03.361 01:02:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-save 00:25:03.361 01:02:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:03.361 01:02:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-restore 00:25:03.361 01:02:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:03.361 01:02:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:03.361 01:02:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:03.361 01:02:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:03.361 01:02:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:05.267 01:02:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:05.267 01:02:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.VdV 00:25:05.267 00:25:05.267 real 0m20.317s 00:25:05.267 user 0m24.881s 00:25:05.267 sys 0m6.561s 00:25:05.267 01:02:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:05.268 01:02:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:25:05.268 ************************************ 00:25:05.268 END TEST nvmf_fips 00:25:05.268 ************************************ 00:25:05.268 01:02:37 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:25:05.268 01:02:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:05.268 01:02:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:05.268 01:02:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:25:05.268 ************************************ 00:25:05.268 START TEST nvmf_control_msg_list 00:25:05.268 ************************************ 00:25:05.268 01:02:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:25:05.527 * Looking for test storage... 
00:25:05.527 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:25:05.527 01:02:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:25:05.527 01:02:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1711 -- # lcov --version 00:25:05.527 01:02:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:25:05.527 01:02:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:25:05.528 01:02:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:05.528 01:02:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:05.528 01:02:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:05.528 01:02:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:25:05.528 01:02:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:25:05.528 01:02:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:25:05.528 01:02:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:25:05.528 01:02:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@338 -- # local 'op=<' 00:25:05.528 01:02:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:25:05.528 01:02:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:25:05.528 01:02:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:05.528 01:02:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:25:05.528 01:02:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:25:05.528 01:02:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:05.528 01:02:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:05.528 01:02:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:25:05.528 01:02:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:25:05.528 01:02:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:05.528 01:02:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:25:05.528 01:02:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:25:05.528 01:02:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:25:05.528 01:02:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:25:05.528 01:02:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:05.528 01:02:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:25:05.528 01:02:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # ver2[v]=2 00:25:05.528 01:02:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:05.528 01:02:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:05.528 01:02:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:25:05.528 01:02:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:05.528 01:02:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:25:05.528 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:05.528 --rc genhtml_branch_coverage=1 00:25:05.528 --rc genhtml_function_coverage=1 00:25:05.528 --rc genhtml_legend=1 00:25:05.528 --rc geninfo_all_blocks=1 00:25:05.528 --rc geninfo_unexecuted_blocks=1 00:25:05.528 00:25:05.528 ' 00:25:05.528 01:02:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:25:05.528 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:05.528 --rc genhtml_branch_coverage=1 00:25:05.528 --rc genhtml_function_coverage=1 00:25:05.528 --rc genhtml_legend=1 00:25:05.528 --rc geninfo_all_blocks=1 00:25:05.528 --rc geninfo_unexecuted_blocks=1 00:25:05.528 00:25:05.528 ' 00:25:05.528 01:02:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:25:05.528 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:05.528 --rc genhtml_branch_coverage=1 00:25:05.528 --rc genhtml_function_coverage=1 00:25:05.528 --rc genhtml_legend=1 00:25:05.528 --rc geninfo_all_blocks=1 00:25:05.528 --rc geninfo_unexecuted_blocks=1 00:25:05.528 00:25:05.528 ' 00:25:05.528 01:02:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:25:05.528 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:05.528 --rc genhtml_branch_coverage=1 00:25:05.528 --rc genhtml_function_coverage=1 00:25:05.528 --rc genhtml_legend=1 00:25:05.528 --rc geninfo_all_blocks=1 00:25:05.528 --rc geninfo_unexecuted_blocks=1 00:25:05.528 00:25:05.528 ' 00:25:05.528 01:02:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:05.528 01:02:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:25:05.528 01:02:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:05.528 01:02:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:05.528 01:02:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:05.528 01:02:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:05.528 01:02:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:05.528 01:02:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:05.528 01:02:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:05.528 01:02:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:05.528 01:02:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:05.528 01:02:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:05.528 01:02:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:25:05.528 01:02:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:25:05.528 01:02:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:05.528 01:02:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:05.528 01:02:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:05.528 01:02:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:05.528 01:02:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:05.528 01:02:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:25:05.528 01:02:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:05.528 01:02:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:05.528 01:02:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:05.528 01:02:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:05.528 01:02:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:05.528 01:02:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:05.528 01:02:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:25:05.528 01:02:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:05.528 01:02:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:25:05.528 01:02:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:05.528 01:02:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:05.528 01:02:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:05.528 01:02:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:05.528 01:02:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:05.528 01:02:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:05.528 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:05.528 01:02:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:05.528 01:02:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:05.528 01:02:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:05.528 01:02:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:25:05.528 01:02:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:05.528 01:02:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:05.528 01:02:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:05.528 01:02:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:05.528 01:02:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:05.529 01:02:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:05.529 01:02:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:05.529 01:02:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:05.529 01:02:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:05.529 01:02:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:05.529 01:02:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@309 -- # xtrace_disable 00:25:05.529 01:02:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:25:07.436 01:02:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:07.436 01:02:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # pci_devs=() 00:25:07.436 01:02:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:07.436 01:02:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:07.436 01:02:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:07.436 01:02:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:07.436 01:02:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:07.436 01:02:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # net_devs=() 00:25:07.436 01:02:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:07.436 01:02:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # e810=() 00:25:07.436 01:02:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # local -ga e810 00:25:07.436 01:02:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # x722=() 00:25:07.436 01:02:40 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # local -ga x722 00:25:07.436 01:02:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # mlx=() 00:25:07.436 01:02:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # local -ga mlx 00:25:07.436 01:02:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:07.436 01:02:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:07.436 01:02:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:07.436 01:02:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:07.436 01:02:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:07.436 01:02:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:07.436 01:02:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:07.436 01:02:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:07.436 01:02:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:07.436 01:02:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:07.436 01:02:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:07.436 01:02:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:07.436 01:02:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:07.436 01:02:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:07.436 01:02:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:07.436 01:02:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:07.436 01:02:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:07.436 01:02:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:07.436 01:02:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:07.436 01:02:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:25:07.436 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:25:07.436 01:02:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:07.436 01:02:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:07.436 01:02:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:07.436 01:02:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:07.436 01:02:40 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:07.436 01:02:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:07.436 01:02:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:25:07.436 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:25:07.437 01:02:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:07.437 01:02:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:07.437 01:02:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:07.437 01:02:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:07.437 01:02:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:07.437 01:02:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:07.437 01:02:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:07.437 01:02:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:07.437 01:02:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:07.437 01:02:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:07.437 01:02:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:07.437 01:02:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:07.437 01:02:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:07.437 01:02:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:07.437 01:02:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:07.437 01:02:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:25:07.437 Found net devices under 0000:0a:00.0: cvl_0_0 00:25:07.437 01:02:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:07.437 01:02:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:07.437 01:02:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:07.437 01:02:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:07.437 01:02:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:07.437 01:02:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:07.437 01:02:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:07.437 01:02:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:07.437 01:02:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:25:07.437 Found net devices under 0000:0a:00.1: cvl_0_1 00:25:07.437 01:02:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:07.437 01:02:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:07.437 01:02:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # is_hw=yes 00:25:07.437 01:02:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:07.437 01:02:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:07.437 01:02:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:07.437 01:02:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:07.437 01:02:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:07.437 01:02:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:07.437 01:02:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:07.437 01:02:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:07.437 01:02:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:07.437 01:02:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:07.437 01:02:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:07.437 01:02:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:07.437 01:02:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:07.437 01:02:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:07.437 01:02:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:07.437 01:02:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:07.437 01:02:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:07.437 01:02:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:07.696 01:02:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:07.696 01:02:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:07.696 01:02:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:07.696 01:02:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:07.696 01:02:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:07.696 01:02:40 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:07.696 01:02:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:07.696 01:02:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:07.696 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:07.696 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.307 ms 00:25:07.696 00:25:07.696 --- 10.0.0.2 ping statistics --- 00:25:07.696 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:07.696 rtt min/avg/max/mdev = 0.307/0.307/0.307/0.000 ms 00:25:07.696 01:02:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:07.696 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:07.696 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.133 ms 00:25:07.696 00:25:07.696 --- 10.0.0.1 ping statistics --- 00:25:07.696 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:07.696 rtt min/avg/max/mdev = 0.133/0.133/0.133/0.000 ms 00:25:07.696 01:02:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:07.696 01:02:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@450 -- # return 0 00:25:07.696 01:02:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:07.696 01:02:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:07.696 01:02:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:07.696 01:02:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:07.696 01:02:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:07.696 01:02:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:07.696 01:02:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:07.696 01:02:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:25:07.696 01:02:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:07.696 01:02:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:07.696 01:02:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:25:07.696 01:02:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@509 -- # nvmfpid=3014204 00:25:07.696 01:02:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:25:07.696 01:02:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@510 -- # waitforlisten 3014204 00:25:07.696 01:02:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@835 -- # '[' -z 3014204 ']' 00:25:07.696 01:02:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:07.696 01:02:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:07.696 01:02:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:07.696 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:07.696 01:02:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:07.696 01:02:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:25:07.954 [2024-12-06 01:02:40.408649] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 24.03.0 initialization... 00:25:07.954 [2024-12-06 01:02:40.408822] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:07.954 [2024-12-06 01:02:40.577997] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:08.213 [2024-12-06 01:02:40.718695] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:08.213 [2024-12-06 01:02:40.718774] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:08.213 [2024-12-06 01:02:40.718800] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:08.213 [2024-12-06 01:02:40.718833] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:08.213 [2024-12-06 01:02:40.718854] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:25:08.213 [2024-12-06 01:02:40.720526] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:08.779 01:02:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:08.779 01:02:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@868 -- # return 0 00:25:08.779 01:02:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:08.779 01:02:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:08.779 01:02:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:25:08.779 01:02:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:08.779 01:02:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:25:08.779 01:02:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:25:08.779 01:02:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:25:08.779 01:02:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:08.779 01:02:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:25:09.037 [2024-12-06 01:02:41.488205] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:09.037 01:02:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:09.037 01:02:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:25:09.037 01:02:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:09.037 01:02:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:25:09.037 01:02:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:09.037 01:02:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:25:09.037 01:02:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:09.037 01:02:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:25:09.037 Malloc0 00:25:09.037 01:02:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:09.037 01:02:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:25:09.037 01:02:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:09.037 01:02:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:25:09.037 01:02:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:09.037 01:02:41 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:09.037 01:02:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:09.037 01:02:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:25:09.037 [2024-12-06 01:02:41.559101] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:09.037 01:02:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:09.037 01:02:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=3014359 00:25:09.037 01:02:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:09.037 01:02:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=3014360 00:25:09.038 01:02:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:09.038 01:02:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=3014361 00:25:09.038 01:02:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:09.038 01:02:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 3014359 00:25:09.038 [2024-12-06 01:02:41.668288] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:25:09.038 [2024-12-06 01:02:41.678547] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:25:09.038 [2024-12-06 01:02:41.678882] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:25:10.412 Initializing NVMe Controllers 00:25:10.412 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:25:10.412 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3 00:25:10.412 Initialization complete. Launching workers. 
00:25:10.412 ======================================================== 00:25:10.412 Latency(us) 00:25:10.412 Device Information : IOPS MiB/s Average min max 00:25:10.412 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 2429.00 9.49 411.13 224.76 13538.26 00:25:10.412 ======================================================== 00:25:10.412 Total : 2429.00 9.49 411.13 224.76 13538.26 00:25:10.412 00:25:10.412 Initializing NVMe Controllers 00:25:10.412 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:25:10.412 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1 00:25:10.412 Initialization complete. Launching workers. 00:25:10.412 ======================================================== 00:25:10.412 Latency(us) 00:25:10.412 Device Information : IOPS MiB/s Average min max 00:25:10.412 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 2405.94 9.40 415.12 279.72 13619.72 00:25:10.412 ======================================================== 00:25:10.412 Total : 2405.94 9.40 415.12 279.72 13619.72 00:25:10.412 00:25:10.412 Initializing NVMe Controllers 00:25:10.412 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:25:10.412 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2 00:25:10.412 Initialization complete. Launching workers. 00:25:10.412 ======================================================== 00:25:10.412 Latency(us) 00:25:10.412 Device Information : IOPS MiB/s Average min max 00:25:10.412 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 2478.96 9.68 402.83 215.38 811.58 00:25:10.412 ======================================================== 00:25:10.412 Total : 2478.96 9.68 402.83 215.38 811.58 00:25:10.412 00:25:10.412 01:02:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 3014360 00:25:10.412 01:02:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 3014361 00:25:10.412 01:02:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:25:10.412 01:02:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini 00:25:10.412 01:02:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:10.412 01:02:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync 00:25:10.412 01:02:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:10.412 01:02:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e 00:25:10.412 01:02:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:10.412 01:02:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:10.412 rmmod nvme_tcp 00:25:10.412 rmmod nvme_fabrics 00:25:10.412 rmmod nvme_keyring 00:25:10.412 01:02:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:10.412 01:02:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e 00:25:10.412 01:02:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0 00:25:10.412 01:02:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@517 -- # '[' 
-n 3014204 ']' 00:25:10.412 01:02:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@518 -- # killprocess 3014204 00:25:10.413 01:02:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@954 -- # '[' -z 3014204 ']' 00:25:10.413 01:02:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@958 -- # kill -0 3014204 00:25:10.413 01:02:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # uname 00:25:10.413 01:02:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:10.413 01:02:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3014204 00:25:10.413 01:02:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:10.413 01:02:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:10.413 01:02:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3014204' 00:25:10.413 killing process with pid 3014204 00:25:10.413 01:02:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@973 -- # kill 3014204 00:25:10.413 01:02:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@978 -- # wait 3014204 00:25:11.852 01:02:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:11.852 01:02:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:11.852 01:02:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:11.852 01:02:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr 00:25:11.852 01:02:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-save 00:25:11.852 01:02:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:11.852 01:02:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-restore 00:25:11.852 01:02:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:11.852 01:02:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:11.852 01:02:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:11.852 01:02:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:11.852 01:02:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:13.753 01:02:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:13.753 00:25:13.753 real 0m8.362s 00:25:13.753 user 0m7.991s 00:25:13.753 sys 0m2.966s 00:25:13.753 01:02:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:13.753 01:02:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:25:13.753 ************************************ 00:25:13.753 END TEST nvmf_control_msg_list 00:25:13.753 ************************************ 
00:25:13.753 01:02:46 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:25:13.753 01:02:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:13.753 01:02:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:13.753 01:02:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:25:13.753 ************************************ 00:25:13.753 START TEST nvmf_wait_for_buf 00:25:13.753 ************************************ 00:25:13.753 01:02:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:25:13.753 * Looking for test storage... 00:25:13.753 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:25:13.753 01:02:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:25:13.753 01:02:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1711 -- # lcov --version 00:25:13.753 01:02:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:25:14.013 01:02:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:25:14.013 01:02:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:14.013 01:02:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:14.013 01:02:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:14.013 01:02:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:25:14.013 01:02:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:25:14.013 01:02:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:25:14.013 01:02:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:25:14.013 01:02:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:25:14.013 01:02:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:25:14.013 01:02:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:25:14.013 01:02:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:14.013 01:02:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:25:14.013 01:02:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@345 -- # : 1 00:25:14.013 01:02:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:14.013 01:02:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:14.013 01:02:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:25:14.013 01:02:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:25:14.013 01:02:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:14.013 01:02:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:25:14.013 01:02:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:25:14.013 01:02:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:25:14.013 01:02:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:25:14.013 01:02:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:14.013 01:02:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:25:14.013 01:02:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:25:14.013 01:02:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:14.013 01:02:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:14.013 01:02:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:25:14.013 01:02:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:14.013 01:02:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:25:14.013 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:14.013 --rc genhtml_branch_coverage=1 00:25:14.013 --rc genhtml_function_coverage=1 00:25:14.013 --rc genhtml_legend=1 00:25:14.013 --rc geninfo_all_blocks=1 00:25:14.013 --rc geninfo_unexecuted_blocks=1 00:25:14.013 00:25:14.013 ' 00:25:14.013 01:02:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:25:14.013 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:14.013 --rc genhtml_branch_coverage=1 00:25:14.013 --rc genhtml_function_coverage=1 00:25:14.013 --rc genhtml_legend=1 00:25:14.013 --rc geninfo_all_blocks=1 00:25:14.013 --rc geninfo_unexecuted_blocks=1 00:25:14.013 00:25:14.013 ' 00:25:14.013 01:02:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:25:14.013 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:14.013 --rc genhtml_branch_coverage=1 00:25:14.013 --rc genhtml_function_coverage=1 00:25:14.013 --rc genhtml_legend=1 00:25:14.013 --rc geninfo_all_blocks=1 00:25:14.013 --rc geninfo_unexecuted_blocks=1 00:25:14.013 00:25:14.013 ' 00:25:14.013 01:02:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:25:14.013 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:14.013 --rc genhtml_branch_coverage=1 00:25:14.013 --rc genhtml_function_coverage=1 00:25:14.013 --rc genhtml_legend=1 00:25:14.013 --rc geninfo_all_blocks=1 00:25:14.013 --rc geninfo_unexecuted_blocks=1 00:25:14.013 00:25:14.013 ' 00:25:14.013 01:02:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:14.013 01:02:46 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:25:14.013 01:02:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:14.014 01:02:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:14.014 01:02:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:14.014 01:02:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:14.014 01:02:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:14.014 01:02:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:14.014 01:02:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:14.014 01:02:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:14.014 01:02:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:14.014 01:02:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:14.014 01:02:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:25:14.014 01:02:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:25:14.014 01:02:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:14.014 01:02:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:14.014 01:02:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:14.014 01:02:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:14.014 01:02:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:14.014 01:02:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:25:14.014 01:02:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:14.014 01:02:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:14.014 01:02:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:14.014 01:02:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:14.014 01:02:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:14.014 01:02:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:14.014 01:02:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:25:14.014 01:02:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:14.014 01:02:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 00:25:14.014 01:02:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:14.014 01:02:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:14.014 01:02:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:14.014 01:02:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:14.014 01:02:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:14.014 01:02:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:14.014 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:14.014 01:02:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:14.014 01:02:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:14.014 01:02:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:14.014 01:02:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:25:14.014 01:02:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@469 -- # 
'[' -z tcp ']' 00:25:14.014 01:02:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:14.014 01:02:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:14.014 01:02:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:14.014 01:02:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:14.014 01:02:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:14.014 01:02:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:14.014 01:02:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:14.014 01:02:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:14.014 01:02:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:14.014 01:02:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@309 -- # xtrace_disable 00:25:14.014 01:02:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:15.918 01:02:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:15.918 01:02:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # pci_devs=() 00:25:15.918 01:02:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:15.918 01:02:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:15.918 01:02:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:15.918 01:02:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:15.918 01:02:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:15.918 01:02:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # net_devs=() 00:25:15.918 01:02:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:15.918 01:02:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # e810=() 00:25:15.918 01:02:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # local -ga e810 00:25:15.918 01:02:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # x722=() 00:25:15.918 01:02:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # local -ga x722 00:25:15.918 01:02:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # mlx=() 00:25:15.918 01:02:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # local -ga mlx 00:25:15.918 01:02:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:15.918 01:02:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:15.918 01:02:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:15.918 01:02:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:15.918 
01:02:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:15.918 01:02:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:15.919 01:02:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:15.919 01:02:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:15.919 01:02:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:15.919 01:02:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:15.919 01:02:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:15.919 01:02:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:15.919 01:02:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:15.919 01:02:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:15.919 01:02:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:15.919 01:02:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:15.919 01:02:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:15.919 01:02:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:15.919 01:02:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:15.919 01:02:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:25:15.919 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:25:15.919 01:02:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:15.919 01:02:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:15.919 01:02:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:15.919 01:02:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:15.919 01:02:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:15.919 01:02:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:15.919 01:02:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:25:15.919 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:25:15.919 01:02:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:15.919 01:02:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:15.919 01:02:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:15.919 01:02:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:15.919 01:02:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ 
tcp == rdma ]] 00:25:15.919 01:02:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:15.919 01:02:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:15.919 01:02:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:15.919 01:02:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:15.919 01:02:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:15.919 01:02:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:15.919 01:02:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:15.919 01:02:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:15.919 01:02:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:15.919 01:02:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:15.919 01:02:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:25:15.919 Found net devices under 0000:0a:00.0: cvl_0_0 00:25:15.919 01:02:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:15.919 01:02:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:15.919 01:02:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:15.919 01:02:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:15.919 01:02:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:15.919 01:02:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:15.919 01:02:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:15.919 01:02:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:15.919 01:02:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:25:15.919 Found net devices under 0000:0a:00.1: cvl_0_1 00:25:15.919 01:02:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:15.919 01:02:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:15.919 01:02:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # is_hw=yes 00:25:15.919 01:02:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:15.919 01:02:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:15.919 01:02:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:15.919 01:02:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:15.919 01:02:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:15.919 01:02:48 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:15.919 01:02:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:15.919 01:02:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:15.919 01:02:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:15.919 01:02:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:15.919 01:02:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:15.919 01:02:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:15.919 01:02:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:15.919 01:02:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:15.919 01:02:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:15.919 01:02:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:15.919 01:02:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:15.919 01:02:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:15.919 01:02:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:15.919 01:02:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:15.919 01:02:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:15.919 01:02:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:16.178 01:02:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:16.178 01:02:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:16.178 01:02:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:16.178 01:02:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:16.178 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:16.178 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.271 ms 00:25:16.178 00:25:16.178 --- 10.0.0.2 ping statistics --- 00:25:16.178 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:16.178 rtt min/avg/max/mdev = 0.271/0.271/0.271/0.000 ms 00:25:16.178 01:02:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:16.178 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:16.178 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.091 ms 00:25:16.178 00:25:16.178 --- 10.0.0.1 ping statistics --- 00:25:16.178 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:16.178 rtt min/avg/max/mdev = 0.091/0.091/0.091/0.000 ms 00:25:16.178 01:02:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:16.178 01:02:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@450 -- # return 0 00:25:16.178 01:02:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:16.178 01:02:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:16.178 01:02:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:16.178 01:02:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:16.178 01:02:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:16.179 01:02:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:16.179 01:02:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:16.179 01:02:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:25:16.179 01:02:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:16.179 01:02:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:16.179 01:02:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:16.179 01:02:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@509 -- # nvmfpid=3016687 00:25:16.179 01:02:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:25:16.179 01:02:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@510 -- # waitforlisten 3016687 00:25:16.179 01:02:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@835 -- # '[' -z 3016687 ']' 00:25:16.179 01:02:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:16.179 01:02:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:16.179 01:02:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:16.179 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:16.179 01:02:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:16.179 01:02:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:16.179 [2024-12-06 01:02:48.859523] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 24.03.0 initialization... 
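The trace above is the nvmf_tcp_init step from nvmf/common.sh followed by nvmfappstart: one E810 port (cvl_0_0) is moved into a private network namespace to act as the NVMe/TCP target, the second port (cvl_0_1) stays in the root namespace as the initiator, connectivity is proven with a ping in each direction, and only then is nvmf_tgt launched inside the namespace. A minimal hand-run sketch of the same network plumbing, assuming the interface names, namespace name, and 10.0.0.0/24 addresses used in this run (they will differ on other hosts):

    # Sketch only: cvl_0_0/cvl_0_1, the namespace name, and the addresses are the
    # values seen in this run, not fixed SPDK requirements.
    TGT_IF=cvl_0_0            # port handed to the SPDK target (moved into the netns)
    INI_IF=cvl_0_1            # port left in the root namespace for the initiator
    NS=cvl_0_0_ns_spdk

    ip -4 addr flush "$TGT_IF"; ip -4 addr flush "$INI_IF"
    ip netns add "$NS"
    ip link set "$TGT_IF" netns "$NS"
    ip addr add 10.0.0.1/24 dev "$INI_IF"
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
    ip link set "$INI_IF" up
    ip netns exec "$NS" ip link set "$TGT_IF" up
    ip netns exec "$NS" ip link set lo up

    # Open the NVMe/TCP port toward the initiator-side interface.
    iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT -m comment --comment SPDK_NVMF

    # Both directions must answer before the test proceeds.
    ping -c 1 10.0.0.2
    ip netns exec "$NS" ping -c 1 10.0.0.1

Splitting the two ports across namespaces is what forces the 10.0.0.1 to 10.0.0.2 traffic onto the physical link between them; with both ports in the root namespace the kernel would short-circuit it over loopback.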
00:25:16.179 [2024-12-06 01:02:48.859668] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:16.437 [2024-12-06 01:02:49.011008] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:16.696 [2024-12-06 01:02:49.148267] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:16.696 [2024-12-06 01:02:49.148359] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:16.696 [2024-12-06 01:02:49.148385] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:16.696 [2024-12-06 01:02:49.148409] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:16.696 [2024-12-06 01:02:49.148429] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:16.696 [2024-12-06 01:02:49.150099] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:17.263 01:02:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:17.263 01:02:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@868 -- # return 0 00:25:17.263 01:02:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:17.263 01:02:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:17.263 01:02:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:17.263 01:02:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:17.263 01:02:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:25:17.263 01:02:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:25:17.263 01:02:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:25:17.263 01:02:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:17.263 01:02:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:17.263 01:02:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:17.263 01:02:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:25:17.263 01:02:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:17.263 01:02:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:17.263 01:02:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:17.263 01:02:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:25:17.263 01:02:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:17.263 01:02:49 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:17.523 01:02:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:17.523 01:02:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:25:17.523 01:02:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:17.523 01:02:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:17.782 Malloc0 00:25:17.782 01:02:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:17.782 01:02:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:25:17.782 01:02:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:17.782 01:02:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:17.782 [2024-12-06 01:02:50.251009] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:17.782 01:02:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:17.782 01:02:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:25:17.782 01:02:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:17.782 01:02:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:17.782 01:02:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:17.782 01:02:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:25:17.782 01:02:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:17.782 01:02:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:17.782 01:02:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:17.782 01:02:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:17.782 01:02:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:17.782 01:02:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:17.782 [2024-12-06 01:02:50.275317] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:17.782 01:02:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:17.782 01:02:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:17.782 [2024-12-06 01:02:50.433094] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the 
discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:25:19.152 Initializing NVMe Controllers 00:25:19.152 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:25:19.152 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0 00:25:19.152 Initialization complete. Launching workers. 00:25:19.152 ======================================================== 00:25:19.152 Latency(us) 00:25:19.152 Device Information : IOPS MiB/s Average min max 00:25:19.152 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 119.00 14.88 34961.62 7904.34 71843.12 00:25:19.152 ======================================================== 00:25:19.152 Total : 119.00 14.88 34961.62 7904.34 71843.12 00:25:19.152 00:25:19.409 01:02:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats 00:25:19.409 01:02:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry' 00:25:19.409 01:02:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:19.409 01:02:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:19.409 01:02:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:19.409 01:02:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=1878 00:25:19.409 01:02:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 1878 -eq 0 ]] 00:25:19.409 01:02:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:25:19.409 01:02:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini 00:25:19.409 01:02:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:19.409 01:02:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync 00:25:19.409 01:02:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:19.409 01:02:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e 00:25:19.409 01:02:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:19.409 01:02:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:19.409 rmmod nvme_tcp 00:25:19.409 rmmod nvme_fabrics 00:25:19.409 rmmod nvme_keyring 00:25:19.409 01:02:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:19.409 01:02:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e 00:25:19.409 01:02:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0 00:25:19.409 01:02:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@517 -- # '[' -n 3016687 ']' 00:25:19.409 01:02:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@518 -- # killprocess 3016687 00:25:19.409 01:02:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@954 -- # '[' -z 3016687 ']' 00:25:19.409 01:02:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@958 -- # kill -0 3016687 00:25:19.409 01:02:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
common/autotest_common.sh@959 -- # uname 00:25:19.409 01:02:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:19.409 01:02:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3016687 00:25:19.409 01:02:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:19.409 01:02:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:19.409 01:02:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3016687' 00:25:19.409 killing process with pid 3016687 00:25:19.409 01:02:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@973 -- # kill 3016687 00:25:19.409 01:02:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@978 -- # wait 3016687 00:25:20.784 01:02:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:20.784 01:02:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:20.784 01:02:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:20.784 01:02:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr 00:25:20.784 01:02:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-save 00:25:20.784 01:02:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:20.784 01:02:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-restore 00:25:20.784 01:02:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:20.784 01:02:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:20.784 01:02:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:20.784 01:02:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:20.784 01:02:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:22.683 01:02:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:22.683 00:25:22.683 real 0m8.854s 00:25:22.683 user 0m5.395s 00:25:22.683 sys 0m2.199s 00:25:22.683 01:02:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:22.683 01:02:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:22.683 ************************************ 00:25:22.683 END TEST nvmf_wait_for_buf 00:25:22.683 ************************************ 00:25:22.683 01:02:55 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 1 -eq 1 ']' 00:25:22.683 01:02:55 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@48 -- # run_test nvmf_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:25:22.683 01:02:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:22.683 01:02:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:22.683 01:02:55 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:25:22.683 ************************************ 00:25:22.683 START TEST nvmf_fuzz 00:25:22.683 ************************************ 00:25:22.683 01:02:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:25:22.683 * Looking for test storage... 00:25:22.683 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:25:22.683 01:02:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:25:22.683 01:02:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1711 -- # lcov --version 00:25:22.683 01:02:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:25:22.940 01:02:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:25:22.940 01:02:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:22.940 01:02:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:22.940 01:02:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:22.940 01:02:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:25:22.940 01:02:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:25:22.940 01:02:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:25:22.940 01:02:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:25:22.940 01:02:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:25:22.940 01:02:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:25:22.940 01:02:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:25:22.940 01:02:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:22.940 01:02:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:25:22.940 01:02:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@345 -- # : 1 00:25:22.940 01:02:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:22.940 01:02:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:22.940 01:02:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@365 -- # decimal 1 00:25:22.940 01:02:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@353 -- # local d=1 00:25:22.940 01:02:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:22.940 01:02:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@355 -- # echo 1 00:25:22.940 01:02:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:25:22.940 01:02:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@366 -- # decimal 2 00:25:22.940 01:02:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@353 -- # local d=2 00:25:22.940 01:02:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:22.940 01:02:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@355 -- # echo 2 00:25:22.940 01:02:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:25:22.940 01:02:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:22.940 01:02:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:22.940 01:02:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@368 -- # return 0 00:25:22.940 01:02:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:22.940 01:02:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:25:22.940 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:22.940 --rc genhtml_branch_coverage=1 00:25:22.940 --rc genhtml_function_coverage=1 00:25:22.940 --rc genhtml_legend=1 00:25:22.940 --rc geninfo_all_blocks=1 00:25:22.940 --rc geninfo_unexecuted_blocks=1 00:25:22.940 00:25:22.940 ' 00:25:22.940 01:02:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:25:22.940 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:22.940 --rc genhtml_branch_coverage=1 00:25:22.940 --rc genhtml_function_coverage=1 00:25:22.940 --rc genhtml_legend=1 00:25:22.940 --rc geninfo_all_blocks=1 00:25:22.940 --rc geninfo_unexecuted_blocks=1 00:25:22.940 00:25:22.940 ' 00:25:22.940 01:02:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:25:22.940 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:22.940 --rc genhtml_branch_coverage=1 00:25:22.940 --rc genhtml_function_coverage=1 00:25:22.940 --rc genhtml_legend=1 00:25:22.940 --rc geninfo_all_blocks=1 00:25:22.940 --rc geninfo_unexecuted_blocks=1 00:25:22.940 00:25:22.941 ' 00:25:22.941 01:02:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:25:22.941 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:22.941 --rc genhtml_branch_coverage=1 00:25:22.941 --rc genhtml_function_coverage=1 00:25:22.941 --rc genhtml_legend=1 00:25:22.941 --rc geninfo_all_blocks=1 00:25:22.941 --rc geninfo_unexecuted_blocks=1 00:25:22.941 00:25:22.941 ' 00:25:22.941 01:02:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:22.941 01:02:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@7 -- # uname -s 00:25:22.941 01:02:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@7 -- # [[ 
Linux == FreeBSD ]] 00:25:22.941 01:02:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:22.941 01:02:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:22.941 01:02:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:22.941 01:02:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:22.941 01:02:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:22.941 01:02:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:22.941 01:02:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:22.941 01:02:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:22.941 01:02:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:22.941 01:02:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:25:22.941 01:02:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:25:22.941 01:02:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:22.941 01:02:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:22.941 01:02:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:22.941 01:02:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:22.941 01:02:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:22.941 01:02:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:25:22.941 01:02:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:22.941 01:02:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:22.941 01:02:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:22.941 01:02:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:22.941 01:02:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:22.941 01:02:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:22.941 01:02:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@5 -- # export PATH 00:25:22.941 01:02:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:22.941 01:02:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@51 -- # : 0 00:25:22.941 01:02:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:22.941 01:02:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:22.941 01:02:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:22.941 01:02:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:22.941 01:02:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:22.941 01:02:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:22.941 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:22.941 01:02:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:22.941 01:02:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:22.941 01:02:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:22.941 01:02:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:25:22.941 01:02:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:22.941 01:02:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@474 -- # trap nvmftestfini 
SIGINT SIGTERM EXIT 00:25:22.941 01:02:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:22.941 01:02:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:22.941 01:02:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:22.941 01:02:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:22.941 01:02:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:22.941 01:02:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:22.941 01:02:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:22.941 01:02:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:22.941 01:02:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@309 -- # xtrace_disable 00:25:22.941 01:02:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:24.839 01:02:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:24.839 01:02:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@315 -- # pci_devs=() 00:25:24.839 01:02:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:24.839 01:02:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:24.839 01:02:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:24.839 01:02:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:24.839 01:02:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:24.839 01:02:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@319 -- # net_devs=() 00:25:24.839 01:02:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:24.839 01:02:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@320 -- # e810=() 00:25:24.839 01:02:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@320 -- # local -ga e810 00:25:24.839 01:02:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@321 -- # x722=() 00:25:24.839 01:02:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@321 -- # local -ga x722 00:25:24.839 01:02:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@322 -- # mlx=() 00:25:24.839 01:02:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@322 -- # local -ga mlx 00:25:24.839 01:02:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:24.839 01:02:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:24.839 01:02:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:24.839 01:02:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:24.839 01:02:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:24.839 01:02:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:24.839 01:02:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:24.839 01:02:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:24.839 01:02:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:24.839 01:02:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:24.839 01:02:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:24.839 01:02:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:24.839 01:02:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:24.839 01:02:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:24.839 01:02:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:24.839 01:02:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:24.839 01:02:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:24.839 01:02:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:24.839 01:02:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:24.839 01:02:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:25:24.839 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:25:24.839 01:02:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:24.839 01:02:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:24.839 01:02:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:24.839 01:02:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:24.839 01:02:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:24.839 01:02:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:24.839 01:02:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:25:24.839 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:25:24.839 01:02:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:24.839 01:02:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:24.839 01:02:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:24.839 01:02:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:24.839 01:02:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:24.839 01:02:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:24.839 01:02:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:24.839 01:02:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:24.839 01:02:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:24.839 01:02:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:24.839 01:02:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:24.839 01:02:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:24.839 01:02:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:24.839 01:02:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:24.839 01:02:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:24.839 01:02:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:25:24.839 Found net devices under 0000:0a:00.0: cvl_0_0 00:25:24.839 01:02:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:24.839 01:02:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:24.839 01:02:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:24.839 01:02:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:24.839 01:02:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:24.839 01:02:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:24.840 01:02:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:24.840 01:02:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:24.840 01:02:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:25:24.840 Found net devices under 0000:0a:00.1: cvl_0_1 00:25:24.840 01:02:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:24.840 01:02:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:24.840 01:02:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@442 -- # is_hw=yes 00:25:24.840 01:02:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:24.840 01:02:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:24.840 01:02:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:24.840 01:02:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:24.840 01:02:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:24.840 01:02:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:24.840 01:02:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:24.840 01:02:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:24.840 01:02:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:24.840 01:02:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:24.840 01:02:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:24.840 01:02:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@263 -- # 
NVMF_SECOND_INITIATOR_IP= 00:25:24.840 01:02:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:24.840 01:02:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:24.840 01:02:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:24.840 01:02:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:24.840 01:02:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:24.840 01:02:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:24.840 01:02:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:24.840 01:02:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:24.840 01:02:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:24.840 01:02:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:24.840 01:02:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:24.840 01:02:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:24.840 01:02:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:25.099 01:02:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:25.099 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:25.099 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.166 ms 00:25:25.099 00:25:25.099 --- 10.0.0.2 ping statistics --- 00:25:25.099 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:25.099 rtt min/avg/max/mdev = 0.166/0.166/0.166/0.000 ms 00:25:25.099 01:02:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:25.099 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:25.099 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.079 ms 00:25:25.099 00:25:25.099 --- 10.0.0.1 ping statistics --- 00:25:25.099 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:25.099 rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms 00:25:25.099 01:02:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:25.099 01:02:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@450 -- # return 0 00:25:25.099 01:02:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:25.099 01:02:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:25.099 01:02:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:25.099 01:02:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:25.099 01:02:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:25.099 01:02:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:25.099 01:02:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:25.099 01:02:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@14 -- # nvmfpid=3019054 00:25:25.099 01:02:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@13 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:25:25.099 01:02:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:25:25.099 01:02:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@18 -- # waitforlisten 3019054 00:25:25.099 01:02:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@835 -- # '[' -z 3019054 ']' 00:25:25.099 01:02:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:25.099 01:02:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:25.099 01:02:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:25.099 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
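At this point the fuzz test has its own nvmf_tgt (pid 3019054, pinned to one core via -m 0x1) running inside the target namespace and is waiting on its RPC socket. The provisioning that follows, creating the TCP transport, backing it with a 64 MB malloc bdev, exposing it as nqn.2016-06.io.spdk:cnode1 on 10.0.0.2:4420, and then letting nvme_fuzz hammer it for 30 seconds, is plain SPDK RPC traffic. A hand-driven sketch of the same sequence, assuming an SPDK checkout at $SPDK and the default /var/tmp/spdk.sock socket (the wait loop and backgrounding below stand in for the autotest's nvmfappstart/waitforlisten/rpc_cmd helpers):

    # Sketch only; flags mirror what fabrics_fuzz.sh passes in this run.
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    RPC="$SPDK/scripts/rpc.py"

    ip netns exec cvl_0_0_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x1 &

    # Wait until the target answers RPCs on /var/tmp/spdk.sock.
    until "$RPC" rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done

    "$RPC" nvmf_create_transport -t tcp -o -u 8192
    "$RPC" bdev_malloc_create -b Malloc0 64 512
    "$RPC" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    "$RPC" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    "$RPC" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    # 30 s of randomized admin and I/O commands, same fixed seed as this run.
    "$SPDK/test/app/fuzz/nvme_fuzz/nvme_fuzz" -m 0x2 -t 30 -S 123456 \
        -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -N -a

The second nvme_fuzz pass further down drops -t 30 and instead replays the canned cases from example.json via -j, which is why it completes only a handful of commands.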
00:25:25.099 01:02:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:25.099 01:02:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:26.034 01:02:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:26.034 01:02:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@868 -- # return 0 00:25:26.034 01:02:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:26.034 01:02:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:26.034 01:02:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:26.034 01:02:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:26.034 01:02:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:25:26.034 01:02:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:26.034 01:02:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:26.292 Malloc0 00:25:26.292 01:02:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:26.292 01:02:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:26.292 01:02:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:26.292 01:02:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:26.292 01:02:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:26.292 01:02:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:26.292 01:02:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:26.292 01:02:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:26.292 01:02:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:26.292 01:02:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:26.292 01:02:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:26.292 01:02:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:26.292 01:02:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:26.292 01:02:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@27 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' 00:25:26.292 01:02:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -N -a 00:25:58.351 Fuzzing completed. 
Shutting down the fuzz application 00:25:58.351 00:25:58.351 Dumping successful admin opcodes: 00:25:58.351 9, 10, 00:25:58.351 Dumping successful io opcodes: 00:25:58.351 0, 9, 00:25:58.351 NS: 0x2000008efec0 I/O qp, Total commands completed: 323469, total successful commands: 1907, random_seed: 3961754496 00:25:58.351 NS: 0x2000008efec0 admin qp, Total commands completed: 40704, total successful commands: 10, random_seed: 1448629952 00:25:58.351 01:03:29 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -j /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:25:58.917 Fuzzing completed. Shutting down the fuzz application 00:25:58.917 00:25:58.917 Dumping successful admin opcodes: 00:25:58.917 00:25:58.917 Dumping successful io opcodes: 00:25:58.917 00:25:58.917 NS: 0x2000008efec0 I/O qp, Total commands completed: 0, total successful commands: 0, random_seed: 1148550874 00:25:58.917 NS: 0x2000008efec0 admin qp, Total commands completed: 16, total successful commands: 0, random_seed: 1148755398 00:25:58.917 01:03:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:58.917 01:03:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:58.917 01:03:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:58.917 01:03:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:58.917 01:03:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:25:58.917 01:03:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:25:58.917 01:03:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:58.917 01:03:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@121 -- # sync 00:25:58.917 01:03:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:58.917 01:03:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@124 -- # set +e 00:25:58.917 01:03:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:58.917 01:03:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:58.917 rmmod nvme_tcp 00:25:58.917 rmmod nvme_fabrics 00:25:58.917 rmmod nvme_keyring 00:25:58.917 01:03:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:58.917 01:03:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@128 -- # set -e 00:25:58.917 01:03:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@129 -- # return 0 00:25:58.917 01:03:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@517 -- # '[' -n 3019054 ']' 00:25:58.917 01:03:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@518 -- # killprocess 3019054 00:25:58.917 01:03:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@954 -- # '[' -z 3019054 ']' 00:25:58.917 01:03:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@958 -- # kill -0 3019054 00:25:58.917 01:03:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@959 -- # uname 00:25:58.917 01:03:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:58.917 01:03:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3019054 00:25:58.917 01:03:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:58.917 01:03:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:58.917 01:03:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3019054' 00:25:58.917 killing process with pid 3019054 00:25:58.917 01:03:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@973 -- # kill 3019054 00:25:58.917 01:03:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@978 -- # wait 3019054 00:26:00.293 01:03:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:00.293 01:03:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:00.293 01:03:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:00.293 01:03:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@297 -- # iptr 00:26:00.293 01:03:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@791 -- # iptables-save 00:26:00.293 01:03:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:00.293 01:03:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@791 -- # iptables-restore 00:26:00.293 01:03:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:00.293 01:03:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:00.293 01:03:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:00.293 01:03:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:00.293 01:03:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:02.195 01:03:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:02.195 01:03:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@39 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs1.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs2.txt 00:26:02.195 00:26:02.195 real 0m39.548s 00:26:02.195 user 0m57.224s 00:26:02.195 sys 0m13.086s 00:26:02.195 01:03:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:02.195 01:03:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:26:02.195 ************************************ 00:26:02.195 END TEST nvmf_fuzz 00:26:02.195 ************************************ 00:26:02.195 01:03:34 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@49 -- # run_test nvmf_multiconnection /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:26:02.195 01:03:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:26:02.195 01:03:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:02.195 01:03:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:26:02.195 ************************************ 00:26:02.195 START 
TEST nvmf_multiconnection 00:26:02.195 ************************************ 00:26:02.195 01:03:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:26:02.453 * Looking for test storage... 00:26:02.453 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:26:02.453 01:03:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:26:02.453 01:03:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1711 -- # lcov --version 00:26:02.453 01:03:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:26:02.453 01:03:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:26:02.453 01:03:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:02.453 01:03:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:02.453 01:03:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:02.453 01:03:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@336 -- # IFS=.-: 00:26:02.453 01:03:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@336 -- # read -ra ver1 00:26:02.453 01:03:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@337 -- # IFS=.-: 00:26:02.453 01:03:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@337 -- # read -ra ver2 00:26:02.453 01:03:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@338 -- # local 'op=<' 00:26:02.453 01:03:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@340 -- # ver1_l=2 00:26:02.453 01:03:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@341 -- # ver2_l=1 00:26:02.453 01:03:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:02.453 01:03:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@344 -- # case "$op" in 00:26:02.453 01:03:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@345 -- # : 1 00:26:02.453 01:03:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:02.453 01:03:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:02.454 01:03:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@365 -- # decimal 1 00:26:02.454 01:03:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@353 -- # local d=1 00:26:02.454 01:03:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:02.454 01:03:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@355 -- # echo 1 00:26:02.454 01:03:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@365 -- # ver1[v]=1 00:26:02.454 01:03:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@366 -- # decimal 2 00:26:02.454 01:03:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@353 -- # local d=2 00:26:02.454 01:03:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:02.454 01:03:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@355 -- # echo 2 00:26:02.454 01:03:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@366 -- # ver2[v]=2 00:26:02.454 01:03:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:02.454 01:03:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:02.454 01:03:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@368 -- # return 0 00:26:02.454 01:03:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:02.454 01:03:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:26:02.454 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:02.454 --rc genhtml_branch_coverage=1 00:26:02.454 --rc genhtml_function_coverage=1 00:26:02.454 --rc genhtml_legend=1 00:26:02.454 --rc geninfo_all_blocks=1 00:26:02.454 --rc geninfo_unexecuted_blocks=1 00:26:02.454 00:26:02.454 ' 00:26:02.454 01:03:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:26:02.454 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:02.454 --rc genhtml_branch_coverage=1 00:26:02.454 --rc genhtml_function_coverage=1 00:26:02.454 --rc genhtml_legend=1 00:26:02.454 --rc geninfo_all_blocks=1 00:26:02.454 --rc geninfo_unexecuted_blocks=1 00:26:02.454 00:26:02.454 ' 00:26:02.454 01:03:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:26:02.454 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:02.454 --rc genhtml_branch_coverage=1 00:26:02.454 --rc genhtml_function_coverage=1 00:26:02.454 --rc genhtml_legend=1 00:26:02.454 --rc geninfo_all_blocks=1 00:26:02.454 --rc geninfo_unexecuted_blocks=1 00:26:02.454 00:26:02.454 ' 00:26:02.454 01:03:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:26:02.454 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:02.454 --rc genhtml_branch_coverage=1 00:26:02.454 --rc genhtml_function_coverage=1 00:26:02.454 --rc genhtml_legend=1 00:26:02.454 --rc geninfo_all_blocks=1 00:26:02.454 --rc geninfo_unexecuted_blocks=1 00:26:02.454 00:26:02.454 ' 00:26:02.454 01:03:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:02.454 01:03:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@7 -- # uname -s 00:26:02.454 01:03:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:02.454 01:03:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:02.454 01:03:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:02.454 01:03:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:02.454 01:03:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:02.454 01:03:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:02.454 01:03:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:02.454 01:03:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:02.454 01:03:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:02.454 01:03:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:02.454 01:03:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:26:02.454 01:03:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:26:02.454 01:03:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:02.454 01:03:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:02.454 01:03:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:02.454 01:03:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:02.454 01:03:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:02.454 01:03:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@15 -- # shopt -s extglob 00:26:02.454 01:03:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:02.454 01:03:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:02.454 01:03:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:02.454 01:03:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:02.454 01:03:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:02.454 01:03:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:02.454 01:03:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@5 -- # export PATH 00:26:02.454 01:03:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:02.454 01:03:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@51 -- # : 0 00:26:02.454 01:03:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:02.454 01:03:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:02.454 01:03:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:02.454 01:03:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:02.454 01:03:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:02.454 01:03:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:02.454 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:02.454 01:03:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:02.454 01:03:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:02.454 01:03:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:02.454 01:03:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@11 -- # MALLOC_BDEV_SIZE=64 00:26:02.454 01:03:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:26:02.454 01:03:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:26:02.454 01:03:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@16 -- # nvmftestinit 00:26:02.454 01:03:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:02.454 01:03:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:02.454 01:03:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:02.454 01:03:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:02.454 01:03:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:02.454 01:03:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:02.454 01:03:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:02.454 01:03:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:02.454 01:03:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:02.454 01:03:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:02.454 01:03:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@309 -- # xtrace_disable 00:26:02.454 01:03:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:04.354 01:03:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:04.354 01:03:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@315 -- # pci_devs=() 00:26:04.354 01:03:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:04.354 01:03:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:04.354 01:03:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:04.354 01:03:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:04.354 01:03:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:04.354 01:03:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@319 -- # net_devs=() 00:26:04.354 01:03:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:04.354 01:03:36 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@320 -- # e810=() 00:26:04.354 01:03:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@320 -- # local -ga e810 00:26:04.354 01:03:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@321 -- # x722=() 00:26:04.354 01:03:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@321 -- # local -ga x722 00:26:04.354 01:03:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@322 -- # mlx=() 00:26:04.354 01:03:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@322 -- # local -ga mlx 00:26:04.354 01:03:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:04.354 01:03:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:04.354 01:03:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:04.354 01:03:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:04.354 01:03:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:04.354 01:03:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:04.354 01:03:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:04.354 01:03:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:04.354 01:03:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:04.354 01:03:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:04.354 01:03:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:04.354 01:03:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:04.354 01:03:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:04.354 01:03:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:04.354 01:03:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:04.354 01:03:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:04.354 01:03:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:04.354 01:03:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:04.354 01:03:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:04.354 01:03:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:26:04.354 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:26:04.354 01:03:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:04.354 01:03:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@372 -- # [[ ice == 
unbound ]] 00:26:04.354 01:03:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:04.354 01:03:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:04.354 01:03:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:04.354 01:03:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:04.354 01:03:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:26:04.354 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:26:04.354 01:03:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:04.354 01:03:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:04.354 01:03:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:04.354 01:03:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:04.354 01:03:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:04.354 01:03:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:04.354 01:03:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:04.354 01:03:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:04.354 01:03:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:04.354 01:03:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:04.354 01:03:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:04.354 01:03:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:04.354 01:03:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:04.354 01:03:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:04.354 01:03:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:04.354 01:03:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:26:04.354 Found net devices under 0000:0a:00.0: cvl_0_0 00:26:04.354 01:03:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:04.354 01:03:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:04.354 01:03:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:04.354 01:03:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:04.354 01:03:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:04.354 01:03:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:04.354 01:03:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection 
-- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:04.354 01:03:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:04.354 01:03:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:26:04.354 Found net devices under 0000:0a:00.1: cvl_0_1 00:26:04.354 01:03:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:04.354 01:03:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:04.354 01:03:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@442 -- # is_hw=yes 00:26:04.354 01:03:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:04.354 01:03:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:04.354 01:03:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:04.354 01:03:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:04.354 01:03:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:04.354 01:03:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:04.354 01:03:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:04.354 01:03:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:04.354 01:03:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:04.354 01:03:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:04.354 01:03:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:04.354 01:03:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:04.354 01:03:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:04.355 01:03:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:04.355 01:03:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:04.355 01:03:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:04.355 01:03:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:04.355 01:03:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:04.614 01:03:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:04.614 01:03:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:04.614 01:03:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:04.614 01:03:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set 
cvl_0_0 up 00:26:04.614 01:03:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:04.614 01:03:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:04.614 01:03:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:04.614 01:03:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:04.614 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:04.614 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.333 ms 00:26:04.614 00:26:04.614 --- 10.0.0.2 ping statistics --- 00:26:04.614 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:04.614 rtt min/avg/max/mdev = 0.333/0.333/0.333/0.000 ms 00:26:04.614 01:03:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:04.614 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:04.614 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.138 ms 00:26:04.614 00:26:04.614 --- 10.0.0.1 ping statistics --- 00:26:04.614 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:04.614 rtt min/avg/max/mdev = 0.138/0.138/0.138/0.000 ms 00:26:04.614 01:03:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:04.614 01:03:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@450 -- # return 0 00:26:04.614 01:03:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:04.614 01:03:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:04.614 01:03:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:04.614 01:03:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:04.614 01:03:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:04.614 01:03:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:04.614 01:03:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:04.614 01:03:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:26:04.614 01:03:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:04.614 01:03:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:04.614 01:03:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:04.614 01:03:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@509 -- # nvmfpid=3025662 00:26:04.614 01:03:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:26:04.614 01:03:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@510 -- # waitforlisten 3025662 00:26:04.614 01:03:37 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@835 -- # '[' -z 3025662 ']' 00:26:04.614 01:03:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:04.614 01:03:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:04.614 01:03:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:04.614 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:04.614 01:03:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:04.614 01:03:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:04.614 [2024-12-06 01:03:37.253008] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 24.03.0 initialization... 00:26:04.614 [2024-12-06 01:03:37.253173] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:04.873 [2024-12-06 01:03:37.401027] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:04.873 [2024-12-06 01:03:37.539778] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:04.873 [2024-12-06 01:03:37.539869] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:04.873 [2024-12-06 01:03:37.539896] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:04.873 [2024-12-06 01:03:37.539921] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:04.873 [2024-12-06 01:03:37.539940] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
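For readers following the trace, the target-side bring-up performed by nvmf_tcp_init and nvmfappstart above can be condensed into the sketch below. It only restates commands already visible in this log; the interface names (cvl_0_0 / cvl_0_1), the 10.0.0.x addresses, the nvmf_tgt flags and the waitforlisten helper are taken from this run, and details such as the SPDK_NVMF comment tag on the iptables rule are omitted, so treat it as a minimal sketch rather than the exact helper code in nvmf/common.sh.

# minimal sketch of the target-side network bring-up seen in the trace above
NS=cvl_0_0_ns_spdk                              # network namespace that will host the SPDK target
ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"                 # target-side E810 port moves into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1             # initiator-side port stays in the default namespace
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
ping -c 1 10.0.0.2                              # sanity-check both directions, as the log does
ip netns exec "$NS" ping -c 1 10.0.0.1
modprobe nvme-tcp
# launch the target inside the namespace and wait for its RPC socket (/var/tmp/spdk.sock)
ip netns exec "$NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &   # -m 0xF: 4 cores, -e: tracepoint mask
nvmfpid=$!
waitforlisten "$nvmfpid"                        # helper from autotest_common.sh, polls the RPC socket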
00:26:04.873 [2024-12-06 01:03:37.542684] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:04.873 [2024-12-06 01:03:37.542755] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:26:04.873 [2024-12-06 01:03:37.542856] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:04.873 [2024-12-06 01:03:37.542861] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:26:05.807 01:03:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:05.807 01:03:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@868 -- # return 0 00:26:05.807 01:03:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:05.807 01:03:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:05.807 01:03:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:05.807 01:03:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:05.807 01:03:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:05.807 01:03:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:05.807 01:03:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:05.807 [2024-12-06 01:03:38.258170] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:05.807 01:03:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:05.807 01:03:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # seq 1 11 00:26:05.807 01:03:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:05.807 01:03:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:26:05.807 01:03:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:05.808 01:03:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:05.808 Malloc1 00:26:05.808 01:03:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:05.808 01:03:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:26:05.808 01:03:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:05.808 01:03:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:05.808 01:03:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:05.808 01:03:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:26:05.808 01:03:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:05.808 01:03:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 
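The per-subsystem configuration that repeats through the rest of this trace follows one fixed pattern, condensed below. It assumes rpc_cmd is the autotest wrapper that forwards to scripts/rpc.py against the running target; the 64/512 malloc sizing, the count of 11 subsystems, the 10.0.0.2:4420 listener and the transport options are the MALLOC_BDEV_SIZE, MALLOC_BLOCK_SIZE, NVMF_SUBSYS and NVMF_TRANSPORT_OPTS values set earlier in multiconnection.sh and nvmf/common.sh.

# condensed form of the loop driving the repeated rpc_cmd calls below
rpc_cmd nvmf_create_transport -t tcp -o -u 8192         # TCP transport, options as set by nvmf/common.sh

NVMF_SUBSYS=11
for i in $(seq 1 $NVMF_SUBSYS); do
    rpc_cmd bdev_malloc_create 64 512 -b "Malloc$i"                              # 64 MiB RAM-backed bdev, 512 B blocks
    rpc_cmd nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK$i"   # allow any host, serial SPDK<i>
    rpc_cmd nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Malloc$i"       # expose the bdev as a namespace
    rpc_cmd nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" \
        -t tcp -a 10.0.0.2 -s 4420                                               # listen on the namespaced target IP
done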
00:26:05.808 01:03:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:05.808 01:03:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:05.808 01:03:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:05.808 01:03:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:05.808 [2024-12-06 01:03:38.372694] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:05.808 01:03:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:05.808 01:03:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:05.808 01:03:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:26:05.808 01:03:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:05.808 01:03:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:05.808 Malloc2 00:26:05.808 01:03:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:05.808 01:03:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:26:05.808 01:03:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:05.808 01:03:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:05.808 01:03:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:05.808 01:03:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:26:05.808 01:03:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:05.808 01:03:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:05.808 01:03:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:05.808 01:03:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:26:05.808 01:03:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:05.808 01:03:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:05.808 01:03:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:05.808 01:03:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:05.808 01:03:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:26:05.808 01:03:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:05.808 01:03:38 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:06.066 Malloc3 00:26:06.066 01:03:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:06.066 01:03:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:26:06.066 01:03:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:06.067 01:03:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:06.067 01:03:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:06.067 01:03:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:26:06.067 01:03:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:06.067 01:03:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:06.067 01:03:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:06.067 01:03:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:26:06.067 01:03:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:06.067 01:03:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:06.067 01:03:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:06.067 01:03:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:06.067 01:03:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:26:06.067 01:03:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:06.067 01:03:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:06.067 Malloc4 00:26:06.067 01:03:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:06.067 01:03:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:26:06.067 01:03:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:06.067 01:03:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:06.067 01:03:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:06.067 01:03:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:26:06.067 01:03:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:06.067 01:03:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:06.067 01:03:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:06.067 01:03:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:26:06.067 01:03:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:06.067 01:03:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:06.067 01:03:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:06.067 01:03:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:06.067 01:03:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:26:06.067 01:03:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:06.067 01:03:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:06.067 Malloc5 00:26:06.067 01:03:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:06.067 01:03:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 00:26:06.067 01:03:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:06.067 01:03:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:06.067 01:03:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:06.067 01:03:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:26:06.067 01:03:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:06.067 01:03:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:06.067 01:03:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:06.067 01:03:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t tcp -a 10.0.0.2 -s 4420 00:26:06.067 01:03:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:06.067 01:03:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:06.067 01:03:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:06.067 01:03:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:06.067 01:03:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:26:06.067 01:03:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:06.067 01:03:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:06.326 Malloc6 00:26:06.326 01:03:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:26:06.326 01:03:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:26:06.326 01:03:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:06.326 01:03:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:06.326 01:03:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:06.326 01:03:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:26:06.326 01:03:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:06.326 01:03:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:06.326 01:03:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:06.326 01:03:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t tcp -a 10.0.0.2 -s 4420 00:26:06.326 01:03:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:06.326 01:03:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:06.326 01:03:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:06.326 01:03:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:06.326 01:03:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:26:06.326 01:03:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:06.326 01:03:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:06.326 Malloc7 00:26:06.326 01:03:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:06.326 01:03:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:26:06.326 01:03:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:06.326 01:03:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:06.326 01:03:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:06.326 01:03:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:26:06.326 01:03:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:06.326 01:03:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:06.326 01:03:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:06.326 01:03:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t tcp -a 10.0.0.2 -s 4420 
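Once all eleven subsystems exist, the initiator side of the test (visible further down in this trace) connects to each one from the default namespace and waits for the corresponding block device to appear. The sketch below is a simplified restatement of those steps: the host NQN and host ID come from the nvme gen-hostnqn output earlier in this run, and the polling loop stands in for the waitforserial helper, which in the real script gives up after roughly 15 attempts.

# host-side attach, as repeated for each subsystem later in the trace
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55   # from 'nvme gen-hostnqn' above
HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55
for i in $(seq 1 11); do
    nvme connect --hostnqn="$HOSTNQN" --hostid="$HOSTID" \
        -t tcp -n "nqn.2016-06.io.spdk:cnode$i" -a 10.0.0.2 -s 4420
    # waitforserial: poll until a block device with serial SPDK<i> shows up in lsblk
    until [ "$(lsblk -l -o NAME,SERIAL | grep -c "SPDK$i")" -ge 1 ]; do
        sleep 2
    done
done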
00:26:06.326 01:03:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:06.326 01:03:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:06.326 01:03:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:06.326 01:03:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:06.326 01:03:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:26:06.326 01:03:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:06.326 01:03:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:06.585 Malloc8 00:26:06.585 01:03:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:06.585 01:03:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:26:06.585 01:03:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:06.585 01:03:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:06.585 01:03:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:06.585 01:03:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 00:26:06.585 01:03:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:06.585 01:03:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:06.585 01:03:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:06.585 01:03:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t tcp -a 10.0.0.2 -s 4420 00:26:06.585 01:03:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:06.585 01:03:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:06.585 01:03:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:06.585 01:03:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:06.585 01:03:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:26:06.585 01:03:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:06.585 01:03:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:06.585 Malloc9 00:26:06.585 01:03:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:06.585 01:03:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 00:26:06.585 01:03:39 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:06.585 01:03:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:06.586 01:03:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:06.586 01:03:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:26:06.586 01:03:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:06.586 01:03:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:06.586 01:03:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:06.586 01:03:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t tcp -a 10.0.0.2 -s 4420 00:26:06.586 01:03:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:06.586 01:03:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:06.586 01:03:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:06.586 01:03:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:06.586 01:03:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:26:06.586 01:03:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:06.586 01:03:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:06.586 Malloc10 00:26:06.586 01:03:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:06.586 01:03:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:26:06.586 01:03:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:06.586 01:03:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:06.586 01:03:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:06.586 01:03:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:26:06.586 01:03:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:06.586 01:03:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:06.586 01:03:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:06.586 01:03:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t tcp -a 10.0.0.2 -s 4420 00:26:06.586 01:03:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:06.586 01:03:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@10 -- # set +x 00:26:06.586 01:03:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:06.586 01:03:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:06.586 01:03:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:26:06.586 01:03:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:06.586 01:03:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:06.847 Malloc11 00:26:06.847 01:03:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:06.847 01:03:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:26:06.847 01:03:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:06.847 01:03:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:06.847 01:03:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:06.847 01:03:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:26:06.847 01:03:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:06.847 01:03:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:06.847 01:03:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:06.847 01:03:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t tcp -a 10.0.0.2 -s 4420 00:26:06.847 01:03:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:06.847 01:03:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:06.847 01:03:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:06.847 01:03:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # seq 1 11 00:26:06.847 01:03:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:06.847 01:03:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:26:07.452 01:03:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:26:07.452 01:03:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:26:07.452 01:03:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:26:07.452 01:03:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:26:07.452 01:03:40 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:26:09.994 01:03:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:26:09.994 01:03:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:26:09.994 01:03:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK1 00:26:09.994 01:03:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:26:09.994 01:03:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:26:09.994 01:03:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:26:09.994 01:03:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:09.994 01:03:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode2 -a 10.0.0.2 -s 4420 00:26:09.994 01:03:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:26:09.994 01:03:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:26:09.995 01:03:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:26:09.995 01:03:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:26:09.995 01:03:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:26:12.523 01:03:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:26:12.523 01:03:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:26:12.523 01:03:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK2 00:26:12.523 01:03:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:26:12.523 01:03:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:26:12.523 01:03:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:26:12.523 01:03:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:12.523 01:03:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode3 -a 10.0.0.2 -s 4420 00:26:12.780 01:03:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:26:12.780 01:03:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:26:12.780 01:03:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 
nvme_devices=0 00:26:12.780 01:03:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:26:12.780 01:03:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:26:15.304 01:03:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:26:15.304 01:03:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:26:15.304 01:03:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK3 00:26:15.304 01:03:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:26:15.304 01:03:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:26:15.304 01:03:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:26:15.304 01:03:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:15.304 01:03:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode4 -a 10.0.0.2 -s 4420 00:26:15.561 01:03:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK4 00:26:15.561 01:03:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:26:15.561 01:03:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:26:15.561 01:03:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:26:15.561 01:03:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:26:17.453 01:03:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:26:17.453 01:03:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:26:17.453 01:03:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK4 00:26:17.453 01:03:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:26:17.453 01:03:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:26:17.453 01:03:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:26:17.453 01:03:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:17.453 01:03:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode5 -a 10.0.0.2 -s 4420 00:26:18.385 01:03:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:26:18.385 01:03:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # 
local i=0 00:26:18.385 01:03:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:26:18.385 01:03:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:26:18.385 01:03:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:26:20.284 01:03:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:26:20.284 01:03:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:26:20.284 01:03:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK5 00:26:20.284 01:03:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:26:20.284 01:03:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:26:20.284 01:03:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:26:20.284 01:03:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:20.284 01:03:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode6 -a 10.0.0.2 -s 4420 00:26:21.219 01:03:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:26:21.219 01:03:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:26:21.219 01:03:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:26:21.219 01:03:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:26:21.219 01:03:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:26:23.114 01:03:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:26:23.114 01:03:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:26:23.114 01:03:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK6 00:26:23.370 01:03:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:26:23.370 01:03:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:26:23.370 01:03:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:26:23.370 01:03:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:23.370 01:03:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode7 -a 10.0.0.2 -s 4420 00:26:24.301 01:03:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
target/multiconnection.sh@30 -- # waitforserial SPDK7 00:26:24.301 01:03:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:26:24.301 01:03:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:26:24.301 01:03:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:26:24.301 01:03:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:26:26.196 01:03:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:26:26.196 01:03:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:26:26.196 01:03:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK7 00:26:26.196 01:03:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:26:26.196 01:03:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:26:26.196 01:03:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:26:26.196 01:03:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:26.196 01:03:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode8 -a 10.0.0.2 -s 4420 00:26:27.127 01:03:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:26:27.127 01:03:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:26:27.127 01:03:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:26:27.127 01:03:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:26:27.127 01:03:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:26:29.652 01:04:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:26:29.652 01:04:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:26:29.652 01:04:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK8 00:26:29.652 01:04:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:26:29.652 01:04:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:26:29.652 01:04:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:26:29.652 01:04:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:29.652 01:04:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 
--hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode9 -a 10.0.0.2 -s 4420 00:26:30.220 01:04:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:26:30.220 01:04:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:26:30.220 01:04:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:26:30.220 01:04:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:26:30.220 01:04:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:26:32.119 01:04:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:26:32.119 01:04:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:26:32.119 01:04:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK9 00:26:32.119 01:04:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:26:32.119 01:04:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:26:32.119 01:04:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:26:32.119 01:04:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:32.119 01:04:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode10 -a 10.0.0.2 -s 4420 00:26:33.489 01:04:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:26:33.489 01:04:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:26:33.489 01:04:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:26:33.489 01:04:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:26:33.489 01:04:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:26:35.381 01:04:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:26:35.381 01:04:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:26:35.381 01:04:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK10 00:26:35.381 01:04:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:26:35.381 01:04:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:26:35.382 01:04:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:26:35.382 01:04:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:35.382 01:04:07 
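
The connect-and-wait pattern repeated above for cnode1 through cnode10 (and next for cnode11) reduces to the sketch below, reconstructed from the multiconnection.sh@28-30 and autotest_common.sh@1202-1212 xtrace. Variable names inside waitforserial and the exact loop shape are approximations; the poll interval and retry limit are taken from the trace.

# Host-side connect loop (multiconnection.sh@28-30), reconstructed.
for i in $(seq 1 $NVMF_SUBSYS); do
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode$i -a 10.0.0.2 -s 4420 \
        --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
        --hostid=5b23e107-7094-e311-b1cb-001e67a97d55
    waitforserial SPDK$i     # block until lsblk shows a namespace with serial SPDK$i
done

# waitforserial, as suggested by the autotest_common.sh@1202-1212 trace (approximate).
waitforserial() {
    local serial=$1 i=0
    local nvme_device_counter=1 nvme_devices=0
    [[ -n ${2:-} ]] && nvme_device_counter=$2            # optional expected device count; unused in this test
    sleep 2
    while (( i++ <= 15 )); do
        nvme_devices=$(lsblk -l -o NAME,SERIAL | grep -c "$serial")
        (( nvme_devices == nvme_device_counter )) && return 0
        sleep 2
    done
    return 1
}
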
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode11 -a 10.0.0.2 -s 4420 00:26:36.311 01:04:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:26:36.311 01:04:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:26:36.311 01:04:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:26:36.311 01:04:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:26:36.311 01:04:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:26:38.209 01:04:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:26:38.209 01:04:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:26:38.209 01:04:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK11 00:26:38.209 01:04:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:26:38.209 01:04:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:26:38.209 01:04:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:26:38.209 01:04:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 00:26:38.209 [global] 00:26:38.209 thread=1 00:26:38.209 invalidate=1 00:26:38.209 rw=read 00:26:38.209 time_based=1 00:26:38.209 runtime=10 00:26:38.209 ioengine=libaio 00:26:38.209 direct=1 00:26:38.209 bs=262144 00:26:38.209 iodepth=64 00:26:38.209 norandommap=1 00:26:38.209 numjobs=1 00:26:38.209 00:26:38.209 [job0] 00:26:38.209 filename=/dev/nvme0n1 00:26:38.209 [job1] 00:26:38.209 filename=/dev/nvme10n1 00:26:38.209 [job2] 00:26:38.209 filename=/dev/nvme1n1 00:26:38.209 [job3] 00:26:38.209 filename=/dev/nvme2n1 00:26:38.209 [job4] 00:26:38.209 filename=/dev/nvme3n1 00:26:38.209 [job5] 00:26:38.209 filename=/dev/nvme4n1 00:26:38.209 [job6] 00:26:38.209 filename=/dev/nvme5n1 00:26:38.209 [job7] 00:26:38.209 filename=/dev/nvme6n1 00:26:38.209 [job8] 00:26:38.209 filename=/dev/nvme7n1 00:26:38.209 [job9] 00:26:38.209 filename=/dev/nvme8n1 00:26:38.209 [job10] 00:26:38.209 filename=/dev/nvme9n1 00:26:38.209 Could not set queue depth (nvme0n1) 00:26:38.209 Could not set queue depth (nvme10n1) 00:26:38.209 Could not set queue depth (nvme1n1) 00:26:38.209 Could not set queue depth (nvme2n1) 00:26:38.209 Could not set queue depth (nvme3n1) 00:26:38.209 Could not set queue depth (nvme4n1) 00:26:38.209 Could not set queue depth (nvme5n1) 00:26:38.209 Could not set queue depth (nvme6n1) 00:26:38.209 Could not set queue depth (nvme7n1) 00:26:38.209 Could not set queue depth (nvme8n1) 00:26:38.209 Could not set queue depth (nvme9n1) 00:26:38.467 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:38.467 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 
256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:38.467 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:38.467 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:38.467 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:38.467 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:38.467 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:38.468 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:38.468 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:38.468 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:38.468 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:38.468 fio-3.35 00:26:38.468 Starting 11 threads 00:26:50.749 00:26:50.749 job0: (groupid=0, jobs=1): err= 0: pid=3030198: Fri Dec 6 01:04:21 2024 00:26:50.749 read: IOPS=154, BW=38.7MiB/s (40.6MB/s)(401MiB/10348msec) 00:26:50.749 slat (usec): min=9, max=342249, avg=4764.66, stdev=24990.89 00:26:50.749 clat (msec): min=24, max=1473, avg=408.20, stdev=272.50 00:26:50.749 lat (msec): min=24, max=1473, avg=412.96, stdev=277.04 00:26:50.749 clat percentiles (msec): 00:26:50.749 | 1.00th=[ 49], 5.00th=[ 101], 10.00th=[ 124], 20.00th=[ 153], 00:26:50.749 | 30.00th=[ 251], 40.00th=[ 305], 50.00th=[ 342], 60.00th=[ 401], 00:26:50.749 | 70.00th=[ 481], 80.00th=[ 567], 90.00th=[ 835], 95.00th=[ 969], 00:26:50.749 | 99.00th=[ 1099], 99.50th=[ 1301], 99.90th=[ 1469], 99.95th=[ 1469], 00:26:50.749 | 99.99th=[ 1469] 00:26:50.749 bw ( KiB/s): min=10752, max=94530, per=4.20%, avg=39356.75, stdev=24636.45, samples=20 00:26:50.749 iops : min= 42, max= 369, avg=153.70, stdev=96.18, samples=20 00:26:50.749 lat (msec) : 50=1.25%, 100=3.18%, 250=25.47%, 500=41.89%, 750=13.67% 00:26:50.749 lat (msec) : 1000=9.80%, 2000=4.74% 00:26:50.749 cpu : usr=0.07%, sys=0.46%, ctx=240, majf=0, minf=4097 00:26:50.749 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.5%, 16=1.0%, 32=2.0%, >=64=96.1% 00:26:50.749 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:50.749 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:50.749 issued rwts: total=1602,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:50.749 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:50.749 job1: (groupid=0, jobs=1): err= 0: pid=3030204: Fri Dec 6 01:04:21 2024 00:26:50.749 read: IOPS=1181, BW=295MiB/s (310MB/s)(2960MiB/10025msec) 00:26:50.749 slat (usec): min=12, max=155789, avg=840.53, stdev=3637.49 00:26:50.749 clat (msec): min=18, max=423, avg=53.31, stdev=37.44 00:26:50.749 lat (msec): min=19, max=438, avg=54.15, stdev=37.96 00:26:50.749 clat percentiles (msec): 00:26:50.749 | 1.00th=[ 29], 5.00th=[ 32], 10.00th=[ 36], 20.00th=[ 39], 00:26:50.749 | 30.00th=[ 40], 40.00th=[ 42], 50.00th=[ 44], 60.00th=[ 47], 00:26:50.749 | 70.00th=[ 50], 80.00th=[ 56], 90.00th=[ 81], 95.00th=[ 106], 00:26:50.749 | 99.00th=[ 284], 99.50th=[ 355], 99.90th=[ 397], 99.95th=[ 405], 00:26:50.749 | 99.99th=[ 422] 00:26:50.749 bw ( KiB/s): min=41984, max=426496, 
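
The fio-wrapper invocation above ("-p nvmf -i 262144 -d 64 -t read -r 10", and later "-t randwrite") maps directly onto the [global]/[jobN] job file printed in the log: -i sets bs, -d sets iodepth, -t sets rw, -r sets runtime, and one [jobN] section is emitted per connected namespace. The snippet below is only an approximation of what the wrapper produces; the real scripts/fio-wrapper filters to the NVMe-oF-attached devices rather than globbing every /dev/nvme*n1.

# Approximate equivalent of "fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10".
# bs=262144 (-i), iodepth=64 (-d), rw=read (-t), runtime=10 (-r).
cat > /tmp/multiconnection.fio <<'EOF'
[global]
thread=1
invalidate=1
rw=read
time_based=1
runtime=10
ioengine=libaio
direct=1
bs=262144
iodepth=64
norandommap=1
numjobs=1
EOF
n=0
for dev in /dev/nvme*n1; do                       # simplification: the wrapper selects nvmf devices only
    printf '[job%d]\nfilename=%s\n' "$n" "$dev" >> /tmp/multiconnection.fio
    n=$((n + 1))
done
fio /tmp/multiconnection.fio
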
per=32.18%, avg=301461.45, stdev=111125.60, samples=20 00:26:50.749 iops : min= 164, max= 1666, avg=1177.50, stdev=434.11, samples=20 00:26:50.749 lat (msec) : 20=0.05%, 50=70.29%, 100=23.79%, 250=4.78%, 500=1.09% 00:26:50.749 cpu : usr=0.52%, sys=3.42%, ctx=1038, majf=0, minf=4098 00:26:50.749 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:26:50.749 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:50.749 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:50.749 issued rwts: total=11841,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:50.749 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:50.749 job2: (groupid=0, jobs=1): err= 0: pid=3030205: Fri Dec 6 01:04:21 2024 00:26:50.749 read: IOPS=216, BW=54.1MiB/s (56.7MB/s)(559MiB/10341msec) 00:26:50.749 slat (usec): min=13, max=307525, avg=4339.02, stdev=21510.33 00:26:50.749 clat (msec): min=33, max=1318, avg=291.39, stdev=266.40 00:26:50.749 lat (msec): min=33, max=1330, avg=295.73, stdev=270.21 00:26:50.749 clat percentiles (msec): 00:26:50.749 | 1.00th=[ 81], 5.00th=[ 102], 10.00th=[ 111], 20.00th=[ 134], 00:26:50.749 | 30.00th=[ 155], 40.00th=[ 182], 50.00th=[ 207], 60.00th=[ 230], 00:26:50.749 | 70.00th=[ 253], 80.00th=[ 284], 90.00th=[ 785], 95.00th=[ 986], 00:26:50.749 | 99.00th=[ 1200], 99.50th=[ 1301], 99.90th=[ 1301], 99.95th=[ 1301], 00:26:50.749 | 99.99th=[ 1318] 00:26:50.749 bw ( KiB/s): min= 9728, max=135920, per=5.93%, avg=55588.45, stdev=38479.45, samples=20 00:26:50.749 iops : min= 38, max= 530, avg=217.05, stdev=150.19, samples=20 00:26:50.749 lat (msec) : 50=0.18%, 100=4.52%, 250=64.45%, 500=16.95%, 750=2.28% 00:26:50.750 lat (msec) : 1000=7.33%, 2000=4.29% 00:26:50.750 cpu : usr=0.12%, sys=0.76%, ctx=268, majf=0, minf=4097 00:26:50.750 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.7%, 32=1.4%, >=64=97.2% 00:26:50.750 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:50.750 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:50.750 issued rwts: total=2236,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:50.750 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:50.750 job3: (groupid=0, jobs=1): err= 0: pid=3030206: Fri Dec 6 01:04:21 2024 00:26:50.750 read: IOPS=135, BW=33.8MiB/s (35.4MB/s)(349MiB/10342msec) 00:26:50.750 slat (usec): min=9, max=298415, avg=6586.16, stdev=28104.95 00:26:50.750 clat (msec): min=52, max=1467, avg=466.76, stdev=316.73 00:26:50.750 lat (msec): min=52, max=1537, avg=473.35, stdev=321.80 00:26:50.750 clat percentiles (msec): 00:26:50.750 | 1.00th=[ 54], 5.00th=[ 105], 10.00th=[ 131], 20.00th=[ 180], 00:26:50.750 | 30.00th=[ 211], 40.00th=[ 253], 50.00th=[ 485], 60.00th=[ 550], 00:26:50.750 | 70.00th=[ 584], 80.00th=[ 676], 90.00th=[ 919], 95.00th=[ 1099], 00:26:50.750 | 99.00th=[ 1401], 99.50th=[ 1469], 99.90th=[ 1469], 99.95th=[ 1469], 00:26:50.750 | 99.99th=[ 1469] 00:26:50.750 bw ( KiB/s): min= 7680, max=97792, per=3.64%, avg=34117.80, stdev=23419.50, samples=20 00:26:50.750 iops : min= 30, max= 382, avg=133.20, stdev=91.48, samples=20 00:26:50.750 lat (msec) : 100=4.58%, 250=35.36%, 500=11.02%, 750=32.43%, 1000=8.95% 00:26:50.750 lat (msec) : 2000=7.66% 00:26:50.750 cpu : usr=0.04%, sys=0.46%, ctx=201, majf=0, minf=4097 00:26:50.750 IO depths : 1=0.1%, 2=0.1%, 4=0.3%, 8=0.6%, 16=1.1%, 32=2.3%, >=64=95.5% 00:26:50.750 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:50.750 complete : 0=0.0%, 4=99.9%, 8=0.0%, 
16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:50.750 issued rwts: total=1397,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:50.750 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:50.750 job4: (groupid=0, jobs=1): err= 0: pid=3030207: Fri Dec 6 01:04:21 2024 00:26:50.750 read: IOPS=272, BW=68.1MiB/s (71.5MB/s)(683MiB/10026msec) 00:26:50.750 slat (usec): min=8, max=370714, avg=3183.64, stdev=18067.98 00:26:50.750 clat (usec): min=1497, max=1414.7k, avg=231424.83, stdev=299963.95 00:26:50.750 lat (usec): min=1529, max=1516.0k, avg=234608.47, stdev=304615.92 00:26:50.750 clat percentiles (msec): 00:26:50.750 | 1.00th=[ 8], 5.00th=[ 36], 10.00th=[ 40], 20.00th=[ 46], 00:26:50.750 | 30.00th=[ 51], 40.00th=[ 54], 50.00th=[ 55], 60.00th=[ 65], 00:26:50.750 | 70.00th=[ 236], 80.00th=[ 550], 90.00th=[ 684], 95.00th=[ 852], 00:26:50.750 | 99.00th=[ 1150], 99.50th=[ 1200], 99.90th=[ 1250], 99.95th=[ 1284], 00:26:50.750 | 99.99th=[ 1418] 00:26:50.750 bw ( KiB/s): min=14848, max=311808, per=7.30%, avg=68345.30, stdev=92554.00, samples=20 00:26:50.750 iops : min= 58, max= 1218, avg=266.90, stdev=361.57, samples=20 00:26:50.750 lat (msec) : 2=0.07%, 4=0.55%, 10=0.59%, 20=0.91%, 50=27.04% 00:26:50.750 lat (msec) : 100=38.79%, 250=2.23%, 500=6.33%, 750=15.26%, 1000=5.27% 00:26:50.750 lat (msec) : 2000=2.96% 00:26:50.750 cpu : usr=0.14%, sys=0.91%, ctx=416, majf=0, minf=4097 00:26:50.750 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.6%, 32=1.2%, >=64=97.7% 00:26:50.750 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:50.750 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:50.750 issued rwts: total=2733,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:50.750 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:50.750 job5: (groupid=0, jobs=1): err= 0: pid=3030208: Fri Dec 6 01:04:21 2024 00:26:50.750 read: IOPS=523, BW=131MiB/s (137MB/s)(1355MiB/10351msec) 00:26:50.750 slat (usec): min=12, max=1003.8k, avg=1845.04, stdev=15956.21 00:26:50.750 clat (msec): min=28, max=1552, avg=120.31, stdev=139.51 00:26:50.750 lat (msec): min=28, max=1552, avg=122.16, stdev=141.25 00:26:50.750 clat percentiles (msec): 00:26:50.750 | 1.00th=[ 47], 5.00th=[ 61], 10.00th=[ 67], 20.00th=[ 72], 00:26:50.750 | 30.00th=[ 77], 40.00th=[ 82], 50.00th=[ 86], 60.00th=[ 90], 00:26:50.750 | 70.00th=[ 95], 80.00th=[ 111], 90.00th=[ 211], 95.00th=[ 317], 00:26:50.750 | 99.00th=[ 1083], 99.50th=[ 1200], 99.90th=[ 1536], 99.95th=[ 1536], 00:26:50.750 | 99.99th=[ 1552] 00:26:50.750 bw ( KiB/s): min=28160, max=214528, per=15.40%, avg=144264.95, stdev=65250.36, samples=19 00:26:50.750 iops : min= 110, max= 838, avg=563.47, stdev=254.96, samples=19 00:26:50.750 lat (msec) : 50=1.79%, 100=72.35%, 250=18.05%, 500=6.33%, 750=0.31% 00:26:50.750 lat (msec) : 2000=1.16% 00:26:50.750 cpu : usr=0.22%, sys=1.70%, ctx=375, majf=0, minf=4097 00:26:50.750 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.8% 00:26:50.750 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:50.750 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:50.750 issued rwts: total=5418,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:50.750 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:50.750 job6: (groupid=0, jobs=1): err= 0: pid=3030209: Fri Dec 6 01:04:21 2024 00:26:50.750 read: IOPS=265, BW=66.4MiB/s (69.6MB/s)(687MiB/10345msec) 00:26:50.750 slat (usec): min=12, max=892400, avg=3489.54, stdev=25229.54 00:26:50.750 clat 
(msec): min=37, max=1981, avg=237.42, stdev=250.04 00:26:50.750 lat (msec): min=41, max=1981, avg=240.91, stdev=253.42 00:26:50.750 clat percentiles (msec): 00:26:50.750 | 1.00th=[ 68], 5.00th=[ 83], 10.00th=[ 90], 20.00th=[ 113], 00:26:50.750 | 30.00th=[ 118], 40.00th=[ 136], 50.00th=[ 161], 60.00th=[ 199], 00:26:50.750 | 70.00th=[ 230], 80.00th=[ 259], 90.00th=[ 401], 95.00th=[ 785], 00:26:50.750 | 99.00th=[ 1569], 99.50th=[ 1569], 99.90th=[ 1569], 99.95th=[ 1636], 00:26:50.750 | 99.99th=[ 1989] 00:26:50.750 bw ( KiB/s): min= 3584, max=164864, per=7.33%, avg=68651.95, stdev=47524.89, samples=20 00:26:50.750 iops : min= 14, max= 644, avg=268.10, stdev=185.68, samples=20 00:26:50.750 lat (msec) : 50=0.36%, 100=12.89%, 250=64.79%, 500=16.39%, 750=0.29% 00:26:50.750 lat (msec) : 1000=1.42%, 2000=3.86% 00:26:50.750 cpu : usr=0.12%, sys=0.83%, ctx=262, majf=0, minf=4097 00:26:50.750 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.6%, 32=1.2%, >=64=97.7% 00:26:50.750 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:50.750 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:50.750 issued rwts: total=2746,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:50.750 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:50.750 job7: (groupid=0, jobs=1): err= 0: pid=3030215: Fri Dec 6 01:04:21 2024 00:26:50.750 read: IOPS=205, BW=51.3MiB/s (53.8MB/s)(531MiB/10345msec) 00:26:50.750 slat (usec): min=8, max=586839, avg=3481.34, stdev=25520.70 00:26:50.750 clat (msec): min=22, max=1540, avg=308.08, stdev=324.45 00:26:50.750 lat (msec): min=22, max=1540, avg=311.56, stdev=327.51 00:26:50.750 clat percentiles (msec): 00:26:50.750 | 1.00th=[ 28], 5.00th=[ 37], 10.00th=[ 41], 20.00th=[ 49], 00:26:50.750 | 30.00th=[ 144], 40.00th=[ 176], 50.00th=[ 207], 60.00th=[ 234], 00:26:50.750 | 70.00th=[ 271], 80.00th=[ 435], 90.00th=[ 844], 95.00th=[ 1028], 00:26:50.750 | 99.00th=[ 1401], 99.50th=[ 1401], 99.90th=[ 1536], 99.95th=[ 1536], 00:26:50.750 | 99.99th=[ 1536] 00:26:50.750 bw ( KiB/s): min= 3072, max=262656, per=5.63%, avg=52700.20, stdev=58018.22, samples=20 00:26:50.750 iops : min= 12, max= 1026, avg=205.80, stdev=226.64, samples=20 00:26:50.750 lat (msec) : 50=20.02%, 100=4.47%, 250=40.27%, 500=16.30%, 750=6.74% 00:26:50.750 lat (msec) : 1000=4.33%, 2000=7.87% 00:26:50.750 cpu : usr=0.09%, sys=0.47%, ctx=276, majf=0, minf=4097 00:26:50.750 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.8%, 32=1.5%, >=64=97.0% 00:26:50.750 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:50.750 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:50.750 issued rwts: total=2123,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:50.750 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:50.750 job8: (groupid=0, jobs=1): err= 0: pid=3030217: Fri Dec 6 01:04:21 2024 00:26:50.750 read: IOPS=475, BW=119MiB/s (125MB/s)(1230MiB/10351msec) 00:26:50.750 slat (usec): min=8, max=472591, avg=1569.01, stdev=11440.52 00:26:50.750 clat (msec): min=2, max=914, avg=132.96, stdev=136.52 00:26:50.750 lat (msec): min=2, max=915, avg=134.53, stdev=138.22 00:26:50.750 clat percentiles (msec): 00:26:50.750 | 1.00th=[ 12], 5.00th=[ 31], 10.00th=[ 42], 20.00th=[ 48], 00:26:50.750 | 30.00th=[ 54], 40.00th=[ 71], 50.00th=[ 85], 60.00th=[ 97], 00:26:50.750 | 70.00th=[ 125], 80.00th=[ 182], 90.00th=[ 309], 95.00th=[ 523], 00:26:50.750 | 99.00th=[ 634], 99.50th=[ 642], 99.90th=[ 676], 99.95th=[ 810], 00:26:50.750 | 99.99th=[ 919] 
00:26:50.750 bw ( KiB/s): min=31232, max=311296, per=13.27%, avg=124330.55, stdev=91234.26, samples=20 00:26:50.750 iops : min= 122, max= 1216, avg=485.60, stdev=356.45, samples=20 00:26:50.750 lat (msec) : 4=0.22%, 10=0.69%, 20=1.02%, 50=23.47%, 100=35.36% 00:26:50.750 lat (msec) : 250=25.12%, 500=8.37%, 750=5.67%, 1000=0.08% 00:26:50.750 cpu : usr=0.18%, sys=1.08%, ctx=771, majf=0, minf=3721 00:26:50.750 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:26:50.750 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:50.750 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:50.750 issued rwts: total=4921,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:50.750 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:50.750 job9: (groupid=0, jobs=1): err= 0: pid=3030218: Fri Dec 6 01:04:21 2024 00:26:50.750 read: IOPS=169, BW=42.4MiB/s (44.5MB/s)(439MiB/10350msec) 00:26:50.750 slat (usec): min=8, max=427971, avg=4193.18, stdev=25191.09 00:26:50.750 clat (msec): min=24, max=1531, avg=372.39, stdev=324.78 00:26:50.750 lat (msec): min=24, max=1531, avg=376.59, stdev=329.96 00:26:50.750 clat percentiles (msec): 00:26:50.750 | 1.00th=[ 34], 5.00th=[ 65], 10.00th=[ 71], 20.00th=[ 91], 00:26:50.750 | 30.00th=[ 120], 40.00th=[ 167], 50.00th=[ 243], 60.00th=[ 359], 00:26:50.750 | 70.00th=[ 567], 80.00th=[ 659], 90.00th=[ 827], 95.00th=[ 1020], 00:26:50.750 | 99.00th=[ 1250], 99.50th=[ 1485], 99.90th=[ 1485], 99.95th=[ 1536], 00:26:50.750 | 99.99th=[ 1536] 00:26:50.750 bw ( KiB/s): min= 1536, max=168622, per=4.62%, avg=43309.20, stdev=43165.37, samples=20 00:26:50.750 iops : min= 6, max= 658, avg=169.10, stdev=168.40, samples=20 00:26:50.750 lat (msec) : 50=3.19%, 100=19.98%, 250=29.08%, 500=13.94%, 750=20.15% 00:26:50.750 lat (msec) : 1000=7.17%, 2000=6.49% 00:26:50.750 cpu : usr=0.07%, sys=0.51%, ctx=226, majf=0, minf=4097 00:26:50.750 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.5%, 16=0.9%, 32=1.8%, >=64=96.4% 00:26:50.751 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:50.751 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:50.751 issued rwts: total=1757,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:50.751 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:50.751 job10: (groupid=0, jobs=1): err= 0: pid=3030219: Fri Dec 6 01:04:21 2024 00:26:50.751 read: IOPS=106, BW=26.6MiB/s (27.9MB/s)(276MiB/10348msec) 00:26:50.751 slat (usec): min=10, max=267827, avg=8829.41, stdev=30688.22 00:26:50.751 clat (msec): min=39, max=1741, avg=590.99, stdev=289.61 00:26:50.751 lat (msec): min=39, max=1741, avg=599.82, stdev=294.55 00:26:50.751 clat percentiles (msec): 00:26:50.751 | 1.00th=[ 41], 5.00th=[ 205], 10.00th=[ 247], 20.00th=[ 368], 00:26:50.751 | 30.00th=[ 414], 40.00th=[ 531], 50.00th=[ 567], 60.00th=[ 609], 00:26:50.751 | 70.00th=[ 667], 80.00th=[ 785], 90.00th=[ 1003], 95.00th=[ 1200], 00:26:50.751 | 99.00th=[ 1536], 99.50th=[ 1536], 99.90th=[ 1536], 99.95th=[ 1737], 00:26:50.751 | 99.99th=[ 1737] 00:26:50.751 bw ( KiB/s): min=11776, max=58880, per=2.84%, avg=26591.05, stdev=12378.30, samples=20 00:26:50.751 iops : min= 46, max= 230, avg=103.80, stdev=48.29, samples=20 00:26:50.751 lat (msec) : 50=1.81%, 250=8.88%, 500=25.11%, 750=42.34%, 1000=12.24% 00:26:50.751 lat (msec) : 2000=9.61% 00:26:50.751 cpu : usr=0.05%, sys=0.39%, ctx=138, majf=0, minf=4097 00:26:50.751 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.7%, 16=1.5%, 32=2.9%, >=64=94.3% 00:26:50.751 submit : 
0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:50.751 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:50.751 issued rwts: total=1103,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:50.751 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:50.751 00:26:50.751 Run status group 0 (all jobs): 00:26:50.751 READ: bw=915MiB/s (959MB/s), 26.6MiB/s-295MiB/s (27.9MB/s-310MB/s), io=9469MiB (9929MB), run=10025-10351msec 00:26:50.751 00:26:50.751 Disk stats (read/write): 00:26:50.751 nvme0n1: ios=3115/0, merge=0/0, ticks=1231041/0, in_queue=1231041, util=97.32% 00:26:50.751 nvme10n1: ios=23347/0, merge=0/0, ticks=1243794/0, in_queue=1243794, util=97.47% 00:26:50.751 nvme1n1: ios=4403/0, merge=0/0, ticks=1237755/0, in_queue=1237755, util=97.79% 00:26:50.751 nvme2n1: ios=2746/0, merge=0/0, ticks=1245067/0, in_queue=1245067, util=97.93% 00:26:50.751 nvme3n1: ios=5139/0, merge=0/0, ticks=1243255/0, in_queue=1243255, util=97.95% 00:26:50.751 nvme4n1: ios=10719/0, merge=0/0, ticks=1241515/0, in_queue=1241515, util=98.30% 00:26:50.751 nvme5n1: ios=5366/0, merge=0/0, ticks=1173527/0, in_queue=1173527, util=98.45% 00:26:50.751 nvme6n1: ios=4163/0, merge=0/0, ticks=1211740/0, in_queue=1211740, util=98.56% 00:26:50.751 nvme7n1: ios=9715/0, merge=0/0, ticks=1230473/0, in_queue=1230473, util=98.97% 00:26:50.751 nvme8n1: ios=3418/0, merge=0/0, ticks=1224975/0, in_queue=1224975, util=99.13% 00:26:50.751 nvme9n1: ios=2149/0, merge=0/0, ticks=1244725/0, in_queue=1244725, util=99.25% 00:26:50.751 01:04:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10 00:26:50.751 [global] 00:26:50.751 thread=1 00:26:50.751 invalidate=1 00:26:50.751 rw=randwrite 00:26:50.751 time_based=1 00:26:50.751 runtime=10 00:26:50.751 ioengine=libaio 00:26:50.751 direct=1 00:26:50.751 bs=262144 00:26:50.751 iodepth=64 00:26:50.751 norandommap=1 00:26:50.751 numjobs=1 00:26:50.751 00:26:50.751 [job0] 00:26:50.751 filename=/dev/nvme0n1 00:26:50.751 [job1] 00:26:50.751 filename=/dev/nvme10n1 00:26:50.751 [job2] 00:26:50.751 filename=/dev/nvme1n1 00:26:50.751 [job3] 00:26:50.751 filename=/dev/nvme2n1 00:26:50.751 [job4] 00:26:50.751 filename=/dev/nvme3n1 00:26:50.751 [job5] 00:26:50.751 filename=/dev/nvme4n1 00:26:50.751 [job6] 00:26:50.751 filename=/dev/nvme5n1 00:26:50.751 [job7] 00:26:50.751 filename=/dev/nvme6n1 00:26:50.751 [job8] 00:26:50.751 filename=/dev/nvme7n1 00:26:50.751 [job9] 00:26:50.751 filename=/dev/nvme8n1 00:26:50.751 [job10] 00:26:50.751 filename=/dev/nvme9n1 00:26:50.751 Could not set queue depth (nvme0n1) 00:26:50.751 Could not set queue depth (nvme10n1) 00:26:50.751 Could not set queue depth (nvme1n1) 00:26:50.751 Could not set queue depth (nvme2n1) 00:26:50.751 Could not set queue depth (nvme3n1) 00:26:50.751 Could not set queue depth (nvme4n1) 00:26:50.751 Could not set queue depth (nvme5n1) 00:26:50.751 Could not set queue depth (nvme6n1) 00:26:50.751 Could not set queue depth (nvme7n1) 00:26:50.751 Could not set queue depth (nvme8n1) 00:26:50.751 Could not set queue depth (nvme9n1) 00:26:50.751 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:50.751 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:50.751 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 
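
As a quick cross-check of the READ summary above: 9469 MiB transferred over the longest job runtime of 10.351 s works out to 9469 / 10.351 ≈ 915 MiB/s, matching the aggregate bandwidth fio reports, and the largest single contributor is job1 at 295 MiB/s, i.e. the per=32.18% share shown in its bw line.
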
256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:50.751 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:50.751 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:50.751 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:50.751 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:50.751 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:50.751 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:50.751 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:50.751 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:50.751 fio-3.35 00:26:50.751 Starting 11 threads 00:27:00.727 00:27:00.727 job0: (groupid=0, jobs=1): err= 0: pid=3030943: Fri Dec 6 01:04:32 2024 00:27:00.727 write: IOPS=318, BW=79.7MiB/s (83.6MB/s)(814MiB/10201msec); 0 zone resets 00:27:00.727 slat (usec): min=19, max=218367, avg=1512.88, stdev=7133.25 00:27:00.727 clat (usec): min=1246, max=751918, avg=199006.87, stdev=163162.18 00:27:00.727 lat (usec): min=1279, max=785635, avg=200519.75, stdev=164861.93 00:27:00.727 clat percentiles (msec): 00:27:00.727 | 1.00th=[ 5], 5.00th=[ 15], 10.00th=[ 24], 20.00th=[ 45], 00:27:00.727 | 30.00th=[ 70], 40.00th=[ 115], 50.00th=[ 184], 60.00th=[ 211], 00:27:00.727 | 70.00th=[ 247], 80.00th=[ 321], 90.00th=[ 464], 95.00th=[ 535], 00:27:00.727 | 99.00th=[ 667], 99.50th=[ 693], 99.90th=[ 735], 99.95th=[ 735], 00:27:00.727 | 99.99th=[ 751] 00:27:00.727 bw ( KiB/s): min=30720, max=193024, per=9.39%, avg=81698.20, stdev=42981.80, samples=20 00:27:00.727 iops : min= 120, max= 754, avg=319.10, stdev=167.89, samples=20 00:27:00.727 lat (msec) : 2=0.22%, 4=0.58%, 10=2.24%, 20=5.13%, 50=14.44% 00:27:00.727 lat (msec) : 100=14.38%, 250=33.80%, 500=21.60%, 750=7.56%, 1000=0.03% 00:27:00.727 cpu : usr=0.94%, sys=1.25%, ctx=2418, majf=0, minf=1 00:27:00.727 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=1.0%, >=64=98.1% 00:27:00.727 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:00.727 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:00.727 issued rwts: total=0,3254,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:00.727 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:00.727 job1: (groupid=0, jobs=1): err= 0: pid=3030955: Fri Dec 6 01:04:32 2024 00:27:00.727 write: IOPS=266, BW=66.6MiB/s (69.8MB/s)(676MiB/10144msec); 0 zone resets 00:27:00.727 slat (usec): min=19, max=340719, avg=2465.65, stdev=9399.07 00:27:00.727 clat (msec): min=5, max=722, avg=237.68, stdev=145.01 00:27:00.727 lat (msec): min=5, max=722, avg=240.14, stdev=146.63 00:27:00.727 clat percentiles (msec): 00:27:00.727 | 1.00th=[ 13], 5.00th=[ 42], 10.00th=[ 77], 20.00th=[ 125], 00:27:00.727 | 30.00th=[ 155], 40.00th=[ 180], 50.00th=[ 201], 60.00th=[ 241], 00:27:00.727 | 70.00th=[ 288], 80.00th=[ 347], 90.00th=[ 464], 95.00th=[ 542], 00:27:00.727 | 99.00th=[ 609], 99.50th=[ 651], 99.90th=[ 726], 99.95th=[ 726], 00:27:00.727 | 99.99th=[ 726] 00:27:00.727 bw ( KiB/s): min=16384, max=135680, per=7.77%, avg=67558.40, 
stdev=29993.93, samples=20 00:27:00.727 iops : min= 64, max= 530, avg=263.90, stdev=117.16, samples=20 00:27:00.727 lat (msec) : 10=0.63%, 20=1.55%, 50=4.33%, 100=9.03%, 250=46.74% 00:27:00.727 lat (msec) : 500=30.13%, 750=7.59% 00:27:00.727 cpu : usr=0.85%, sys=0.77%, ctx=1480, majf=0, minf=1 00:27:00.727 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.6%, 32=1.2%, >=64=97.7% 00:27:00.727 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:00.727 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:00.727 issued rwts: total=0,2702,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:00.727 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:00.727 job2: (groupid=0, jobs=1): err= 0: pid=3030958: Fri Dec 6 01:04:32 2024 00:27:00.727 write: IOPS=270, BW=67.7MiB/s (71.0MB/s)(683MiB/10087msec); 0 zone resets 00:27:00.727 slat (usec): min=23, max=339410, avg=2903.34, stdev=11147.77 00:27:00.727 clat (msec): min=9, max=611, avg=233.23, stdev=153.13 00:27:00.727 lat (msec): min=9, max=611, avg=236.14, stdev=154.98 00:27:00.727 clat percentiles (msec): 00:27:00.727 | 1.00th=[ 17], 5.00th=[ 32], 10.00th=[ 50], 20.00th=[ 97], 00:27:00.727 | 30.00th=[ 133], 40.00th=[ 167], 50.00th=[ 190], 60.00th=[ 232], 00:27:00.728 | 70.00th=[ 309], 80.00th=[ 384], 90.00th=[ 481], 95.00th=[ 523], 00:27:00.728 | 99.00th=[ 584], 99.50th=[ 592], 99.90th=[ 609], 99.95th=[ 609], 00:27:00.728 | 99.99th=[ 609] 00:27:00.728 bw ( KiB/s): min=22573, max=164864, per=7.85%, avg=68303.05, stdev=36080.01, samples=20 00:27:00.728 iops : min= 88, max= 644, avg=266.80, stdev=140.95, samples=20 00:27:00.728 lat (msec) : 10=0.04%, 20=1.54%, 50=8.79%, 100=10.18%, 250=41.60% 00:27:00.728 lat (msec) : 500=30.79%, 750=7.07% 00:27:00.728 cpu : usr=0.75%, sys=0.99%, ctx=1367, majf=0, minf=1 00:27:00.728 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.6%, 32=1.2%, >=64=97.7% 00:27:00.728 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:00.728 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:00.728 issued rwts: total=0,2731,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:00.728 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:00.728 job3: (groupid=0, jobs=1): err= 0: pid=3030959: Fri Dec 6 01:04:32 2024 00:27:00.728 write: IOPS=212, BW=53.1MiB/s (55.7MB/s)(541MiB/10185msec); 0 zone resets 00:27:00.728 slat (usec): min=22, max=124885, avg=3389.19, stdev=8715.53 00:27:00.728 clat (usec): min=1115, max=709052, avg=297721.24, stdev=147299.71 00:27:00.728 lat (usec): min=1164, max=714321, avg=301110.43, stdev=148958.99 00:27:00.728 clat percentiles (msec): 00:27:00.728 | 1.00th=[ 21], 5.00th=[ 72], 10.00th=[ 111], 20.00th=[ 182], 00:27:00.728 | 30.00th=[ 220], 40.00th=[ 247], 50.00th=[ 284], 60.00th=[ 317], 00:27:00.728 | 70.00th=[ 359], 80.00th=[ 426], 90.00th=[ 477], 95.00th=[ 609], 00:27:00.728 | 99.00th=[ 676], 99.50th=[ 693], 99.90th=[ 701], 99.95th=[ 701], 00:27:00.728 | 99.99th=[ 709] 00:27:00.728 bw ( KiB/s): min=23040, max=83456, per=6.18%, avg=53768.30, stdev=19364.68, samples=20 00:27:00.728 iops : min= 90, max= 326, avg=210.00, stdev=75.59, samples=20 00:27:00.728 lat (msec) : 2=0.32%, 4=0.14%, 10=0.09%, 20=0.42%, 50=2.31% 00:27:00.728 lat (msec) : 100=5.59%, 250=32.27%, 500=51.04%, 750=7.81% 00:27:00.728 cpu : usr=0.62%, sys=0.80%, ctx=1084, majf=0, minf=1 00:27:00.728 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.7%, 32=1.5%, >=64=97.1% 00:27:00.728 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
64=0.0%, >=64=0.0% 00:27:00.728 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:00.728 issued rwts: total=0,2163,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:00.728 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:00.728 job4: (groupid=0, jobs=1): err= 0: pid=3030960: Fri Dec 6 01:04:32 2024 00:27:00.728 write: IOPS=247, BW=61.9MiB/s (64.9MB/s)(631MiB/10184msec); 0 zone resets 00:27:00.728 slat (usec): min=23, max=96066, avg=3273.88, stdev=7859.05 00:27:00.728 clat (msec): min=6, max=630, avg=254.77, stdev=132.30 00:27:00.728 lat (msec): min=6, max=630, avg=258.04, stdev=133.54 00:27:00.728 clat percentiles (msec): 00:27:00.728 | 1.00th=[ 28], 5.00th=[ 60], 10.00th=[ 84], 20.00th=[ 129], 00:27:00.728 | 30.00th=[ 182], 40.00th=[ 205], 50.00th=[ 230], 60.00th=[ 271], 00:27:00.728 | 70.00th=[ 313], 80.00th=[ 388], 90.00th=[ 464], 95.00th=[ 489], 00:27:00.728 | 99.00th=[ 527], 99.50th=[ 567], 99.90th=[ 625], 99.95th=[ 634], 00:27:00.728 | 99.99th=[ 634] 00:27:00.728 bw ( KiB/s): min=32768, max=164352, per=7.24%, avg=62979.80, stdev=33012.93, samples=20 00:27:00.728 iops : min= 128, max= 642, avg=246.00, stdev=128.97, samples=20 00:27:00.728 lat (msec) : 10=0.28%, 20=0.12%, 50=1.90%, 100=9.87%, 250=41.89% 00:27:00.728 lat (msec) : 500=41.97%, 750=3.96% 00:27:00.728 cpu : usr=0.73%, sys=0.92%, ctx=963, majf=0, minf=1 00:27:00.728 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.6%, 32=1.3%, >=64=97.5% 00:27:00.728 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:00.728 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:00.728 issued rwts: total=0,2523,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:00.728 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:00.728 job5: (groupid=0, jobs=1): err= 0: pid=3030962: Fri Dec 6 01:04:32 2024 00:27:00.728 write: IOPS=305, BW=76.4MiB/s (80.1MB/s)(771MiB/10094msec); 0 zone resets 00:27:00.728 slat (usec): min=17, max=87879, avg=1778.76, stdev=5795.92 00:27:00.728 clat (usec): min=1188, max=592721, avg=207549.16, stdev=133504.06 00:27:00.728 lat (usec): min=1221, max=592760, avg=209327.92, stdev=134660.48 00:27:00.728 clat percentiles (msec): 00:27:00.728 | 1.00th=[ 4], 5.00th=[ 9], 10.00th=[ 24], 20.00th=[ 86], 00:27:00.728 | 30.00th=[ 127], 40.00th=[ 174], 50.00th=[ 199], 60.00th=[ 230], 00:27:00.728 | 70.00th=[ 271], 80.00th=[ 309], 90.00th=[ 405], 95.00th=[ 460], 00:27:00.728 | 99.00th=[ 542], 99.50th=[ 558], 99.90th=[ 584], 99.95th=[ 592], 00:27:00.728 | 99.99th=[ 592] 00:27:00.728 bw ( KiB/s): min=38912, max=123904, per=8.89%, avg=77367.05, stdev=24882.09, samples=20 00:27:00.728 iops : min= 152, max= 484, avg=302.20, stdev=97.22, samples=20 00:27:00.728 lat (msec) : 2=0.26%, 4=1.59%, 10=4.25%, 20=2.95%, 50=6.90% 00:27:00.728 lat (msec) : 100=6.06%, 250=42.04%, 500=32.90%, 750=3.05% 00:27:00.728 cpu : usr=0.81%, sys=1.11%, ctx=2009, majf=0, minf=1 00:27:00.728 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.0%, >=64=98.0% 00:27:00.728 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:00.728 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:00.728 issued rwts: total=0,3085,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:00.728 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:00.728 job6: (groupid=0, jobs=1): err= 0: pid=3030963: Fri Dec 6 01:04:32 2024 00:27:00.728 write: IOPS=385, BW=96.4MiB/s (101MB/s)(973MiB/10093msec); 0 zone resets 00:27:00.728 slat (usec): min=17, 
max=75216, avg=1505.54, stdev=4840.14 00:27:00.728 clat (usec): min=940, max=564027, avg=164465.89, stdev=121528.13 00:27:00.728 lat (usec): min=976, max=564078, avg=165971.43, stdev=122448.87 00:27:00.728 clat percentiles (msec): 00:27:00.728 | 1.00th=[ 5], 5.00th=[ 17], 10.00th=[ 40], 20.00th=[ 64], 00:27:00.728 | 30.00th=[ 87], 40.00th=[ 114], 50.00th=[ 132], 60.00th=[ 155], 00:27:00.728 | 70.00th=[ 201], 80.00th=[ 249], 90.00th=[ 359], 95.00th=[ 430], 00:27:00.728 | 99.00th=[ 502], 99.50th=[ 531], 99.90th=[ 567], 99.95th=[ 567], 00:27:00.728 | 99.99th=[ 567] 00:27:00.728 bw ( KiB/s): min=36352, max=269312, per=11.26%, avg=97971.20, stdev=60785.12, samples=20 00:27:00.728 iops : min= 142, max= 1052, avg=382.70, stdev=237.44, samples=20 00:27:00.728 lat (usec) : 1000=0.03% 00:27:00.728 lat (msec) : 2=0.31%, 4=0.44%, 10=2.26%, 20=2.62%, 50=6.99% 00:27:00.728 lat (msec) : 100=22.01%, 250=45.42%, 500=18.82%, 750=1.11% 00:27:00.728 cpu : usr=1.00%, sys=1.40%, ctx=2341, majf=0, minf=1 00:27:00.728 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:27:00.728 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:00.728 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:00.728 issued rwts: total=0,3890,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:00.728 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:00.728 job7: (groupid=0, jobs=1): err= 0: pid=3030964: Fri Dec 6 01:04:32 2024 00:27:00.728 write: IOPS=313, BW=78.3MiB/s (82.1MB/s)(797MiB/10185msec); 0 zone resets 00:27:00.728 slat (usec): min=21, max=60153, avg=2730.22, stdev=6248.57 00:27:00.728 clat (msec): min=13, max=618, avg=201.57, stdev=126.02 00:27:00.728 lat (msec): min=13, max=628, avg=204.30, stdev=127.37 00:27:00.728 clat percentiles (msec): 00:27:00.728 | 1.00th=[ 54], 5.00th=[ 57], 10.00th=[ 61], 20.00th=[ 101], 00:27:00.728 | 30.00th=[ 113], 40.00th=[ 132], 50.00th=[ 159], 60.00th=[ 209], 00:27:00.728 | 70.00th=[ 262], 80.00th=[ 300], 90.00th=[ 384], 95.00th=[ 464], 00:27:00.728 | 99.00th=[ 575], 99.50th=[ 592], 99.90th=[ 617], 99.95th=[ 617], 00:27:00.728 | 99.99th=[ 617] 00:27:00.728 bw ( KiB/s): min=30208, max=211968, per=9.20%, avg=80025.60, stdev=46704.57, samples=20 00:27:00.728 iops : min= 118, max= 828, avg=312.60, stdev=182.44, samples=20 00:27:00.728 lat (msec) : 20=0.19%, 50=0.25%, 100=19.94%, 250=47.91%, 500=28.22% 00:27:00.728 lat (msec) : 750=3.48% 00:27:00.728 cpu : usr=0.90%, sys=1.06%, ctx=987, majf=0, minf=1 00:27:00.728 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.0%, >=64=98.0% 00:27:00.728 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:00.728 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:00.728 issued rwts: total=0,3189,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:00.728 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:00.728 job8: (groupid=0, jobs=1): err= 0: pid=3030965: Fri Dec 6 01:04:32 2024 00:27:00.728 write: IOPS=358, BW=89.6MiB/s (93.9MB/s)(915MiB/10213msec); 0 zone resets 00:27:00.728 slat (usec): min=18, max=218171, avg=2141.37, stdev=7223.65 00:27:00.728 clat (usec): min=1113, max=694004, avg=175955.52, stdev=147717.86 00:27:00.728 lat (usec): min=1156, max=694064, avg=178096.89, stdev=149372.82 00:27:00.728 clat percentiles (msec): 00:27:00.728 | 1.00th=[ 4], 5.00th=[ 8], 10.00th=[ 23], 20.00th=[ 63], 00:27:00.728 | 30.00th=[ 73], 40.00th=[ 101], 50.00th=[ 124], 60.00th=[ 153], 00:27:00.728 | 70.00th=[ 220], 80.00th=[ 
305], 90.00th=[ 422], 95.00th=[ 489], 00:27:00.728 | 99.00th=[ 567], 99.50th=[ 609], 99.90th=[ 667], 99.95th=[ 693], 00:27:00.728 | 99.99th=[ 693] 00:27:00.728 bw ( KiB/s): min=26112, max=235520, per=10.58%, avg=92040.35, stdev=52759.53, samples=20 00:27:00.728 iops : min= 102, max= 920, avg=359.50, stdev=206.10, samples=20 00:27:00.728 lat (msec) : 2=0.19%, 4=1.48%, 10=4.40%, 20=3.77%, 50=5.14% 00:27:00.728 lat (msec) : 100=25.01%, 250=33.62%, 500=22.08%, 750=4.32% 00:27:00.728 cpu : usr=1.10%, sys=1.17%, ctx=1756, majf=0, minf=1 00:27:00.728 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.9%, >=64=98.3% 00:27:00.728 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:00.728 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:00.728 issued rwts: total=0,3659,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:00.728 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:00.728 job9: (groupid=0, jobs=1): err= 0: pid=3030966: Fri Dec 6 01:04:32 2024 00:27:00.728 write: IOPS=336, BW=84.1MiB/s (88.2MB/s)(858MiB/10200msec); 0 zone resets 00:27:00.728 slat (usec): min=20, max=173087, avg=1542.88, stdev=6207.09 00:27:00.728 clat (usec): min=1131, max=743064, avg=188438.48, stdev=156892.55 00:27:00.728 lat (usec): min=1167, max=760409, avg=189981.36, stdev=158334.06 00:27:00.728 clat percentiles (usec): 00:27:00.728 | 1.00th=[ 1958], 5.00th=[ 5211], 10.00th=[ 13304], 20.00th=[ 34866], 00:27:00.728 | 30.00th=[ 77071], 40.00th=[109577], 50.00th=[137364], 60.00th=[187696], 00:27:00.728 | 70.00th=[270533], 80.00th=[346031], 90.00th=[434111], 95.00th=[467665], 00:27:00.729 | 99.00th=[566232], 99.50th=[650118], 99.90th=[742392], 99.95th=[742392], 00:27:00.729 | 99.99th=[742392] 00:27:00.729 bw ( KiB/s): min=28672, max=192000, per=9.91%, avg=86249.35, stdev=50193.53, samples=20 00:27:00.729 iops : min= 112, max= 750, avg=336.90, stdev=196.08, samples=20 00:27:00.729 lat (msec) : 2=1.05%, 4=2.36%, 10=5.59%, 20=3.53%, 50=10.49% 00:27:00.729 lat (msec) : 100=13.46%, 250=30.94%, 500=29.72%, 750=2.86% 00:27:00.729 cpu : usr=0.91%, sys=1.26%, ctx=2411, majf=0, minf=1 00:27:00.729 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=0.9%, >=64=98.2% 00:27:00.729 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:00.729 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:00.729 issued rwts: total=0,3432,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:00.729 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:00.729 job10: (groupid=0, jobs=1): err= 0: pid=3030967: Fri Dec 6 01:04:32 2024 00:27:00.729 write: IOPS=401, BW=100MiB/s (105MB/s)(1020MiB/10175msec); 0 zone resets 00:27:00.729 slat (usec): min=22, max=160292, avg=551.35, stdev=4532.35 00:27:00.729 clat (usec): min=1047, max=728517, avg=158937.59, stdev=140182.61 00:27:00.729 lat (usec): min=1119, max=728581, avg=159488.94, stdev=140555.70 00:27:00.729 clat percentiles (msec): 00:27:00.729 | 1.00th=[ 5], 5.00th=[ 10], 10.00th=[ 19], 20.00th=[ 37], 00:27:00.729 | 30.00th=[ 74], 40.00th=[ 108], 50.00th=[ 125], 60.00th=[ 150], 00:27:00.729 | 70.00th=[ 192], 80.00th=[ 243], 90.00th=[ 347], 95.00th=[ 460], 00:27:00.729 | 99.00th=[ 642], 99.50th=[ 667], 99.90th=[ 709], 99.95th=[ 718], 00:27:00.729 | 99.99th=[ 726] 00:27:00.729 bw ( KiB/s): min=37376, max=230912, per=11.82%, avg=102869.85, stdev=50879.27, samples=20 00:27:00.729 iops : min= 146, max= 902, avg=401.80, stdev=198.76, samples=20 00:27:00.729 lat (msec) : 2=0.27%, 4=0.66%, 
10=4.48%, 20=5.37%, 50=14.02% 00:27:00.729 lat (msec) : 100=12.57%, 250=43.71%, 500=14.46%, 750=4.46% 00:27:00.729 cpu : usr=1.21%, sys=1.47%, ctx=3519, majf=0, minf=2 00:27:00.729 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:27:00.729 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:00.729 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:00.729 issued rwts: total=0,4081,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:00.729 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:00.729 00:27:00.729 Run status group 0 (all jobs): 00:27:00.729 WRITE: bw=850MiB/s (891MB/s), 53.1MiB/s-100MiB/s (55.7MB/s-105MB/s), io=8677MiB (9099MB), run=10087-10213msec 00:27:00.729 00:27:00.729 Disk stats (read/write): 00:27:00.729 nvme0n1: ios=49/6482, merge=0/0, ticks=194/1251434, in_queue=1251628, util=98.92% 00:27:00.729 nvme10n1: ios=44/5225, merge=0/0, ticks=162/1204563, in_queue=1204725, util=98.66% 00:27:00.729 nvme1n1: ios=46/5259, merge=0/0, ticks=4344/1183875, in_queue=1188219, util=100.00% 00:27:00.729 nvme2n1: ios=49/4315, merge=0/0, ticks=375/1245884, in_queue=1246259, util=100.00% 00:27:00.729 nvme3n1: ios=47/5037, merge=0/0, ticks=1292/1241279, in_queue=1242571, util=100.00% 00:27:00.729 nvme4n1: ios=43/5970, merge=0/0, ticks=112/1228218, in_queue=1228330, util=99.06% 00:27:00.729 nvme5n1: ios=43/7568, merge=0/0, ticks=96/1225928, in_queue=1226024, util=98.65% 00:27:00.729 nvme6n1: ios=0/6369, merge=0/0, ticks=0/1240288, in_queue=1240288, util=98.42% 00:27:00.729 nvme7n1: ios=40/7264, merge=0/0, ticks=4260/1221114, in_queue=1225374, util=100.00% 00:27:00.729 nvme8n1: ios=42/6838, merge=0/0, ticks=1856/1252198, in_queue=1254054, util=100.00% 00:27:00.729 nvme9n1: ios=0/7957, merge=0/0, ticks=0/1220134, in_queue=1220134, util=99.08% 00:27:00.729 01:04:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@36 -- # sync 00:27:00.729 01:04:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # seq 1 11 00:27:00.729 01:04:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:00.729 01:04:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:27:00.729 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:27:00.729 01:04:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 00:27:00.729 01:04:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:27:00.729 01:04:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:27:00.729 01:04:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK1 00:27:00.729 01:04:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:27:00.729 01:04:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK1 00:27:00.729 01:04:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:27:00.729 01:04:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:00.729 01:04:33 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:00.729 01:04:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:00.729 01:04:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:00.729 01:04:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:00.729 01:04:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:27:00.989 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:27:00.989 01:04:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 00:27:00.989 01:04:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:27:00.989 01:04:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:27:00.989 01:04:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK2 00:27:00.989 01:04:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:27:00.989 01:04:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK2 00:27:00.989 01:04:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:27:00.989 01:04:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:27:00.989 01:04:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:00.989 01:04:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:00.989 01:04:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:00.989 01:04:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:00.989 01:04:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:27:01.248 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:27:01.248 01:04:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:27:01.248 01:04:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:27:01.248 01:04:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:27:01.249 01:04:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK3 00:27:01.249 01:04:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:27:01.249 01:04:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK3 00:27:01.249 01:04:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:27:01.249 01:04:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:27:01.249 01:04:33 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:01.249 01:04:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:01.249 01:04:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:01.249 01:04:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:01.249 01:04:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:27:01.509 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:27:01.509 01:04:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:27:01.509 01:04:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:27:01.509 01:04:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:27:01.509 01:04:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK4 00:27:01.509 01:04:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:27:01.509 01:04:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK4 00:27:01.509 01:04:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:27:01.509 01:04:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:27:01.509 01:04:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:01.509 01:04:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:01.509 01:04:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:01.509 01:04:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:01.509 01:04:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:27:01.770 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:27:01.770 01:04:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:27:01.770 01:04:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:27:01.770 01:04:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:27:01.770 01:04:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK5 00:27:01.770 01:04:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:27:01.770 01:04:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK5 00:27:01.770 01:04:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:27:01.770 01:04:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:27:01.770 01:04:34 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:01.770 01:04:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:01.770 01:04:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:01.770 01:04:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:01.770 01:04:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:27:02.029 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:27:02.029 01:04:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:27:02.029 01:04:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:27:02.029 01:04:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:27:02.029 01:04:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK6 00:27:02.029 01:04:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:27:02.029 01:04:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK6 00:27:02.029 01:04:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:27:02.029 01:04:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:27:02.029 01:04:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:02.029 01:04:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:02.029 01:04:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:02.029 01:04:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:02.029 01:04:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:27:02.288 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:27:02.288 01:04:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:27:02.288 01:04:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:27:02.288 01:04:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:27:02.288 01:04:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK7 00:27:02.288 01:04:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:27:02.288 01:04:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK7 00:27:02.289 01:04:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:27:02.289 01:04:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 00:27:02.289 01:04:34 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:02.289 01:04:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:02.289 01:04:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:02.289 01:04:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:02.289 01:04:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:27:02.560 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:27:02.560 01:04:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:27:02.560 01:04:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:27:02.560 01:04:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:27:02.560 01:04:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK8 00:27:02.560 01:04:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:27:02.560 01:04:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK8 00:27:02.560 01:04:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:27:02.560 01:04:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:27:02.560 01:04:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:02.560 01:04:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:02.560 01:04:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:02.560 01:04:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:02.560 01:04:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:27:02.821 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:27:02.821 01:04:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:27:02.821 01:04:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:27:02.821 01:04:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:27:02.821 01:04:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK9 00:27:02.821 01:04:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:27:02.821 01:04:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK9 00:27:02.821 01:04:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:27:02.821 01:04:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:27:02.821 01:04:35 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:02.821 01:04:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:02.821 01:04:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:02.821 01:04:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:02.821 01:04:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:27:03.080 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:27:03.080 01:04:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:27:03.080 01:04:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:27:03.080 01:04:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:27:03.080 01:04:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK10 00:27:03.080 01:04:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:27:03.080 01:04:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK10 00:27:03.080 01:04:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:27:03.080 01:04:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:27:03.080 01:04:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:03.080 01:04:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:03.080 01:04:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:03.080 01:04:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:03.080 01:04:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 00:27:03.080 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 00:27:03.339 01:04:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 00:27:03.339 01:04:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:27:03.340 01:04:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:27:03.340 01:04:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK11 00:27:03.340 01:04:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:27:03.340 01:04:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK11 00:27:03.340 01:04:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:27:03.340 01:04:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:27:03.340 
01:04:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:03.340 01:04:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:03.340 01:04:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:03.340 01:04:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:27:03.340 01:04:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:27:03.340 01:04:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@47 -- # nvmftestfini 00:27:03.340 01:04:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:03.340 01:04:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@121 -- # sync 00:27:03.340 01:04:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:03.340 01:04:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@124 -- # set +e 00:27:03.340 01:04:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:03.340 01:04:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:03.340 rmmod nvme_tcp 00:27:03.340 rmmod nvme_fabrics 00:27:03.340 rmmod nvme_keyring 00:27:03.340 01:04:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:03.340 01:04:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@128 -- # set -e 00:27:03.340 01:04:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@129 -- # return 0 00:27:03.340 01:04:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@517 -- # '[' -n 3025662 ']' 00:27:03.340 01:04:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@518 -- # killprocess 3025662 00:27:03.340 01:04:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@954 -- # '[' -z 3025662 ']' 00:27:03.340 01:04:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@958 -- # kill -0 3025662 00:27:03.340 01:04:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@959 -- # uname 00:27:03.340 01:04:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:03.340 01:04:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3025662 00:27:03.340 01:04:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:03.340 01:04:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:03.340 01:04:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3025662' 00:27:03.340 killing process with pid 3025662 00:27:03.340 01:04:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@973 -- # kill 3025662 00:27:03.340 01:04:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@978 -- # wait 3025662 00:27:06.631 01:04:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:06.631 01:04:38 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:06.631 01:04:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:06.631 01:04:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@297 -- # iptr 00:27:06.631 01:04:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@791 -- # iptables-save 00:27:06.631 01:04:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:06.631 01:04:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@791 -- # iptables-restore 00:27:06.631 01:04:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:06.631 01:04:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:06.631 01:04:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:06.631 01:04:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:06.632 01:04:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:08.540 01:04:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:08.540 00:27:08.540 real 1m5.964s 00:27:08.540 user 3m50.393s 00:27:08.540 sys 0m17.390s 00:27:08.540 01:04:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:08.540 01:04:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:08.540 ************************************ 00:27:08.540 END TEST nvmf_multiconnection 00:27:08.540 ************************************ 00:27:08.540 01:04:40 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@50 -- # run_test nvmf_initiator_timeout /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:27:08.540 01:04:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:27:08.540 01:04:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:08.540 01:04:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:27:08.540 ************************************ 00:27:08.540 START TEST nvmf_initiator_timeout 00:27:08.540 ************************************ 00:27:08.540 01:04:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:27:08.540 * Looking for test storage... 
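(Editorial note, not part of the captured output: the multiconnection teardown traced above — disconnecting and deleting cnode1 through cnode11 before nvmftestfini — reduces to roughly the sketch below. It assumes SPDK's scripts/rpc.py is on PATH and that the target exposes 11 subsystems, as in this run; the rpc_cmd wrapper seen in the trace is replaced by a direct rpc.py call.)

    # per-subsystem teardown as performed by target/multiconnection.sh in this run
    NVMF_SUBSYS=11
    for i in $(seq 1 $NVMF_SUBSYS); do
        # detach the host-side controller first...
        nvme disconnect -n "nqn.2016-06.io.spdk:cnode${i}"
        # ...then remove the subsystem from the running target over JSON-RPC
        scripts/rpc.py nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode${i}"
    done
    # drop the fio state file left behind by the write workload
    rm -f ./local-job0-0-verify.state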
00:27:08.540 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:27:08.540 01:04:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:27:08.540 01:04:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1711 -- # lcov --version 00:27:08.540 01:04:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:27:08.540 01:04:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:27:08.540 01:04:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:08.540 01:04:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:08.540 01:04:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:08.540 01:04:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@336 -- # IFS=.-: 00:27:08.540 01:04:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@336 -- # read -ra ver1 00:27:08.541 01:04:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@337 -- # IFS=.-: 00:27:08.541 01:04:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@337 -- # read -ra ver2 00:27:08.541 01:04:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@338 -- # local 'op=<' 00:27:08.541 01:04:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@340 -- # ver1_l=2 00:27:08.541 01:04:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@341 -- # ver2_l=1 00:27:08.541 01:04:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:08.541 01:04:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@344 -- # case "$op" in 00:27:08.541 01:04:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@345 -- # : 1 00:27:08.541 01:04:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:08.541 01:04:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:08.541 01:04:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@365 -- # decimal 1 00:27:08.541 01:04:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@353 -- # local d=1 00:27:08.541 01:04:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:08.541 01:04:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@355 -- # echo 1 00:27:08.541 01:04:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@365 -- # ver1[v]=1 00:27:08.541 01:04:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@366 -- # decimal 2 00:27:08.541 01:04:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@353 -- # local d=2 00:27:08.541 01:04:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:08.541 01:04:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@355 -- # echo 2 00:27:08.541 01:04:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@366 -- # ver2[v]=2 00:27:08.541 01:04:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:08.541 01:04:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:08.541 01:04:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@368 -- # return 0 00:27:08.541 01:04:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:08.541 01:04:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:27:08.541 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:08.541 --rc genhtml_branch_coverage=1 00:27:08.541 --rc genhtml_function_coverage=1 00:27:08.541 --rc genhtml_legend=1 00:27:08.541 --rc geninfo_all_blocks=1 00:27:08.541 --rc geninfo_unexecuted_blocks=1 00:27:08.541 00:27:08.541 ' 00:27:08.541 01:04:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:27:08.541 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:08.541 --rc genhtml_branch_coverage=1 00:27:08.541 --rc genhtml_function_coverage=1 00:27:08.541 --rc genhtml_legend=1 00:27:08.541 --rc geninfo_all_blocks=1 00:27:08.541 --rc geninfo_unexecuted_blocks=1 00:27:08.541 00:27:08.541 ' 00:27:08.541 01:04:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:27:08.541 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:08.541 --rc genhtml_branch_coverage=1 00:27:08.541 --rc genhtml_function_coverage=1 00:27:08.541 --rc genhtml_legend=1 00:27:08.541 --rc geninfo_all_blocks=1 00:27:08.541 --rc geninfo_unexecuted_blocks=1 00:27:08.541 00:27:08.541 ' 00:27:08.541 01:04:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:27:08.541 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:08.541 --rc genhtml_branch_coverage=1 00:27:08.541 --rc genhtml_function_coverage=1 00:27:08.541 --rc genhtml_legend=1 00:27:08.541 --rc geninfo_all_blocks=1 00:27:08.541 --rc geninfo_unexecuted_blocks=1 00:27:08.541 00:27:08.541 ' 00:27:08.541 01:04:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@9 -- # 
source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:08.541 01:04:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # uname -s 00:27:08.541 01:04:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:08.541 01:04:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:08.541 01:04:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:08.541 01:04:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:08.541 01:04:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:08.541 01:04:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:08.541 01:04:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:08.541 01:04:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:08.541 01:04:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:08.541 01:04:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:08.541 01:04:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:27:08.541 01:04:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:27:08.541 01:04:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:08.541 01:04:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:08.541 01:04:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:08.541 01:04:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:08.541 01:04:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:08.541 01:04:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@15 -- # shopt -s extglob 00:27:08.541 01:04:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:08.541 01:04:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:08.541 01:04:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:08.541 01:04:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:08.541 01:04:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:08.542 01:04:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:08.542 01:04:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@5 -- # export PATH 00:27:08.542 01:04:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:08.542 01:04:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@51 -- # : 0 00:27:08.542 01:04:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:08.542 01:04:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:08.542 01:04:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:08.542 01:04:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:08.542 01:04:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:08.542 01:04:41 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:08.542 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:08.542 01:04:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:08.542 01:04:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:08.542 01:04:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:08.542 01:04:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:08.542 01:04:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:27:08.542 01:04:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:27:08.542 01:04:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:08.542 01:04:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:08.542 01:04:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:08.542 01:04:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:08.542 01:04:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:08.542 01:04:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:08.542 01:04:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:08.542 01:04:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:08.542 01:04:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:08.542 01:04:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:08.542 01:04:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@309 -- # xtrace_disable 00:27:08.542 01:04:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:10.449 01:04:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:10.449 01:04:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@315 -- # pci_devs=() 00:27:10.449 01:04:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:10.449 01:04:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:10.449 01:04:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:10.449 01:04:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:10.449 01:04:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:10.449 01:04:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@319 -- # net_devs=() 00:27:10.449 01:04:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:10.449 01:04:42 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@320 -- # e810=() 00:27:10.449 01:04:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@320 -- # local -ga e810 00:27:10.449 01:04:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@321 -- # x722=() 00:27:10.449 01:04:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@321 -- # local -ga x722 00:27:10.449 01:04:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@322 -- # mlx=() 00:27:10.449 01:04:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@322 -- # local -ga mlx 00:27:10.450 01:04:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:10.450 01:04:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:10.450 01:04:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:10.450 01:04:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:10.450 01:04:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:10.450 01:04:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:10.450 01:04:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:10.450 01:04:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:10.450 01:04:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:10.450 01:04:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:10.450 01:04:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:10.450 01:04:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:10.450 01:04:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:10.450 01:04:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:10.450 01:04:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:10.450 01:04:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:10.450 01:04:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:10.450 01:04:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:10.450 01:04:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:10.450 01:04:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:27:10.450 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:27:10.450 01:04:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:10.450 01:04:42 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:10.450 01:04:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:10.450 01:04:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:10.450 01:04:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:10.450 01:04:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:10.450 01:04:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:27:10.450 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:27:10.450 01:04:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:10.450 01:04:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:10.450 01:04:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:10.450 01:04:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:10.450 01:04:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:10.450 01:04:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:10.450 01:04:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:10.450 01:04:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:10.450 01:04:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:10.450 01:04:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:10.450 01:04:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:10.450 01:04:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:10.450 01:04:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:10.450 01:04:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:10.450 01:04:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:10.450 01:04:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:27:10.450 Found net devices under 0000:0a:00.0: cvl_0_0 00:27:10.450 01:04:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:10.450 01:04:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:10.450 01:04:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:10.450 01:04:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:10.450 01:04:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:10.450 01:04:42 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:10.450 01:04:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:10.450 01:04:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:10.450 01:04:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:27:10.450 Found net devices under 0000:0a:00.1: cvl_0_1 00:27:10.450 01:04:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:10.450 01:04:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:10.450 01:04:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@442 -- # is_hw=yes 00:27:10.450 01:04:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:10.450 01:04:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:10.450 01:04:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:10.450 01:04:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:10.450 01:04:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:10.450 01:04:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:10.450 01:04:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:10.450 01:04:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:10.450 01:04:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:10.450 01:04:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:10.450 01:04:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:10.450 01:04:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:10.450 01:04:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:10.450 01:04:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:10.450 01:04:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:10.450 01:04:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:10.450 01:04:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:10.450 01:04:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:10.450 01:04:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:10.450 01:04:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:10.450 01:04:42 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:10.450 01:04:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:10.450 01:04:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:10.450 01:04:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:10.450 01:04:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:10.450 01:04:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:10.450 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:10.450 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.379 ms 00:27:10.450 00:27:10.450 --- 10.0.0.2 ping statistics --- 00:27:10.450 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:10.450 rtt min/avg/max/mdev = 0.379/0.379/0.379/0.000 ms 00:27:10.450 01:04:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:10.450 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:10.450 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.080 ms 00:27:10.450 00:27:10.450 --- 10.0.0.1 ping statistics --- 00:27:10.450 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:10.450 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 00:27:10.450 01:04:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:10.450 01:04:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@450 -- # return 0 00:27:10.450 01:04:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:10.450 01:04:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:10.450 01:04:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:10.450 01:04:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:10.450 01:04:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:10.450 01:04:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:10.451 01:04:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:10.451 01:04:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:27:10.451 01:04:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:10.451 01:04:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:10.451 01:04:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:10.451 01:04:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@509 -- # nvmfpid=3034431 00:27:10.451 01:04:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@508 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:27:10.451 01:04:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@510 -- # waitforlisten 3034431 00:27:10.451 01:04:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@835 -- # '[' -z 3034431 ']' 00:27:10.451 01:04:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:10.451 01:04:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:10.451 01:04:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:10.451 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:10.451 01:04:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:10.451 01:04:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:10.451 [2024-12-06 01:04:43.128058] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 24.03.0 initialization... 00:27:10.451 [2024-12-06 01:04:43.128220] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:10.711 [2024-12-06 01:04:43.272140] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:10.711 [2024-12-06 01:04:43.395978] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:10.711 [2024-12-06 01:04:43.396082] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:10.711 [2024-12-06 01:04:43.396104] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:10.711 [2024-12-06 01:04:43.396124] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:10.711 [2024-12-06 01:04:43.396156] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
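The records above launch the SPDK NVMe-oF target inside the cvl_0_0_ns_spdk network namespace and then block in waitforlisten until the app's RPC socket at /var/tmp/spdk.sock appears. A minimal standalone sketch of that launch-and-wait pattern, using only the paths and options visible in the trace (the polling loop is illustrative, not the harness's actual waitforlisten implementation):

    #!/usr/bin/env bash
    # Launch nvmf_tgt in the test namespace with the same shm id, tracepoint
    # mask and core mask as above, then poll for the UNIX-domain RPC socket.
    NS=cvl_0_0_ns_spdk
    TGT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt
    ip netns exec "$NS" "$TGT" -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    until [ -S /var/tmp/spdk.sock ]; do          # socket path from the trace
        kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited early" >&2; exit 1; }
        sleep 0.1
    done
    echo "nvmf_tgt (pid $nvmfpid) is listening on /var/tmp/spdk.sock"

Once the socket exists, the test drives the target over rpc_cmd (bdev_malloc_create, bdev_delay_create, nvmf_create_transport, and so on), as the subsequent records show.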
00:27:10.711 [2024-12-06 01:04:43.398640] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:10.711 [2024-12-06 01:04:43.398710] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:27:10.711 [2024-12-06 01:04:43.398755] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:10.711 [2024-12-06 01:04:43.398764] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:27:11.647 01:04:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:11.647 01:04:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@868 -- # return 0 00:27:11.647 01:04:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:11.647 01:04:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:11.647 01:04:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:11.647 01:04:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:11.647 01:04:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:27:11.647 01:04:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:27:11.647 01:04:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:11.647 01:04:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:11.647 Malloc0 00:27:11.647 01:04:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:11.647 01:04:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:27:11.647 01:04:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:11.647 01:04:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:11.647 Delay0 00:27:11.647 01:04:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:11.647 01:04:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:11.647 01:04:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:11.647 01:04:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:11.647 [2024-12-06 01:04:44.242189] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:11.647 01:04:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:11.647 01:04:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:27:11.647 01:04:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:11.647 01:04:44 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:11.647 01:04:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:11.647 01:04:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:11.647 01:04:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:11.647 01:04:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:11.647 01:04:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:11.647 01:04:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:11.647 01:04:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:11.647 01:04:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:11.647 [2024-12-06 01:04:44.272061] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:11.647 01:04:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:11.647 01:04:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:27:12.211 01:04:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:27:12.211 01:04:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1202 -- # local i=0 00:27:12.211 01:04:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:27:12.211 01:04:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:27:12.211 01:04:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1209 -- # sleep 2 00:27:14.739 01:04:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:27:14.739 01:04:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:27:14.739 01:04:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:27:14.739 01:04:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:27:14.739 01:04:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:27:14.739 01:04:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1212 -- # return 0 00:27:14.739 01:04:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@35 -- # fio_pid=3034985 00:27:14.739 01:04:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 
1 -t write -r 60 -v 00:27:14.739 01:04:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@37 -- # sleep 3 00:27:14.739 [global] 00:27:14.739 thread=1 00:27:14.739 invalidate=1 00:27:14.739 rw=write 00:27:14.739 time_based=1 00:27:14.739 runtime=60 00:27:14.739 ioengine=libaio 00:27:14.739 direct=1 00:27:14.739 bs=4096 00:27:14.739 iodepth=1 00:27:14.739 norandommap=0 00:27:14.739 numjobs=1 00:27:14.739 00:27:14.739 verify_dump=1 00:27:14.739 verify_backlog=512 00:27:14.739 verify_state_save=0 00:27:14.739 do_verify=1 00:27:14.739 verify=crc32c-intel 00:27:14.739 [job0] 00:27:14.739 filename=/dev/nvme0n1 00:27:14.739 Could not set queue depth (nvme0n1) 00:27:14.739 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:27:14.739 fio-3.35 00:27:14.739 Starting 1 thread 00:27:17.277 01:04:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:27:17.277 01:04:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:17.277 01:04:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:17.277 true 00:27:17.277 01:04:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:17.277 01:04:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:27:17.277 01:04:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:17.277 01:04:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:17.277 true 00:27:17.277 01:04:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:17.277 01:04:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:27:17.277 01:04:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:17.277 01:04:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:17.277 true 00:27:17.277 01:04:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:17.277 01:04:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 00:27:17.277 01:04:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:17.277 01:04:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:17.277 true 00:27:17.277 01:04:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:17.277 01:04:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@45 -- # sleep 3 00:27:20.564 01:04:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:27:20.564 01:04:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:20.564 01:04:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout 
-- common/autotest_common.sh@10 -- # set +x 00:27:20.564 true 00:27:20.564 01:04:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:20.564 01:04:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:27:20.564 01:04:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:20.564 01:04:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:20.564 true 00:27:20.564 01:04:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:20.564 01:04:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:27:20.564 01:04:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:20.564 01:04:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:20.564 true 00:27:20.564 01:04:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:20.564 01:04:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:27:20.564 01:04:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:20.564 01:04:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:20.565 true 00:27:20.565 01:04:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:20.565 01:04:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@53 -- # fio_status=0 00:27:20.565 01:04:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@54 -- # wait 3034985 00:28:16.829 00:28:16.829 job0: (groupid=0, jobs=1): err= 0: pid=3035054: Fri Dec 6 01:05:47 2024 00:28:16.829 read: IOPS=85, BW=341KiB/s (349kB/s)(20.0MiB/60035msec) 00:28:16.829 slat (nsec): min=5438, max=67579, avg=13478.24, stdev=7753.53 00:28:16.829 clat (usec): min=259, max=40796k, avg=11435.08, stdev=570534.20 00:28:16.829 lat (usec): min=265, max=40796k, avg=11448.56, stdev=570534.55 00:28:16.829 clat percentiles (usec): 00:28:16.829 | 1.00th=[ 273], 5.00th=[ 285], 10.00th=[ 289], 00:28:16.829 | 20.00th=[ 297], 30.00th=[ 310], 40.00th=[ 318], 00:28:16.829 | 50.00th=[ 347], 60.00th=[ 367], 70.00th=[ 375], 00:28:16.829 | 80.00th=[ 388], 90.00th=[ 416], 95.00th=[ 41157], 00:28:16.829 | 99.00th=[ 41157], 99.50th=[ 41681], 99.90th=[ 42206], 00:28:16.829 | 99.95th=[ 42206], 99.99th=[17112761] 00:28:16.829 write: IOPS=85, BW=341KiB/s (349kB/s)(20.0MiB/60035msec); 0 zone resets 00:28:16.829 slat (usec): min=7, max=28182, avg=26.27, stdev=438.05 00:28:16.829 clat (usec): min=198, max=670, avg=255.63, stdev=39.20 00:28:16.829 lat (usec): min=206, max=28468, avg=281.90, stdev=441.55 00:28:16.829 clat percentiles (usec): 00:28:16.829 | 1.00th=[ 206], 5.00th=[ 212], 10.00th=[ 219], 20.00th=[ 227], 00:28:16.829 | 30.00th=[ 235], 40.00th=[ 243], 50.00th=[ 249], 60.00th=[ 258], 00:28:16.829 | 70.00th=[ 265], 80.00th=[ 273], 90.00th=[ 297], 95.00th=[ 334], 00:28:16.829 | 99.00th=[ 416], 99.50th=[ 449], 99.90th=[ 486], 99.95th=[ 515], 00:28:16.829 | 99.99th=[ 668] 
00:28:16.829 bw ( KiB/s): min= 1080, max= 7496, per=100.00%, avg=5120.00, stdev=2176.47, samples=8 00:28:16.829 iops : min= 270, max= 1874, avg=1280.00, stdev=544.12, samples=8 00:28:16.829 lat (usec) : 250=25.79%, 500=70.28%, 750=0.09%, 1000=0.01% 00:28:16.829 lat (msec) : 50=3.83%, >=2000=0.01% 00:28:16.829 cpu : usr=0.21%, sys=0.33%, ctx=10238, majf=0, minf=1 00:28:16.829 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:16.829 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:16.829 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:16.829 issued rwts: total=5114,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:16.829 latency : target=0, window=0, percentile=100.00%, depth=1 00:28:16.829 00:28:16.829 Run status group 0 (all jobs): 00:28:16.829 READ: bw=341KiB/s (349kB/s), 341KiB/s-341KiB/s (349kB/s-349kB/s), io=20.0MiB (20.9MB), run=60035-60035msec 00:28:16.829 WRITE: bw=341KiB/s (349kB/s), 341KiB/s-341KiB/s (349kB/s-349kB/s), io=20.0MiB (21.0MB), run=60035-60035msec 00:28:16.829 00:28:16.829 Disk stats (read/write): 00:28:16.829 nvme0n1: ios=5162/5120, merge=0/0, ticks=18771/1253, in_queue=20024, util=99.81% 00:28:16.829 01:05:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:28:16.829 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:28:16.829 01:05:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:28:16.829 01:05:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1223 -- # local i=0 00:28:16.829 01:05:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:28:16.829 01:05:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:28:16.829 01:05:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:28:16.829 01:05:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:28:16.829 01:05:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1235 -- # return 0 00:28:16.829 01:05:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:28:16.829 01:05:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:28:16.829 nvmf hotplug test: fio successful as expected 00:28:16.829 01:05:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:16.829 01:05:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:16.829 01:05:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:28:16.829 01:05:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:16.829 01:05:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state 00:28:16.830 01:05:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM 
EXIT 00:28:16.830 01:05:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:28:16.830 01:05:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:16.830 01:05:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@121 -- # sync 00:28:16.830 01:05:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:16.830 01:05:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@124 -- # set +e 00:28:16.830 01:05:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:16.830 01:05:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:16.830 rmmod nvme_tcp 00:28:16.830 rmmod nvme_fabrics 00:28:16.830 rmmod nvme_keyring 00:28:16.830 01:05:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:16.830 01:05:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@128 -- # set -e 00:28:16.830 01:05:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@129 -- # return 0 00:28:16.830 01:05:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@517 -- # '[' -n 3034431 ']' 00:28:16.830 01:05:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@518 -- # killprocess 3034431 00:28:16.830 01:05:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@954 -- # '[' -z 3034431 ']' 00:28:16.830 01:05:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@958 -- # kill -0 3034431 00:28:16.830 01:05:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@959 -- # uname 00:28:16.830 01:05:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:16.830 01:05:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3034431 00:28:16.830 01:05:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:16.830 01:05:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:16.830 01:05:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3034431' 00:28:16.830 killing process with pid 3034431 00:28:16.830 01:05:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@973 -- # kill 3034431 00:28:16.830 01:05:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@978 -- # wait 3034431 00:28:16.830 01:05:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:16.830 01:05:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:16.830 01:05:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:16.830 01:05:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@297 -- # iptr 00:28:16.830 01:05:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@791 -- # iptables-save 00:28:16.830 01:05:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:16.830 01:05:48 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@791 -- # iptables-restore 00:28:16.830 01:05:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:16.830 01:05:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:16.830 01:05:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:16.830 01:05:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:16.830 01:05:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:18.211 01:05:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:18.211 00:28:18.211 real 1m9.934s 00:28:18.211 user 4m16.280s 00:28:18.211 sys 0m6.781s 00:28:18.211 01:05:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:18.211 01:05:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:28:18.211 ************************************ 00:28:18.211 END TEST nvmf_initiator_timeout 00:28:18.211 ************************************ 00:28:18.211 01:05:50 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ phy == phy ]] 00:28:18.211 01:05:50 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # '[' tcp = tcp ']' 00:28:18.212 01:05:50 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@55 -- # gather_supported_nvmf_pci_devs 00:28:18.212 01:05:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@309 -- # xtrace_disable 00:28:18.212 01:05:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:28:20.743 01:05:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:20.743 01:05:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # pci_devs=() 00:28:20.743 01:05:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:20.743 01:05:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:20.743 01:05:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:20.743 01:05:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:20.743 01:05:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:20.743 01:05:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # net_devs=() 00:28:20.743 01:05:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:20.743 01:05:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # e810=() 00:28:20.743 01:05:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # local -ga e810 00:28:20.743 01:05:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # x722=() 00:28:20.743 01:05:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # local -ga x722 00:28:20.743 01:05:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # mlx=() 00:28:20.743 01:05:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # local -ga mlx 00:28:20.743 01:05:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:20.743 01:05:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:20.743 01:05:52 
nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:20.743 01:05:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:20.743 01:05:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:20.743 01:05:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:20.743 01:05:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:20.743 01:05:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:20.743 01:05:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:20.743 01:05:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:20.743 01:05:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:20.743 01:05:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:20.743 01:05:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:20.743 01:05:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:20.744 01:05:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:20.744 01:05:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:20.744 01:05:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:20.744 01:05:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:20.744 01:05:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:20.744 01:05:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:20.744 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:20.744 01:05:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:20.744 01:05:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:20.744 01:05:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:20.744 01:05:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:20.744 01:05:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:20.744 01:05:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:20.744 01:05:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:20.744 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:20.744 01:05:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:20.744 01:05:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:20.744 01:05:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:20.744 01:05:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:20.744 01:05:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:20.744 01:05:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:20.744 01:05:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:20.744 01:05:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # 
[[ tcp == rdma ]] 00:28:20.744 01:05:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:20.744 01:05:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:20.744 01:05:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:20.744 01:05:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:20.744 01:05:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:20.744 01:05:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:20.744 01:05:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:20.744 01:05:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:20.744 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:20.744 01:05:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:20.744 01:05:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:20.744 01:05:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:20.744 01:05:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:20.744 01:05:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:20.744 01:05:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:20.744 01:05:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:20.744 01:05:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:20.744 01:05:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:20.744 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:20.744 01:05:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:20.744 01:05:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:20.744 01:05:52 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@56 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:20.744 01:05:52 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@57 -- # (( 2 > 0 )) 00:28:20.744 01:05:52 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@58 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:28:20.744 01:05:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:28:20.744 01:05:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:20.744 01:05:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:28:20.744 ************************************ 00:28:20.744 START TEST nvmf_perf_adq 00:28:20.744 ************************************ 00:28:20.744 01:05:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:28:20.744 * Looking for test storage... 
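The device-gathering steps traced above resolve which E810 ports back the test's TCP interfaces by matching PCI vendor/device IDs and then reading each function's net/ directory in sysfs. A minimal sketch of that idea, assuming the same Intel E810 ID (0x8086:0x159b) seen in the trace; the real logic lives in the nvmf common.sh and also covers the other E810, x722 and Mellanox IDs:

    # Walk PCI functions in sysfs and list the net interfaces behind each E810 port.
    for pci in /sys/bus/pci/devices/*; do
        vendor=$(<"$pci/vendor")     # e.g. 0x8086
        device=$(<"$pci/device")     # e.g. 0x159b
        [[ $vendor == 0x8086 && $device == 0x159b ]] || continue
        for net in "$pci"/net/*; do
            [[ -e $net ]] && echo "Found net devices under ${pci##*/}: ${net##*/}"
        done
    done

On this host that yields cvl_0_0 and cvl_0_1, which become the TCP_INTERFACE_LIST that perf_adq.sh requires to be non-empty before it proceeds.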
00:28:20.744 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:20.744 01:05:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:28:20.744 01:05:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1711 -- # lcov --version 00:28:20.744 01:05:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:28:20.744 01:05:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:28:20.744 01:05:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:20.744 01:05:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:20.744 01:05:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:20.744 01:05:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # IFS=.-: 00:28:20.744 01:05:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # read -ra ver1 00:28:20.744 01:05:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # IFS=.-: 00:28:20.744 01:05:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # read -ra ver2 00:28:20.744 01:05:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@338 -- # local 'op=<' 00:28:20.744 01:05:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@340 -- # ver1_l=2 00:28:20.744 01:05:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@341 -- # ver2_l=1 00:28:20.744 01:05:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:20.744 01:05:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@344 -- # case "$op" in 00:28:20.744 01:05:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@345 -- # : 1 00:28:20.744 01:05:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:20.744 01:05:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:20.744 01:05:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # decimal 1 00:28:20.744 01:05:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=1 00:28:20.744 01:05:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:20.744 01:05:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 1 00:28:20.744 01:05:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # ver1[v]=1 00:28:20.744 01:05:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # decimal 2 00:28:20.744 01:05:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=2 00:28:20.744 01:05:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:20.744 01:05:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 2 00:28:20.744 01:05:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # ver2[v]=2 00:28:20.744 01:05:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:20.744 01:05:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:20.744 01:05:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # return 0 00:28:20.744 01:05:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:20.744 01:05:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:28:20.744 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:20.744 --rc genhtml_branch_coverage=1 00:28:20.744 --rc genhtml_function_coverage=1 00:28:20.744 --rc genhtml_legend=1 00:28:20.744 --rc geninfo_all_blocks=1 00:28:20.744 --rc geninfo_unexecuted_blocks=1 00:28:20.744 00:28:20.744 ' 00:28:20.744 01:05:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:28:20.744 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:20.744 --rc genhtml_branch_coverage=1 00:28:20.744 --rc genhtml_function_coverage=1 00:28:20.744 --rc genhtml_legend=1 00:28:20.744 --rc geninfo_all_blocks=1 00:28:20.744 --rc geninfo_unexecuted_blocks=1 00:28:20.744 00:28:20.744 ' 00:28:20.744 01:05:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:28:20.744 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:20.744 --rc genhtml_branch_coverage=1 00:28:20.744 --rc genhtml_function_coverage=1 00:28:20.744 --rc genhtml_legend=1 00:28:20.744 --rc geninfo_all_blocks=1 00:28:20.744 --rc geninfo_unexecuted_blocks=1 00:28:20.744 00:28:20.744 ' 00:28:20.744 01:05:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:28:20.744 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:20.744 --rc genhtml_branch_coverage=1 00:28:20.744 --rc genhtml_function_coverage=1 00:28:20.744 --rc genhtml_legend=1 00:28:20.744 --rc geninfo_all_blocks=1 00:28:20.744 --rc geninfo_unexecuted_blocks=1 00:28:20.744 00:28:20.744 ' 00:28:20.744 01:05:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:20.744 01:05:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 
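The scripts/common.sh trace just above (lt 1.15 2, cmp_versions, decimal) splits both version strings on dots and compares them field by field, returning as soon as one field differs; that is how the harness decides the installed lcov predates 2.0 and picks the pre-2.0 spelling of the rc options for LCOV_OPTS. A compact sketch of the same field-by-field comparison, written independently of the real cmp_versions helper:

    # Return success (0) if dotted version $1 is strictly less than $2.
    version_lt() {
        local -a a b
        IFS=. read -ra a <<< "$1"
        IFS=. read -ra b <<< "$2"
        local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
        for (( i = 0; i < n; i++ )); do
            local x=${a[i]:-0} y=${b[i]:-0}   # missing fields compare as 0
            (( x < y )) && return 0
            (( x > y )) && return 1
        done
        return 1   # equal versions are not "less than"
    }
    version_lt 1.15 2 && echo "lcov < 2: use pre-2.0 LCOV_OPTS"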
00:28:20.744 01:05:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:20.744 01:05:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:20.744 01:05:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:20.744 01:05:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:20.744 01:05:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:20.744 01:05:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:20.744 01:05:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:20.744 01:05:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:20.744 01:05:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:20.745 01:05:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:20.745 01:05:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:28:20.745 01:05:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:28:20.745 01:05:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:20.745 01:05:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:20.745 01:05:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:20.745 01:05:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:20.745 01:05:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:20.745 01:05:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@15 -- # shopt -s extglob 00:28:20.745 01:05:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:20.745 01:05:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:20.745 01:05:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:20.745 01:05:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:20.745 01:05:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:20.745 01:05:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:20.745 01:05:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:28:20.745 01:05:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:20.745 01:05:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@51 -- # : 0 00:28:20.745 01:05:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:20.745 01:05:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:20.745 01:05:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:20.745 01:05:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:20.745 01:05:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:20.745 01:05:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:20.745 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:20.745 01:05:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:20.745 01:05:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:20.745 01:05:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:20.745 01:05:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:28:20.745 01:05:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:28:20.745 01:05:53 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:22.655 01:05:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:22.655 01:05:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:28:22.655 01:05:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:22.655 01:05:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:22.655 01:05:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:22.655 01:05:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:22.655 01:05:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:22.655 01:05:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:28:22.655 01:05:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:22.655 01:05:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:28:22.655 01:05:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:28:22.655 01:05:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:28:22.655 01:05:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:28:22.655 01:05:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:28:22.655 01:05:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:28:22.655 01:05:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:22.655 01:05:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:22.655 01:05:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:22.655 01:05:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:22.655 01:05:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:22.655 01:05:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:22.655 01:05:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:22.655 01:05:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:22.655 01:05:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:22.655 01:05:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:22.655 01:05:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:22.655 01:05:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:22.655 01:05:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:22.655 01:05:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:22.655 01:05:54 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:22.655 01:05:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:22.655 01:05:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:22.655 01:05:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:22.655 01:05:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:22.655 01:05:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:22.655 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:22.655 01:05:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:22.655 01:05:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:22.655 01:05:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:22.655 01:05:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:22.655 01:05:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:22.655 01:05:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:22.655 01:05:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:22.655 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:22.656 01:05:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:22.656 01:05:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:22.656 01:05:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:22.656 01:05:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:22.656 01:05:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:22.656 01:05:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:22.656 01:05:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:22.656 01:05:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:22.656 01:05:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:22.656 01:05:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:22.656 01:05:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:22.656 01:05:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:22.656 01:05:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:22.656 01:05:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:22.656 01:05:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:22.656 01:05:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:22.656 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:22.656 01:05:54 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:22.656 01:05:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:22.656 01:05:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:22.656 01:05:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:22.656 01:05:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:22.656 01:05:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:22.656 01:05:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:22.656 01:05:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:22.656 01:05:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:22.656 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:22.656 01:05:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:22.656 01:05:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:22.656 01:05:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:22.656 01:05:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:28:22.656 01:05:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:28:22.656 01:05:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@68 -- # adq_reload_driver 00:28:22.656 01:05:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:28:22.656 01:05:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:28:23.220 01:05:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:28:25.755 01:05:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:28:31.065 01:06:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@76 -- # nvmftestinit 00:28:31.065 01:06:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:31.065 01:06:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:31.065 01:06:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:31.065 01:06:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:31.065 01:06:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:31.065 01:06:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:31.065 01:06:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:31.065 01:06:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:31.065 01:06:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:31.065 01:06:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:28:31.065 01:06:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:28:31.065 01:06:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:31.065 01:06:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:31.065 01:06:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:28:31.065 01:06:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:31.065 01:06:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:31.065 01:06:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:31.065 01:06:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:31.065 01:06:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:31.065 01:06:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:28:31.065 01:06:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:31.065 01:06:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:28:31.065 01:06:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:28:31.065 01:06:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:28:31.066 01:06:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:28:31.066 01:06:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:28:31.066 01:06:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:28:31.066 01:06:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:31.066 01:06:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:31.066 01:06:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:31.066 01:06:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:31.066 01:06:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:31.066 01:06:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:31.066 01:06:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:31.066 01:06:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:31.066 01:06:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:31.066 01:06:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:31.066 01:06:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:31.066 01:06:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:31.066 01:06:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # 
pci_devs+=("${e810[@]}") 00:28:31.066 01:06:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:31.066 01:06:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:31.066 01:06:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:31.066 01:06:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:31.066 01:06:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:31.066 01:06:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:31.066 01:06:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:31.066 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:31.066 01:06:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:31.066 01:06:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:31.066 01:06:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:31.066 01:06:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:31.066 01:06:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:31.066 01:06:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:31.066 01:06:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:31.066 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:31.066 01:06:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:31.066 01:06:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:31.066 01:06:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:31.066 01:06:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:31.066 01:06:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:31.066 01:06:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:31.066 01:06:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:31.066 01:06:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:31.066 01:06:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:31.066 01:06:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:31.066 01:06:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:31.066 01:06:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:31.066 01:06:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:31.066 01:06:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:31.066 01:06:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:31.066 01:06:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 
'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:31.066 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:31.066 01:06:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:31.066 01:06:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:31.066 01:06:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:31.066 01:06:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:31.066 01:06:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:31.066 01:06:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:31.066 01:06:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:31.066 01:06:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:31.066 01:06:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:31.066 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:31.066 01:06:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:31.066 01:06:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:31.066 01:06:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:28:31.066 01:06:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:31.066 01:06:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:31.066 01:06:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:31.066 01:06:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:31.066 01:06:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:31.066 01:06:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:31.066 01:06:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:31.066 01:06:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:31.066 01:06:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:31.066 01:06:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:31.066 01:06:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:31.066 01:06:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:31.066 01:06:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:31.066 01:06:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:31.066 01:06:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:31.066 01:06:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:31.066 01:06:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:31.066 01:06:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:31.066 01:06:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:31.066 01:06:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:31.066 01:06:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:31.066 01:06:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:31.066 01:06:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:31.066 01:06:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:31.066 01:06:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:31.066 01:06:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:31.066 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:31.066 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.243 ms 00:28:31.066 00:28:31.066 --- 10.0.0.2 ping statistics --- 00:28:31.066 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:31.066 rtt min/avg/max/mdev = 0.243/0.243/0.243/0.000 ms 00:28:31.066 01:06:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:31.066 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:31.066 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.049 ms 00:28:31.066 00:28:31.066 --- 10.0.0.1 ping statistics --- 00:28:31.066 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:31.066 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:28:31.066 01:06:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:31.066 01:06:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:28:31.066 01:06:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:31.066 01:06:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:31.066 01:06:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:31.066 01:06:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:31.066 01:06:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:31.066 01:06:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:31.066 01:06:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:31.066 01:06:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmfappstart -m 0xF --wait-for-rpc 00:28:31.066 01:06:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:31.066 01:06:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:31.066 01:06:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:31.066 01:06:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=3046833 00:28:31.067 01:06:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:28:31.067 01:06:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 3046833 00:28:31.067 01:06:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # '[' -z 3046833 ']' 00:28:31.067 01:06:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:31.067 01:06:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:31.067 01:06:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:31.067 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:31.067 01:06:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:31.067 01:06:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:31.067 [2024-12-06 01:06:03.152077] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 24.03.0 initialization... 
00:28:31.067 [2024-12-06 01:06:03.152231] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:31.067 [2024-12-06 01:06:03.322410] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:31.067 [2024-12-06 01:06:03.468065] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:31.067 [2024-12-06 01:06:03.468151] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:31.067 [2024-12-06 01:06:03.468178] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:31.067 [2024-12-06 01:06:03.468202] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:31.067 [2024-12-06 01:06:03.468221] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:31.067 [2024-12-06 01:06:03.471024] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:31.067 [2024-12-06 01:06:03.471075] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:31.067 [2024-12-06 01:06:03.474856] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:28:31.067 [2024-12-06 01:06:03.474867] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:31.635 01:06:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:31.635 01:06:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@868 -- # return 0 00:28:31.635 01:06:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:31.635 01:06:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:31.635 01:06:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:31.635 01:06:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:31.635 01:06:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # adq_configure_nvmf_target 0 00:28:31.635 01:06:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:28:31.635 01:06:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:28:31.635 01:06:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:31.635 01:06:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:31.635 01:06:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:31.635 01:06:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:28:31.635 01:06:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:28:31.635 01:06:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:31.635 01:06:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:31.635 01:06:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:31.635 
01:06:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:28:31.635 01:06:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:31.635 01:06:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:31.895 01:06:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:31.895 01:06:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:28:31.895 01:06:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:31.895 01:06:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:31.895 [2024-12-06 01:06:04.599784] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:32.155 01:06:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:32.155 01:06:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:28:32.155 01:06:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:32.155 01:06:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:32.155 Malloc1 00:28:32.155 01:06:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:32.155 01:06:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:32.155 01:06:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:32.155 01:06:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:32.155 01:06:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:32.155 01:06:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:28:32.155 01:06:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:32.155 01:06:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:32.155 01:06:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:32.155 01:06:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:32.155 01:06:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:32.155 01:06:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:32.155 [2024-12-06 01:06:04.721001] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:32.155 01:06:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:32.155 01:06:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@82 -- # perfpid=3046991 00:28:32.155 01:06:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@83 -- # sleep 2 00:28:32.155 01:06:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@79 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:28:34.055 01:06:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # rpc_cmd nvmf_get_stats 00:28:34.055 01:06:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:34.055 01:06:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:34.055 01:06:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:34.055 01:06:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # nvmf_stats='{ 00:28:34.055 "tick_rate": 2700000000, 00:28:34.055 "poll_groups": [ 00:28:34.055 { 00:28:34.055 "name": "nvmf_tgt_poll_group_000", 00:28:34.055 "admin_qpairs": 1, 00:28:34.055 "io_qpairs": 1, 00:28:34.055 "current_admin_qpairs": 1, 00:28:34.055 "current_io_qpairs": 1, 00:28:34.055 "pending_bdev_io": 0, 00:28:34.055 "completed_nvme_io": 17127, 00:28:34.055 "transports": [ 00:28:34.055 { 00:28:34.055 "trtype": "TCP" 00:28:34.055 } 00:28:34.055 ] 00:28:34.055 }, 00:28:34.055 { 00:28:34.055 "name": "nvmf_tgt_poll_group_001", 00:28:34.055 "admin_qpairs": 0, 00:28:34.055 "io_qpairs": 1, 00:28:34.055 "current_admin_qpairs": 0, 00:28:34.055 "current_io_qpairs": 1, 00:28:34.055 "pending_bdev_io": 0, 00:28:34.055 "completed_nvme_io": 15941, 00:28:34.055 "transports": [ 00:28:34.055 { 00:28:34.055 "trtype": "TCP" 00:28:34.055 } 00:28:34.055 ] 00:28:34.055 }, 00:28:34.055 { 00:28:34.055 "name": "nvmf_tgt_poll_group_002", 00:28:34.055 "admin_qpairs": 0, 00:28:34.055 "io_qpairs": 1, 00:28:34.055 "current_admin_qpairs": 0, 00:28:34.055 "current_io_qpairs": 1, 00:28:34.055 "pending_bdev_io": 0, 00:28:34.055 "completed_nvme_io": 17623, 00:28:34.055 "transports": [ 00:28:34.055 { 00:28:34.055 "trtype": "TCP" 00:28:34.055 } 00:28:34.055 ] 00:28:34.055 }, 00:28:34.055 { 00:28:34.055 "name": "nvmf_tgt_poll_group_003", 00:28:34.055 "admin_qpairs": 0, 00:28:34.056 "io_qpairs": 1, 00:28:34.056 "current_admin_qpairs": 0, 00:28:34.056 "current_io_qpairs": 1, 00:28:34.056 "pending_bdev_io": 0, 00:28:34.056 "completed_nvme_io": 16915, 00:28:34.056 "transports": [ 00:28:34.056 { 00:28:34.056 "trtype": "TCP" 00:28:34.056 } 00:28:34.056 ] 00:28:34.056 } 00:28:34.056 ] 00:28:34.056 }' 00:28:34.056 01:06:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:28:34.056 01:06:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # wc -l 00:28:34.314 01:06:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # count=4 00:28:34.314 01:06:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@87 -- # [[ 4 -ne 4 ]] 00:28:34.314 01:06:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@91 -- # wait 3046991 00:28:42.521 Initializing NVMe Controllers 00:28:42.521 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:42.521 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:28:42.521 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:28:42.521 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:28:42.521 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with 
lcore 7 00:28:42.521 Initialization complete. Launching workers. 00:28:42.521 ======================================================== 00:28:42.521 Latency(us) 00:28:42.521 Device Information : IOPS MiB/s Average min max 00:28:42.521 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 9103.20 35.56 7033.30 3171.34 11480.52 00:28:42.521 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 8524.70 33.30 7510.96 3196.79 12472.09 00:28:42.521 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 9374.20 36.62 6828.54 2876.48 10868.76 00:28:42.521 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 8990.90 35.12 7119.75 2620.90 11052.90 00:28:42.521 ======================================================== 00:28:42.521 Total : 35992.99 140.60 7114.70 2620.90 12472.09 00:28:42.521 00:28:42.521 01:06:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@92 -- # nvmftestfini 00:28:42.521 01:06:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:42.521 01:06:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:28:42.522 01:06:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:42.522 01:06:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:28:42.522 01:06:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:42.522 01:06:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:42.522 rmmod nvme_tcp 00:28:42.522 rmmod nvme_fabrics 00:28:42.522 rmmod nvme_keyring 00:28:42.522 01:06:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:42.522 01:06:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:28:42.522 01:06:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:28:42.522 01:06:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 3046833 ']' 00:28:42.522 01:06:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 3046833 00:28:42.522 01:06:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' -z 3046833 ']' 00:28:42.522 01:06:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # kill -0 3046833 00:28:42.522 01:06:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # uname 00:28:42.522 01:06:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:42.522 01:06:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3046833 00:28:42.522 01:06:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:42.522 01:06:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:42.522 01:06:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3046833' 00:28:42.522 killing process with pid 3046833 00:28:42.522 01:06:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@973 -- # kill 3046833 00:28:42.522 01:06:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@978 -- # wait 3046833 00:28:43.898 01:06:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq 
-- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:43.898 01:06:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:43.898 01:06:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:43.898 01:06:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:28:43.898 01:06:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:28:43.898 01:06:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:43.898 01:06:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:28:43.898 01:06:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:43.898 01:06:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:43.898 01:06:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:43.898 01:06:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:43.898 01:06:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:45.801 01:06:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:45.801 01:06:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@94 -- # adq_reload_driver 00:28:45.801 01:06:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:28:45.801 01:06:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:28:46.743 01:06:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:28:49.277 01:06:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:28:54.568 01:06:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@97 -- # nvmftestinit 00:28:54.568 01:06:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:54.568 01:06:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:54.568 01:06:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:54.568 01:06:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:54.568 01:06:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:54.568 01:06:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:54.568 01:06:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:54.568 01:06:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:54.568 01:06:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:54.568 01:06:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:54.568 01:06:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:28:54.568 01:06:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:54.568 01:06:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 
mellanox=0x15b3 pci net_dev 00:28:54.568 01:06:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:28:54.568 01:06:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:54.568 01:06:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:54.568 01:06:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:54.568 01:06:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:54.568 01:06:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:54.568 01:06:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:28:54.568 01:06:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:54.568 01:06:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:28:54.568 01:06:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:28:54.568 01:06:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:28:54.568 01:06:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:28:54.568 01:06:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:28:54.568 01:06:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:28:54.568 01:06:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:54.568 01:06:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:54.568 01:06:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:54.568 01:06:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:54.568 01:06:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:54.568 01:06:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:54.568 01:06:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:54.568 01:06:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:54.568 01:06:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:54.568 01:06:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:54.568 01:06:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:54.568 01:06:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:54.568 01:06:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:54.568 01:06:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:54.568 01:06:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:54.569 01:06:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:54.569 01:06:26 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:54.569 01:06:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:54.569 01:06:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:54.569 01:06:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:54.569 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:54.569 01:06:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:54.569 01:06:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:54.569 01:06:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:54.569 01:06:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:54.569 01:06:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:54.569 01:06:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:54.569 01:06:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:54.569 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:54.569 01:06:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:54.569 01:06:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:54.569 01:06:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:54.569 01:06:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:54.569 01:06:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:54.569 01:06:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:54.569 01:06:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:54.569 01:06:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:54.569 01:06:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:54.569 01:06:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:54.569 01:06:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:54.569 01:06:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:54.569 01:06:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:54.569 01:06:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:54.569 01:06:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:54.569 01:06:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:54.569 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:54.569 01:06:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:54.569 01:06:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:54.569 01:06:26 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:54.569 01:06:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:54.569 01:06:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:54.569 01:06:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:54.569 01:06:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:54.569 01:06:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:54.569 01:06:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:54.569 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:54.569 01:06:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:54.569 01:06:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:54.569 01:06:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:28:54.569 01:06:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:54.569 01:06:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:54.569 01:06:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:54.569 01:06:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:54.569 01:06:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:54.569 01:06:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:54.569 01:06:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:54.569 01:06:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:54.569 01:06:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:54.569 01:06:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:54.569 01:06:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:54.569 01:06:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:54.569 01:06:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:54.569 01:06:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:54.569 01:06:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:54.569 01:06:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:54.569 01:06:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:54.569 01:06:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:54.569 01:06:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:54.569 01:06:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:54.569 01:06:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:54.569 01:06:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:54.569 01:06:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:54.569 01:06:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:54.569 01:06:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:54.569 01:06:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:54.569 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:54.569 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.250 ms 00:28:54.569 00:28:54.569 --- 10.0.0.2 ping statistics --- 00:28:54.569 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:54.569 rtt min/avg/max/mdev = 0.250/0.250/0.250/0.000 ms 00:28:54.569 01:06:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:54.569 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:54.569 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.103 ms 00:28:54.569 00:28:54.569 --- 10.0.0.1 ping statistics --- 00:28:54.569 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:54.569 rtt min/avg/max/mdev = 0.103/0.103/0.103/0.000 ms 00:28:54.569 01:06:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:54.569 01:06:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:28:54.569 01:06:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:54.569 01:06:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:54.569 01:06:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:54.569 01:06:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:54.569 01:06:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:54.569 01:06:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:54.569 01:06:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:54.569 01:06:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@98 -- # adq_configure_driver 00:28:54.569 01:06:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:28:54.569 01:06:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:28:54.569 01:06:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:28:54.569 net.core.busy_poll = 1 00:28:54.569 01:06:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 
00:28:54.569 net.core.busy_read = 1 00:28:54.569 01:06:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:28:54.569 01:06:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:28:54.569 01:06:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:28:54.569 01:06:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:28:54.569 01:06:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:28:54.569 01:06:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmfappstart -m 0xF --wait-for-rpc 00:28:54.569 01:06:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:54.569 01:06:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:54.569 01:06:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:54.569 01:06:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=3050359 00:28:54.569 01:06:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:28:54.569 01:06:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 3050359 00:28:54.569 01:06:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # '[' -z 3050359 ']' 00:28:54.570 01:06:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:54.570 01:06:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:54.570 01:06:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:54.570 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:54.570 01:06:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:54.570 01:06:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:54.570 [2024-12-06 01:06:26.933682] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 24.03.0 initialization... 00:28:54.570 [2024-12-06 01:06:26.933822] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:54.570 [2024-12-06 01:06:27.086935] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:54.570 [2024-12-06 01:06:27.230127] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:28:54.570 [2024-12-06 01:06:27.230216] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:54.570 [2024-12-06 01:06:27.230242] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:54.570 [2024-12-06 01:06:27.230267] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:54.570 [2024-12-06 01:06:27.230288] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:54.570 [2024-12-06 01:06:27.233100] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:54.570 [2024-12-06 01:06:27.233175] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:54.570 [2024-12-06 01:06:27.233271] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:54.570 [2024-12-06 01:06:27.233278] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:28:55.504 01:06:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:55.504 01:06:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@868 -- # return 0 00:28:55.504 01:06:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:55.504 01:06:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:55.504 01:06:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:55.504 01:06:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:55.504 01:06:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # adq_configure_nvmf_target 1 00:28:55.504 01:06:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:28:55.504 01:06:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:28:55.504 01:06:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:55.504 01:06:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:55.504 01:06:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:55.504 01:06:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:28:55.504 01:06:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:28:55.504 01:06:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:55.504 01:06:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:55.504 01:06:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:55.504 01:06:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:28:55.504 01:06:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:55.504 01:06:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:55.762 01:06:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:55.762 01:06:28 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:28:55.762 01:06:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:55.762 01:06:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:55.762 [2024-12-06 01:06:28.313366] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:55.762 01:06:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:55.762 01:06:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:28:55.762 01:06:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:55.762 01:06:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:55.763 Malloc1 00:28:55.763 01:06:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:55.763 01:06:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:55.763 01:06:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:55.763 01:06:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:55.763 01:06:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:55.763 01:06:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:28:55.763 01:06:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:55.763 01:06:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:55.763 01:06:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:55.763 01:06:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:55.763 01:06:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:55.763 01:06:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:55.763 [2024-12-06 01:06:28.428775] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:55.763 01:06:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:55.763 01:06:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@104 -- # perfpid=3050527 00:28:55.763 01:06:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:28:55.763 01:06:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@105 -- # sleep 2 00:28:58.295 01:06:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # rpc_cmd nvmf_get_stats 00:28:58.295 01:06:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:58.295 01:06:30 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:58.295 01:06:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:58.295 01:06:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmf_stats='{ 00:28:58.295 "tick_rate": 2700000000, 00:28:58.295 "poll_groups": [ 00:28:58.295 { 00:28:58.295 "name": "nvmf_tgt_poll_group_000", 00:28:58.295 "admin_qpairs": 1, 00:28:58.295 "io_qpairs": 2, 00:28:58.295 "current_admin_qpairs": 1, 00:28:58.295 "current_io_qpairs": 2, 00:28:58.295 "pending_bdev_io": 0, 00:28:58.295 "completed_nvme_io": 19341, 00:28:58.295 "transports": [ 00:28:58.295 { 00:28:58.295 "trtype": "TCP" 00:28:58.295 } 00:28:58.295 ] 00:28:58.295 }, 00:28:58.295 { 00:28:58.295 "name": "nvmf_tgt_poll_group_001", 00:28:58.295 "admin_qpairs": 0, 00:28:58.295 "io_qpairs": 2, 00:28:58.295 "current_admin_qpairs": 0, 00:28:58.295 "current_io_qpairs": 2, 00:28:58.295 "pending_bdev_io": 0, 00:28:58.295 "completed_nvme_io": 19106, 00:28:58.295 "transports": [ 00:28:58.295 { 00:28:58.295 "trtype": "TCP" 00:28:58.295 } 00:28:58.295 ] 00:28:58.295 }, 00:28:58.295 { 00:28:58.295 "name": "nvmf_tgt_poll_group_002", 00:28:58.295 "admin_qpairs": 0, 00:28:58.295 "io_qpairs": 0, 00:28:58.295 "current_admin_qpairs": 0, 00:28:58.295 "current_io_qpairs": 0, 00:28:58.295 "pending_bdev_io": 0, 00:28:58.295 "completed_nvme_io": 0, 00:28:58.295 "transports": [ 00:28:58.295 { 00:28:58.295 "trtype": "TCP" 00:28:58.295 } 00:28:58.295 ] 00:28:58.295 }, 00:28:58.295 { 00:28:58.295 "name": "nvmf_tgt_poll_group_003", 00:28:58.295 "admin_qpairs": 0, 00:28:58.295 "io_qpairs": 0, 00:28:58.295 "current_admin_qpairs": 0, 00:28:58.295 "current_io_qpairs": 0, 00:28:58.295 "pending_bdev_io": 0, 00:28:58.295 "completed_nvme_io": 0, 00:28:58.295 "transports": [ 00:28:58.295 { 00:28:58.295 "trtype": "TCP" 00:28:58.295 } 00:28:58.295 ] 00:28:58.295 } 00:28:58.295 ] 00:28:58.295 }' 00:28:58.295 01:06:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:28:58.295 01:06:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # wc -l 00:28:58.295 01:06:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # count=2 00:28:58.295 01:06:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@109 -- # [[ 2 -lt 2 ]] 00:28:58.295 01:06:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@114 -- # wait 3050527 00:29:06.404 Initializing NVMe Controllers 00:29:06.404 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:06.404 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:29:06.404 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:29:06.404 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:29:06.404 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:29:06.404 Initialization complete. Launching workers. 
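The block above is the core of the ADQ exercise: the posix sock implementation is switched to placement-id mode with zero-copy sends, the TCP transport is created with a non-default sock priority, a single Malloc-backed subsystem is exposed on 10.0.0.2:4420, and spdk_nvme_perf drives it from four dedicated cores while nvmf_get_stats is used to confirm that the I/O queue pairs landed on only some of the target's poll groups. Condensed into plain RPC calls (a sketch only, assuming SPDK's scripts/rpc.py and jq are on PATH; all option values are copied from the trace):

# Configure the already-running nvmf_tgt for ADQ-style socket placement.
rpc.py sock_get_default_impl                  # reports "posix" in this run
rpc.py sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix
rpc.py framework_start_init
rpc.py nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1
rpc.py bdev_malloc_create 64 512 -b Malloc1
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# Drive I/O from cores 4-7, then count poll groups that saw no I/O qpairs at all;
# perf_adq.sh@108-109 computes this same count and compares it against 2.
spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' &
perfpid=$!
sleep 2
idle=$(rpc.py nvmf_get_stats | jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' | wc -l)
(( idle < 2 )) && echo "placement check would trip here: only $idle idle poll groups"
wait "$perfpid"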
00:29:06.404 ======================================================== 00:29:06.404 Latency(us) 00:29:06.404 Device Information : IOPS MiB/s Average min max 00:29:06.404 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 4972.10 19.42 12882.00 2243.02 57116.43 00:29:06.404 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 5636.60 22.02 11362.55 2186.05 58794.41 00:29:06.404 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 5884.60 22.99 10886.56 2244.55 57687.48 00:29:06.404 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 4427.30 17.29 14468.67 1726.87 59241.60 00:29:06.404 ======================================================== 00:29:06.404 Total : 20920.60 81.72 12247.11 1726.87 59241.60 00:29:06.404 00:29:06.404 01:06:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@115 -- # nvmftestfini 00:29:06.404 01:06:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:06.404 01:06:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:29:06.404 01:06:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:06.404 01:06:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:29:06.404 01:06:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:06.404 01:06:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:06.404 rmmod nvme_tcp 00:29:06.404 rmmod nvme_fabrics 00:29:06.404 rmmod nvme_keyring 00:29:06.404 01:06:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:06.404 01:06:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:29:06.404 01:06:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:29:06.404 01:06:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 3050359 ']' 00:29:06.404 01:06:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 3050359 00:29:06.404 01:06:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' -z 3050359 ']' 00:29:06.404 01:06:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # kill -0 3050359 00:29:06.404 01:06:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # uname 00:29:06.404 01:06:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:06.404 01:06:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3050359 00:29:06.404 01:06:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:06.404 01:06:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:06.404 01:06:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3050359' 00:29:06.404 killing process with pid 3050359 00:29:06.404 01:06:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@973 -- # kill 3050359 00:29:06.404 01:06:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@978 -- # wait 3050359 00:29:07.777 01:06:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:07.777 
01:06:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:07.777 01:06:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:07.777 01:06:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:29:07.778 01:06:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:29:07.778 01:06:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:07.778 01:06:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:29:07.778 01:06:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:07.778 01:06:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:07.778 01:06:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:07.778 01:06:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:07.778 01:06:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:09.682 01:06:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:09.682 01:06:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@117 -- # trap - SIGINT SIGTERM EXIT 00:29:09.682 00:29:09.682 real 0m49.184s 00:29:09.682 user 2m54.968s 00:29:09.682 sys 0m9.339s 00:29:09.682 01:06:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:09.682 01:06:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:09.682 ************************************ 00:29:09.682 END TEST nvmf_perf_adq 00:29:09.682 ************************************ 00:29:09.682 01:06:42 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@65 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:29:09.682 01:06:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:29:09.682 01:06:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:09.682 01:06:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:29:09.682 ************************************ 00:29:09.682 START TEST nvmf_shutdown 00:29:09.682 ************************************ 00:29:09.682 01:06:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:29:09.682 * Looking for test storage... 
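nvmftestfini, traced just above, unwinds the setup in reverse: flush the NVMe kernel modules, stop the target, strip the test's iptables rules, and drop the namespaced interface. The visible steps gathered in one place (the namespace removal itself runs inside _remove_spdk_ns with its output suppressed, so the ip netns delete line below is an assumption about what it amounts to):

sync
modprobe -v -r nvme-tcp            # the rmmod lines above show nvme_tcp, nvme_fabrics, nvme_keyring unloading
modprobe -v -r nvme-fabrics
kill "$nvmfpid" && wait "$nvmfpid"                      # nvmfpid: the backgrounded nvmf_tgt
iptables-save | grep -v SPDK_NVMF | iptables-restore    # keep everything except the SPDK_NVMF-tagged rules
ip netns delete cvl_0_0_ns_spdk                         # assumed equivalent of _remove_spdk_ns here
ip -4 addr flush cvl_0_1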
00:29:09.682 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:09.682 01:06:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:29:09.682 01:06:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1711 -- # lcov --version 00:29:09.682 01:06:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:29:09.682 01:06:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:29:09.682 01:06:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:09.682 01:06:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:09.682 01:06:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:09.682 01:06:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:29:09.682 01:06:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:29:09.682 01:06:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:29:09.682 01:06:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:29:09.682 01:06:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:29:09.682 01:06:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:29:09.682 01:06:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:29:09.682 01:06:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:09.682 01:06:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:29:09.683 01:06:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@345 -- # : 1 00:29:09.683 01:06:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:09.683 01:06:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:09.683 01:06:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # decimal 1 00:29:09.683 01:06:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=1 00:29:09.683 01:06:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:09.683 01:06:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 1 00:29:09.683 01:06:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:29:09.683 01:06:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # decimal 2 00:29:09.683 01:06:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=2 00:29:09.683 01:06:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:09.683 01:06:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 2 00:29:09.683 01:06:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:29:09.683 01:06:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:09.683 01:06:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:09.683 01:06:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # return 0 00:29:09.683 01:06:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:09.683 01:06:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:29:09.683 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:09.683 --rc genhtml_branch_coverage=1 00:29:09.683 --rc genhtml_function_coverage=1 00:29:09.683 --rc genhtml_legend=1 00:29:09.683 --rc geninfo_all_blocks=1 00:29:09.683 --rc geninfo_unexecuted_blocks=1 00:29:09.683 00:29:09.683 ' 00:29:09.683 01:06:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:29:09.683 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:09.683 --rc genhtml_branch_coverage=1 00:29:09.683 --rc genhtml_function_coverage=1 00:29:09.683 --rc genhtml_legend=1 00:29:09.683 --rc geninfo_all_blocks=1 00:29:09.683 --rc geninfo_unexecuted_blocks=1 00:29:09.683 00:29:09.683 ' 00:29:09.683 01:06:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:29:09.683 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:09.683 --rc genhtml_branch_coverage=1 00:29:09.683 --rc genhtml_function_coverage=1 00:29:09.683 --rc genhtml_legend=1 00:29:09.683 --rc geninfo_all_blocks=1 00:29:09.683 --rc geninfo_unexecuted_blocks=1 00:29:09.683 00:29:09.683 ' 00:29:09.683 01:06:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:29:09.683 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:09.683 --rc genhtml_branch_coverage=1 00:29:09.683 --rc genhtml_function_coverage=1 00:29:09.683 --rc genhtml_legend=1 00:29:09.683 --rc geninfo_all_blocks=1 00:29:09.683 --rc geninfo_unexecuted_blocks=1 00:29:09.683 00:29:09.683 ' 00:29:09.683 01:06:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:09.683 01:06:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 
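The long scripts/common.sh walk earlier in this block is just lcov version detection: cmp_versions splits the two version strings on dots and compares them field by field, so that lt 1.15 2 holds and the matching LCOV_OPTS flavor is exported. A condensed, assumed-equivalent form of that comparison (not the script itself):

# Return success when dotted version $1 is strictly older than $2.
ver_lt() {
    local IFS=.
    local -a a=($1) b=($2)
    local i
    for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
        (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
        (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
    done
    return 1    # equal versions are not "less than"
}
ver_lt 1.15 2 && echo "lcov 1.15 predates 2"    # the branch the trace takes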
00:29:09.683 01:06:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:09.683 01:06:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:09.683 01:06:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:09.683 01:06:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:09.683 01:06:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:09.683 01:06:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:09.683 01:06:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:09.683 01:06:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:09.683 01:06:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:09.683 01:06:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:09.683 01:06:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:29:09.683 01:06:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:29:09.683 01:06:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:09.683 01:06:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:09.683 01:06:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:09.683 01:06:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:09.683 01:06:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:09.683 01:06:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@15 -- # shopt -s extglob 00:29:09.683 01:06:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:09.683 01:06:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:09.683 01:06:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:09.683 01:06:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:09.683 01:06:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:09.683 01:06:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:09.683 01:06:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:29:09.683 01:06:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:09.683 01:06:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # : 0 00:29:09.683 01:06:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:09.683 01:06:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:09.683 01:06:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:09.683 01:06:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:09.683 01:06:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:09.683 01:06:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:09.683 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:09.683 01:06:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:09.683 01:06:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:09.683 01:06:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:09.683 01:06:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BDEV_SIZE=64 00:29:09.683 01:06:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:29:09.683 01:06:42 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@162 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:29:09.683 01:06:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:29:09.683 01:06:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:09.683 01:06:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:29:09.683 ************************************ 00:29:09.683 START TEST nvmf_shutdown_tc1 00:29:09.683 ************************************ 00:29:09.683 01:06:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc1 00:29:09.683 01:06:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@75 -- # starttarget 00:29:09.683 01:06:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@16 -- # nvmftestinit 00:29:09.683 01:06:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:09.683 01:06:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:09.683 01:06:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:09.683 01:06:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:09.683 01:06:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:09.684 01:06:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:09.684 01:06:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:09.684 01:06:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:09.684 01:06:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:09.684 01:06:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:09.684 01:06:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@309 -- # xtrace_disable 00:29:09.684 01:06:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:12.218 01:06:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:12.218 01:06:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # pci_devs=() 00:29:12.218 01:06:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:12.218 01:06:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:12.218 01:06:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:12.218 01:06:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:12.218 01:06:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:12.218 01:06:44 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # net_devs=() 00:29:12.218 01:06:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:12.218 01:06:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # e810=() 00:29:12.218 01:06:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # local -ga e810 00:29:12.218 01:06:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # x722=() 00:29:12.218 01:06:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # local -ga x722 00:29:12.218 01:06:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # mlx=() 00:29:12.218 01:06:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # local -ga mlx 00:29:12.218 01:06:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:12.218 01:06:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:12.218 01:06:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:12.218 01:06:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:12.218 01:06:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:12.218 01:06:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:12.218 01:06:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:12.218 01:06:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:12.218 01:06:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:12.218 01:06:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:12.218 01:06:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:12.218 01:06:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:12.218 01:06:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:12.218 01:06:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:12.218 01:06:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:12.218 01:06:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:12.218 01:06:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:12.218 01:06:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:12.218 01:06:44 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:12.218 01:06:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:29:12.218 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:29:12.218 01:06:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:12.218 01:06:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:12.218 01:06:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:12.218 01:06:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:12.218 01:06:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:12.218 01:06:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:12.218 01:06:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:29:12.218 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:29:12.218 01:06:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:12.218 01:06:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:12.218 01:06:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:12.218 01:06:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:12.218 01:06:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:12.218 01:06:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:12.218 01:06:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:12.218 01:06:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:12.218 01:06:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:12.218 01:06:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:12.218 01:06:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:12.218 01:06:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:12.218 01:06:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:12.218 01:06:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:12.218 01:06:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:12.218 01:06:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:29:12.218 Found net devices under 0000:0a:00.0: cvl_0_0 00:29:12.218 01:06:44 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:12.218 01:06:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:12.218 01:06:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:12.218 01:06:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:12.218 01:06:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:12.218 01:06:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:12.218 01:06:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:12.218 01:06:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:12.218 01:06:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:29:12.218 Found net devices under 0000:0a:00.1: cvl_0_1 00:29:12.218 01:06:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:12.218 01:06:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:12.218 01:06:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # is_hw=yes 00:29:12.218 01:06:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:12.218 01:06:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:12.218 01:06:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:12.218 01:06:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:12.218 01:06:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:12.218 01:06:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:12.218 01:06:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:12.218 01:06:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:12.218 01:06:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:12.218 01:06:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:12.218 01:06:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:12.218 01:06:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:12.218 01:06:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:12.218 01:06:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns 
exec "$NVMF_TARGET_NAMESPACE") 00:29:12.218 01:06:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:12.218 01:06:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:12.218 01:06:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:12.218 01:06:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:12.218 01:06:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:12.218 01:06:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:12.218 01:06:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:12.218 01:06:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:12.219 01:06:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:12.219 01:06:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:12.219 01:06:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:12.219 01:06:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:12.219 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:12.219 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.219 ms 00:29:12.219 00:29:12.219 --- 10.0.0.2 ping statistics --- 00:29:12.219 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:12.219 rtt min/avg/max/mdev = 0.219/0.219/0.219/0.000 ms 00:29:12.219 01:06:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:12.219 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:12.219 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.073 ms 00:29:12.219 00:29:12.219 --- 10.0.0.1 ping statistics --- 00:29:12.219 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:12.219 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:29:12.219 01:06:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:12.219 01:06:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # return 0 00:29:12.219 01:06:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:12.219 01:06:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:12.219 01:06:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:12.219 01:06:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:12.219 01:06:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:12.219 01:06:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:12.219 01:06:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:12.219 01:06:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:29:12.219 01:06:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:12.219 01:06:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:12.219 01:06:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:12.219 01:06:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@509 -- # nvmfpid=3053825 00:29:12.219 01:06:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:29:12.219 01:06:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@510 -- # waitforlisten 3053825 00:29:12.219 01:06:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 3053825 ']' 00:29:12.219 01:06:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:12.219 01:06:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:12.219 01:06:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:12.219 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
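nvmftestinit's network prep, traced above, moves one E810 port into a private namespace for the target and leaves its sibling port in the root namespace for the initiator, then proves connectivity both ways before nvmf_tgt is launched inside the namespace. The same sequence as bare commands (interface names cvl_0_0 / cvl_0_1 and the 10.0.0.x addressing are specific to this host, copied from the trace):

# Target port into its own namespace; initiator port stays in the root namespace.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up

# Open the NVMe/TCP port, tagged so teardown can strip exactly this rule later.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'

# Check both directions, then start the target in the namespace (core mask from the trace).
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &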
00:29:12.219 01:06:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:12.219 01:06:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:12.219 [2024-12-06 01:06:44.698530] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 24.03.0 initialization... 00:29:12.219 [2024-12-06 01:06:44.698665] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:12.219 [2024-12-06 01:06:44.859307] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:12.477 [2024-12-06 01:06:45.002822] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:12.477 [2024-12-06 01:06:45.002928] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:12.477 [2024-12-06 01:06:45.002955] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:12.477 [2024-12-06 01:06:45.002979] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:12.477 [2024-12-06 01:06:45.003005] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:12.477 [2024-12-06 01:06:45.005929] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:12.477 [2024-12-06 01:06:45.006041] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:12.477 [2024-12-06 01:06:45.006087] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:12.477 [2024-12-06 01:06:45.006093] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:29:13.042 01:06:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:13.042 01:06:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:29:13.042 01:06:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:13.042 01:06:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:13.042 01:06:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:13.042 01:06:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:13.042 01:06:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:13.042 01:06:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:13.042 01:06:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:13.042 [2024-12-06 01:06:45.690286] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:13.042 01:06:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:13.042 01:06:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:29:13.042 01:06:45 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:29:13.042 01:06:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:13.042 01:06:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:13.042 01:06:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:29:13.042 01:06:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:13.042 01:06:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:29:13.042 01:06:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:13.042 01:06:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:29:13.042 01:06:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:13.042 01:06:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:29:13.042 01:06:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:13.042 01:06:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:29:13.042 01:06:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:13.042 01:06:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:29:13.042 01:06:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:13.042 01:06:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:29:13.042 01:06:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:13.042 01:06:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:29:13.042 01:06:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:13.042 01:06:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:29:13.042 01:06:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:13.042 01:06:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:29:13.042 01:06:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:13.042 01:06:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:29:13.042 01:06:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # rpc_cmd 00:29:13.042 01:06:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:13.042 01:06:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:13.299 Malloc1 
00:29:13.299 [2024-12-06 01:06:45.828076] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:13.299 Malloc2 00:29:13.299 Malloc3 00:29:13.556 Malloc4 00:29:13.556 Malloc5 00:29:13.814 Malloc6 00:29:13.814 Malloc7 00:29:13.814 Malloc8 00:29:14.072 Malloc9 00:29:14.073 Malloc10 00:29:14.073 01:06:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:14.073 01:06:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:29:14.073 01:06:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:14.073 01:06:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:14.073 01:06:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # perfpid=3054134 00:29:14.073 01:06:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # waitforlisten 3054134 /var/tmp/bdevperf.sock 00:29:14.073 01:06:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 3054134 ']' 00:29:14.073 01:06:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:14.073 01:06:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:29:14.073 01:06:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:29:14.073 01:06:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:14.073 01:06:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:29:14.073 01:06:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:29:14.073 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:29:14.073 01:06:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:29:14.073 01:06:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:14.073 01:06:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:14.073 01:06:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:14.073 01:06:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:14.073 { 00:29:14.073 "params": { 00:29:14.073 "name": "Nvme$subsystem", 00:29:14.073 "trtype": "$TEST_TRANSPORT", 00:29:14.073 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:14.073 "adrfam": "ipv4", 00:29:14.073 "trsvcid": "$NVMF_PORT", 00:29:14.073 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:14.073 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:14.073 "hdgst": ${hdgst:-false}, 00:29:14.073 "ddgst": ${ddgst:-false} 00:29:14.073 }, 00:29:14.073 "method": "bdev_nvme_attach_controller" 00:29:14.073 } 00:29:14.073 EOF 00:29:14.073 )") 00:29:14.073 01:06:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:29:14.073 01:06:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:14.073 01:06:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:14.073 { 00:29:14.073 "params": { 00:29:14.073 "name": "Nvme$subsystem", 00:29:14.073 "trtype": "$TEST_TRANSPORT", 00:29:14.073 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:14.073 "adrfam": "ipv4", 00:29:14.073 "trsvcid": "$NVMF_PORT", 00:29:14.073 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:14.073 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:14.073 "hdgst": ${hdgst:-false}, 00:29:14.073 "ddgst": ${ddgst:-false} 00:29:14.073 }, 00:29:14.073 "method": "bdev_nvme_attach_controller" 00:29:14.073 } 00:29:14.073 EOF 00:29:14.073 )") 00:29:14.073 01:06:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:29:14.073 01:06:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:14.073 01:06:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:14.073 { 00:29:14.073 "params": { 00:29:14.073 "name": "Nvme$subsystem", 00:29:14.073 "trtype": "$TEST_TRANSPORT", 00:29:14.073 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:14.073 "adrfam": "ipv4", 00:29:14.073 "trsvcid": "$NVMF_PORT", 00:29:14.073 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:14.073 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:14.073 "hdgst": ${hdgst:-false}, 00:29:14.073 "ddgst": ${ddgst:-false} 00:29:14.073 }, 00:29:14.073 "method": "bdev_nvme_attach_controller" 00:29:14.073 } 00:29:14.073 EOF 00:29:14.073 )") 00:29:14.073 01:06:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:29:14.073 01:06:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:14.073 01:06:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:14.073 { 00:29:14.073 "params": { 00:29:14.073 "name": "Nvme$subsystem", 00:29:14.073 
"trtype": "$TEST_TRANSPORT", 00:29:14.073 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:14.073 "adrfam": "ipv4", 00:29:14.073 "trsvcid": "$NVMF_PORT", 00:29:14.073 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:14.073 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:14.073 "hdgst": ${hdgst:-false}, 00:29:14.073 "ddgst": ${ddgst:-false} 00:29:14.073 }, 00:29:14.073 "method": "bdev_nvme_attach_controller" 00:29:14.073 } 00:29:14.073 EOF 00:29:14.073 )") 00:29:14.073 01:06:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:29:14.073 01:06:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:14.073 01:06:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:14.073 { 00:29:14.073 "params": { 00:29:14.073 "name": "Nvme$subsystem", 00:29:14.073 "trtype": "$TEST_TRANSPORT", 00:29:14.073 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:14.073 "adrfam": "ipv4", 00:29:14.073 "trsvcid": "$NVMF_PORT", 00:29:14.073 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:14.073 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:14.073 "hdgst": ${hdgst:-false}, 00:29:14.073 "ddgst": ${ddgst:-false} 00:29:14.073 }, 00:29:14.073 "method": "bdev_nvme_attach_controller" 00:29:14.073 } 00:29:14.073 EOF 00:29:14.073 )") 00:29:14.073 01:06:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:29:14.073 01:06:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:14.073 01:06:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:14.073 { 00:29:14.073 "params": { 00:29:14.073 "name": "Nvme$subsystem", 00:29:14.073 "trtype": "$TEST_TRANSPORT", 00:29:14.073 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:14.073 "adrfam": "ipv4", 00:29:14.073 "trsvcid": "$NVMF_PORT", 00:29:14.073 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:14.073 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:14.073 "hdgst": ${hdgst:-false}, 00:29:14.073 "ddgst": ${ddgst:-false} 00:29:14.073 }, 00:29:14.073 "method": "bdev_nvme_attach_controller" 00:29:14.073 } 00:29:14.073 EOF 00:29:14.073 )") 00:29:14.073 01:06:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:29:14.332 01:06:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:14.332 01:06:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:14.332 { 00:29:14.332 "params": { 00:29:14.332 "name": "Nvme$subsystem", 00:29:14.332 "trtype": "$TEST_TRANSPORT", 00:29:14.332 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:14.332 "adrfam": "ipv4", 00:29:14.332 "trsvcid": "$NVMF_PORT", 00:29:14.332 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:14.332 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:14.332 "hdgst": ${hdgst:-false}, 00:29:14.332 "ddgst": ${ddgst:-false} 00:29:14.332 }, 00:29:14.332 "method": "bdev_nvme_attach_controller" 00:29:14.332 } 00:29:14.332 EOF 00:29:14.332 )") 00:29:14.332 01:06:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:29:14.332 01:06:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:14.332 01:06:46 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:14.332 { 00:29:14.332 "params": { 00:29:14.332 "name": "Nvme$subsystem", 00:29:14.332 "trtype": "$TEST_TRANSPORT", 00:29:14.332 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:14.332 "adrfam": "ipv4", 00:29:14.332 "trsvcid": "$NVMF_PORT", 00:29:14.332 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:14.332 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:14.332 "hdgst": ${hdgst:-false}, 00:29:14.332 "ddgst": ${ddgst:-false} 00:29:14.332 }, 00:29:14.332 "method": "bdev_nvme_attach_controller" 00:29:14.332 } 00:29:14.332 EOF 00:29:14.332 )") 00:29:14.332 01:06:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:29:14.332 01:06:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:14.332 01:06:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:14.332 { 00:29:14.332 "params": { 00:29:14.332 "name": "Nvme$subsystem", 00:29:14.332 "trtype": "$TEST_TRANSPORT", 00:29:14.332 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:14.332 "adrfam": "ipv4", 00:29:14.332 "trsvcid": "$NVMF_PORT", 00:29:14.332 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:14.332 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:14.332 "hdgst": ${hdgst:-false}, 00:29:14.332 "ddgst": ${ddgst:-false} 00:29:14.332 }, 00:29:14.332 "method": "bdev_nvme_attach_controller" 00:29:14.332 } 00:29:14.332 EOF 00:29:14.332 )") 00:29:14.332 01:06:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:29:14.332 01:06:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:14.332 01:06:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:14.332 { 00:29:14.332 "params": { 00:29:14.332 "name": "Nvme$subsystem", 00:29:14.332 "trtype": "$TEST_TRANSPORT", 00:29:14.332 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:14.332 "adrfam": "ipv4", 00:29:14.332 "trsvcid": "$NVMF_PORT", 00:29:14.332 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:14.332 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:14.332 "hdgst": ${hdgst:-false}, 00:29:14.332 "ddgst": ${ddgst:-false} 00:29:14.332 }, 00:29:14.332 "method": "bdev_nvme_attach_controller" 00:29:14.332 } 00:29:14.332 EOF 00:29:14.332 )") 00:29:14.332 01:06:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:29:14.332 01:06:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 
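The repeated config+=("$(cat <<-EOF ...)") expansions above are gen_nvmf_target_json from nvmf/common.sh at work: each pass of the loop renders one per-subsystem parameter block for bdev_nvme_attach_controller, the blocks are later joined with IFS=',', and jq validates and pretty-prints the assembled document. A minimal stand-alone sketch of the same pattern follows; it substitutes printf for the heredocs and omits the outer subsystems/config wrapper, so it illustrates the technique rather than reproducing the exact nvmf/common.sh code, and it assumes TEST_TRANSPORT, NVMF_FIRST_TARGET_IP and NVMF_PORT are exported by the test environment as they are in this run.

#!/usr/bin/env bash
# Sketch only: emit one JSON params block per requested subsystem index and
# let jq validate/pretty-print the comma-joined result.
gen_target_json_sketch() {
    local s blocks=()
    for s in "${@:-1}"; do
        blocks+=("$(printf '{"params":{"name":"Nvme%s","trtype":"%s","traddr":"%s","adrfam":"ipv4","trsvcid":"%s","subnqn":"nqn.2016-06.io.spdk:cnode%s","hostnqn":"nqn.2016-06.io.spdk:host%s","hdgst":false,"ddgst":false},"method":"bdev_nvme_attach_controller"}' \
            "$s" "${TEST_TRANSPORT:-tcp}" "${NVMF_FIRST_TARGET_IP:-10.0.0.2}" "${NVMF_PORT:-4420}" "$s" "$s")")
    done
    local IFS=,
    printf '[%s]\n' "${blocks[*]}" | jq .   # jq exits non-zero if the JSON is malformed
}

gen_target_json_sketch 1 2 3   # prints three attach-controller entries, Nvme1..Nvme3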
00:29:14.332 01:06:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:29:14.332 01:06:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:29:14.332 "params": { 00:29:14.332 "name": "Nvme1", 00:29:14.332 "trtype": "tcp", 00:29:14.332 "traddr": "10.0.0.2", 00:29:14.332 "adrfam": "ipv4", 00:29:14.332 "trsvcid": "4420", 00:29:14.332 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:14.332 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:14.332 "hdgst": false, 00:29:14.332 "ddgst": false 00:29:14.332 }, 00:29:14.332 "method": "bdev_nvme_attach_controller" 00:29:14.332 },{ 00:29:14.332 "params": { 00:29:14.332 "name": "Nvme2", 00:29:14.332 "trtype": "tcp", 00:29:14.332 "traddr": "10.0.0.2", 00:29:14.332 "adrfam": "ipv4", 00:29:14.332 "trsvcid": "4420", 00:29:14.332 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:29:14.332 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:29:14.332 "hdgst": false, 00:29:14.332 "ddgst": false 00:29:14.332 }, 00:29:14.332 "method": "bdev_nvme_attach_controller" 00:29:14.332 },{ 00:29:14.332 "params": { 00:29:14.332 "name": "Nvme3", 00:29:14.332 "trtype": "tcp", 00:29:14.332 "traddr": "10.0.0.2", 00:29:14.332 "adrfam": "ipv4", 00:29:14.332 "trsvcid": "4420", 00:29:14.332 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:29:14.332 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:29:14.332 "hdgst": false, 00:29:14.332 "ddgst": false 00:29:14.332 }, 00:29:14.332 "method": "bdev_nvme_attach_controller" 00:29:14.332 },{ 00:29:14.332 "params": { 00:29:14.332 "name": "Nvme4", 00:29:14.332 "trtype": "tcp", 00:29:14.332 "traddr": "10.0.0.2", 00:29:14.332 "adrfam": "ipv4", 00:29:14.332 "trsvcid": "4420", 00:29:14.332 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:29:14.332 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:29:14.332 "hdgst": false, 00:29:14.332 "ddgst": false 00:29:14.332 }, 00:29:14.332 "method": "bdev_nvme_attach_controller" 00:29:14.332 },{ 00:29:14.332 "params": { 00:29:14.332 "name": "Nvme5", 00:29:14.332 "trtype": "tcp", 00:29:14.332 "traddr": "10.0.0.2", 00:29:14.332 "adrfam": "ipv4", 00:29:14.332 "trsvcid": "4420", 00:29:14.333 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:29:14.333 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:29:14.333 "hdgst": false, 00:29:14.333 "ddgst": false 00:29:14.333 }, 00:29:14.333 "method": "bdev_nvme_attach_controller" 00:29:14.333 },{ 00:29:14.333 "params": { 00:29:14.333 "name": "Nvme6", 00:29:14.333 "trtype": "tcp", 00:29:14.333 "traddr": "10.0.0.2", 00:29:14.333 "adrfam": "ipv4", 00:29:14.333 "trsvcid": "4420", 00:29:14.333 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:29:14.333 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:29:14.333 "hdgst": false, 00:29:14.333 "ddgst": false 00:29:14.333 }, 00:29:14.333 "method": "bdev_nvme_attach_controller" 00:29:14.333 },{ 00:29:14.333 "params": { 00:29:14.333 "name": "Nvme7", 00:29:14.333 "trtype": "tcp", 00:29:14.333 "traddr": "10.0.0.2", 00:29:14.333 "adrfam": "ipv4", 00:29:14.333 "trsvcid": "4420", 00:29:14.333 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:29:14.333 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:29:14.333 "hdgst": false, 00:29:14.333 "ddgst": false 00:29:14.333 }, 00:29:14.333 "method": "bdev_nvme_attach_controller" 00:29:14.333 },{ 00:29:14.333 "params": { 00:29:14.333 "name": "Nvme8", 00:29:14.333 "trtype": "tcp", 00:29:14.333 "traddr": "10.0.0.2", 00:29:14.333 "adrfam": "ipv4", 00:29:14.333 "trsvcid": "4420", 00:29:14.333 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:29:14.333 "hostnqn": "nqn.2016-06.io.spdk:host8", 
00:29:14.333 "hdgst": false, 00:29:14.333 "ddgst": false 00:29:14.333 }, 00:29:14.333 "method": "bdev_nvme_attach_controller" 00:29:14.333 },{ 00:29:14.333 "params": { 00:29:14.333 "name": "Nvme9", 00:29:14.333 "trtype": "tcp", 00:29:14.333 "traddr": "10.0.0.2", 00:29:14.333 "adrfam": "ipv4", 00:29:14.333 "trsvcid": "4420", 00:29:14.333 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:29:14.333 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:29:14.333 "hdgst": false, 00:29:14.333 "ddgst": false 00:29:14.333 }, 00:29:14.333 "method": "bdev_nvme_attach_controller" 00:29:14.333 },{ 00:29:14.333 "params": { 00:29:14.333 "name": "Nvme10", 00:29:14.333 "trtype": "tcp", 00:29:14.333 "traddr": "10.0.0.2", 00:29:14.333 "adrfam": "ipv4", 00:29:14.333 "trsvcid": "4420", 00:29:14.333 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:29:14.333 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:29:14.333 "hdgst": false, 00:29:14.333 "ddgst": false 00:29:14.333 }, 00:29:14.333 "method": "bdev_nvme_attach_controller" 00:29:14.333 }' 00:29:14.333 [2024-12-06 01:06:46.843506] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 24.03.0 initialization... 00:29:14.333 [2024-12-06 01:06:46.843641] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:29:14.333 [2024-12-06 01:06:46.981598] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:14.591 [2024-12-06 01:06:47.112895] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:16.489 01:06:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:16.489 01:06:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:29:16.489 01:06:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@81 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:29:16.489 01:06:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:16.489 01:06:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:16.489 01:06:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:16.489 01:06:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # kill -9 3054134 00:29:16.489 01:06:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@85 -- # rm -f /var/run/spdk_bdev1 00:29:16.489 01:06:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # sleep 1 00:29:17.422 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 74: 3054134 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:29:17.422 01:06:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@89 -- # kill -0 3053825 00:29:17.422 01:06:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:29:17.422 01:06:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # gen_nvmf_target_json 
1 2 3 4 5 6 7 8 9 10 00:29:17.422 01:06:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:29:17.422 01:06:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:29:17.422 01:06:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:17.422 01:06:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:17.422 { 00:29:17.422 "params": { 00:29:17.422 "name": "Nvme$subsystem", 00:29:17.422 "trtype": "$TEST_TRANSPORT", 00:29:17.422 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:17.422 "adrfam": "ipv4", 00:29:17.423 "trsvcid": "$NVMF_PORT", 00:29:17.423 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:17.423 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:17.423 "hdgst": ${hdgst:-false}, 00:29:17.423 "ddgst": ${ddgst:-false} 00:29:17.423 }, 00:29:17.423 "method": "bdev_nvme_attach_controller" 00:29:17.423 } 00:29:17.423 EOF 00:29:17.423 )") 00:29:17.423 01:06:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:29:17.423 01:06:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:17.423 01:06:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:17.423 { 00:29:17.423 "params": { 00:29:17.423 "name": "Nvme$subsystem", 00:29:17.423 "trtype": "$TEST_TRANSPORT", 00:29:17.423 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:17.423 "adrfam": "ipv4", 00:29:17.423 "trsvcid": "$NVMF_PORT", 00:29:17.423 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:17.423 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:17.423 "hdgst": ${hdgst:-false}, 00:29:17.423 "ddgst": ${ddgst:-false} 00:29:17.423 }, 00:29:17.423 "method": "bdev_nvme_attach_controller" 00:29:17.423 } 00:29:17.423 EOF 00:29:17.423 )") 00:29:17.423 01:06:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:29:17.423 01:06:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:17.423 01:06:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:17.423 { 00:29:17.423 "params": { 00:29:17.423 "name": "Nvme$subsystem", 00:29:17.423 "trtype": "$TEST_TRANSPORT", 00:29:17.423 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:17.423 "adrfam": "ipv4", 00:29:17.423 "trsvcid": "$NVMF_PORT", 00:29:17.423 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:17.423 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:17.423 "hdgst": ${hdgst:-false}, 00:29:17.423 "ddgst": ${ddgst:-false} 00:29:17.423 }, 00:29:17.423 "method": "bdev_nvme_attach_controller" 00:29:17.423 } 00:29:17.423 EOF 00:29:17.423 )") 00:29:17.423 01:06:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:29:17.423 01:06:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:17.423 01:06:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:17.423 { 00:29:17.423 "params": { 00:29:17.423 "name": "Nvme$subsystem", 00:29:17.423 "trtype": "$TEST_TRANSPORT", 00:29:17.423 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:17.423 "adrfam": "ipv4", 00:29:17.423 
"trsvcid": "$NVMF_PORT", 00:29:17.423 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:17.423 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:17.423 "hdgst": ${hdgst:-false}, 00:29:17.423 "ddgst": ${ddgst:-false} 00:29:17.423 }, 00:29:17.423 "method": "bdev_nvme_attach_controller" 00:29:17.423 } 00:29:17.423 EOF 00:29:17.423 )") 00:29:17.423 01:06:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:29:17.423 01:06:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:17.423 01:06:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:17.423 { 00:29:17.423 "params": { 00:29:17.423 "name": "Nvme$subsystem", 00:29:17.423 "trtype": "$TEST_TRANSPORT", 00:29:17.423 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:17.423 "adrfam": "ipv4", 00:29:17.423 "trsvcid": "$NVMF_PORT", 00:29:17.423 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:17.423 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:17.423 "hdgst": ${hdgst:-false}, 00:29:17.423 "ddgst": ${ddgst:-false} 00:29:17.423 }, 00:29:17.423 "method": "bdev_nvme_attach_controller" 00:29:17.423 } 00:29:17.423 EOF 00:29:17.423 )") 00:29:17.423 01:06:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:29:17.423 01:06:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:17.423 01:06:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:17.423 { 00:29:17.423 "params": { 00:29:17.423 "name": "Nvme$subsystem", 00:29:17.423 "trtype": "$TEST_TRANSPORT", 00:29:17.423 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:17.423 "adrfam": "ipv4", 00:29:17.423 "trsvcid": "$NVMF_PORT", 00:29:17.423 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:17.423 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:17.423 "hdgst": ${hdgst:-false}, 00:29:17.423 "ddgst": ${ddgst:-false} 00:29:17.423 }, 00:29:17.423 "method": "bdev_nvme_attach_controller" 00:29:17.423 } 00:29:17.423 EOF 00:29:17.423 )") 00:29:17.423 01:06:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:29:17.423 01:06:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:17.423 01:06:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:17.423 { 00:29:17.423 "params": { 00:29:17.423 "name": "Nvme$subsystem", 00:29:17.423 "trtype": "$TEST_TRANSPORT", 00:29:17.423 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:17.423 "adrfam": "ipv4", 00:29:17.423 "trsvcid": "$NVMF_PORT", 00:29:17.423 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:17.423 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:17.423 "hdgst": ${hdgst:-false}, 00:29:17.423 "ddgst": ${ddgst:-false} 00:29:17.423 }, 00:29:17.423 "method": "bdev_nvme_attach_controller" 00:29:17.423 } 00:29:17.423 EOF 00:29:17.423 )") 00:29:17.423 01:06:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:29:17.423 01:06:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:17.423 01:06:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:17.423 { 00:29:17.423 
"params": { 00:29:17.423 "name": "Nvme$subsystem", 00:29:17.423 "trtype": "$TEST_TRANSPORT", 00:29:17.423 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:17.423 "adrfam": "ipv4", 00:29:17.423 "trsvcid": "$NVMF_PORT", 00:29:17.423 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:17.423 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:17.423 "hdgst": ${hdgst:-false}, 00:29:17.423 "ddgst": ${ddgst:-false} 00:29:17.423 }, 00:29:17.423 "method": "bdev_nvme_attach_controller" 00:29:17.423 } 00:29:17.423 EOF 00:29:17.423 )") 00:29:17.423 01:06:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:29:17.423 01:06:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:17.423 01:06:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:17.423 { 00:29:17.423 "params": { 00:29:17.423 "name": "Nvme$subsystem", 00:29:17.423 "trtype": "$TEST_TRANSPORT", 00:29:17.423 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:17.423 "adrfam": "ipv4", 00:29:17.423 "trsvcid": "$NVMF_PORT", 00:29:17.423 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:17.423 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:17.423 "hdgst": ${hdgst:-false}, 00:29:17.423 "ddgst": ${ddgst:-false} 00:29:17.423 }, 00:29:17.423 "method": "bdev_nvme_attach_controller" 00:29:17.423 } 00:29:17.423 EOF 00:29:17.423 )") 00:29:17.423 01:06:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:29:17.423 01:06:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:17.423 01:06:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:17.423 { 00:29:17.423 "params": { 00:29:17.423 "name": "Nvme$subsystem", 00:29:17.423 "trtype": "$TEST_TRANSPORT", 00:29:17.423 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:17.423 "adrfam": "ipv4", 00:29:17.423 "trsvcid": "$NVMF_PORT", 00:29:17.423 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:17.423 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:17.423 "hdgst": ${hdgst:-false}, 00:29:17.423 "ddgst": ${ddgst:-false} 00:29:17.423 }, 00:29:17.423 "method": "bdev_nvme_attach_controller" 00:29:17.423 } 00:29:17.423 EOF 00:29:17.423 )") 00:29:17.423 01:06:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:29:17.423 01:06:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 
00:29:17.423 01:06:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:29:17.423 01:06:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:29:17.423 "params": { 00:29:17.423 "name": "Nvme1", 00:29:17.423 "trtype": "tcp", 00:29:17.423 "traddr": "10.0.0.2", 00:29:17.423 "adrfam": "ipv4", 00:29:17.423 "trsvcid": "4420", 00:29:17.423 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:17.423 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:17.423 "hdgst": false, 00:29:17.423 "ddgst": false 00:29:17.423 }, 00:29:17.423 "method": "bdev_nvme_attach_controller" 00:29:17.423 },{ 00:29:17.423 "params": { 00:29:17.423 "name": "Nvme2", 00:29:17.423 "trtype": "tcp", 00:29:17.424 "traddr": "10.0.0.2", 00:29:17.424 "adrfam": "ipv4", 00:29:17.424 "trsvcid": "4420", 00:29:17.424 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:29:17.424 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:29:17.424 "hdgst": false, 00:29:17.424 "ddgst": false 00:29:17.424 }, 00:29:17.424 "method": "bdev_nvme_attach_controller" 00:29:17.424 },{ 00:29:17.424 "params": { 00:29:17.424 "name": "Nvme3", 00:29:17.424 "trtype": "tcp", 00:29:17.424 "traddr": "10.0.0.2", 00:29:17.424 "adrfam": "ipv4", 00:29:17.424 "trsvcid": "4420", 00:29:17.424 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:29:17.424 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:29:17.424 "hdgst": false, 00:29:17.424 "ddgst": false 00:29:17.424 }, 00:29:17.424 "method": "bdev_nvme_attach_controller" 00:29:17.424 },{ 00:29:17.424 "params": { 00:29:17.424 "name": "Nvme4", 00:29:17.424 "trtype": "tcp", 00:29:17.424 "traddr": "10.0.0.2", 00:29:17.424 "adrfam": "ipv4", 00:29:17.424 "trsvcid": "4420", 00:29:17.424 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:29:17.424 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:29:17.424 "hdgst": false, 00:29:17.424 "ddgst": false 00:29:17.424 }, 00:29:17.424 "method": "bdev_nvme_attach_controller" 00:29:17.424 },{ 00:29:17.424 "params": { 00:29:17.424 "name": "Nvme5", 00:29:17.424 "trtype": "tcp", 00:29:17.424 "traddr": "10.0.0.2", 00:29:17.424 "adrfam": "ipv4", 00:29:17.424 "trsvcid": "4420", 00:29:17.424 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:29:17.424 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:29:17.424 "hdgst": false, 00:29:17.424 "ddgst": false 00:29:17.424 }, 00:29:17.424 "method": "bdev_nvme_attach_controller" 00:29:17.424 },{ 00:29:17.424 "params": { 00:29:17.424 "name": "Nvme6", 00:29:17.424 "trtype": "tcp", 00:29:17.424 "traddr": "10.0.0.2", 00:29:17.424 "adrfam": "ipv4", 00:29:17.424 "trsvcid": "4420", 00:29:17.424 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:29:17.424 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:29:17.424 "hdgst": false, 00:29:17.424 "ddgst": false 00:29:17.424 }, 00:29:17.424 "method": "bdev_nvme_attach_controller" 00:29:17.424 },{ 00:29:17.424 "params": { 00:29:17.424 "name": "Nvme7", 00:29:17.424 "trtype": "tcp", 00:29:17.424 "traddr": "10.0.0.2", 00:29:17.424 "adrfam": "ipv4", 00:29:17.424 "trsvcid": "4420", 00:29:17.424 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:29:17.424 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:29:17.424 "hdgst": false, 00:29:17.424 "ddgst": false 00:29:17.424 }, 00:29:17.424 "method": "bdev_nvme_attach_controller" 00:29:17.424 },{ 00:29:17.424 "params": { 00:29:17.424 "name": "Nvme8", 00:29:17.424 "trtype": "tcp", 00:29:17.424 "traddr": "10.0.0.2", 00:29:17.424 "adrfam": "ipv4", 00:29:17.424 "trsvcid": "4420", 00:29:17.424 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:29:17.424 "hostnqn": "nqn.2016-06.io.spdk:host8", 
00:29:17.424 "hdgst": false, 00:29:17.424 "ddgst": false 00:29:17.424 }, 00:29:17.424 "method": "bdev_nvme_attach_controller" 00:29:17.424 },{ 00:29:17.424 "params": { 00:29:17.424 "name": "Nvme9", 00:29:17.424 "trtype": "tcp", 00:29:17.424 "traddr": "10.0.0.2", 00:29:17.424 "adrfam": "ipv4", 00:29:17.424 "trsvcid": "4420", 00:29:17.424 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:29:17.424 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:29:17.424 "hdgst": false, 00:29:17.424 "ddgst": false 00:29:17.424 }, 00:29:17.424 "method": "bdev_nvme_attach_controller" 00:29:17.424 },{ 00:29:17.424 "params": { 00:29:17.424 "name": "Nvme10", 00:29:17.424 "trtype": "tcp", 00:29:17.424 "traddr": "10.0.0.2", 00:29:17.424 "adrfam": "ipv4", 00:29:17.424 "trsvcid": "4420", 00:29:17.424 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:29:17.424 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:29:17.424 "hdgst": false, 00:29:17.424 "ddgst": false 00:29:17.424 }, 00:29:17.424 "method": "bdev_nvme_attach_controller" 00:29:17.424 }' 00:29:17.424 [2024-12-06 01:06:49.862557] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 24.03.0 initialization... 00:29:17.424 [2024-12-06 01:06:49.862686] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3054550 ] 00:29:17.424 [2024-12-06 01:06:50.003125] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:17.683 [2024-12-06 01:06:50.133495] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:19.581 Running I/O for 1 seconds... 00:29:20.771 1491.00 IOPS, 93.19 MiB/s 00:29:20.771 Latency(us) 00:29:20.771 [2024-12-06T00:06:53.480Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:20.771 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:20.771 Verification LBA range: start 0x0 length 0x400 00:29:20.771 Nvme1n1 : 1.07 182.47 11.40 0.00 0.00 344340.04 5024.43 310689.19 00:29:20.771 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:20.771 Verification LBA range: start 0x0 length 0x400 00:29:20.771 Nvme2n1 : 1.21 211.14 13.20 0.00 0.00 294470.16 20291.89 307582.29 00:29:20.771 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:20.771 Verification LBA range: start 0x0 length 0x400 00:29:20.771 Nvme3n1 : 1.20 212.45 13.28 0.00 0.00 285855.86 20680.25 299815.06 00:29:20.771 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:20.771 Verification LBA range: start 0x0 length 0x400 00:29:20.771 Nvme4n1 : 1.19 215.63 13.48 0.00 0.00 278897.78 21068.61 304475.40 00:29:20.771 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:20.771 Verification LBA range: start 0x0 length 0x400 00:29:20.771 Nvme5n1 : 1.13 169.56 10.60 0.00 0.00 347008.06 24272.59 307582.29 00:29:20.771 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:20.771 Verification LBA range: start 0x0 length 0x400 00:29:20.771 Nvme6n1 : 1.12 171.70 10.73 0.00 0.00 335866.44 23884.23 306028.85 00:29:20.771 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:20.771 Verification LBA range: start 0x0 length 0x400 00:29:20.771 Nvme7n1 : 1.23 208.26 13.02 0.00 0.00 274556.59 23884.23 299815.06 00:29:20.771 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:20.771 Verification 
LBA range: start 0x0 length 0x400 00:29:20.771 Nvme8n1 : 1.21 211.66 13.23 0.00 0.00 264813.23 25049.32 316902.97 00:29:20.771 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:20.771 Verification LBA range: start 0x0 length 0x400 00:29:20.771 Nvme9n1 : 1.24 207.04 12.94 0.00 0.00 266692.84 25049.32 321563.31 00:29:20.771 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:20.771 Verification LBA range: start 0x0 length 0x400 00:29:20.771 Nvme10n1 : 1.24 206.45 12.90 0.00 0.00 262700.37 23107.51 332437.43 00:29:20.771 [2024-12-06T00:06:53.480Z] =================================================================================================================== 00:29:20.771 [2024-12-06T00:06:53.480Z] Total : 1996.35 124.77 0.00 0.00 291807.41 5024.43 332437.43 00:29:21.705 01:06:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@95 -- # stoptarget 00:29:21.705 01:06:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:29:21.705 01:06:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:29:21.705 01:06:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:29:21.705 01:06:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@46 -- # nvmftestfini 00:29:21.705 01:06:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:21.705 01:06:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # sync 00:29:21.705 01:06:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:21.705 01:06:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set +e 00:29:21.705 01:06:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:21.705 01:06:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:21.705 rmmod nvme_tcp 00:29:21.705 rmmod nvme_fabrics 00:29:21.705 rmmod nvme_keyring 00:29:21.705 01:06:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:21.705 01:06:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@128 -- # set -e 00:29:21.705 01:06:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@129 -- # return 0 00:29:21.705 01:06:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@517 -- # '[' -n 3053825 ']' 00:29:21.705 01:06:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@518 -- # killprocess 3053825 00:29:21.705 01:06:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # '[' -z 3053825 ']' 00:29:21.705 01:06:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@958 -- # kill -0 3053825 00:29:21.705 01:06:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # uname 00:29:21.705 01:06:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:21.705 01:06:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3053825 00:29:21.705 01:06:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:21.705 01:06:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:21.705 01:06:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3053825' 00:29:21.705 killing process with pid 3053825 00:29:21.705 01:06:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@973 -- # kill 3053825 00:29:21.705 01:06:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@978 -- # wait 3053825 00:29:24.997 01:06:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:24.997 01:06:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:24.997 01:06:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:24.997 01:06:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # iptr 00:29:24.997 01:06:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-save 00:29:24.997 01:06:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:24.997 01:06:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-restore 00:29:24.997 01:06:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:24.997 01:06:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:24.997 01:06:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:24.997 01:06:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:24.997 01:06:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:26.901 01:06:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:26.901 00:29:26.901 real 0m16.737s 00:29:26.901 user 0m52.993s 00:29:26.901 sys 0m3.819s 00:29:26.901 01:06:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:26.901 01:06:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:26.901 ************************************ 00:29:26.901 END TEST nvmf_shutdown_tc1 00:29:26.901 ************************************ 00:29:26.901 01:06:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@163 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:29:26.901 01:06:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:29:26.901 01:06:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 
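At this point tc1 tears itself down: nvmftestfini unloads the kernel NVMe/TCP modules (the rmmod nvme_tcp / nvme_fabrics / nvme_keyring lines above), kills the nvmf target process 3053825, restores the iptables rules carrying the SPDK_NVMF comment, and removes the test namespace before the timing summary is printed. A condensed sketch of that sequence is below; the helper name and the error handling are illustrative, while the individual commands mirror what the trace shows.

nvmf_teardown_sketch() {
    sync
    modprobe -v -r nvme-tcp                 # also drops nvme_fabrics/nvme_keyring once unused
    modprobe -v -r nvme-fabrics || true
    kill "$nvmfpid" && wait "$nvmfpid"      # nvmfpid: the nvmf_tgt started for this test
    iptables-save | grep -v SPDK_NVMF | iptables-restore   # keep everything except the SPDK-tagged rules
    ip netns delete cvl_0_0_ns_spdk 2>/dev/null || true
    ip -4 addr flush cvl_0_1 2>/dev/null || true
}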
00:29:26.901 01:06:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:29:26.901 ************************************ 00:29:26.901 START TEST nvmf_shutdown_tc2 00:29:26.901 ************************************ 00:29:26.901 01:06:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc2 00:29:26.901 01:06:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@100 -- # starttarget 00:29:26.901 01:06:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@16 -- # nvmftestinit 00:29:26.901 01:06:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:26.901 01:06:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:26.901 01:06:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:26.901 01:06:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:26.901 01:06:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:26.901 01:06:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:26.901 01:06:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:26.901 01:06:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:26.901 01:06:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:26.901 01:06:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:26.901 01:06:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@309 -- # xtrace_disable 00:29:26.901 01:06:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:26.901 01:06:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:26.901 01:06:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # pci_devs=() 00:29:26.901 01:06:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:26.901 01:06:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:26.901 01:06:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:26.901 01:06:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:26.901 01:06:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:26.901 01:06:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # net_devs=() 00:29:26.901 01:06:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:26.901 01:06:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # e810=() 00:29:26.901 01:06:59 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # local -ga e810 00:29:26.901 01:06:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # x722=() 00:29:26.901 01:06:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # local -ga x722 00:29:26.901 01:06:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # mlx=() 00:29:26.901 01:06:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # local -ga mlx 00:29:26.901 01:06:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:26.901 01:06:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:26.901 01:06:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:26.901 01:06:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:26.901 01:06:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:26.901 01:06:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:26.901 01:06:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:26.901 01:06:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:26.901 01:06:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:26.901 01:06:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:26.901 01:06:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:26.901 01:06:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:26.901 01:06:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:26.901 01:06:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:26.901 01:06:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:26.901 01:06:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:26.901 01:06:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:26.901 01:06:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:26.901 01:06:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:26.901 01:06:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:29:26.901 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:29:26.901 01:06:59 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:26.901 01:06:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:26.901 01:06:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:26.901 01:06:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:26.901 01:06:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:26.901 01:06:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:26.901 01:06:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:29:26.901 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:29:26.901 01:06:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:26.901 01:06:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:26.901 01:06:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:26.901 01:06:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:26.901 01:06:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:26.901 01:06:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:26.901 01:06:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:26.901 01:06:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:26.901 01:06:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:26.901 01:06:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:26.902 01:06:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:26.902 01:06:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:26.902 01:06:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:26.902 01:06:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:26.902 01:06:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:26.902 01:06:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:29:26.902 Found net devices under 0000:0a:00.0: cvl_0_0 00:29:26.902 01:06:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:26.902 01:06:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:26.902 01:06:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:26.902 01:06:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:26.902 01:06:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:26.902 01:06:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:26.902 01:06:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:26.902 01:06:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:26.902 01:06:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:29:26.902 Found net devices under 0000:0a:00.1: cvl_0_1 00:29:26.902 01:06:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:26.902 01:06:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:26.902 01:06:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # is_hw=yes 00:29:26.902 01:06:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:26.902 01:06:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:26.902 01:06:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:26.902 01:06:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:26.902 01:06:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:26.902 01:06:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:26.902 01:06:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:26.902 01:06:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:26.902 01:06:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:26.902 01:06:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:26.902 01:06:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:26.902 01:06:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:26.902 01:06:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:26.902 01:06:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:26.902 01:06:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:26.902 01:06:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:26.902 01:06:59 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:26.902 01:06:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:26.902 01:06:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:26.902 01:06:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:26.902 01:06:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:26.902 01:06:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:26.902 01:06:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:26.902 01:06:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:26.902 01:06:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:26.902 01:06:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:26.902 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:26.902 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.221 ms 00:29:26.902 00:29:26.902 --- 10.0.0.2 ping statistics --- 00:29:26.902 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:26.902 rtt min/avg/max/mdev = 0.221/0.221/0.221/0.000 ms 00:29:26.902 01:06:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:26.902 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:26.902 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.046 ms 00:29:26.902 00:29:26.902 --- 10.0.0.1 ping statistics --- 00:29:26.902 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:26.902 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:29:26.902 01:06:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:26.902 01:06:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # return 0 00:29:26.902 01:06:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:26.902 01:06:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:26.902 01:06:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:26.902 01:06:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:26.902 01:06:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:26.902 01:06:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:26.902 01:06:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:26.902 01:06:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:29:26.902 01:06:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:26.902 01:06:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:26.902 01:06:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:26.902 01:06:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@509 -- # nvmfpid=3055717 00:29:26.902 01:06:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:29:26.902 01:06:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@510 -- # waitforlisten 3055717 00:29:26.902 01:06:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 3055717 ']' 00:29:26.902 01:06:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:26.902 01:06:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:26.902 01:06:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:26.902 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
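tc2 then re-runs nvmftestinit: the two ice ports at 0000:0a:00.0/.1 are mapped to cvl_0_0 and cvl_0_1, the target-side port is moved into its own network namespace, both ends get 10.0.0.x/24 addresses, reachability is checked with single-packet pings (0.221 ms and 0.046 ms round trips above), and nvmf_tgt is finally started inside the namespace. Collected from the trace above, and with the workspace path shortened for readability, the sequence amounts to:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # NVMe/TCP listener port
ping -c 1 10.0.0.2                                                  # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target -> initiator
ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &

Keeping the target interface in a separate namespace leaves the SPDK target owning cvl_0_0 outright while the initiator-side tools keep using cvl_0_1 in the default namespace.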
00:29:26.902 01:06:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:26.902 01:06:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:26.902 [2024-12-06 01:06:59.413667] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 24.03.0 initialization... 00:29:26.902 [2024-12-06 01:06:59.413805] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:26.902 [2024-12-06 01:06:59.557892] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:27.161 [2024-12-06 01:06:59.684619] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:27.161 [2024-12-06 01:06:59.684718] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:27.161 [2024-12-06 01:06:59.684741] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:27.161 [2024-12-06 01:06:59.684761] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:27.161 [2024-12-06 01:06:59.684778] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:27.161 [2024-12-06 01:06:59.687531] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:27.161 [2024-12-06 01:06:59.687595] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:27.161 [2024-12-06 01:06:59.687668] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:27.161 [2024-12-06 01:06:59.687674] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:29:27.727 01:07:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:27.727 01:07:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:29:27.727 01:07:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:27.727 01:07:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:27.727 01:07:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:27.727 01:07:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:27.727 01:07:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:27.727 01:07:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:27.727 01:07:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:27.986 [2024-12-06 01:07:00.438995] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:27.986 01:07:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:27.986 01:07:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:29:27.986 01:07:00 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:29:27.986 01:07:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:27.986 01:07:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:27.986 01:07:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:29:27.986 01:07:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:27.986 01:07:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:29:27.986 01:07:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:27.986 01:07:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:29:27.986 01:07:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:27.986 01:07:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:29:27.986 01:07:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:27.986 01:07:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:29:27.986 01:07:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:27.986 01:07:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:29:27.986 01:07:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:27.986 01:07:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:29:27.986 01:07:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:27.986 01:07:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:29:27.986 01:07:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:27.986 01:07:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:29:27.986 01:07:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:27.986 01:07:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:29:27.986 01:07:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:27.986 01:07:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:29:27.987 01:07:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # rpc_cmd 00:29:27.987 01:07:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:27.987 01:07:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:27.987 Malloc1 
00:29:27.987 [2024-12-06 01:07:00.597640] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:27.987 Malloc2 00:29:28.246 Malloc3 00:29:28.246 Malloc4 00:29:28.505 Malloc5 00:29:28.505 Malloc6 00:29:28.505 Malloc7 00:29:28.764 Malloc8 00:29:28.764 Malloc9 00:29:29.023 Malloc10 00:29:29.023 01:07:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:29.023 01:07:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:29:29.023 01:07:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:29.023 01:07:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:29.023 01:07:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # perfpid=3056032 00:29:29.023 01:07:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # waitforlisten 3056032 /var/tmp/bdevperf.sock 00:29:29.023 01:07:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 3056032 ']' 00:29:29.023 01:07:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:29.023 01:07:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:29:29.023 01:07:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:29:29.023 01:07:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:29.023 01:07:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # config=() 00:29:29.023 01:07:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:29:29.023 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
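At this point tc2 launches bdevperf (perfpid 3056032) against the ten listeners just created; the controller list it consumes is generated on the fly by gen_nvmf_target_json, whose trace follows. The helper builds one JSON fragment per subsystem with a heredoc. A minimal sketch of that pattern, reduced from the trace below (variable and function names are the trace's own; the real helper's wrapper also carries a few extra config entries that this sketch omits):

config=()
for subsystem in "${@:-1}"; do
    # one bdev_nvme_attach_controller entry per NVMe-oF subsystem (cnode1..cnode10 in this run)
    config+=("$(
        cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
    )")
done
# join the fragments with commas, wrap them into a bdev-subsystem config document, pretty-print with jq
# (mirrors the IFS=, / printf / jq . steps visible further down in the trace)
(IFS=,; printf '{"subsystems":[{"subsystem":"bdev","config":[%s]}]}\n' "${config[*]}") | jq .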
00:29:29.023 01:07:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # local subsystem config 00:29:29.023 01:07:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:29.023 01:07:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:29.023 01:07:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:29.023 01:07:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:29.023 { 00:29:29.023 "params": { 00:29:29.023 "name": "Nvme$subsystem", 00:29:29.023 "trtype": "$TEST_TRANSPORT", 00:29:29.023 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:29.023 "adrfam": "ipv4", 00:29:29.023 "trsvcid": "$NVMF_PORT", 00:29:29.023 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:29.023 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:29.023 "hdgst": ${hdgst:-false}, 00:29:29.023 "ddgst": ${ddgst:-false} 00:29:29.023 }, 00:29:29.023 "method": "bdev_nvme_attach_controller" 00:29:29.023 } 00:29:29.024 EOF 00:29:29.024 )") 00:29:29.024 01:07:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:29:29.024 01:07:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:29.024 01:07:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:29.024 { 00:29:29.024 "params": { 00:29:29.024 "name": "Nvme$subsystem", 00:29:29.024 "trtype": "$TEST_TRANSPORT", 00:29:29.024 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:29.024 "adrfam": "ipv4", 00:29:29.024 "trsvcid": "$NVMF_PORT", 00:29:29.024 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:29.024 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:29.024 "hdgst": ${hdgst:-false}, 00:29:29.024 "ddgst": ${ddgst:-false} 00:29:29.024 }, 00:29:29.024 "method": "bdev_nvme_attach_controller" 00:29:29.024 } 00:29:29.024 EOF 00:29:29.024 )") 00:29:29.024 01:07:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:29:29.024 01:07:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:29.024 01:07:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:29.024 { 00:29:29.024 "params": { 00:29:29.024 "name": "Nvme$subsystem", 00:29:29.024 "trtype": "$TEST_TRANSPORT", 00:29:29.024 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:29.024 "adrfam": "ipv4", 00:29:29.024 "trsvcid": "$NVMF_PORT", 00:29:29.024 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:29.024 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:29.024 "hdgst": ${hdgst:-false}, 00:29:29.024 "ddgst": ${ddgst:-false} 00:29:29.024 }, 00:29:29.024 "method": "bdev_nvme_attach_controller" 00:29:29.024 } 00:29:29.024 EOF 00:29:29.024 )") 00:29:29.024 01:07:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:29:29.024 01:07:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:29.024 01:07:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:29.024 { 00:29:29.024 "params": { 00:29:29.024 "name": "Nvme$subsystem", 00:29:29.024 
"trtype": "$TEST_TRANSPORT", 00:29:29.024 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:29.024 "adrfam": "ipv4", 00:29:29.024 "trsvcid": "$NVMF_PORT", 00:29:29.024 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:29.024 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:29.024 "hdgst": ${hdgst:-false}, 00:29:29.024 "ddgst": ${ddgst:-false} 00:29:29.024 }, 00:29:29.024 "method": "bdev_nvme_attach_controller" 00:29:29.024 } 00:29:29.024 EOF 00:29:29.024 )") 00:29:29.024 01:07:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:29:29.024 01:07:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:29.024 01:07:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:29.024 { 00:29:29.024 "params": { 00:29:29.024 "name": "Nvme$subsystem", 00:29:29.024 "trtype": "$TEST_TRANSPORT", 00:29:29.024 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:29.024 "adrfam": "ipv4", 00:29:29.024 "trsvcid": "$NVMF_PORT", 00:29:29.024 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:29.024 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:29.024 "hdgst": ${hdgst:-false}, 00:29:29.024 "ddgst": ${ddgst:-false} 00:29:29.024 }, 00:29:29.024 "method": "bdev_nvme_attach_controller" 00:29:29.024 } 00:29:29.024 EOF 00:29:29.024 )") 00:29:29.024 01:07:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:29:29.024 01:07:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:29.024 01:07:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:29.024 { 00:29:29.024 "params": { 00:29:29.024 "name": "Nvme$subsystem", 00:29:29.024 "trtype": "$TEST_TRANSPORT", 00:29:29.024 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:29.024 "adrfam": "ipv4", 00:29:29.024 "trsvcid": "$NVMF_PORT", 00:29:29.024 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:29.024 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:29.024 "hdgst": ${hdgst:-false}, 00:29:29.024 "ddgst": ${ddgst:-false} 00:29:29.024 }, 00:29:29.024 "method": "bdev_nvme_attach_controller" 00:29:29.024 } 00:29:29.024 EOF 00:29:29.024 )") 00:29:29.024 01:07:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:29:29.024 01:07:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:29.024 01:07:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:29.024 { 00:29:29.024 "params": { 00:29:29.024 "name": "Nvme$subsystem", 00:29:29.024 "trtype": "$TEST_TRANSPORT", 00:29:29.024 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:29.024 "adrfam": "ipv4", 00:29:29.024 "trsvcid": "$NVMF_PORT", 00:29:29.024 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:29.024 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:29.024 "hdgst": ${hdgst:-false}, 00:29:29.024 "ddgst": ${ddgst:-false} 00:29:29.024 }, 00:29:29.024 "method": "bdev_nvme_attach_controller" 00:29:29.024 } 00:29:29.024 EOF 00:29:29.024 )") 00:29:29.024 01:07:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:29:29.024 01:07:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:29.024 01:07:01 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:29.024 { 00:29:29.024 "params": { 00:29:29.024 "name": "Nvme$subsystem", 00:29:29.024 "trtype": "$TEST_TRANSPORT", 00:29:29.024 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:29.024 "adrfam": "ipv4", 00:29:29.024 "trsvcid": "$NVMF_PORT", 00:29:29.024 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:29.024 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:29.024 "hdgst": ${hdgst:-false}, 00:29:29.024 "ddgst": ${ddgst:-false} 00:29:29.024 }, 00:29:29.024 "method": "bdev_nvme_attach_controller" 00:29:29.024 } 00:29:29.024 EOF 00:29:29.024 )") 00:29:29.024 01:07:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:29:29.024 01:07:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:29.024 01:07:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:29.024 { 00:29:29.024 "params": { 00:29:29.024 "name": "Nvme$subsystem", 00:29:29.024 "trtype": "$TEST_TRANSPORT", 00:29:29.024 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:29.024 "adrfam": "ipv4", 00:29:29.024 "trsvcid": "$NVMF_PORT", 00:29:29.024 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:29.024 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:29.024 "hdgst": ${hdgst:-false}, 00:29:29.024 "ddgst": ${ddgst:-false} 00:29:29.024 }, 00:29:29.024 "method": "bdev_nvme_attach_controller" 00:29:29.024 } 00:29:29.024 EOF 00:29:29.024 )") 00:29:29.024 01:07:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:29:29.024 01:07:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:29.024 01:07:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:29.024 { 00:29:29.024 "params": { 00:29:29.024 "name": "Nvme$subsystem", 00:29:29.024 "trtype": "$TEST_TRANSPORT", 00:29:29.024 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:29.024 "adrfam": "ipv4", 00:29:29.024 "trsvcid": "$NVMF_PORT", 00:29:29.024 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:29.024 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:29.024 "hdgst": ${hdgst:-false}, 00:29:29.024 "ddgst": ${ddgst:-false} 00:29:29.024 }, 00:29:29.024 "method": "bdev_nvme_attach_controller" 00:29:29.024 } 00:29:29.024 EOF 00:29:29.024 )") 00:29:29.024 01:07:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:29:29.024 01:07:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@584 -- # jq . 
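The jq ./IFS=,/printf steps above join the ten fragments into the single document printed next. bdevperf never reads a file on disk: the test hands it the generator's output through process substitution, which is why its command line shows --json /dev/fd/63. A stand-alone illustration of that mechanism (hypothetical generator and relative path, not the test's own):

gen_config() {
    # stand-in for gen_nvmf_target_json; emits some valid SPDK JSON config
    printf '{"subsystems":[]}\n'
}
# <(gen_config) expands to a /dev/fd/N path that the child process simply opens and reads
./build/examples/bdevperf --json <(gen_config) -q 64 -o 65536 -w verify -t 10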
00:29:29.024 01:07:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@585 -- # IFS=, 00:29:29.024 01:07:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:29:29.024 "params": { 00:29:29.024 "name": "Nvme1", 00:29:29.024 "trtype": "tcp", 00:29:29.024 "traddr": "10.0.0.2", 00:29:29.024 "adrfam": "ipv4", 00:29:29.024 "trsvcid": "4420", 00:29:29.024 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:29.024 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:29.024 "hdgst": false, 00:29:29.024 "ddgst": false 00:29:29.024 }, 00:29:29.024 "method": "bdev_nvme_attach_controller" 00:29:29.024 },{ 00:29:29.024 "params": { 00:29:29.024 "name": "Nvme2", 00:29:29.024 "trtype": "tcp", 00:29:29.024 "traddr": "10.0.0.2", 00:29:29.024 "adrfam": "ipv4", 00:29:29.024 "trsvcid": "4420", 00:29:29.024 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:29:29.024 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:29:29.024 "hdgst": false, 00:29:29.024 "ddgst": false 00:29:29.024 }, 00:29:29.024 "method": "bdev_nvme_attach_controller" 00:29:29.024 },{ 00:29:29.024 "params": { 00:29:29.024 "name": "Nvme3", 00:29:29.024 "trtype": "tcp", 00:29:29.025 "traddr": "10.0.0.2", 00:29:29.025 "adrfam": "ipv4", 00:29:29.025 "trsvcid": "4420", 00:29:29.025 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:29:29.025 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:29:29.025 "hdgst": false, 00:29:29.025 "ddgst": false 00:29:29.025 }, 00:29:29.025 "method": "bdev_nvme_attach_controller" 00:29:29.025 },{ 00:29:29.025 "params": { 00:29:29.025 "name": "Nvme4", 00:29:29.025 "trtype": "tcp", 00:29:29.025 "traddr": "10.0.0.2", 00:29:29.025 "adrfam": "ipv4", 00:29:29.025 "trsvcid": "4420", 00:29:29.025 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:29:29.025 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:29:29.025 "hdgst": false, 00:29:29.025 "ddgst": false 00:29:29.025 }, 00:29:29.025 "method": "bdev_nvme_attach_controller" 00:29:29.025 },{ 00:29:29.025 "params": { 00:29:29.025 "name": "Nvme5", 00:29:29.025 "trtype": "tcp", 00:29:29.025 "traddr": "10.0.0.2", 00:29:29.025 "adrfam": "ipv4", 00:29:29.025 "trsvcid": "4420", 00:29:29.025 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:29:29.025 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:29:29.025 "hdgst": false, 00:29:29.025 "ddgst": false 00:29:29.025 }, 00:29:29.025 "method": "bdev_nvme_attach_controller" 00:29:29.025 },{ 00:29:29.025 "params": { 00:29:29.025 "name": "Nvme6", 00:29:29.025 "trtype": "tcp", 00:29:29.025 "traddr": "10.0.0.2", 00:29:29.025 "adrfam": "ipv4", 00:29:29.025 "trsvcid": "4420", 00:29:29.025 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:29:29.025 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:29:29.025 "hdgst": false, 00:29:29.025 "ddgst": false 00:29:29.025 }, 00:29:29.025 "method": "bdev_nvme_attach_controller" 00:29:29.025 },{ 00:29:29.025 "params": { 00:29:29.025 "name": "Nvme7", 00:29:29.025 "trtype": "tcp", 00:29:29.025 "traddr": "10.0.0.2", 00:29:29.025 "adrfam": "ipv4", 00:29:29.025 "trsvcid": "4420", 00:29:29.025 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:29:29.025 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:29:29.025 "hdgst": false, 00:29:29.025 "ddgst": false 00:29:29.025 }, 00:29:29.025 "method": "bdev_nvme_attach_controller" 00:29:29.025 },{ 00:29:29.025 "params": { 00:29:29.025 "name": "Nvme8", 00:29:29.025 "trtype": "tcp", 00:29:29.025 "traddr": "10.0.0.2", 00:29:29.025 "adrfam": "ipv4", 00:29:29.025 "trsvcid": "4420", 00:29:29.025 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:29:29.025 "hostnqn": "nqn.2016-06.io.spdk:host8", 
00:29:29.025 "hdgst": false, 00:29:29.025 "ddgst": false 00:29:29.025 }, 00:29:29.025 "method": "bdev_nvme_attach_controller" 00:29:29.025 },{ 00:29:29.025 "params": { 00:29:29.025 "name": "Nvme9", 00:29:29.025 "trtype": "tcp", 00:29:29.025 "traddr": "10.0.0.2", 00:29:29.025 "adrfam": "ipv4", 00:29:29.025 "trsvcid": "4420", 00:29:29.025 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:29:29.025 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:29:29.025 "hdgst": false, 00:29:29.025 "ddgst": false 00:29:29.025 }, 00:29:29.025 "method": "bdev_nvme_attach_controller" 00:29:29.025 },{ 00:29:29.025 "params": { 00:29:29.025 "name": "Nvme10", 00:29:29.025 "trtype": "tcp", 00:29:29.025 "traddr": "10.0.0.2", 00:29:29.025 "adrfam": "ipv4", 00:29:29.025 "trsvcid": "4420", 00:29:29.025 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:29:29.025 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:29:29.025 "hdgst": false, 00:29:29.025 "ddgst": false 00:29:29.025 }, 00:29:29.025 "method": "bdev_nvme_attach_controller" 00:29:29.025 }' 00:29:29.025 [2024-12-06 01:07:01.639630] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 24.03.0 initialization... 00:29:29.025 [2024-12-06 01:07:01.639768] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3056032 ] 00:29:29.283 [2024-12-06 01:07:01.782071] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:29.283 [2024-12-06 01:07:01.911809] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:31.817 Running I/O for 10 seconds... 00:29:31.817 01:07:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:31.817 01:07:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:29:31.817 01:07:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:29:31.817 01:07:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:31.817 01:07:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:31.817 01:07:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:31.817 01:07:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@108 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:29:31.817 01:07:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:29:31.817 01:07:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:29:31.817 01:07:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local ret=1 00:29:31.817 01:07:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # local i 00:29:31.817 01:07:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:29:31.817 01:07:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:29:31.817 01:07:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:29:31.817 01:07:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:29:31.817 01:07:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:31.817 01:07:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:31.817 01:07:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:31.817 01:07:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=67 00:29:31.817 01:07:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:29:31.817 01:07:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:29:32.076 01:07:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:29:32.076 01:07:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:29:32.076 01:07:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:29:32.076 01:07:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:29:32.076 01:07:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:32.076 01:07:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:32.076 01:07:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:32.076 01:07:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=79 00:29:32.076 01:07:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 79 -ge 100 ']' 00:29:32.076 01:07:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:29:32.334 01:07:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:29:32.334 01:07:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:29:32.334 01:07:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:29:32.334 01:07:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:29:32.334 01:07:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:32.334 01:07:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:32.334 01:07:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:32.334 01:07:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=146 00:29:32.334 01:07:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 146 -ge 100 ']' 00:29:32.334 01:07:05 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # ret=0 00:29:32.334 01:07:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@66 -- # break 00:29:32.334 01:07:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@70 -- # return 0 00:29:32.334 01:07:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@111 -- # killprocess 3056032 00:29:32.334 01:07:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 3056032 ']' 00:29:32.334 01:07:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 3056032 00:29:32.334 01:07:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:29:32.334 01:07:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:32.334 01:07:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3056032 00:29:32.600 1427.00 IOPS, 89.19 MiB/s [2024-12-06T00:07:05.309Z] 01:07:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:32.600 01:07:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:32.600 01:07:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3056032' 00:29:32.600 killing process with pid 3056032 00:29:32.600 01:07:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 3056032 00:29:32.600 01:07:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 3056032 00:29:32.600 Received shutdown signal, test time was about 1.133170 seconds 00:29:32.600 00:29:32.600 Latency(us) 00:29:32.600 [2024-12-06T00:07:05.309Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:32.600 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:32.600 Verification LBA range: start 0x0 length 0x400 00:29:32.600 Nvme1n1 : 1.08 191.54 11.97 0.00 0.00 323396.30 20000.62 312242.63 00:29:32.600 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:32.600 Verification LBA range: start 0x0 length 0x400 00:29:32.600 Nvme2n1 : 1.06 180.45 11.28 0.00 0.00 343685.25 36505.98 293601.28 00:29:32.600 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:32.600 Verification LBA range: start 0x0 length 0x400 00:29:32.600 Nvme3n1 : 1.12 228.56 14.28 0.00 0.00 266989.80 22330.79 306028.85 00:29:32.600 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:32.600 Verification LBA range: start 0x0 length 0x400 00:29:32.600 Nvme4n1 : 1.13 227.09 14.19 0.00 0.00 263713.00 19612.25 299815.06 00:29:32.600 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:32.600 Verification LBA range: start 0x0 length 0x400 00:29:32.600 Nvme5n1 : 1.11 173.14 10.82 0.00 0.00 338906.07 22816.24 327777.09 00:29:32.600 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:32.600 Verification LBA range: start 0x0 length 0x400 00:29:32.600 Nvme6n1 : 1.10 177.79 11.11 0.00 0.00 322452.72 3373.89 310689.19 
00:29:32.600 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:32.600 Verification LBA range: start 0x0 length 0x400 00:29:32.600 Nvme7n1 : 1.13 226.09 14.13 0.00 0.00 250067.63 29127.11 313796.08 00:29:32.600 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:32.600 Verification LBA range: start 0x0 length 0x400 00:29:32.600 Nvme8n1 : 1.07 178.74 11.17 0.00 0.00 307322.12 20874.43 301368.51 00:29:32.600 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:32.600 Verification LBA range: start 0x0 length 0x400 00:29:32.600 Nvme9n1 : 1.09 175.99 11.00 0.00 0.00 306550.20 25243.50 302921.96 00:29:32.600 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:32.600 Verification LBA range: start 0x0 length 0x400 00:29:32.600 Nvme10n1 : 1.12 172.00 10.75 0.00 0.00 308341.06 24272.59 340204.66 00:29:32.600 [2024-12-06T00:07:05.309Z] =================================================================================================================== 00:29:32.600 [2024-12-06T00:07:05.309Z] Total : 1931.38 120.71 0.00 0.00 299457.30 3373.89 340204.66 00:29:33.531 01:07:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # sleep 1 00:29:34.463 01:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@115 -- # kill -0 3055717 00:29:34.463 01:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@117 -- # stoptarget 00:29:34.463 01:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:29:34.463 01:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:29:34.463 01:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:29:34.463 01:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@46 -- # nvmftestfini 00:29:34.463 01:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:34.463 01:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # sync 00:29:34.463 01:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:34.463 01:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set +e 00:29:34.463 01:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:34.463 01:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:34.463 rmmod nvme_tcp 00:29:34.463 rmmod nvme_fabrics 00:29:34.463 rmmod nvme_keyring 00:29:34.722 01:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:34.722 01:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@128 -- # set -e 00:29:34.722 01:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@129 -- # return 0 00:29:34.722 01:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@517 -- # '[' -n 3055717 ']' 00:29:34.722 01:07:07 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@518 -- # killprocess 3055717 00:29:34.722 01:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 3055717 ']' 00:29:34.722 01:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 3055717 00:29:34.722 01:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:29:34.722 01:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:34.722 01:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3055717 00:29:34.722 01:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:34.722 01:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:34.722 01:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3055717' 00:29:34.722 killing process with pid 3055717 00:29:34.722 01:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 3055717 00:29:34.722 01:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 3055717 00:29:37.253 01:07:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:37.253 01:07:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:37.253 01:07:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:37.253 01:07:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # iptr 00:29:37.253 01:07:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-save 00:29:37.253 01:07:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:37.253 01:07:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-restore 00:29:37.253 01:07:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:37.253 01:07:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:37.253 01:07:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:37.253 01:07:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:37.253 01:07:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:39.792 01:07:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:39.792 00:29:39.792 real 0m12.841s 00:29:39.792 user 0m44.007s 00:29:39.792 sys 0m2.093s 00:29:39.792 01:07:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:39.792 01:07:11 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:39.792 ************************************ 00:29:39.792 END TEST nvmf_shutdown_tc2 00:29:39.792 ************************************ 00:29:39.792 01:07:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@164 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:29:39.792 01:07:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:29:39.792 01:07:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:39.792 01:07:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:29:39.792 ************************************ 00:29:39.792 START TEST nvmf_shutdown_tc3 00:29:39.792 ************************************ 00:29:39.792 01:07:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc3 00:29:39.792 01:07:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@122 -- # starttarget 00:29:39.792 01:07:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@16 -- # nvmftestinit 00:29:39.792 01:07:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:39.792 01:07:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:39.792 01:07:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:39.792 01:07:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:39.792 01:07:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:39.792 01:07:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:39.792 01:07:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:39.792 01:07:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:39.792 01:07:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:39.792 01:07:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:39.792 01:07:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@309 -- # xtrace_disable 00:29:39.792 01:07:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:39.792 01:07:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:39.792 01:07:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # pci_devs=() 00:29:39.792 01:07:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:39.792 01:07:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:39.792 01:07:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:39.792 01:07:12 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:39.792 01:07:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:39.792 01:07:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # net_devs=() 00:29:39.792 01:07:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:39.792 01:07:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # e810=() 00:29:39.792 01:07:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # local -ga e810 00:29:39.792 01:07:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # x722=() 00:29:39.792 01:07:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # local -ga x722 00:29:39.792 01:07:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # mlx=() 00:29:39.792 01:07:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # local -ga mlx 00:29:39.792 01:07:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:39.792 01:07:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:39.792 01:07:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:39.792 01:07:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:39.792 01:07:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:39.792 01:07:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:39.792 01:07:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:39.792 01:07:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:39.792 01:07:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:39.792 01:07:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:39.792 01:07:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:39.792 01:07:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:39.792 01:07:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:39.792 01:07:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:39.792 01:07:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:39.792 01:07:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:39.792 01:07:12 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:39.792 01:07:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:39.792 01:07:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:39.792 01:07:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:29:39.792 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:29:39.792 01:07:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:39.792 01:07:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:39.792 01:07:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:39.792 01:07:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:39.792 01:07:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:39.792 01:07:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:39.792 01:07:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:29:39.792 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:29:39.792 01:07:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:39.792 01:07:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:39.792 01:07:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:39.792 01:07:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:39.792 01:07:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:39.792 01:07:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:39.792 01:07:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:39.792 01:07:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:39.792 01:07:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:39.792 01:07:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:39.792 01:07:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:39.792 01:07:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:39.792 01:07:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:39.792 01:07:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:39.792 01:07:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 
00:29:39.792 01:07:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:29:39.792 Found net devices under 0000:0a:00.0: cvl_0_0 00:29:39.792 01:07:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:39.792 01:07:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:39.793 01:07:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:39.793 01:07:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:39.793 01:07:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:39.793 01:07:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:39.793 01:07:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:39.793 01:07:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:39.793 01:07:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:29:39.793 Found net devices under 0000:0a:00.1: cvl_0_1 00:29:39.793 01:07:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:39.793 01:07:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:39.793 01:07:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # is_hw=yes 00:29:39.793 01:07:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:39.793 01:07:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:39.793 01:07:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:39.793 01:07:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:39.793 01:07:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:39.793 01:07:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:39.793 01:07:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:39.793 01:07:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:39.793 01:07:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:39.793 01:07:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:39.793 01:07:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:39.793 01:07:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:39.793 01:07:12 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:39.793 01:07:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:39.793 01:07:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:39.793 01:07:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:39.793 01:07:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:39.793 01:07:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:39.793 01:07:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:39.793 01:07:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:39.793 01:07:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:39.793 01:07:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:39.793 01:07:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:39.793 01:07:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:39.793 01:07:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:39.793 01:07:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:39.793 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:39.793 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.213 ms 00:29:39.793 00:29:39.793 --- 10.0.0.2 ping statistics --- 00:29:39.793 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:39.793 rtt min/avg/max/mdev = 0.213/0.213/0.213/0.000 ms 00:29:39.793 01:07:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:39.793 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:39.793 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.045 ms 00:29:39.793 00:29:39.793 --- 10.0.0.1 ping statistics --- 00:29:39.793 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:39.793 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:29:39.793 01:07:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:39.793 01:07:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # return 0 00:29:39.793 01:07:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:39.793 01:07:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:39.793 01:07:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:39.793 01:07:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:39.793 01:07:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:39.793 01:07:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:39.793 01:07:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:39.793 01:07:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:29:39.793 01:07:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:39.793 01:07:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:39.793 01:07:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:39.793 01:07:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@509 -- # nvmfpid=3057455 00:29:39.793 01:07:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:29:39.793 01:07:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@510 -- # waitforlisten 3057455 00:29:39.793 01:07:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 3057455 ']' 00:29:39.793 01:07:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:39.793 01:07:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:39.793 01:07:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:39.793 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
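nvmftestinit has now finished wiring the tc3 topology: the first e810 port (cvl_0_0) was moved into a fresh network namespace, cvl_0_0_ns_spdk, and addressed as 10.0.0.2/24, while its peer port (cvl_0_1) stays in the root namespace as 10.0.0.1/24; the two pings above confirm reachability in both directions. The nvmf_tgt now starting runs inside that namespace with core mask 0x1E (binary 11110, i.e. reactors on cores 1-4, matching the notices that follow) and tracepoint mask 0xFFFF. A condensed sketch of the topology commands as they appear in the trace (interface names are this rig's; other hosts differ):

# target side: isolate the first port in its own namespace and address it
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
# initiator side: the second port stays in the root namespace
ip addr add 10.0.0.1/24 dev cvl_0_1
ip link set cvl_0_1 up
# sanity checks in both directions, as the two pings above show
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
# the target itself is then launched inside the namespace, e.g.:
#   ip netns exec cvl_0_0_ns_spdk nvmf_tgt -i 0 -e 0xFFFF -m 0x1E   (0x1E = cores 1,2,3,4)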
00:29:39.793 01:07:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:39.793 01:07:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:39.793 [2024-12-06 01:07:12.306251] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 24.03.0 initialization... 00:29:39.793 [2024-12-06 01:07:12.306388] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:39.793 [2024-12-06 01:07:12.444179] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:40.051 [2024-12-06 01:07:12.569186] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:40.052 [2024-12-06 01:07:12.569285] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:40.052 [2024-12-06 01:07:12.569308] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:40.052 [2024-12-06 01:07:12.569328] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:40.052 [2024-12-06 01:07:12.569345] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:40.052 [2024-12-06 01:07:12.572043] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:40.052 [2024-12-06 01:07:12.572129] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:40.052 [2024-12-06 01:07:12.572239] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:40.052 [2024-12-06 01:07:12.572245] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:29:40.623 01:07:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:40.623 01:07:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:29:40.623 01:07:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:40.623 01:07:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:40.623 01:07:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:40.623 01:07:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:40.623 01:07:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:40.623 01:07:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:40.623 01:07:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:40.623 [2024-12-06 01:07:13.328402] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:40.880 01:07:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:40.880 01:07:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:29:40.880 01:07:13 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:29:40.880 01:07:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:40.880 01:07:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:40.880 01:07:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:29:40.880 01:07:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:40.880 01:07:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:29:40.880 01:07:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:40.880 01:07:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:29:40.880 01:07:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:40.880 01:07:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:29:40.880 01:07:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:40.880 01:07:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:29:40.880 01:07:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:40.880 01:07:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:29:40.880 01:07:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:40.880 01:07:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:29:40.880 01:07:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:40.880 01:07:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:29:40.880 01:07:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:40.880 01:07:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:29:40.880 01:07:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:40.880 01:07:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:29:40.880 01:07:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:40.880 01:07:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:29:40.880 01:07:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # rpc_cmd 00:29:40.880 01:07:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:40.880 01:07:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:40.880 Malloc1 
00:29:40.880 [2024-12-06 01:07:13.482648] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:40.880 Malloc2 00:29:41.138 Malloc3 00:29:41.138 Malloc4 00:29:41.396 Malloc5 00:29:41.396 Malloc6 00:29:41.396 Malloc7 00:29:41.656 Malloc8 00:29:41.656 Malloc9 00:29:41.991 Malloc10 00:29:41.991 01:07:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:41.991 01:07:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:29:41.991 01:07:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:41.991 01:07:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:41.991 01:07:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # perfpid=3057707 00:29:41.991 01:07:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # waitforlisten 3057707 /var/tmp/bdevperf.sock 00:29:41.991 01:07:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 3057707 ']' 00:29:41.991 01:07:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:29:41.991 01:07:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:41.991 01:07:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:29:41.991 01:07:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:41.991 01:07:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # config=() 00:29:41.991 01:07:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:29:41.991 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
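(Editorial note: the --json /dev/fd/63 argument above is a bash process substitution, i.e. the output of the gen_nvmf_target_json helper, whose per-subsystem heredoc loop is traced in the following lines, is fed straight to bdevperf. A minimal sketch of the same pattern, assuming it is run from the SPDK repo root and with an illustrative three-subsystem argument list.)
# Feed a generated target config to bdevperf without a temp file; the flags mirror the
# invocation traced above: queue depth 64, 64 KiB I/Os, verify workload, 10 second run.
./build/examples/bdevperf -r /var/tmp/bdevperf.sock \
        --json <(gen_nvmf_target_json 1 2 3) \
        -q 64 -o 65536 -w verify -t 10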
00:29:41.991 01:07:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # local subsystem config 00:29:41.991 01:07:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:41.991 01:07:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:41.991 01:07:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:41.991 01:07:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:41.991 { 00:29:41.991 "params": { 00:29:41.991 "name": "Nvme$subsystem", 00:29:41.991 "trtype": "$TEST_TRANSPORT", 00:29:41.991 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:41.991 "adrfam": "ipv4", 00:29:41.991 "trsvcid": "$NVMF_PORT", 00:29:41.991 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:41.991 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:41.991 "hdgst": ${hdgst:-false}, 00:29:41.991 "ddgst": ${ddgst:-false} 00:29:41.991 }, 00:29:41.991 "method": "bdev_nvme_attach_controller" 00:29:41.991 } 00:29:41.991 EOF 00:29:41.991 )") 00:29:41.991 01:07:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:29:41.991 01:07:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:41.991 01:07:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:41.991 { 00:29:41.991 "params": { 00:29:41.991 "name": "Nvme$subsystem", 00:29:41.991 "trtype": "$TEST_TRANSPORT", 00:29:41.991 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:41.991 "adrfam": "ipv4", 00:29:41.991 "trsvcid": "$NVMF_PORT", 00:29:41.991 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:41.991 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:41.991 "hdgst": ${hdgst:-false}, 00:29:41.991 "ddgst": ${ddgst:-false} 00:29:41.991 }, 00:29:41.991 "method": "bdev_nvme_attach_controller" 00:29:41.991 } 00:29:41.991 EOF 00:29:41.991 )") 00:29:41.991 01:07:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:29:41.991 01:07:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:41.991 01:07:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:41.991 { 00:29:41.991 "params": { 00:29:41.991 "name": "Nvme$subsystem", 00:29:41.991 "trtype": "$TEST_TRANSPORT", 00:29:41.991 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:41.991 "adrfam": "ipv4", 00:29:41.991 "trsvcid": "$NVMF_PORT", 00:29:41.991 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:41.991 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:41.991 "hdgst": ${hdgst:-false}, 00:29:41.991 "ddgst": ${ddgst:-false} 00:29:41.991 }, 00:29:41.991 "method": "bdev_nvme_attach_controller" 00:29:41.991 } 00:29:41.991 EOF 00:29:41.991 )") 00:29:41.991 01:07:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:29:41.991 01:07:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:41.991 01:07:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:41.991 { 00:29:41.991 "params": { 00:29:41.991 "name": "Nvme$subsystem", 00:29:41.991 
"trtype": "$TEST_TRANSPORT", 00:29:41.991 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:41.991 "adrfam": "ipv4", 00:29:41.991 "trsvcid": "$NVMF_PORT", 00:29:41.991 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:41.991 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:41.991 "hdgst": ${hdgst:-false}, 00:29:41.991 "ddgst": ${ddgst:-false} 00:29:41.991 }, 00:29:41.991 "method": "bdev_nvme_attach_controller" 00:29:41.991 } 00:29:41.991 EOF 00:29:41.991 )") 00:29:41.991 01:07:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:29:41.991 01:07:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:41.991 01:07:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:41.991 { 00:29:41.991 "params": { 00:29:41.991 "name": "Nvme$subsystem", 00:29:41.991 "trtype": "$TEST_TRANSPORT", 00:29:41.991 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:41.991 "adrfam": "ipv4", 00:29:41.991 "trsvcid": "$NVMF_PORT", 00:29:41.991 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:41.991 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:41.991 "hdgst": ${hdgst:-false}, 00:29:41.991 "ddgst": ${ddgst:-false} 00:29:41.991 }, 00:29:41.991 "method": "bdev_nvme_attach_controller" 00:29:41.991 } 00:29:41.991 EOF 00:29:41.991 )") 00:29:41.991 01:07:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:29:41.991 01:07:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:41.991 01:07:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:41.991 { 00:29:41.991 "params": { 00:29:41.991 "name": "Nvme$subsystem", 00:29:41.991 "trtype": "$TEST_TRANSPORT", 00:29:41.991 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:41.991 "adrfam": "ipv4", 00:29:41.991 "trsvcid": "$NVMF_PORT", 00:29:41.991 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:41.991 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:41.991 "hdgst": ${hdgst:-false}, 00:29:41.992 "ddgst": ${ddgst:-false} 00:29:41.992 }, 00:29:41.992 "method": "bdev_nvme_attach_controller" 00:29:41.992 } 00:29:41.992 EOF 00:29:41.992 )") 00:29:41.992 01:07:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:29:41.992 01:07:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:41.992 01:07:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:41.992 { 00:29:41.992 "params": { 00:29:41.992 "name": "Nvme$subsystem", 00:29:41.992 "trtype": "$TEST_TRANSPORT", 00:29:41.992 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:41.992 "adrfam": "ipv4", 00:29:41.992 "trsvcid": "$NVMF_PORT", 00:29:41.992 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:41.992 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:41.992 "hdgst": ${hdgst:-false}, 00:29:41.992 "ddgst": ${ddgst:-false} 00:29:41.992 }, 00:29:41.992 "method": "bdev_nvme_attach_controller" 00:29:41.992 } 00:29:41.992 EOF 00:29:41.992 )") 00:29:41.992 01:07:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:29:41.992 01:07:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:41.992 01:07:14 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:41.992 { 00:29:41.992 "params": { 00:29:41.992 "name": "Nvme$subsystem", 00:29:41.992 "trtype": "$TEST_TRANSPORT", 00:29:41.992 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:41.992 "adrfam": "ipv4", 00:29:41.992 "trsvcid": "$NVMF_PORT", 00:29:41.992 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:41.992 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:41.992 "hdgst": ${hdgst:-false}, 00:29:41.992 "ddgst": ${ddgst:-false} 00:29:41.992 }, 00:29:41.992 "method": "bdev_nvme_attach_controller" 00:29:41.992 } 00:29:41.992 EOF 00:29:41.992 )") 00:29:41.992 01:07:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:29:41.992 01:07:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:41.992 01:07:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:41.992 { 00:29:41.992 "params": { 00:29:41.992 "name": "Nvme$subsystem", 00:29:41.992 "trtype": "$TEST_TRANSPORT", 00:29:41.992 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:41.992 "adrfam": "ipv4", 00:29:41.992 "trsvcid": "$NVMF_PORT", 00:29:41.992 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:41.992 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:41.992 "hdgst": ${hdgst:-false}, 00:29:41.992 "ddgst": ${ddgst:-false} 00:29:41.992 }, 00:29:41.992 "method": "bdev_nvme_attach_controller" 00:29:41.992 } 00:29:41.992 EOF 00:29:41.992 )") 00:29:41.992 01:07:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:29:41.992 01:07:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:41.992 01:07:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:41.992 { 00:29:41.992 "params": { 00:29:41.992 "name": "Nvme$subsystem", 00:29:41.992 "trtype": "$TEST_TRANSPORT", 00:29:41.992 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:41.992 "adrfam": "ipv4", 00:29:41.992 "trsvcid": "$NVMF_PORT", 00:29:41.992 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:41.992 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:41.992 "hdgst": ${hdgst:-false}, 00:29:41.992 "ddgst": ${ddgst:-false} 00:29:41.992 }, 00:29:41.992 "method": "bdev_nvme_attach_controller" 00:29:41.992 } 00:29:41.992 EOF 00:29:41.992 )") 00:29:41.992 01:07:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:29:41.992 01:07:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@584 -- # jq . 
00:29:41.992 01:07:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@585 -- # IFS=, 00:29:41.992 01:07:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:29:41.992 "params": { 00:29:41.992 "name": "Nvme1", 00:29:41.992 "trtype": "tcp", 00:29:41.992 "traddr": "10.0.0.2", 00:29:41.992 "adrfam": "ipv4", 00:29:41.992 "trsvcid": "4420", 00:29:41.992 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:41.992 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:41.992 "hdgst": false, 00:29:41.992 "ddgst": false 00:29:41.992 }, 00:29:41.992 "method": "bdev_nvme_attach_controller" 00:29:41.992 },{ 00:29:41.992 "params": { 00:29:41.992 "name": "Nvme2", 00:29:41.992 "trtype": "tcp", 00:29:41.992 "traddr": "10.0.0.2", 00:29:41.992 "adrfam": "ipv4", 00:29:41.992 "trsvcid": "4420", 00:29:41.992 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:29:41.992 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:29:41.992 "hdgst": false, 00:29:41.992 "ddgst": false 00:29:41.992 }, 00:29:41.992 "method": "bdev_nvme_attach_controller" 00:29:41.992 },{ 00:29:41.992 "params": { 00:29:41.992 "name": "Nvme3", 00:29:41.992 "trtype": "tcp", 00:29:41.992 "traddr": "10.0.0.2", 00:29:41.992 "adrfam": "ipv4", 00:29:41.992 "trsvcid": "4420", 00:29:41.992 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:29:41.992 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:29:41.992 "hdgst": false, 00:29:41.992 "ddgst": false 00:29:41.992 }, 00:29:41.992 "method": "bdev_nvme_attach_controller" 00:29:41.992 },{ 00:29:41.992 "params": { 00:29:41.992 "name": "Nvme4", 00:29:41.992 "trtype": "tcp", 00:29:41.992 "traddr": "10.0.0.2", 00:29:41.992 "adrfam": "ipv4", 00:29:41.992 "trsvcid": "4420", 00:29:41.992 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:29:41.992 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:29:41.992 "hdgst": false, 00:29:41.992 "ddgst": false 00:29:41.992 }, 00:29:41.992 "method": "bdev_nvme_attach_controller" 00:29:41.992 },{ 00:29:41.992 "params": { 00:29:41.992 "name": "Nvme5", 00:29:41.992 "trtype": "tcp", 00:29:41.992 "traddr": "10.0.0.2", 00:29:41.992 "adrfam": "ipv4", 00:29:41.992 "trsvcid": "4420", 00:29:41.992 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:29:41.992 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:29:41.992 "hdgst": false, 00:29:41.992 "ddgst": false 00:29:41.992 }, 00:29:41.992 "method": "bdev_nvme_attach_controller" 00:29:41.992 },{ 00:29:41.992 "params": { 00:29:41.992 "name": "Nvme6", 00:29:41.992 "trtype": "tcp", 00:29:41.992 "traddr": "10.0.0.2", 00:29:41.992 "adrfam": "ipv4", 00:29:41.992 "trsvcid": "4420", 00:29:41.992 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:29:41.992 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:29:41.992 "hdgst": false, 00:29:41.992 "ddgst": false 00:29:41.992 }, 00:29:41.992 "method": "bdev_nvme_attach_controller" 00:29:41.992 },{ 00:29:41.992 "params": { 00:29:41.992 "name": "Nvme7", 00:29:41.992 "trtype": "tcp", 00:29:41.992 "traddr": "10.0.0.2", 00:29:41.992 "adrfam": "ipv4", 00:29:41.992 "trsvcid": "4420", 00:29:41.992 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:29:41.992 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:29:41.992 "hdgst": false, 00:29:41.992 "ddgst": false 00:29:41.992 }, 00:29:41.993 "method": "bdev_nvme_attach_controller" 00:29:41.993 },{ 00:29:41.993 "params": { 00:29:41.993 "name": "Nvme8", 00:29:41.993 "trtype": "tcp", 00:29:41.993 "traddr": "10.0.0.2", 00:29:41.993 "adrfam": "ipv4", 00:29:41.993 "trsvcid": "4420", 00:29:41.993 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:29:41.993 "hostnqn": "nqn.2016-06.io.spdk:host8", 
00:29:41.993 "hdgst": false, 00:29:41.993 "ddgst": false 00:29:41.993 }, 00:29:41.993 "method": "bdev_nvme_attach_controller" 00:29:41.993 },{ 00:29:41.993 "params": { 00:29:41.993 "name": "Nvme9", 00:29:41.993 "trtype": "tcp", 00:29:41.993 "traddr": "10.0.0.2", 00:29:41.993 "adrfam": "ipv4", 00:29:41.993 "trsvcid": "4420", 00:29:41.993 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:29:41.993 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:29:41.993 "hdgst": false, 00:29:41.993 "ddgst": false 00:29:41.993 }, 00:29:41.993 "method": "bdev_nvme_attach_controller" 00:29:41.993 },{ 00:29:41.993 "params": { 00:29:41.993 "name": "Nvme10", 00:29:41.993 "trtype": "tcp", 00:29:41.993 "traddr": "10.0.0.2", 00:29:41.993 "adrfam": "ipv4", 00:29:41.993 "trsvcid": "4420", 00:29:41.993 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:29:41.993 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:29:41.993 "hdgst": false, 00:29:41.993 "ddgst": false 00:29:41.993 }, 00:29:41.993 "method": "bdev_nvme_attach_controller" 00:29:41.993 }' 00:29:41.993 [2024-12-06 01:07:14.499635] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 24.03.0 initialization... 00:29:41.993 [2024-12-06 01:07:14.499773] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3057707 ] 00:29:41.993 [2024-12-06 01:07:14.638847] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:42.322 [2024-12-06 01:07:14.767054] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:44.220 Running I/O for 10 seconds... 00:29:44.787 01:07:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:44.787 01:07:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:29:44.787 01:07:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@128 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:29:44.787 01:07:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:44.787 01:07:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:44.787 01:07:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:44.787 01:07:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@131 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:44.787 01:07:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@133 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:29:44.787 01:07:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:29:44.787 01:07:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:29:44.787 01:07:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local ret=1 00:29:44.787 01:07:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # local i 00:29:44.787 01:07:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:29:44.787 01:07:17 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:29:44.787 01:07:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:29:44.787 01:07:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:29:44.787 01:07:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:44.787 01:07:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:44.787 01:07:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:44.787 01:07:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=83 00:29:44.787 01:07:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 83 -ge 100 ']' 00:29:44.787 01:07:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:29:45.059 01:07:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 00:29:45.059 01:07:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:29:45.059 01:07:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:29:45.059 01:07:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:29:45.059 01:07:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:45.059 01:07:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:45.059 1408.00 IOPS, 88.00 MiB/s [2024-12-06T00:07:17.768Z] 01:07:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:45.059 01:07:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=131 00:29:45.059 01:07:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 131 -ge 100 ']' 00:29:45.059 01:07:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # ret=0 00:29:45.059 01:07:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@66 -- # break 00:29:45.059 01:07:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@70 -- # return 0 00:29:45.059 01:07:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # killprocess 3057455 00:29:45.059 01:07:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 3057455 ']' 00:29:45.059 01:07:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 3057455 00:29:45.060 01:07:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # uname 00:29:45.060 01:07:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:45.060 01:07:17 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3057455 00:29:45.060 01:07:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:45.060 01:07:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:45.060 01:07:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3057455' 00:29:45.060 killing process with pid 3057455 00:29:45.060 01:07:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@973 -- # kill 3057455 00:29:45.060 01:07:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@978 -- # wait 3057455 00:29:45.060 [2024-12-06 01:07:17.638587] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set 00:29:45.060 [2024-12-06 01:07:17.638697] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set 00:29:45.060 [2024-12-06 01:07:17.638736] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set 00:29:45.060 [2024-12-06 01:07:17.638755] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set 00:29:45.060 [2024-12-06 01:07:17.638774] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set 00:29:45.060 [2024-12-06 01:07:17.638792] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set 00:29:45.060 [2024-12-06 01:07:17.638812] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set 00:29:45.060 [2024-12-06 01:07:17.638872] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set 00:29:45.060 [2024-12-06 01:07:17.638892] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set 00:29:45.060 [2024-12-06 01:07:17.638911] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set 00:29:45.060 [2024-12-06 01:07:17.638929] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set 00:29:45.060 [2024-12-06 01:07:17.638967] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set 00:29:45.060 [2024-12-06 01:07:17.638987] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set 00:29:45.060 [2024-12-06 01:07:17.639005] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set 00:29:45.060 [2024-12-06 01:07:17.639023] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set 00:29:45.060 [2024-12-06 01:07:17.639041] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 
is same with the state(6) to be set 00:29:45.060 [2024-12-06 01:07:17.639059] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set 00:29:45.060 [2024-12-06 01:07:17.639077] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set 00:29:45.060 [2024-12-06 01:07:17.639096] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set 00:29:45.060 [2024-12-06 01:07:17.639115] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set 00:29:45.060 [2024-12-06 01:07:17.639133] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set 00:29:45.060 [2024-12-06 01:07:17.639151] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set 00:29:45.060 [2024-12-06 01:07:17.639169] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set 00:29:45.060 [2024-12-06 01:07:17.639188] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set 00:29:45.060 [2024-12-06 01:07:17.639206] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set 00:29:45.060 [2024-12-06 01:07:17.639224] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set 00:29:45.060 [2024-12-06 01:07:17.639242] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set 00:29:45.060 [2024-12-06 01:07:17.639260] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set 00:29:45.060 [2024-12-06 01:07:17.639278] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set 00:29:45.060 [2024-12-06 01:07:17.639296] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set 00:29:45.060 [2024-12-06 01:07:17.639314] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set 00:29:45.060 [2024-12-06 01:07:17.639332] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set 00:29:45.060 [2024-12-06 01:07:17.639350] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set 00:29:45.060 [2024-12-06 01:07:17.639367] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set 00:29:45.060 [2024-12-06 01:07:17.639386] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set 00:29:45.060 [2024-12-06 01:07:17.639404] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set 00:29:45.060 [2024-12-06 01:07:17.639423] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is 
same with the state(6) to be set 00:29:45.060 [2024-12-06 01:07:17.639446] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set 00:29:45.060 [2024-12-06 01:07:17.639465] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set 00:29:45.060 [2024-12-06 01:07:17.639484] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set 00:29:45.060 [2024-12-06 01:07:17.639502] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set 00:29:45.060 [2024-12-06 01:07:17.639520] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set 00:29:45.060 [2024-12-06 01:07:17.639539] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set 00:29:45.060 [2024-12-06 01:07:17.639558] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set 00:29:45.060 [2024-12-06 01:07:17.639576] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set 00:29:45.060 [2024-12-06 01:07:17.639595] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set 00:29:45.060 [2024-12-06 01:07:17.639614] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set 00:29:45.060 [2024-12-06 01:07:17.639631] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set 00:29:45.060 [2024-12-06 01:07:17.639649] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set 00:29:45.060 [2024-12-06 01:07:17.639667] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set 00:29:45.060 [2024-12-06 01:07:17.639685] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set 00:29:45.060 [2024-12-06 01:07:17.639703] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set 00:29:45.060 [2024-12-06 01:07:17.639721] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set 00:29:45.060 [2024-12-06 01:07:17.639738] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set 00:29:45.060 [2024-12-06 01:07:17.639756] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set 00:29:45.060 [2024-12-06 01:07:17.639774] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set 00:29:45.060 [2024-12-06 01:07:17.639792] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set 00:29:45.060 [2024-12-06 01:07:17.639809] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same 
with the state(6) to be set 00:29:45.060 [2024-12-06 01:07:17.639837] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set 00:29:45.060 [2024-12-06 01:07:17.639856] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set 00:29:45.060 [2024-12-06 01:07:17.639874] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set 00:29:45.060 [2024-12-06 01:07:17.639892] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set 00:29:45.060 [2024-12-06 01:07:17.639910] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set 00:29:45.060 [2024-12-06 01:07:17.645594] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:45.060 [2024-12-06 01:07:17.645636] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:45.060 [2024-12-06 01:07:17.645675] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:45.060 [2024-12-06 01:07:17.645693] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:45.060 [2024-12-06 01:07:17.645711] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:45.060 [2024-12-06 01:07:17.645729] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:45.060 [2024-12-06 01:07:17.645748] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:45.060 [2024-12-06 01:07:17.645767] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:45.061 [2024-12-06 01:07:17.645785] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:45.061 [2024-12-06 01:07:17.645803] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:45.061 [2024-12-06 01:07:17.645828] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:45.061 [2024-12-06 01:07:17.645849] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:45.061 [2024-12-06 01:07:17.645867] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:45.061 [2024-12-06 01:07:17.645884] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:45.061 [2024-12-06 01:07:17.645902] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:45.061 [2024-12-06 01:07:17.645919] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with 
the state(6) to be set 00:29:45.061 [2024-12-06 01:07:17.645937] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:45.061 [2024-12-06 01:07:17.645956] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:45.061 [2024-12-06 01:07:17.645974] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:45.061 [2024-12-06 01:07:17.645992] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:45.061 [2024-12-06 01:07:17.646010] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:45.061 [2024-12-06 01:07:17.646028] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:45.061 [2024-12-06 01:07:17.646047] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:45.061 [2024-12-06 01:07:17.646066] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:45.061 [2024-12-06 01:07:17.646084] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:45.061 [2024-12-06 01:07:17.646103] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:45.061 [2024-12-06 01:07:17.646121] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:45.061 [2024-12-06 01:07:17.646156] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:45.061 [2024-12-06 01:07:17.646176] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:45.061 [2024-12-06 01:07:17.646194] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:45.061 [2024-12-06 01:07:17.646213] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:45.061 [2024-12-06 01:07:17.646230] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:45.061 [2024-12-06 01:07:17.646249] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:45.061 [2024-12-06 01:07:17.646268] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:45.061 [2024-12-06 01:07:17.646286] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:45.061 [2024-12-06 01:07:17.646304] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:45.061 [2024-12-06 01:07:17.646322] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the 
state(6) to be set 00:29:45.061 [2024-12-06 01:07:17.646340] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:45.061 [2024-12-06 01:07:17.646358] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:45.061 [2024-12-06 01:07:17.646376] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:45.061 [2024-12-06 01:07:17.646393] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:45.061 [2024-12-06 01:07:17.646411] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:45.061 [2024-12-06 01:07:17.646429] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:45.061 [2024-12-06 01:07:17.646447] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:45.061 [2024-12-06 01:07:17.646465] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:45.061 [2024-12-06 01:07:17.646482] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:45.061 [2024-12-06 01:07:17.646500] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:45.061 [2024-12-06 01:07:17.646518] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:45.061 [2024-12-06 01:07:17.646536] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:45.061 [2024-12-06 01:07:17.646554] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:45.061 [2024-12-06 01:07:17.646571] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:45.061 [2024-12-06 01:07:17.646588] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:45.061 [2024-12-06 01:07:17.646606] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:45.061 [2024-12-06 01:07:17.646628] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:45.061 [2024-12-06 01:07:17.646648] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:45.061 [2024-12-06 01:07:17.646666] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:45.061 [2024-12-06 01:07:17.646683] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:45.061 [2024-12-06 01:07:17.646701] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the 
state(6) to be set 00:29:45.061 [2024-12-06 01:07:17.646718] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:45.061 [2024-12-06 01:07:17.646736] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:45.061 [2024-12-06 01:07:17.646754] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:45.061 [2024-12-06 01:07:17.646771] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:45.061 [2024-12-06 01:07:17.646789] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:45.061 [2024-12-06 01:07:17.650685] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:45.061 [2024-12-06 01:07:17.650727] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:45.061 [2024-12-06 01:07:17.650765] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:45.061 [2024-12-06 01:07:17.650784] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:45.061 [2024-12-06 01:07:17.650802] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:45.061 [2024-12-06 01:07:17.650829] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:45.061 [2024-12-06 01:07:17.650850] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:45.061 [2024-12-06 01:07:17.650873] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:45.061 [2024-12-06 01:07:17.650892] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:45.061 [2024-12-06 01:07:17.650909] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:45.061 [2024-12-06 01:07:17.650928] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:45.061 [2024-12-06 01:07:17.650947] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:45.061 [2024-12-06 01:07:17.650964] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:45.061 [2024-12-06 01:07:17.650981] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:45.061 [2024-12-06 01:07:17.650999] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:45.061 [2024-12-06 01:07:17.651017] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the 
state(6) to be set 00:29:45.061 [2024-12-06 01:07:17.651045] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:45.061 [2024-12-06 01:07:17.651065] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:45.061 [2024-12-06 01:07:17.651083] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:45.061 [2024-12-06 01:07:17.651101] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:45.061 [2024-12-06 01:07:17.651120] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:45.061 [2024-12-06 01:07:17.651138] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:45.061 [2024-12-06 01:07:17.651156] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:45.061 [2024-12-06 01:07:17.651174] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:45.061 [2024-12-06 01:07:17.651192] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:45.061 [2024-12-06 01:07:17.651209] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:45.062 [2024-12-06 01:07:17.651227] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:45.062 [2024-12-06 01:07:17.651246] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:45.062 [2024-12-06 01:07:17.651263] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:45.062 [2024-12-06 01:07:17.651281] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:45.062 [2024-12-06 01:07:17.651299] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:45.062 [2024-12-06 01:07:17.651317] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:45.062 [2024-12-06 01:07:17.651335] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:45.062 [2024-12-06 01:07:17.651353] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:45.062 [2024-12-06 01:07:17.651371] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:45.062 [2024-12-06 01:07:17.651389] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:45.062 [2024-12-06 01:07:17.651406] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the 
state(6) to be set 00:29:45.062 [2024-12-06 01:07:17.651424] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:45.062 [2024-12-06 01:07:17.651442] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:45.062 [2024-12-06 01:07:17.651460] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:45.062 [2024-12-06 01:07:17.651477] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:45.062 [2024-12-06 01:07:17.651495] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:45.062 [2024-12-06 01:07:17.651517] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:45.062 [2024-12-06 01:07:17.651537] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:45.062 [2024-12-06 01:07:17.651555] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:45.062 [2024-12-06 01:07:17.651573] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:45.062 [2024-12-06 01:07:17.651592] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:45.062 [2024-12-06 01:07:17.651610] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:45.062 [2024-12-06 01:07:17.651627] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:45.062 [2024-12-06 01:07:17.651645] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:45.062 [2024-12-06 01:07:17.651663] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:45.062 [2024-12-06 01:07:17.651680] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:45.062 [2024-12-06 01:07:17.651698] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:45.062 [2024-12-06 01:07:17.651716] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:45.062 [2024-12-06 01:07:17.651733] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:45.062 [2024-12-06 01:07:17.651751] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:45.062 [2024-12-06 01:07:17.651768] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:45.062 [2024-12-06 01:07:17.651786] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the 
state(6) to be set 00:29:45.062 [2024-12-06 01:07:17.651804] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:45.062 [2024-12-06 01:07:17.651829] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:45.062 [2024-12-06 01:07:17.651849] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:45.062 [2024-12-06 01:07:17.651867] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:45.062 [2024-12-06 01:07:17.651886] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:45.062 [2024-12-06 01:07:17.654373] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:45.062 [2024-12-06 01:07:17.654408] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:45.062 [2024-12-06 01:07:17.654444] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:45.062 [2024-12-06 01:07:17.654463] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:45.062 [2024-12-06 01:07:17.654481] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:45.062 [2024-12-06 01:07:17.654499] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:45.062 [2024-12-06 01:07:17.654523] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:45.062 [2024-12-06 01:07:17.654542] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:45.062 [2024-12-06 01:07:17.654560] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:45.062 [2024-12-06 01:07:17.654578] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:45.062 [2024-12-06 01:07:17.654596] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:45.062 [2024-12-06 01:07:17.654614] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:45.062 [2024-12-06 01:07:17.654632] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:45.062 [2024-12-06 01:07:17.654650] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:45.062 [2024-12-06 01:07:17.654667] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:45.062 [2024-12-06 01:07:17.654686] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the 
state(6) to be set 00:29:45.062 [2024-12-06 01:07:17.654704] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:45.062 [2024-12-06 01:07:17.654722] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:45.062 [2024-12-06 01:07:17.654740] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:45.062 [2024-12-06 01:07:17.654758] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:45.062 [2024-12-06 01:07:17.654775] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:45.062 [2024-12-06 01:07:17.654794] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:45.062 [2024-12-06 01:07:17.654812] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:45.062 [2024-12-06 01:07:17.654839] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:45.062 [2024-12-06 01:07:17.654858] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:45.062 [2024-12-06 01:07:17.654877] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:45.062 [2024-12-06 01:07:17.654894] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:45.062 [2024-12-06 01:07:17.654911] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:45.062 [2024-12-06 01:07:17.654929] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:45.062 [2024-12-06 01:07:17.654947] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:45.062 [2024-12-06 01:07:17.654965] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:45.062 [2024-12-06 01:07:17.654983] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:45.062 [2024-12-06 01:07:17.655008] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:45.062 [2024-12-06 01:07:17.655027] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:45.062 [2024-12-06 01:07:17.655045] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:45.062 [2024-12-06 01:07:17.655063] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:45.062 [2024-12-06 01:07:17.655080] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the 
state(6) to be set 00:29:45.062 [2024-12-06 01:07:17.655098] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:45.062 [2024-12-06 01:07:17.655116] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:45.062 [2024-12-06 01:07:17.655133] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:45.062 [2024-12-06 01:07:17.655151] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:45.062 [2024-12-06 01:07:17.655168] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:45.062 [2024-12-06 01:07:17.655186] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:45.062 [2024-12-06 01:07:17.655204] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:45.062 [2024-12-06 01:07:17.655221] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:45.062 [2024-12-06 01:07:17.655239] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:45.063 [2024-12-06 01:07:17.655256] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:45.063 [2024-12-06 01:07:17.655274] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:45.063 [2024-12-06 01:07:17.655292] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:45.063 [2024-12-06 01:07:17.655309] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:45.063 [2024-12-06 01:07:17.655327] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:45.063 [2024-12-06 01:07:17.655344] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:45.063 [2024-12-06 01:07:17.655362] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:45.063 [2024-12-06 01:07:17.655379] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:45.063 [2024-12-06 01:07:17.655396] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:45.063 [2024-12-06 01:07:17.655414] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:45.063 [2024-12-06 01:07:17.655431] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:45.063 [2024-12-06 01:07:17.655449] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the 
state(6) to be set 00:29:45.063 [2024-12-06 01:07:17.655470] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:45.063 [2024-12-06 01:07:17.655489] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:45.063 [2024-12-06 01:07:17.655507] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:45.063 [2024-12-06 01:07:17.655525] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:45.063 [2024-12-06 01:07:17.655542] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:45.063 [2024-12-06 01:07:17.658828] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:45.063 [2024-12-06 01:07:17.658862] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:45.063 [2024-12-06 01:07:17.658883] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:45.063 [2024-12-06 01:07:17.658901] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:45.063 [2024-12-06 01:07:17.658919] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:45.063 [2024-12-06 01:07:17.658937] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:45.063 [2024-12-06 01:07:17.658953] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:45.063 [2024-12-06 01:07:17.658971] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:45.063 [2024-12-06 01:07:17.658988] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:45.063 [2024-12-06 01:07:17.659005] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:45.063 [2024-12-06 01:07:17.659022] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:45.063 [2024-12-06 01:07:17.659041] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:45.063 [2024-12-06 01:07:17.659059] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:45.063 [2024-12-06 01:07:17.659077] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:45.063 [2024-12-06 01:07:17.659094] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:45.063 [2024-12-06 01:07:17.659112] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the 
state(6) to be set 00:29:45.063 [2024-12-06 01:07:17.659913] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:45.063 [2024-12-06 01:07:17.659931] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:45.063 [2024-12-06 01:07:17.659949] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:45.063 [2024-12-06 01:07:17.659967] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:45.063 [2024-12-06 01:07:17.659984] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:45.063 [2024-12-06 01:07:17.662352] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:45.063 [2024-12-06 01:07:17.662412] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:45.064 [2024-12-06 01:07:17.662450] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:45.064 [2024-12-06 01:07:17.662469] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:45.064 [2024-12-06 01:07:17.662487] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:45.064 [2024-12-06 01:07:17.662504] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:45.064 [2024-12-06 01:07:17.662521] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:45.064 [2024-12-06 01:07:17.662541] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:45.064 [2024-12-06 01:07:17.662558] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:45.064 [2024-12-06 01:07:17.662576] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:45.064 [2024-12-06 01:07:17.662595] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:45.064 [2024-12-06 01:07:17.662619] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:45.064 [2024-12-06 01:07:17.662638] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:45.064 [2024-12-06 01:07:17.662656] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:45.064 [2024-12-06 01:07:17.662675] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:45.064 [2024-12-06 01:07:17.662693] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the 
state(6) to be set 00:29:45.064 [2024-12-06 01:07:17.663562] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:45.064 [2024-12-06 01:07:17.663580] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:45.064 [2024-12-06 01:07:17.663598] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:45.064 [2024-12-06 01:07:17.663617] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:45.064 [2024-12-06 01:07:17.663634] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:45.064 [2024-12-06 01:07:17.664768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.064 [2024-12-06 01:07:17.664841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.064 [2024-12-06 01:07:17.664893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.064 [2024-12-06 01:07:17.664919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.064 [2024-12-06 01:07:17.664948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.064 [2024-12-06 01:07:17.664971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.064 [2024-12-06 01:07:17.664996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.064 [2024-12-06 01:07:17.665019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.065 [2024-12-06 01:07:17.665045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.065 [2024-12-06 01:07:17.665066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.065 [2024-12-06 01:07:17.665091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.065 [2024-12-06 01:07:17.665112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.065 [2024-12-06 01:07:17.665125] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:45.065 [2024-12-06 01:07:17.665138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.065 [2024-12-06 01:07:17.665161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 c[2024-12-06 01:07:17.665160] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x618000009480 is same dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.065 with the state(6) to be set 00:29:45.065 [2024-12-06 01:07:17.665183] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:45.065 [2024-12-06 01:07:17.665190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.065 [2024-12-06 01:07:17.665202] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:45.065 [2024-12-06 01:07:17.665213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.065 [2024-12-06 01:07:17.665220] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:45.065 [2024-12-06 01:07:17.665239] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same [2024-12-06 01:07:17.665239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:1with the state(6) to be set 00:29:45.065 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.065 [2024-12-06 01:07:17.665260] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:45.065 [2024-12-06 01:07:17.665262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.065 [2024-12-06 01:07:17.665278] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:45.065 [2024-12-06 01:07:17.665303] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:45.065 [2024-12-06 01:07:17.665289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.065 [2024-12-06 01:07:17.665321] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:45.065 [2024-12-06 01:07:17.665333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.065 [2024-12-06 01:07:17.665339] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:45.065 [2024-12-06 01:07:17.665358] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:45.065 [2024-12-06 01:07:17.665362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.065 [2024-12-06 01:07:17.665377] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:45.065 [2024-12-06 01:07:17.665385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.065 [2024-12-06 01:07:17.665395] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the 
state(6) to be set 00:29:45.065 [2024-12-06 01:07:17.665411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:1[2024-12-06 01:07:17.665413] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.065 with the state(6) to be set 00:29:45.065 [2024-12-06 01:07:17.665434] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same [2024-12-06 01:07:17.665435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cwith the state(6) to be set 00:29:45.065 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.065 [2024-12-06 01:07:17.665454] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:45.065 [2024-12-06 01:07:17.665463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.065 [2024-12-06 01:07:17.665473] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:45.065 [2024-12-06 01:07:17.665485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.065 [2024-12-06 01:07:17.665491] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:45.065 [2024-12-06 01:07:17.665510] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:45.065 [2024-12-06 01:07:17.665511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.065 [2024-12-06 01:07:17.665528] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:45.065 [2024-12-06 01:07:17.665534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.065 [2024-12-06 01:07:17.665547] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:45.065 [2024-12-06 01:07:17.665559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.065 [2024-12-06 01:07:17.665565] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:45.065 [2024-12-06 01:07:17.665587] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same [2024-12-06 01:07:17.665588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cwith the state(6) to be set 00:29:45.065 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.065 [2024-12-06 01:07:17.665608] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:45.065 [2024-12-06 01:07:17.665616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.065 [2024-12-06 01:07:17.665626] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:45.065 [2024-12-06 01:07:17.665638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.065 [2024-12-06 01:07:17.665645] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:45.065 [2024-12-06 01:07:17.665664] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same [2024-12-06 01:07:17.665663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:1with the state(6) to be set 00:29:45.065 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.065 [2024-12-06 01:07:17.665684] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:45.065 [2024-12-06 01:07:17.665688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.065 [2024-12-06 01:07:17.665702] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:45.065 [2024-12-06 01:07:17.665713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.065 [2024-12-06 01:07:17.665720] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:45.065 [2024-12-06 01:07:17.665736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.065 [2024-12-06 01:07:17.665739] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:45.065 [2024-12-06 01:07:17.665758] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:45.065 [2024-12-06 01:07:17.665761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.065 [2024-12-06 01:07:17.665776] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:45.065 [2024-12-06 01:07:17.665783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.065 [2024-12-06 01:07:17.665794] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:45.065 [2024-12-06 01:07:17.665808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.065 [2024-12-06 01:07:17.665812] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:45.065 [2024-12-06 01:07:17.665841] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same [2024-12-06 01:07:17.665840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cwith the state(6) to be set 00:29:45.065 dw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:29:45.065 [2024-12-06 01:07:17.665866] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:45.065 [2024-12-06 01:07:17.665876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.065 [2024-12-06 01:07:17.665885] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:45.065 [2024-12-06 01:07:17.665900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.065 [2024-12-06 01:07:17.665912] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:45.065 [2024-12-06 01:07:17.665927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.065 [2024-12-06 01:07:17.665931] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:45.065 [2024-12-06 01:07:17.665950] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same [2024-12-06 01:07:17.665950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cwith the state(6) to be set 00:29:45.065 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.065 [2024-12-06 01:07:17.665970] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:45.065 [2024-12-06 01:07:17.665978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.065 [2024-12-06 01:07:17.665992] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:45.066 [2024-12-06 01:07:17.666002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.066 [2024-12-06 01:07:17.666011] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:45.066 [2024-12-06 01:07:17.666028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:1[2024-12-06 01:07:17.666030] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.066 with the state(6) to be set 00:29:45.066 [2024-12-06 01:07:17.666052] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same [2024-12-06 01:07:17.666052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cwith the state(6) to be set 00:29:45.066 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.066 [2024-12-06 01:07:17.666073] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:45.066 [2024-12-06 01:07:17.666081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.066 [2024-12-06 01:07:17.666093] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:45.066 [2024-12-06 01:07:17.666103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.066 [2024-12-06 01:07:17.666111] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:45.066 [2024-12-06 01:07:17.666131] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same [2024-12-06 01:07:17.666129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:1with the state(6) to be set 00:29:45.066 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.066 [2024-12-06 01:07:17.666152] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same [2024-12-06 01:07:17.666155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cwith the state(6) to be set 00:29:45.066 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.066 [2024-12-06 01:07:17.666178] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:45.066 [2024-12-06 01:07:17.666184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.066 [2024-12-06 01:07:17.666198] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:45.066 [2024-12-06 01:07:17.666206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.066 [2024-12-06 01:07:17.666218] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:45.066 [2024-12-06 01:07:17.666232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.066 [2024-12-06 01:07:17.666237] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:45.066 [2024-12-06 01:07:17.666256] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same [2024-12-06 01:07:17.666256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cwith the state(6) to be set 00:29:45.066 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.066 [2024-12-06 01:07:17.666276] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:45.066 [2024-12-06 01:07:17.666292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:1[2024-12-06 01:07:17.666296] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.066 with the state(6) to be set 00:29:45.066 [2024-12-06 01:07:17.666317] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same [2024-12-06 01:07:17.666317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cwith the state(6) to be set 00:29:45.066 dw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:29:45.066 [2024-12-06 01:07:17.666338] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:45.066 [2024-12-06 01:07:17.666345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.066 [2024-12-06 01:07:17.666357] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:45.066 [2024-12-06 01:07:17.666368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.066 [2024-12-06 01:07:17.666376] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:45.066 [2024-12-06 01:07:17.666394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.066 [2024-12-06 01:07:17.666417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.066 [2024-12-06 01:07:17.666443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.066 [2024-12-06 01:07:17.666466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.066 [2024-12-06 01:07:17.666491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.066 [2024-12-06 01:07:17.666517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.066 [2024-12-06 01:07:17.666545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.066 [2024-12-06 01:07:17.666568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.066 [2024-12-06 01:07:17.666594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.066 [2024-12-06 01:07:17.666616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.066 [2024-12-06 01:07:17.666641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.066 [2024-12-06 01:07:17.666663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.066 [2024-12-06 01:07:17.666688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.066 [2024-12-06 01:07:17.666710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.066 [2024-12-06 01:07:17.666735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:45.066 [2024-12-06 01:07:17.666757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.066 [2024-12-06 01:07:17.666782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.066 [2024-12-06 01:07:17.666804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.066 [2024-12-06 01:07:17.666837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.066 [2024-12-06 01:07:17.666860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.066 [2024-12-06 01:07:17.666885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.066 [2024-12-06 01:07:17.666908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.066 [2024-12-06 01:07:17.666933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.066 [2024-12-06 01:07:17.666955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.066 [2024-12-06 01:07:17.666980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.066 [2024-12-06 01:07:17.667002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.066 [2024-12-06 01:07:17.667027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.066 [2024-12-06 01:07:17.667050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.066 [2024-12-06 01:07:17.667081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.066 [2024-12-06 01:07:17.667103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.066 [2024-12-06 01:07:17.667133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.066 [2024-12-06 01:07:17.667155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.066 [2024-12-06 01:07:17.667180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.066 [2024-12-06 01:07:17.667203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.066 [2024-12-06 01:07:17.667227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.066 
[2024-12-06 01:07:17.667249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.066 [2024-12-06 01:07:17.667274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.066 [2024-12-06 01:07:17.667296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.066 [2024-12-06 01:07:17.667321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.066 [2024-12-06 01:07:17.667343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.066 [2024-12-06 01:07:17.667368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.066 [2024-12-06 01:07:17.667390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.066 [2024-12-06 01:07:17.667415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.066 [2024-12-06 01:07:17.667437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.066 [2024-12-06 01:07:17.667463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.067 [2024-12-06 01:07:17.667485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.067 [2024-12-06 01:07:17.667510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.067 [2024-12-06 01:07:17.667532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.067 [2024-12-06 01:07:17.667560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.067 [2024-12-06 01:07:17.667583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.067 [2024-12-06 01:07:17.667609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.067 [2024-12-06 01:07:17.667631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.067 [2024-12-06 01:07:17.667656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.067 [2024-12-06 01:07:17.667679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.067 [2024-12-06 01:07:17.667705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.067 [2024-12-06 
01:07:17.667731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.067 [2024-12-06 01:07:17.667758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.067 [2024-12-06 01:07:17.667781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.067 [2024-12-06 01:07:17.667806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.067 [2024-12-06 01:07:17.667852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.067 [2024-12-06 01:07:17.667885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.067 [2024-12-06 01:07:17.667909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.067 [2024-12-06 01:07:17.667934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.067 [2024-12-06 01:07:17.667958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.067 [2024-12-06 01:07:17.667985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.067 [2024-12-06 01:07:17.668007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.067 [2024-12-06 01:07:17.668033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.067 [2024-12-06 01:07:17.668055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.067 [2024-12-06 01:07:17.668139] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:45.067 [2024-12-06 01:07:17.669144] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:45.067 [2024-12-06 01:07:17.669178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.067 [2024-12-06 01:07:17.669206] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:45.067 [2024-12-06 01:07:17.669228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.067 [2024-12-06 01:07:17.669251] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:45.067 [2024-12-06 01:07:17.669272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.067 [2024-12-06 01:07:17.669295] 
nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:45.067 [2024-12-06 01:07:17.669316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.067 [2024-12-06 01:07:17.669337] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f7f00 is same with the state(6) to be set 00:29:45.067 [2024-12-06 01:07:17.669433] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:45.067 [2024-12-06 01:07:17.669461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.067 [2024-12-06 01:07:17.669494] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:45.067 [2024-12-06 01:07:17.669517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.067 [2024-12-06 01:07:17.669539] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:45.067 [2024-12-06 01:07:17.669560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.067 [2024-12-06 01:07:17.669582] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:45.067 [2024-12-06 01:07:17.669602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.067 [2024-12-06 01:07:17.669621] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f3900 is same with the state(6) to be set 00:29:45.067 [2024-12-06 01:07:17.669704] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:45.067 [2024-12-06 01:07:17.669732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.067 [2024-12-06 01:07:17.669755] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:45.067 [2024-12-06 01:07:17.669776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.067 [2024-12-06 01:07:17.669798] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:45.067 [2024-12-06 01:07:17.669825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.067 [2024-12-06 01:07:17.669851] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:45.067 [2024-12-06 01:07:17.669873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.067 [2024-12-06 01:07:17.669893] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x6150001f4d00 is same with the state(6) to be set 00:29:45.067 [2024-12-06 01:07:17.669963] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:45.067 [2024-12-06 01:07:17.669991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.067 [2024-12-06 01:07:17.670014] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:45.067 [2024-12-06 01:07:17.670035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.067 [2024-12-06 01:07:17.670057] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:45.067 [2024-12-06 01:07:17.670077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.067 [2024-12-06 01:07:17.670099] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:45.067 [2024-12-06 01:07:17.670120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.067 [2024-12-06 01:07:17.670139] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f5700 is same with the state(6) to be set 00:29:45.067 [2024-12-06 01:07:17.670210] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:45.067 [2024-12-06 01:07:17.670242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.067 [2024-12-06 01:07:17.670266] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:45.067 [2024-12-06 01:07:17.670287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.067 [2024-12-06 01:07:17.670309] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:45.067 [2024-12-06 01:07:17.670330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.067 [2024-12-06 01:07:17.670352] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:45.067 [2024-12-06 01:07:17.670373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.067 [2024-12-06 01:07:17.670392] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f6100 is same with the state(6) to be set 00:29:45.067 [2024-12-06 01:07:17.670459] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:45.067 [2024-12-06 01:07:17.670486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.067 [2024-12-06 
01:07:17.670509] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:45.067 [2024-12-06 01:07:17.670530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.067 [2024-12-06 01:07:17.670552] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:45.067 [2024-12-06 01:07:17.670574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.067 [2024-12-06 01:07:17.670595] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:45.067 [2024-12-06 01:07:17.670616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.067 [2024-12-06 01:07:17.670635] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f4300 is same with the state(6) to be set 00:29:45.067 [2024-12-06 01:07:17.670694] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:45.067 [2024-12-06 01:07:17.670721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.068 [2024-12-06 01:07:17.670745] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:45.068 [2024-12-06 01:07:17.670767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.068 [2024-12-06 01:07:17.670789] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:45.068 [2024-12-06 01:07:17.670810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.068 [2024-12-06 01:07:17.670846] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:45.068 [2024-12-06 01:07:17.670869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.068 [2024-12-06 01:07:17.670894] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2f00 is same with the state(6) to be set 00:29:45.068 [2024-12-06 01:07:17.670965] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:45.068 [2024-12-06 01:07:17.670993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.068 [2024-12-06 01:07:17.671017] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:45.068 [2024-12-06 01:07:17.671039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.068 [2024-12-06 01:07:17.671061] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC 
EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:45.068 [2024-12-06 01:07:17.671082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.068 [2024-12-06 01:07:17.671105] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:45.068 [2024-12-06 01:07:17.671126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.068 [2024-12-06 01:07:17.671146] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:29:45.068 [2024-12-06 01:07:17.671211] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:45.068 [2024-12-06 01:07:17.671238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.068 [2024-12-06 01:07:17.671265] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:45.068 [2024-12-06 01:07:17.671299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.068 [2024-12-06 01:07:17.671324] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:45.068 [2024-12-06 01:07:17.671345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.068 [2024-12-06 01:07:17.671366] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:45.068 [2024-12-06 01:07:17.671386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.068 [2024-12-06 01:07:17.671406] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f6b00 is same with the state(6) to be set 00:29:45.068 [2024-12-06 01:07:17.671478] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:45.068 [2024-12-06 01:07:17.671505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.068 [2024-12-06 01:07:17.671528] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:45.068 [2024-12-06 01:07:17.671549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.068 [2024-12-06 01:07:17.671570] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:45.068 [2024-12-06 01:07:17.671591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.068 [2024-12-06 01:07:17.671612] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:45.068 
[2024-12-06 01:07:17.671638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.068 [2024-12-06 01:07:17.671658] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f7500 is same with the state(6) to be set 00:29:45.068 [2024-12-06 01:07:17.671755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.068 [2024-12-06 01:07:17.671786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.068 [2024-12-06 01:07:17.671846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.068 [2024-12-06 01:07:17.671874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.068 [2024-12-06 01:07:17.671901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.068 [2024-12-06 01:07:17.671923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.068 [2024-12-06 01:07:17.671948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.068 [2024-12-06 01:07:17.671970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.068 [2024-12-06 01:07:17.671995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.068 [2024-12-06 01:07:17.672016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.068 [2024-12-06 01:07:17.672041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.068 [2024-12-06 01:07:17.672063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.068 [2024-12-06 01:07:17.672088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.068 [2024-12-06 01:07:17.672111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.068 [2024-12-06 01:07:17.672137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.068 [2024-12-06 01:07:17.672159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.068 [2024-12-06 01:07:17.672185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.068 [2024-12-06 01:07:17.672207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.068 [2024-12-06 01:07:17.672234] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.068 [2024-12-06 01:07:17.672256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.068 [2024-12-06 01:07:17.672282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.068 [2024-12-06 01:07:17.672304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.068 [2024-12-06 01:07:17.672331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.068 [2024-12-06 01:07:17.672359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.068 [2024-12-06 01:07:17.672386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.068 [2024-12-06 01:07:17.672409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.068 [2024-12-06 01:07:17.672436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.068 [2024-12-06 01:07:17.672459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.068 [2024-12-06 01:07:17.672485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.068 [2024-12-06 01:07:17.672508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.068 [2024-12-06 01:07:17.672533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.068 [2024-12-06 01:07:17.672555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.068 [2024-12-06 01:07:17.672580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.068 [2024-12-06 01:07:17.672603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.068 [2024-12-06 01:07:17.672628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.068 [2024-12-06 01:07:17.672650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.069 [2024-12-06 01:07:17.672674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.069 [2024-12-06 01:07:17.672696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.069 [2024-12-06 01:07:17.672721] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.069 [2024-12-06 01:07:17.672743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.069 [2024-12-06 01:07:17.672768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.069 [2024-12-06 01:07:17.672790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.069 [2024-12-06 01:07:17.672820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.069 [2024-12-06 01:07:17.672846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.069 [2024-12-06 01:07:17.672872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.069 [2024-12-06 01:07:17.672895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.069 [2024-12-06 01:07:17.672920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.069 [2024-12-06 01:07:17.672942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.069 [2024-12-06 01:07:17.672972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.069 [2024-12-06 01:07:17.672995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.069 [2024-12-06 01:07:17.673021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.069 [2024-12-06 01:07:17.673043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.069 [2024-12-06 01:07:17.673068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.069 [2024-12-06 01:07:17.673091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.069 [2024-12-06 01:07:17.673116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.069 [2024-12-06 01:07:17.673139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.069 [2024-12-06 01:07:17.673178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.069 [2024-12-06 01:07:17.673200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.069 [2024-12-06 01:07:17.673226] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.069 [2024-12-06 01:07:17.673247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.069 [2024-12-06 01:07:17.673273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.069 [2024-12-06 01:07:17.673294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.069 [2024-12-06 01:07:17.673319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.069 [2024-12-06 01:07:17.673340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.069 [2024-12-06 01:07:17.673365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.069 [2024-12-06 01:07:17.673387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.069 [2024-12-06 01:07:17.673412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.069 [2024-12-06 01:07:17.673433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.069 [2024-12-06 01:07:17.673466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.069 [2024-12-06 01:07:17.673489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.069 [2024-12-06 01:07:17.673515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.069 [2024-12-06 01:07:17.673536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.069 [2024-12-06 01:07:17.673561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.069 [2024-12-06 01:07:17.673588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.069 [2024-12-06 01:07:17.673614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.069 [2024-12-06 01:07:17.673637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.069 [2024-12-06 01:07:17.673662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.069 [2024-12-06 01:07:17.673683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.069 [2024-12-06 01:07:17.673708] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.069 [2024-12-06 01:07:17.673730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.069 [2024-12-06 01:07:17.673755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.069 [2024-12-06 01:07:17.673777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.069 [2024-12-06 01:07:17.673802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.069 [2024-12-06 01:07:17.673830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.069 [2024-12-06 01:07:17.673858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.069 [2024-12-06 01:07:17.673880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.069 [2024-12-06 01:07:17.673905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.069 [2024-12-06 01:07:17.673927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.069 [2024-12-06 01:07:17.673952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.069 [2024-12-06 01:07:17.673974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.069 [2024-12-06 01:07:17.673999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.069 [2024-12-06 01:07:17.674021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.069 [2024-12-06 01:07:17.674046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.069 [2024-12-06 01:07:17.674068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.069 [2024-12-06 01:07:17.674093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.069 [2024-12-06 01:07:17.674114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.069 [2024-12-06 01:07:17.674140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.069 [2024-12-06 01:07:17.674162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.069 [2024-12-06 01:07:17.674191] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.069 [2024-12-06 01:07:17.674214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.069 [2024-12-06 01:07:17.674240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.069 [2024-12-06 01:07:17.674261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.069 [2024-12-06 01:07:17.674285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.069 [2024-12-06 01:07:17.674307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.069 [2024-12-06 01:07:17.674331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.069 [2024-12-06 01:07:17.674352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.069 [2024-12-06 01:07:17.674377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.069 [2024-12-06 01:07:17.674398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.069 [2024-12-06 01:07:17.674423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.069 [2024-12-06 01:07:17.674444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.069 [2024-12-06 01:07:17.674469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.069 [2024-12-06 01:07:17.674491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.069 [2024-12-06 01:07:17.674516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.069 [2024-12-06 01:07:17.674537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.069 [2024-12-06 01:07:17.674561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.070 [2024-12-06 01:07:17.674583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.070 [2024-12-06 01:07:17.674608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.070 [2024-12-06 01:07:17.674629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.070 [2024-12-06 01:07:17.674653] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.070 [2024-12-06 01:07:17.674675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.070 [2024-12-06 01:07:17.674699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.070 [2024-12-06 01:07:17.674721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.070 [2024-12-06 01:07:17.674745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.070 [2024-12-06 01:07:17.674771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.070 [2024-12-06 01:07:17.674798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.070 [2024-12-06 01:07:17.674826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.070 [2024-12-06 01:07:17.674854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.070 [2024-12-06 01:07:17.674882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.070 [2024-12-06 01:07:17.675273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.070 [2024-12-06 01:07:17.675304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.070 [2024-12-06 01:07:17.675339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.070 [2024-12-06 01:07:17.675362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.070 [2024-12-06 01:07:17.675388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.070 [2024-12-06 01:07:17.675409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.070 [2024-12-06 01:07:17.675435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.070 [2024-12-06 01:07:17.675457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.070 [2024-12-06 01:07:17.675482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.070 [2024-12-06 01:07:17.675503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.070 [2024-12-06 01:07:17.675528] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.070 [2024-12-06 01:07:17.675550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.070 [2024-12-06 01:07:17.675575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.070 [2024-12-06 01:07:17.675598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.070 [2024-12-06 01:07:17.675623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.070 [2024-12-06 01:07:17.675645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.070 [2024-12-06 01:07:17.675670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.070 [2024-12-06 01:07:17.675692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.070 [2024-12-06 01:07:17.675718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.070 [2024-12-06 01:07:17.675739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.070 [2024-12-06 01:07:17.675770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.070 [2024-12-06 01:07:17.675793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.070 [2024-12-06 01:07:17.675826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.070 [2024-12-06 01:07:17.675851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.070 [2024-12-06 01:07:17.675876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.070 [2024-12-06 01:07:17.675898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.070 [2024-12-06 01:07:17.675923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.070 [2024-12-06 01:07:17.675945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.070 [2024-12-06 01:07:17.675971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.070 [2024-12-06 01:07:17.675993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.070 [2024-12-06 01:07:17.676017] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.070 [2024-12-06 01:07:17.676041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.070 [2024-12-06 01:07:17.676067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.070 [2024-12-06 01:07:17.676090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.070 [2024-12-06 01:07:17.676115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.070 [2024-12-06 01:07:17.676138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.070 [2024-12-06 01:07:17.676163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.070 [2024-12-06 01:07:17.676185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.070 [2024-12-06 01:07:17.676211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.070 [2024-12-06 01:07:17.676233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.070 [2024-12-06 01:07:17.676258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.070 [2024-12-06 01:07:17.676280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.070 [2024-12-06 01:07:17.676306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.070 [2024-12-06 01:07:17.676328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.070 [2024-12-06 01:07:17.676354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.070 [2024-12-06 01:07:17.676384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.070 [2024-12-06 01:07:17.676411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.070 [2024-12-06 01:07:17.676433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.070 [2024-12-06 01:07:17.676459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.070 [2024-12-06 01:07:17.676481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.070 [2024-12-06 01:07:17.676507] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.070 [2024-12-06 01:07:17.676529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.070 [2024-12-06 01:07:17.676554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.070 [2024-12-06 01:07:17.676576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.070 [2024-12-06 01:07:17.676601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.070 [2024-12-06 01:07:17.676623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.070 [2024-12-06 01:07:17.676665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.070 [2024-12-06 01:07:17.676688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.070 [2024-12-06 01:07:17.676713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.070 [2024-12-06 01:07:17.676736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.070 [2024-12-06 01:07:17.676762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.070 [2024-12-06 01:07:17.676784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.070 [2024-12-06 01:07:17.676810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.070 [2024-12-06 01:07:17.676841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.070 [2024-12-06 01:07:17.676868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.070 [2024-12-06 01:07:17.676890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.071 [2024-12-06 01:07:17.676916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.071 [2024-12-06 01:07:17.676938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.071 [2024-12-06 01:07:17.676963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.071 [2024-12-06 01:07:17.676985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.071 [2024-12-06 01:07:17.677015] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.071 [2024-12-06 01:07:17.677038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.071 [2024-12-06 01:07:17.677064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.071 [2024-12-06 01:07:17.677086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.071 [2024-12-06 01:07:17.677111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.071 [2024-12-06 01:07:17.677133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.071 [2024-12-06 01:07:17.677158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.071 [2024-12-06 01:07:17.677180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.071 [2024-12-06 01:07:17.677205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.071 [2024-12-06 01:07:17.677228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.071 [2024-12-06 01:07:17.677253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.071 [2024-12-06 01:07:17.677275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.071 [2024-12-06 01:07:17.677300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.071 [2024-12-06 01:07:17.677322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.071 [2024-12-06 01:07:17.677348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.071 [2024-12-06 01:07:17.677370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.071 [2024-12-06 01:07:17.677394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.071 [2024-12-06 01:07:17.677416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.071 [2024-12-06 01:07:17.677441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.071 [2024-12-06 01:07:17.677463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.071 [2024-12-06 01:07:17.677489] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.071 [2024-12-06 01:07:17.677511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.071 [2024-12-06 01:07:17.677537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.071 [2024-12-06 01:07:17.677559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.071 [2024-12-06 01:07:17.677583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.071 [2024-12-06 01:07:17.677611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.071 [2024-12-06 01:07:17.677637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.071 [2024-12-06 01:07:17.677660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.071 [2024-12-06 01:07:17.677685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.071 [2024-12-06 01:07:17.677708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.071 [2024-12-06 01:07:17.677733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.071 [2024-12-06 01:07:17.677756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.071 [2024-12-06 01:07:17.677781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.071 [2024-12-06 01:07:17.677804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.071 [2024-12-06 01:07:17.677837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.071 [2024-12-06 01:07:17.677861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.071 [2024-12-06 01:07:17.677886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.071 [2024-12-06 01:07:17.677909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.071 [2024-12-06 01:07:17.677935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.071 [2024-12-06 01:07:17.677957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.071 [2024-12-06 01:07:17.677982] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.071 [2024-12-06 01:07:17.678005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.071 [2024-12-06 01:07:17.678030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.071 [2024-12-06 01:07:17.678052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.071 [2024-12-06 01:07:17.678077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.071 [2024-12-06 01:07:17.678099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.071 [2024-12-06 01:07:17.678124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.071 [2024-12-06 01:07:17.678146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.071 [2024-12-06 01:07:17.678171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.071 [2024-12-06 01:07:17.678193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.071 [2024-12-06 01:07:17.678222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.071 [2024-12-06 01:07:17.678246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.071 [2024-12-06 01:07:17.678271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.071 [2024-12-06 01:07:17.678292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.071 [2024-12-06 01:07:17.678317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.071 [2024-12-06 01:07:17.678339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.071 [2024-12-06 01:07:17.678365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.071 [2024-12-06 01:07:17.678387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.071 [2024-12-06 01:07:17.678410] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001fa200 is same with the state(6) to be set 00:29:45.071 [2024-12-06 01:07:17.680740] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller 00:29:45.071 [2024-12-06 01:07:17.680843] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x6150001f4300 (9): Bad file descriptor 00:29:45.071 [2024-12-06 01:07:17.680948] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7f00 (9): Bad file descriptor 00:29:45.071 [2024-12-06 01:07:17.680996] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f3900 (9): Bad file descriptor 00:29:45.071 [2024-12-06 01:07:17.681050] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f4d00 (9): Bad file descriptor 00:29:45.071 [2024-12-06 01:07:17.681097] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f5700 (9): Bad file descriptor 00:29:45.071 [2024-12-06 01:07:17.681137] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f6100 (9): Bad file descriptor 00:29:45.071 [2024-12-06 01:07:17.681176] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2f00 (9): Bad file descriptor 00:29:45.071 [2024-12-06 01:07:17.681213] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:29:45.071 [2024-12-06 01:07:17.681250] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f6b00 (9): Bad file descriptor 00:29:45.071 [2024-12-06 01:07:17.681286] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7500 (9): Bad file descriptor 00:29:45.071 [2024-12-06 01:07:17.684249] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:29:45.071 [2024-12-06 01:07:17.684317] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller 00:29:45.071 [2024-12-06 01:07:17.685786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.071 [2024-12-06 01:07:17.685854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f4300 with addr=10.0.0.2, port=4420 00:29:45.071 [2024-12-06 01:07:17.685885] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f4300 is same with the state(6) to be set 00:29:45.071 [2024-12-06 01:07:17.686003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.071 [2024-12-06 01:07:17.686041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:29:45.072 [2024-12-06 01:07:17.686077] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:29:45.072 [2024-12-06 01:07:17.686196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.072 [2024-12-06 01:07:17.686231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:29:45.072 [2024-12-06 01:07:17.686255] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2f00 is same with the state(6) to be set 00:29:45.072 [2024-12-06 01:07:17.687467] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:29:45.072 [2024-12-06 01:07:17.687700] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:29:45.072 [2024-12-06 01:07:17.687794] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: 
Unexpected PDU type 0x00 00:29:45.072 [2024-12-06 01:07:17.687922] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:29:45.072 [2024-12-06 01:07:17.688016] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:29:45.072 [2024-12-06 01:07:17.688108] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:29:45.072 [2024-12-06 01:07:17.688206] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:29:45.072 [2024-12-06 01:07:17.688264] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f4300 (9): Bad file descriptor 00:29:45.072 [2024-12-06 01:07:17.688305] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:29:45.072 [2024-12-06 01:07:17.688335] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2f00 (9): Bad file descriptor 00:29:45.072 [2024-12-06 01:07:17.688758] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state 00:29:45.072 [2024-12-06 01:07:17.688795] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed 00:29:45.072 [2024-12-06 01:07:17.688831] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state. 00:29:45.072 [2024-12-06 01:07:17.688863] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed. 00:29:45.072 [2024-12-06 01:07:17.688889] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:29:45.072 [2024-12-06 01:07:17.688907] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:29:45.072 [2024-12-06 01:07:17.688926] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:29:45.072 [2024-12-06 01:07:17.688946] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:29:45.072 [2024-12-06 01:07:17.688966] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state 00:29:45.072 [2024-12-06 01:07:17.688985] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed 00:29:45.072 [2024-12-06 01:07:17.689003] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state. 00:29:45.072 [2024-12-06 01:07:17.689022] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed. 
00:29:45.072 [2024-12-06 01:07:17.691014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.072 [2024-12-06 01:07:17.691051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.072 [2024-12-06 01:07:17.691101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.072 [2024-12-06 01:07:17.691125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.072 [2024-12-06 01:07:17.691157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.072 [2024-12-06 01:07:17.691180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.072 [2024-12-06 01:07:17.691205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.072 [2024-12-06 01:07:17.691227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.072 [2024-12-06 01:07:17.691252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.072 [2024-12-06 01:07:17.691274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.072 [2024-12-06 01:07:17.691299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.072 [2024-12-06 01:07:17.691360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.072 [2024-12-06 01:07:17.691387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.072 [2024-12-06 01:07:17.691409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.072 [2024-12-06 01:07:17.691434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.072 [2024-12-06 01:07:17.691456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.072 [2024-12-06 01:07:17.691480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.072 [2024-12-06 01:07:17.691501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.072 [2024-12-06 01:07:17.691526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.072 [2024-12-06 01:07:17.691548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.072 [2024-12-06 
01:07:17.691574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.072 [2024-12-06 01:07:17.691595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.072 [2024-12-06 01:07:17.691619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.072 [2024-12-06 01:07:17.691641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.072 [2024-12-06 01:07:17.691666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.072 [2024-12-06 01:07:17.691687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.072 [2024-12-06 01:07:17.691712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.072 [2024-12-06 01:07:17.691734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.072 [2024-12-06 01:07:17.691759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.072 [2024-12-06 01:07:17.691785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.072 [2024-12-06 01:07:17.691811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.072 [2024-12-06 01:07:17.691841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.072 [2024-12-06 01:07:17.691866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.072 [2024-12-06 01:07:17.691888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.072 [2024-12-06 01:07:17.691912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.072 [2024-12-06 01:07:17.691934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.072 [2024-12-06 01:07:17.691959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.072 [2024-12-06 01:07:17.691980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.072 [2024-12-06 01:07:17.692006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.072 [2024-12-06 01:07:17.692028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.072 [2024-12-06 01:07:17.692053] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.072 [2024-12-06 01:07:17.692074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.072 [2024-12-06 01:07:17.692099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.072 [2024-12-06 01:07:17.692121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.072 [2024-12-06 01:07:17.692146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.072 [2024-12-06 01:07:17.692168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.072 [2024-12-06 01:07:17.692192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.072 [2024-12-06 01:07:17.692214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.072 [2024-12-06 01:07:17.692238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.072 [2024-12-06 01:07:17.692259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.072 [2024-12-06 01:07:17.692284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.072 [2024-12-06 01:07:17.692305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.072 [2024-12-06 01:07:17.692330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.072 [2024-12-06 01:07:17.692352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.072 [2024-12-06 01:07:17.692381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.072 [2024-12-06 01:07:17.692403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.073 [2024-12-06 01:07:17.692428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.073 [2024-12-06 01:07:17.692450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.073 [2024-12-06 01:07:17.692475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.073 [2024-12-06 01:07:17.692497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.073 [2024-12-06 01:07:17.692522] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.073 [2024-12-06 01:07:17.692544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.073 [2024-12-06 01:07:17.692568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.073 [2024-12-06 01:07:17.692590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.073 [2024-12-06 01:07:17.692614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.073 [2024-12-06 01:07:17.692636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.073 [2024-12-06 01:07:17.692660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.073 [2024-12-06 01:07:17.692682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.073 [2024-12-06 01:07:17.692707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.073 [2024-12-06 01:07:17.692728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.073 [2024-12-06 01:07:17.692752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.073 [2024-12-06 01:07:17.692774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.073 [2024-12-06 01:07:17.692798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.073 [2024-12-06 01:07:17.692834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.073 [2024-12-06 01:07:17.692863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.073 [2024-12-06 01:07:17.692885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.073 [2024-12-06 01:07:17.692910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.073 [2024-12-06 01:07:17.692931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.073 [2024-12-06 01:07:17.692956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.073 [2024-12-06 01:07:17.692982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.073 [2024-12-06 01:07:17.693008] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.073 [2024-12-06 01:07:17.693030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.073 [2024-12-06 01:07:17.693054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.073 [2024-12-06 01:07:17.693075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.073 [2024-12-06 01:07:17.693101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.073 [2024-12-06 01:07:17.693123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.073 [2024-12-06 01:07:17.693148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.073 [2024-12-06 01:07:17.693169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.073 [2024-12-06 01:07:17.693193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.073 [2024-12-06 01:07:17.693215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.073 [2024-12-06 01:07:17.693241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.073 [2024-12-06 01:07:17.693262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.073 [2024-12-06 01:07:17.693287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.073 [2024-12-06 01:07:17.693308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.073 [2024-12-06 01:07:17.693332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.073 [2024-12-06 01:07:17.693353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.073 [2024-12-06 01:07:17.693378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.073 [2024-12-06 01:07:17.693399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.073 [2024-12-06 01:07:17.693424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.073 [2024-12-06 01:07:17.693445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.073 [2024-12-06 01:07:17.693471] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.073 [2024-12-06 01:07:17.693492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.073 [2024-12-06 01:07:17.693517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.073 [2024-12-06 01:07:17.693538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.073 [2024-12-06 01:07:17.693567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.073 [2024-12-06 01:07:17.693589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.073 [2024-12-06 01:07:17.693614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.073 [2024-12-06 01:07:17.693636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.073 [2024-12-06 01:07:17.693661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.073 [2024-12-06 01:07:17.693682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.073 [2024-12-06 01:07:17.693707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.073 [2024-12-06 01:07:17.693728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.073 [2024-12-06 01:07:17.693753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.073 [2024-12-06 01:07:17.693774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.073 [2024-12-06 01:07:17.693799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.073 [2024-12-06 01:07:17.693828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.073 [2024-12-06 01:07:17.693856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.073 [2024-12-06 01:07:17.693878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.073 [2024-12-06 01:07:17.693902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.073 [2024-12-06 01:07:17.693923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.073 [2024-12-06 01:07:17.693948] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.073 [2024-12-06 01:07:17.693969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.073 [2024-12-06 01:07:17.693995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.073 [2024-12-06 01:07:17.694017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.073 [2024-12-06 01:07:17.694041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.073 [2024-12-06 01:07:17.694062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.073 [2024-12-06 01:07:17.694086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.074 [2024-12-06 01:07:17.694108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.074 [2024-12-06 01:07:17.694130] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001fa480 is same with the state(6) to be set 00:29:45.074 [2024-12-06 01:07:17.695737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.074 [2024-12-06 01:07:17.695770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.074 [2024-12-06 01:07:17.695826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.074 [2024-12-06 01:07:17.695851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.074 [2024-12-06 01:07:17.695877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.074 [2024-12-06 01:07:17.695899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.074 [2024-12-06 01:07:17.695924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.074 [2024-12-06 01:07:17.695946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.074 [2024-12-06 01:07:17.695970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.074 [2024-12-06 01:07:17.695993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.074 [2024-12-06 01:07:17.696061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.074 [2024-12-06 01:07:17.696084] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.074 [2024-12-06 01:07:17.696109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.074 [2024-12-06 01:07:17.696131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.074 [2024-12-06 01:07:17.696155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.074 [2024-12-06 01:07:17.696178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.074 [2024-12-06 01:07:17.696202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.074 [2024-12-06 01:07:17.696224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.074 [2024-12-06 01:07:17.696249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.074 [2024-12-06 01:07:17.696271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.074 [2024-12-06 01:07:17.696295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.074 [2024-12-06 01:07:17.696317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.074 [2024-12-06 01:07:17.696342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.074 [2024-12-06 01:07:17.696363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.074 [2024-12-06 01:07:17.696388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.074 [2024-12-06 01:07:17.696415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.074 [2024-12-06 01:07:17.696440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.074 [2024-12-06 01:07:17.696463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.074 [2024-12-06 01:07:17.696487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.074 [2024-12-06 01:07:17.696509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.074 [2024-12-06 01:07:17.696534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.074 [2024-12-06 01:07:17.696555] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.074 [2024-12-06 01:07:17.696580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.074 [2024-12-06 01:07:17.696602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.074 [2024-12-06 01:07:17.696626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.074 [2024-12-06 01:07:17.696647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.074 [2024-12-06 01:07:17.696672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.074 [2024-12-06 01:07:17.696693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.074 [2024-12-06 01:07:17.696718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.074 [2024-12-06 01:07:17.696740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.074 [2024-12-06 01:07:17.696764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.074 [2024-12-06 01:07:17.696786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.074 [2024-12-06 01:07:17.696810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.074 [2024-12-06 01:07:17.696851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.074 [2024-12-06 01:07:17.696890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.074 [2024-12-06 01:07:17.696913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.074 [2024-12-06 01:07:17.696937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.074 [2024-12-06 01:07:17.696958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.074 [2024-12-06 01:07:17.696984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.074 [2024-12-06 01:07:17.697011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.074 [2024-12-06 01:07:17.697040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.074 [2024-12-06 01:07:17.697063] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.074 [2024-12-06 01:07:17.697087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.074 [2024-12-06 01:07:17.697109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.074 [2024-12-06 01:07:17.697135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.074 [2024-12-06 01:07:17.697156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.074 [2024-12-06 01:07:17.697181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.074 [2024-12-06 01:07:17.697203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.074 [2024-12-06 01:07:17.697227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.074 [2024-12-06 01:07:17.697249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.074 [2024-12-06 01:07:17.697274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.074 [2024-12-06 01:07:17.697296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.074 [2024-12-06 01:07:17.697320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.074 [2024-12-06 01:07:17.697341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.074 [2024-12-06 01:07:17.697366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.074 [2024-12-06 01:07:17.697387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.074 [2024-12-06 01:07:17.697411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.074 [2024-12-06 01:07:17.697433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.074 [2024-12-06 01:07:17.697457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.074 [2024-12-06 01:07:17.697479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.074 [2024-12-06 01:07:17.697504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.074 [2024-12-06 01:07:17.697525] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.074 [2024-12-06 01:07:17.697549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.074 [2024-12-06 01:07:17.697571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.074 [2024-12-06 01:07:17.697596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.074 [2024-12-06 01:07:17.697621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.074 [2024-12-06 01:07:17.697647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.075 [2024-12-06 01:07:17.697669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.075 [2024-12-06 01:07:17.697694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.075 [2024-12-06 01:07:17.697716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.075 [2024-12-06 01:07:17.697739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.075 [2024-12-06 01:07:17.697761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.075 [2024-12-06 01:07:17.697785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.075 [2024-12-06 01:07:17.697806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.075 [2024-12-06 01:07:17.697840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.075 [2024-12-06 01:07:17.697863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.075 [2024-12-06 01:07:17.697888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.075 [2024-12-06 01:07:17.697909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.075 [2024-12-06 01:07:17.697933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.075 [2024-12-06 01:07:17.697955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.075 [2024-12-06 01:07:17.697979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.075 [2024-12-06 01:07:17.698001] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.075 [2024-12-06 01:07:17.698025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.075 [2024-12-06 01:07:17.698047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.075 [2024-12-06 01:07:17.698071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.075 [2024-12-06 01:07:17.698093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.075 [2024-12-06 01:07:17.698118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.075 [2024-12-06 01:07:17.698139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.075 [2024-12-06 01:07:17.698163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.075 [2024-12-06 01:07:17.698185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.075 [2024-12-06 01:07:17.698214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.075 [2024-12-06 01:07:17.698237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.075 [2024-12-06 01:07:17.698261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.075 [2024-12-06 01:07:17.698282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.075 [2024-12-06 01:07:17.698307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.075 [2024-12-06 01:07:17.698329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.075 [2024-12-06 01:07:17.698354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.075 [2024-12-06 01:07:17.698375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.075 [2024-12-06 01:07:17.698399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.075 [2024-12-06 01:07:17.698421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.075 [2024-12-06 01:07:17.698446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.075 [2024-12-06 01:07:17.698468] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.075 [2024-12-06 01:07:17.698492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.075 [2024-12-06 01:07:17.698513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.075 [2024-12-06 01:07:17.698537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.075 [2024-12-06 01:07:17.698558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.075 [2024-12-06 01:07:17.698583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.075 [2024-12-06 01:07:17.698604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.075 [2024-12-06 01:07:17.698628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.075 [2024-12-06 01:07:17.698649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.075 [2024-12-06 01:07:17.698673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.075 [2024-12-06 01:07:17.698696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.075 [2024-12-06 01:07:17.698720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.075 [2024-12-06 01:07:17.698742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.075 [2024-12-06 01:07:17.698765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.075 [2024-12-06 01:07:17.698792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.075 [2024-12-06 01:07:17.698825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.075 [2024-12-06 01:07:17.698850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.075 [2024-12-06 01:07:17.698873] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001fa980 is same with the state(6) to be set 00:29:45.075 [2024-12-06 01:07:17.700411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.075 [2024-12-06 01:07:17.700457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.075 [2024-12-06 01:07:17.700490] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.075 [2024-12-06 01:07:17.700513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.075 [2024-12-06 01:07:17.700537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.075 [2024-12-06 01:07:17.700559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.075 [2024-12-06 01:07:17.700583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.075 [2024-12-06 01:07:17.700606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.075 [2024-12-06 01:07:17.700631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.075 [2024-12-06 01:07:17.700665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.075 [2024-12-06 01:07:17.700692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.075 [2024-12-06 01:07:17.700715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.075 [2024-12-06 01:07:17.700738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.075 [2024-12-06 01:07:17.700760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.075 [2024-12-06 01:07:17.700784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.075 [2024-12-06 01:07:17.700807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.075 [2024-12-06 01:07:17.700840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.075 [2024-12-06 01:07:17.700863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.075 [2024-12-06 01:07:17.700887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.075 [2024-12-06 01:07:17.700909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.075 [2024-12-06 01:07:17.700933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.075 [2024-12-06 01:07:17.700960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.075 [2024-12-06 01:07:17.700985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 
nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.075 [2024-12-06 01:07:17.701007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.075 [2024-12-06 01:07:17.701032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.075 [2024-12-06 01:07:17.701054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.075 [2024-12-06 01:07:17.701078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.075 [2024-12-06 01:07:17.701100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.076 [2024-12-06 01:07:17.701125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.076 [2024-12-06 01:07:17.701147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.076 [2024-12-06 01:07:17.701172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.076 [2024-12-06 01:07:17.701195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.076 [2024-12-06 01:07:17.701220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.076 [2024-12-06 01:07:17.701241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.076 [2024-12-06 01:07:17.701266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.076 [2024-12-06 01:07:17.701287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.076 [2024-12-06 01:07:17.701311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.076 [2024-12-06 01:07:17.701333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.076 [2024-12-06 01:07:17.701357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.076 [2024-12-06 01:07:17.701379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.076 [2024-12-06 01:07:17.701403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.076 [2024-12-06 01:07:17.701424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.076 [2024-12-06 01:07:17.701449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.076 [2024-12-06 01:07:17.701470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.076 [2024-12-06 01:07:17.701494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.076 [2024-12-06 01:07:17.701516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.076 [2024-12-06 01:07:17.701544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.076 [2024-12-06 01:07:17.701567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.076 [2024-12-06 01:07:17.701591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.076 [2024-12-06 01:07:17.701614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.076 [2024-12-06 01:07:17.701638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.076 [2024-12-06 01:07:17.701661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.076 [2024-12-06 01:07:17.701686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.076 [2024-12-06 01:07:17.701707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.076 [2024-12-06 01:07:17.701732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.076 [2024-12-06 01:07:17.701755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.076 [2024-12-06 01:07:17.701779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.076 [2024-12-06 01:07:17.701800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.076 [2024-12-06 01:07:17.701837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.076 [2024-12-06 01:07:17.701861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.076 [2024-12-06 01:07:17.701886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.076 [2024-12-06 01:07:17.701909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.076 [2024-12-06 01:07:17.701933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:29:45.076 [2024-12-06 01:07:17.701954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.076 [2024-12-06 01:07:17.701979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.076 [2024-12-06 01:07:17.702001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.076 [2024-12-06 01:07:17.702025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.076 [2024-12-06 01:07:17.702048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.076 [2024-12-06 01:07:17.702072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.076 [2024-12-06 01:07:17.702095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.076 [2024-12-06 01:07:17.702119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.076 [2024-12-06 01:07:17.702146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.076 [2024-12-06 01:07:17.702171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.076 [2024-12-06 01:07:17.702194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.076 [2024-12-06 01:07:17.702218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.076 [2024-12-06 01:07:17.702240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.076 [2024-12-06 01:07:17.702264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.076 [2024-12-06 01:07:17.702286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.076 [2024-12-06 01:07:17.702311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.076 [2024-12-06 01:07:17.702333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.076 [2024-12-06 01:07:17.702357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.076 [2024-12-06 01:07:17.702380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.076 [2024-12-06 01:07:17.702405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:45.076 [2024-12-06 01:07:17.702427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.076 [2024-12-06 01:07:17.702451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.076 [2024-12-06 01:07:17.702473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.076 [2024-12-06 01:07:17.702497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.076 [2024-12-06 01:07:17.702520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.076 [2024-12-06 01:07:17.702544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.076 [2024-12-06 01:07:17.702566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.076 [2024-12-06 01:07:17.702589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.076 [2024-12-06 01:07:17.702611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.076 [2024-12-06 01:07:17.702635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.076 [2024-12-06 01:07:17.702658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.076 [2024-12-06 01:07:17.702681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.076 [2024-12-06 01:07:17.702703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.076 [2024-12-06 01:07:17.702727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.076 [2024-12-06 01:07:17.702754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.076 [2024-12-06 01:07:17.702779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.076 [2024-12-06 01:07:17.702802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.076 [2024-12-06 01:07:17.702834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.076 [2024-12-06 01:07:17.702858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.076 [2024-12-06 01:07:17.702884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.076 [2024-12-06 
01:07:17.702907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.076 [2024-12-06 01:07:17.702931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.076 [2024-12-06 01:07:17.702954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.076 [2024-12-06 01:07:17.702978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.077 [2024-12-06 01:07:17.703000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.077 [2024-12-06 01:07:17.703024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.077 [2024-12-06 01:07:17.703047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.077 [2024-12-06 01:07:17.703071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.077 [2024-12-06 01:07:17.703094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.077 [2024-12-06 01:07:17.703118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.077 [2024-12-06 01:07:17.703140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.077 [2024-12-06 01:07:17.703165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.077 [2024-12-06 01:07:17.703186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.077 [2024-12-06 01:07:17.703210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.077 [2024-12-06 01:07:17.703232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.077 [2024-12-06 01:07:17.703257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.077 [2024-12-06 01:07:17.703280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.077 [2024-12-06 01:07:17.703304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.077 [2024-12-06 01:07:17.703326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.077 [2024-12-06 01:07:17.703355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.077 [2024-12-06 01:07:17.703378] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.077 [2024-12-06 01:07:17.703402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.077 [2024-12-06 01:07:17.703425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.077 [2024-12-06 01:07:17.703449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.077 [2024-12-06 01:07:17.703470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.077 [2024-12-06 01:07:17.703492] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001fac00 is same with the state(6) to be set 00:29:45.077 [2024-12-06 01:07:17.705047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.077 [2024-12-06 01:07:17.705078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.077 [2024-12-06 01:07:17.705109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.077 [2024-12-06 01:07:17.705131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.077 [2024-12-06 01:07:17.705157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.077 [2024-12-06 01:07:17.705180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.077 [2024-12-06 01:07:17.705204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.077 [2024-12-06 01:07:17.705226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.077 [2024-12-06 01:07:17.705269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.077 [2024-12-06 01:07:17.705291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.077 [2024-12-06 01:07:17.705315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.077 [2024-12-06 01:07:17.705337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.077 [2024-12-06 01:07:17.705361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.077 [2024-12-06 01:07:17.705383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.077 [2024-12-06 01:07:17.705407] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.077 [2024-12-06 01:07:17.705428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.077 [2024-12-06 01:07:17.705452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.077 [2024-12-06 01:07:17.705475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.077 [2024-12-06 01:07:17.705503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.077 [2024-12-06 01:07:17.705526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.077 [2024-12-06 01:07:17.705551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.077 [2024-12-06 01:07:17.705572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.077 [2024-12-06 01:07:17.705596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.077 [2024-12-06 01:07:17.705618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.077 [2024-12-06 01:07:17.705643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.077 [2024-12-06 01:07:17.705665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.077 [2024-12-06 01:07:17.705689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.077 [2024-12-06 01:07:17.705711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.077 [2024-12-06 01:07:17.705734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.077 [2024-12-06 01:07:17.705756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.077 [2024-12-06 01:07:17.705780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.077 [2024-12-06 01:07:17.705802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.077 [2024-12-06 01:07:17.705838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.077 [2024-12-06 01:07:17.705862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.077 [2024-12-06 01:07:17.705886] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.077 [2024-12-06 01:07:17.705909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.077 [2024-12-06 01:07:17.705933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.077 [2024-12-06 01:07:17.705955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.077 [2024-12-06 01:07:17.705980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.077 [2024-12-06 01:07:17.706002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.077 [2024-12-06 01:07:17.706026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.077 [2024-12-06 01:07:17.706048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.077 [2024-12-06 01:07:17.706073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.077 [2024-12-06 01:07:17.706099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.078 [2024-12-06 01:07:17.706125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.078 [2024-12-06 01:07:17.706146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.078 [2024-12-06 01:07:17.706170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.078 [2024-12-06 01:07:17.706192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.078 [2024-12-06 01:07:17.706216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.078 [2024-12-06 01:07:17.706237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.078 [2024-12-06 01:07:17.706262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.078 [2024-12-06 01:07:17.706284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.078 [2024-12-06 01:07:17.706307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.078 [2024-12-06 01:07:17.706329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.078 [2024-12-06 01:07:17.706360] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.078 [2024-12-06 01:07:17.706382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.078 [2024-12-06 01:07:17.706407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.078 [2024-12-06 01:07:17.706429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.078 [2024-12-06 01:07:17.706453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.078 [2024-12-06 01:07:17.706475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.078 [2024-12-06 01:07:17.706498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.078 [2024-12-06 01:07:17.706520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.078 [2024-12-06 01:07:17.706544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.078 [2024-12-06 01:07:17.706565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.078 [2024-12-06 01:07:17.706589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.078 [2024-12-06 01:07:17.706611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.078 [2024-12-06 01:07:17.706636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.078 [2024-12-06 01:07:17.706657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.078 [2024-12-06 01:07:17.706686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.078 [2024-12-06 01:07:17.706709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.078 [2024-12-06 01:07:17.706733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.078 [2024-12-06 01:07:17.706754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.078 [2024-12-06 01:07:17.706779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.078 [2024-12-06 01:07:17.706801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.078 [2024-12-06 01:07:17.706834] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.078 [2024-12-06 01:07:17.706857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.078 [2024-12-06 01:07:17.706882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.078 [2024-12-06 01:07:17.706904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.078 [2024-12-06 01:07:17.706928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.078 [2024-12-06 01:07:17.706949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.078 [2024-12-06 01:07:17.706973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.078 [2024-12-06 01:07:17.706996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.078 [2024-12-06 01:07:17.707020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.078 [2024-12-06 01:07:17.707041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.078 [2024-12-06 01:07:17.707065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.078 [2024-12-06 01:07:17.707087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.078 [2024-12-06 01:07:17.707110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.078 [2024-12-06 01:07:17.707133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.078 [2024-12-06 01:07:17.707157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.078 [2024-12-06 01:07:17.707179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.078 [2024-12-06 01:07:17.707204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.078 [2024-12-06 01:07:17.707226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.078 [2024-12-06 01:07:17.707250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.078 [2024-12-06 01:07:17.707276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.078 [2024-12-06 01:07:17.707301] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.078 [2024-12-06 01:07:17.707323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.078 [2024-12-06 01:07:17.707347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.078 [2024-12-06 01:07:17.707368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.078 [2024-12-06 01:07:17.707392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.078 [2024-12-06 01:07:17.707413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.078 [2024-12-06 01:07:17.707437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.078 [2024-12-06 01:07:17.707459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.078 [2024-12-06 01:07:17.707483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.078 [2024-12-06 01:07:17.707504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.078 [2024-12-06 01:07:17.707529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.078 [2024-12-06 01:07:17.707551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.078 [2024-12-06 01:07:17.707575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.078 [2024-12-06 01:07:17.707597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.078 [2024-12-06 01:07:17.707621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.078 [2024-12-06 01:07:17.707643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.078 [2024-12-06 01:07:17.707668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.078 [2024-12-06 01:07:17.707689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.078 [2024-12-06 01:07:17.707714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.078 [2024-12-06 01:07:17.707736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.078 [2024-12-06 01:07:17.707760] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.078 [2024-12-06 01:07:17.707782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.078 [2024-12-06 01:07:17.707807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.078 [2024-12-06 01:07:17.707836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.078 [2024-12-06 01:07:17.707866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.078 [2024-12-06 01:07:17.707889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.078 [2024-12-06 01:07:17.707914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.078 [2024-12-06 01:07:17.707936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.078 [2024-12-06 01:07:17.707961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.079 [2024-12-06 01:07:17.707982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.079 [2024-12-06 01:07:17.708006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.079 [2024-12-06 01:07:17.708027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.079 [2024-12-06 01:07:17.708051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.079 [2024-12-06 01:07:17.708072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.079 [2024-12-06 01:07:17.708094] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001fae80 is same with the state(6) to be set 00:29:45.079 [2024-12-06 01:07:17.709619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.079 [2024-12-06 01:07:17.709664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.079 [2024-12-06 01:07:17.709696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.079 [2024-12-06 01:07:17.709719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.079 [2024-12-06 01:07:17.709743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.079 [2024-12-06 01:07:17.709765] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.079 [2024-12-06 01:07:17.709791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.079 [2024-12-06 01:07:17.709834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.079 [2024-12-06 01:07:17.709862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.079 [2024-12-06 01:07:17.709884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.079 [2024-12-06 01:07:17.709908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.079 [2024-12-06 01:07:17.709930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.079 [2024-12-06 01:07:17.709955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.079 [2024-12-06 01:07:17.709976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.079 [2024-12-06 01:07:17.710006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.079 [2024-12-06 01:07:17.710029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.079 [2024-12-06 01:07:17.710053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.079 [2024-12-06 01:07:17.710075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.079 [2024-12-06 01:07:17.710100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.079 [2024-12-06 01:07:17.710121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.079 [2024-12-06 01:07:17.710146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.079 [2024-12-06 01:07:17.710167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.079 [2024-12-06 01:07:17.710192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.079 [2024-12-06 01:07:17.710214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.079 [2024-12-06 01:07:17.710238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.079 [2024-12-06 01:07:17.710259] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.079 [2024-12-06 01:07:17.710284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.079 [2024-12-06 01:07:17.710305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.079 [2024-12-06 01:07:17.710330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.079 [2024-12-06 01:07:17.710351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.079 [2024-12-06 01:07:17.710376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.079 [2024-12-06 01:07:17.710397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.079 [2024-12-06 01:07:17.710422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.079 [2024-12-06 01:07:17.710443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.079 [2024-12-06 01:07:17.710468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.079 [2024-12-06 01:07:17.710489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.079 [2024-12-06 01:07:17.710514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.079 [2024-12-06 01:07:17.710535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.079 [2024-12-06 01:07:17.710559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.079 [2024-12-06 01:07:17.710585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.079 [2024-12-06 01:07:17.710611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.079 [2024-12-06 01:07:17.710632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.079 [2024-12-06 01:07:17.710657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.079 [2024-12-06 01:07:17.710679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.079 [2024-12-06 01:07:17.710703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.079 [2024-12-06 01:07:17.710725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.079 [2024-12-06 01:07:17.710750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.079 [2024-12-06 01:07:17.710771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.079 [2024-12-06 01:07:17.710796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.079 [2024-12-06 01:07:17.710824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.079 [2024-12-06 01:07:17.710851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.079 [2024-12-06 01:07:17.710872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.079 [2024-12-06 01:07:17.710897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.079 [2024-12-06 01:07:17.710919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.079 [2024-12-06 01:07:17.710943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.079 [2024-12-06 01:07:17.710965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.079 [2024-12-06 01:07:17.710990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.079 [2024-12-06 01:07:17.711012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.079 [2024-12-06 01:07:17.711038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.079 [2024-12-06 01:07:17.711059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.079 [2024-12-06 01:07:17.711084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.079 [2024-12-06 01:07:17.711105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.079 [2024-12-06 01:07:17.711130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.079 [2024-12-06 01:07:17.711152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.079 [2024-12-06 01:07:17.711180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.079 [2024-12-06 01:07:17.711202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.079 [2024-12-06 01:07:17.711227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.079 [2024-12-06 01:07:17.711248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.079 [2024-12-06 01:07:17.711274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.079 [2024-12-06 01:07:17.711295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.079 [2024-12-06 01:07:17.711319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.079 [2024-12-06 01:07:17.711341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.080 [2024-12-06 01:07:17.711365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.080 [2024-12-06 01:07:17.711387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.080 [2024-12-06 01:07:17.711412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.080 [2024-12-06 01:07:17.711433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.080 [2024-12-06 01:07:17.711458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.080 [2024-12-06 01:07:17.711480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.080 [2024-12-06 01:07:17.711505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.080 [2024-12-06 01:07:17.711526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.080 [2024-12-06 01:07:17.711550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.080 [2024-12-06 01:07:17.711572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.080 [2024-12-06 01:07:17.711597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.080 [2024-12-06 01:07:17.711618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.080 [2024-12-06 01:07:17.711643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.080 [2024-12-06 01:07:17.711665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:29:45.080 [2024-12-06 01:07:17.711689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.080 [2024-12-06 01:07:17.711711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.080 [2024-12-06 01:07:17.711736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.080 [2024-12-06 01:07:17.711761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.080 [2024-12-06 01:07:17.711787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.080 [2024-12-06 01:07:17.711808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.080 [2024-12-06 01:07:17.711841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.080 [2024-12-06 01:07:17.711863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.080 [2024-12-06 01:07:17.711889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.080 [2024-12-06 01:07:17.711910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.080 [2024-12-06 01:07:17.711935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.080 [2024-12-06 01:07:17.711956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.080 [2024-12-06 01:07:17.711980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.080 [2024-12-06 01:07:17.712002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.080 [2024-12-06 01:07:17.712026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.080 [2024-12-06 01:07:17.712048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.080 [2024-12-06 01:07:17.712072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.080 [2024-12-06 01:07:17.712093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.080 [2024-12-06 01:07:17.712119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.080 [2024-12-06 01:07:17.712140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:29:45.080 [2024-12-06 01:07:17.712166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.080 [2024-12-06 01:07:17.712187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.080 [2024-12-06 01:07:17.712213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.080 [2024-12-06 01:07:17.712234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.080 [2024-12-06 01:07:17.712259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.080 [2024-12-06 01:07:17.712281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.080 [2024-12-06 01:07:17.712305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.080 [2024-12-06 01:07:17.712326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.080 [2024-12-06 01:07:17.712355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.080 [2024-12-06 01:07:17.712378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.080 [2024-12-06 01:07:17.712403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.080 [2024-12-06 01:07:17.712424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.080 [2024-12-06 01:07:17.712447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.080 [2024-12-06 01:07:17.712469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.080 [2024-12-06 01:07:17.712492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.080 [2024-12-06 01:07:17.712514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.080 [2024-12-06 01:07:17.712540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.080 [2024-12-06 01:07:17.712561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.080 [2024-12-06 01:07:17.712587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.080 [2024-12-06 01:07:17.712608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.080 [2024-12-06 
01:07:17.712633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.080 [2024-12-06 01:07:17.712655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.080 [2024-12-06 01:07:17.712677] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001fb100 is same with the state(6) to be set 00:29:45.080 [2024-12-06 01:07:17.714248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.080 [2024-12-06 01:07:17.714293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.080 [2024-12-06 01:07:17.714335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.080 [2024-12-06 01:07:17.714361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.080 [2024-12-06 01:07:17.714387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.080 [2024-12-06 01:07:17.714408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.080 [2024-12-06 01:07:17.714447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.080 [2024-12-06 01:07:17.714469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.080 [2024-12-06 01:07:17.714493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.080 [2024-12-06 01:07:17.714514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.080 [2024-12-06 01:07:17.714545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.080 [2024-12-06 01:07:17.714569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.080 [2024-12-06 01:07:17.714593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.080 [2024-12-06 01:07:17.714615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.080 [2024-12-06 01:07:17.714639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.080 [2024-12-06 01:07:17.714661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.080 [2024-12-06 01:07:17.714685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.080 [2024-12-06 01:07:17.714707] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.080 [2024-12-06 01:07:17.714732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.080 [2024-12-06 01:07:17.714754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.080 [2024-12-06 01:07:17.714779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.080 [2024-12-06 01:07:17.714801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.080 [2024-12-06 01:07:17.714834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.081 [2024-12-06 01:07:17.714858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.081 [2024-12-06 01:07:17.714883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.081 [2024-12-06 01:07:17.714904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.081 [2024-12-06 01:07:17.714928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.081 [2024-12-06 01:07:17.714950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.081 [2024-12-06 01:07:17.714974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.081 [2024-12-06 01:07:17.714995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.081 [2024-12-06 01:07:17.715019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.081 [2024-12-06 01:07:17.715041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.081 [2024-12-06 01:07:17.715065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.081 [2024-12-06 01:07:17.715086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.081 [2024-12-06 01:07:17.715111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.081 [2024-12-06 01:07:17.715137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.081 [2024-12-06 01:07:17.715163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.081 [2024-12-06 01:07:17.715185] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.081 [2024-12-06 01:07:17.715210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.081 [2024-12-06 01:07:17.715231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.081 [2024-12-06 01:07:17.715255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.081 [2024-12-06 01:07:17.715277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.081 [2024-12-06 01:07:17.715300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.081 [2024-12-06 01:07:17.715322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.081 [2024-12-06 01:07:17.715346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.081 [2024-12-06 01:07:17.715368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.081 [2024-12-06 01:07:17.715392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.081 [2024-12-06 01:07:17.715414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.081 [2024-12-06 01:07:17.715438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.081 [2024-12-06 01:07:17.715459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.081 [2024-12-06 01:07:17.715483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.081 [2024-12-06 01:07:17.715505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.081 [2024-12-06 01:07:17.715529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.081 [2024-12-06 01:07:17.715551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.081 [2024-12-06 01:07:17.715576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.081 [2024-12-06 01:07:17.715597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.081 [2024-12-06 01:07:17.715621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.081 [2024-12-06 01:07:17.715642] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.081 [2024-12-06 01:07:17.715666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.081 [2024-12-06 01:07:17.715688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.081 [2024-12-06 01:07:17.715712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.081 [2024-12-06 01:07:17.715737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.081 [2024-12-06 01:07:17.715762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.081 [2024-12-06 01:07:17.715784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.081 [2024-12-06 01:07:17.715808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.081 [2024-12-06 01:07:17.715837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.081 [2024-12-06 01:07:17.715863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.081 [2024-12-06 01:07:17.715886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.081 [2024-12-06 01:07:17.715910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.081 [2024-12-06 01:07:17.715932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.081 [2024-12-06 01:07:17.715957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.081 [2024-12-06 01:07:17.715979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.081 [2024-12-06 01:07:17.716003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.081 [2024-12-06 01:07:17.716025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.081 [2024-12-06 01:07:17.716049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.081 [2024-12-06 01:07:17.716070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.081 [2024-12-06 01:07:17.716094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.081 [2024-12-06 01:07:17.716116] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.081 [2024-12-06 01:07:17.716140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.081 [2024-12-06 01:07:17.716162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.081 [2024-12-06 01:07:17.716187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.081 [2024-12-06 01:07:17.716208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.081 [2024-12-06 01:07:17.716233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.081 [2024-12-06 01:07:17.716255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.081 [2024-12-06 01:07:17.716279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.081 [2024-12-06 01:07:17.716301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.081 [2024-12-06 01:07:17.716329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.081 [2024-12-06 01:07:17.716351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.081 [2024-12-06 01:07:17.716375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.081 [2024-12-06 01:07:17.716397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.081 [2024-12-06 01:07:17.716421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.081 [2024-12-06 01:07:17.716442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.081 [2024-12-06 01:07:17.716467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.081 [2024-12-06 01:07:17.716488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.081 [2024-12-06 01:07:17.716512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.081 [2024-12-06 01:07:17.716534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.081 [2024-12-06 01:07:17.716558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.081 [2024-12-06 01:07:17.716580] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.081 [2024-12-06 01:07:17.716604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.081 [2024-12-06 01:07:17.716626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.081 [2024-12-06 01:07:17.716650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.081 [2024-12-06 01:07:17.716672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.082 [2024-12-06 01:07:17.716696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.082 [2024-12-06 01:07:17.716717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.082 [2024-12-06 01:07:17.716741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.082 [2024-12-06 01:07:17.716763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.082 [2024-12-06 01:07:17.716787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.082 [2024-12-06 01:07:17.716808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.082 [2024-12-06 01:07:17.716840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.082 [2024-12-06 01:07:17.716862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.082 [2024-12-06 01:07:17.716887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.082 [2024-12-06 01:07:17.716913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.082 [2024-12-06 01:07:17.716939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.082 [2024-12-06 01:07:17.716961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.082 [2024-12-06 01:07:17.716985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.082 [2024-12-06 01:07:17.717008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.082 [2024-12-06 01:07:17.717032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.082 [2024-12-06 01:07:17.717054] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.082 [2024-12-06 01:07:17.717078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.082 [2024-12-06 01:07:17.717099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.082 [2024-12-06 01:07:17.717123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.082 [2024-12-06 01:07:17.717145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.082 [2024-12-06 01:07:17.717169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.082 [2024-12-06 01:07:17.717191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.082 [2024-12-06 01:07:17.717215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.082 [2024-12-06 01:07:17.717236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.082 [2024-12-06 01:07:17.717261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.082 [2024-12-06 01:07:17.717283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.082 [2024-12-06 01:07:17.717304] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001fb380 is same with the state(6) to be set 00:29:45.082 [2024-12-06 01:07:17.718857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.082 [2024-12-06 01:07:17.718889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.082 [2024-12-06 01:07:17.718920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.082 [2024-12-06 01:07:17.718943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.082 [2024-12-06 01:07:17.718967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.082 [2024-12-06 01:07:17.719001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.082 [2024-12-06 01:07:17.719028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.082 [2024-12-06 01:07:17.719056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.082 [2024-12-06 01:07:17.719082] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.082 [2024-12-06 01:07:17.719104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.082 [2024-12-06 01:07:17.719129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.082 [2024-12-06 01:07:17.719151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.082 [2024-12-06 01:07:17.719175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.082 [2024-12-06 01:07:17.719198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.082 [2024-12-06 01:07:17.719222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.082 [2024-12-06 01:07:17.719243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.082 [2024-12-06 01:07:17.719268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.082 [2024-12-06 01:07:17.719290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.082 [2024-12-06 01:07:17.719314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.082 [2024-12-06 01:07:17.719335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.082 [2024-12-06 01:07:17.719360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.082 [2024-12-06 01:07:17.719382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.082 [2024-12-06 01:07:17.719406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.082 [2024-12-06 01:07:17.719429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.082 [2024-12-06 01:07:17.719454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.082 [2024-12-06 01:07:17.719476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.082 [2024-12-06 01:07:17.719500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.082 [2024-12-06 01:07:17.719522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.082 [2024-12-06 01:07:17.719546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 
nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.082 [2024-12-06 01:07:17.719568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.082 [2024-12-06 01:07:17.719592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.082 [2024-12-06 01:07:17.719614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.082 [2024-12-06 01:07:17.719642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.082 [2024-12-06 01:07:17.719665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.082 [2024-12-06 01:07:17.719689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.082 [2024-12-06 01:07:17.719711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.082 [2024-12-06 01:07:17.719735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.082 [2024-12-06 01:07:17.719758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.082 [2024-12-06 01:07:17.719782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.082 [2024-12-06 01:07:17.719805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.082 [2024-12-06 01:07:17.719838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.082 [2024-12-06 01:07:17.719861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.082 [2024-12-06 01:07:17.719884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.082 [2024-12-06 01:07:17.719906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.083 [2024-12-06 01:07:17.719930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.083 [2024-12-06 01:07:17.719952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.083 [2024-12-06 01:07:17.719976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.083 [2024-12-06 01:07:17.719998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.083 [2024-12-06 01:07:17.720022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.083 [2024-12-06 01:07:17.720043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.083 [2024-12-06 01:07:17.720068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.083 [2024-12-06 01:07:17.720089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.083 [2024-12-06 01:07:17.720114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.083 [2024-12-06 01:07:17.720136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.083 [2024-12-06 01:07:17.720160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.083 [2024-12-06 01:07:17.720183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.083 [2024-12-06 01:07:17.720207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.083 [2024-12-06 01:07:17.720233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.083 [2024-12-06 01:07:17.720258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.083 [2024-12-06 01:07:17.720280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.083 [2024-12-06 01:07:17.720305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.083 [2024-12-06 01:07:17.720327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.083 [2024-12-06 01:07:17.720351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.083 [2024-12-06 01:07:17.720373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.083 [2024-12-06 01:07:17.720397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.083 [2024-12-06 01:07:17.720419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.083 [2024-12-06 01:07:17.720443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.083 [2024-12-06 01:07:17.720465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.083 [2024-12-06 01:07:17.720489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:29:45.083 [2024-12-06 01:07:17.720511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.083 [2024-12-06 01:07:17.720536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.083 [2024-12-06 01:07:17.720559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.083 [2024-12-06 01:07:17.720583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.083 [2024-12-06 01:07:17.720604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.083 [2024-12-06 01:07:17.720628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.083 [2024-12-06 01:07:17.720649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.083 [2024-12-06 01:07:17.720674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.083 [2024-12-06 01:07:17.720695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.083 [2024-12-06 01:07:17.720719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.083 [2024-12-06 01:07:17.720741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.083 [2024-12-06 01:07:17.720766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.083 [2024-12-06 01:07:17.720788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.083 [2024-12-06 01:07:17.720823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.083 [2024-12-06 01:07:17.720848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.083 [2024-12-06 01:07:17.720872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.083 [2024-12-06 01:07:17.720895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.083 [2024-12-06 01:07:17.720919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.083 [2024-12-06 01:07:17.720941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.083 [2024-12-06 01:07:17.720965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:45.083 [2024-12-06 01:07:17.720987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.083 [2024-12-06 01:07:17.721011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.083 [2024-12-06 01:07:17.721033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.083 [2024-12-06 01:07:17.721057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.083 [2024-12-06 01:07:17.721078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.083 [2024-12-06 01:07:17.721102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.083 [2024-12-06 01:07:17.721124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.083 [2024-12-06 01:07:17.721149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.083 [2024-12-06 01:07:17.721171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.083 [2024-12-06 01:07:17.721195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.083 [2024-12-06 01:07:17.721216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.083 [2024-12-06 01:07:17.721240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.083 [2024-12-06 01:07:17.721263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.083 [2024-12-06 01:07:17.721287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.083 [2024-12-06 01:07:17.721308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.083 [2024-12-06 01:07:17.721332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.083 [2024-12-06 01:07:17.721354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.083 [2024-12-06 01:07:17.721377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.083 [2024-12-06 01:07:17.721404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.083 [2024-12-06 01:07:17.721429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.083 [2024-12-06 
01:07:17.721452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.083 [2024-12-06 01:07:17.721475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.083 [2024-12-06 01:07:17.721496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.083 [2024-12-06 01:07:17.721521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.083 [2024-12-06 01:07:17.721543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.083 [2024-12-06 01:07:17.721566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.083 [2024-12-06 01:07:17.721588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.083 [2024-12-06 01:07:17.721612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.083 [2024-12-06 01:07:17.721634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.083 [2024-12-06 01:07:17.721658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.083 [2024-12-06 01:07:17.721681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.083 [2024-12-06 01:07:17.721705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.083 [2024-12-06 01:07:17.721726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.083 [2024-12-06 01:07:17.721750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.084 [2024-12-06 01:07:17.721771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.084 [2024-12-06 01:07:17.721795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.084 [2024-12-06 01:07:17.721826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.084 [2024-12-06 01:07:17.721854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.084 [2024-12-06 01:07:17.721876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.084 [2024-12-06 01:07:17.721898] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001fb600 is same with the state(6) to be set 00:29:45.084 [2024-12-06 01:07:17.726572] 
nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller
00:29:45.084 [2024-12-06 01:07:17.726637] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller
00:29:45.084 [2024-12-06 01:07:17.726670] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] resetting controller
00:29:45.084 [2024-12-06 01:07:17.726700] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] resetting controller
00:29:45.084 [2024-12-06 01:07:17.726898] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] Unable to perform failover, already in progress.
00:29:45.084 [2024-12-06 01:07:17.726951] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] Unable to perform failover, already in progress.
00:29:45.084 [2024-12-06 01:07:17.726985] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] Unable to perform failover, already in progress.
00:29:45.084 [2024-12-06 01:07:17.727166] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] resetting controller
00:29:45.084 [2024-12-06 01:07:17.727205] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] resetting controller
00:29:45.084 task offset: 25856 on job bdev=Nvme4n1 fails
00:29:45.084
00:29:45.084 Latency(us)
00:29:45.084 [2024-12-06T00:07:17.793Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:45.084 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:45.084 Job: Nvme1n1 ended in about 1.12 seconds with error
00:29:45.084 Verification LBA range: start 0x0 length 0x400
00:29:45.084 Nvme1n1 : 1.12 171.62 10.73 57.21 0.00 276933.40 20777.34 304475.40
00:29:45.084 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:45.084 Job: Nvme2n1 ended in about 1.12 seconds with error
00:29:45.084 Verification LBA range: start 0x0 length 0x400
00:29:45.084 Nvme2n1 : 1.12 171.45 10.72 57.15 0.00 272292.03 18544.26 321563.31
00:29:45.084 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:45.084 Job: Nvme3n1 ended in about 1.13 seconds with error
00:29:45.084 Verification LBA range: start 0x0 length 0x400
00:29:45.084 Nvme3n1 : 1.13 169.67 10.60 56.56 0.00 270295.42 26020.22 298261.62
00:29:45.084 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:45.084 Job: Nvme4n1 ended in about 1.12 seconds with error
00:29:45.084 Verification LBA range: start 0x0 length 0x400
00:29:45.084 Nvme4n1 : 1.12 171.99 10.75 57.33 0.00 261515.38 14466.47 327777.09
00:29:45.084 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:45.084 Job: Nvme5n1 ended in about 1.14 seconds with error
00:29:45.084 Verification LBA range: start 0x0 length 0x400
00:29:45.084 Nvme5n1 : 1.14 112.64 7.04 56.32 0.00 348763.02 24078.41 351078.78
00:29:45.084 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:45.084 Job: Nvme6n1 ended in about 1.14 seconds with error
00:29:45.084 Verification LBA range: start 0x0 length 0x400
00:29:45.084 Nvme6n1 : 1.14 168.28 10.52 56.09 0.00 257606.92 21651.15 298261.62
00:29:45.084 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:45.084 Job: Nvme7n1 ended in about 1.15 seconds with error
00:29:45.084 Verification LBA range: start 0x0 length 0x400
00:29:45.084 Nvme7n1 : 1.15 111.74 6.98 55.87 0.00 338601.66 58254.22 296708.17
00:29:45.084 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:45.084 Job: Nvme8n1 ended in about 1.15 seconds with error
00:29:45.084 Verification LBA range: start 0x0 length 0x400
00:29:45.084 Nvme8n1 : 1.15 111.30 6.96 55.65 0.00 333493.03 23787.14 299815.06
00:29:45.084 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:45.084 Job: Nvme9n1 ended in about 1.15 seconds with error
00:29:45.084 Verification LBA range: start 0x0 length 0x400
00:29:45.084 Nvme9n1 : 1.15 110.85 6.93 55.43 0.00 328377.84 30292.20 306028.85
00:29:45.084 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:45.084 Job: Nvme10n1 ended in about 1.16 seconds with error
00:29:45.084 Verification LBA range: start 0x0 length 0x400
00:29:45.084 Nvme10n1 : 1.16 110.41 6.90 55.21 0.00 323319.78 23884.23 318456.41
00:29:45.084 [2024-12-06T00:07:17.793Z] ===================================================================================================================
00:29:45.084 [2024-12-06T00:07:17.793Z] Total : 1409.96 88.12 562.81 0.00 296349.67 14466.47 351078.78
00:29:45.343 [2024-12-06 01:07:17.815011] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:29:45.343 [2024-12-06 01:07:17.815128] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller
00:29:45.343 [2024-12-06 01:07:17.815528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:45.343 [2024-12-06 01:07:17.815576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f3900 with addr=10.0.0.2, port=4420
00:29:45.343 [2024-12-06 01:07:17.815605] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f3900 is same with the state(6) to be set
00:29:45.343 [2024-12-06 01:07:17.815724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:45.343 [2024-12-06 01:07:17.815759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f4d00 with addr=10.0.0.2, port=4420
00:29:45.343 [2024-12-06 01:07:17.815782] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f4d00 is same with the state(6) to be set
00:29:45.343 [2024-12-06 01:07:17.815956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:45.343 [2024-12-06 01:07:17.815991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f5700 with addr=10.0.0.2, port=4420
00:29:45.343 [2024-12-06 01:07:17.816014] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f5700 is same with the state(6) to be set
00:29:45.343 [2024-12-06 01:07:17.816127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:45.343 [2024-12-06 01:07:17.816161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f6100 with addr=10.0.0.2, port=4420
00:29:45.343 [2024-12-06 01:07:17.816183] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f6100 is same with the state(6) to be set
00:29:45.343 [2024-12-06 01:07:17.819688] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller
00:29:45.343 [2024-12-06 01:07:17.819752] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect:
*NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:29:45.343 [2024-12-06 01:07:17.819989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.343 [2024-12-06 01:07:17.820026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f6b00 with addr=10.0.0.2, port=4420 00:29:45.343 [2024-12-06 01:07:17.820051] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f6b00 is same with the state(6) to be set 00:29:45.343 [2024-12-06 01:07:17.820221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.343 [2024-12-06 01:07:17.820256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7500 with addr=10.0.0.2, port=4420 00:29:45.343 [2024-12-06 01:07:17.820279] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f7500 is same with the state(6) to be set 00:29:45.343 [2024-12-06 01:07:17.820368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.343 [2024-12-06 01:07:17.820402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7f00 with addr=10.0.0.2, port=4420 00:29:45.343 [2024-12-06 01:07:17.820425] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f7f00 is same with the state(6) to be set 00:29:45.343 [2024-12-06 01:07:17.820460] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f3900 (9): Bad file descriptor 00:29:45.343 [2024-12-06 01:07:17.820494] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f4d00 (9): Bad file descriptor 00:29:45.343 [2024-12-06 01:07:17.820523] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f5700 (9): Bad file descriptor 00:29:45.343 [2024-12-06 01:07:17.820551] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f6100 (9): Bad file descriptor 00:29:45.343 [2024-12-06 01:07:17.820633] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] Unable to perform failover, already in progress. 00:29:45.343 [2024-12-06 01:07:17.820679] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] Unable to perform failover, already in progress. 00:29:45.343 [2024-12-06 01:07:17.820714] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] Unable to perform failover, already in progress. 00:29:45.343 [2024-12-06 01:07:17.820745] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] Unable to perform failover, already in progress. 00:29:45.343 [2024-12-06 01:07:17.820772] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] Unable to perform failover, already in progress. 
00:29:45.343 [2024-12-06 01:07:17.821610] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller 00:29:45.343 [2024-12-06 01:07:17.821805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.343 [2024-12-06 01:07:17.821864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:29:45.343 [2024-12-06 01:07:17.821888] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2f00 is same with the state(6) to be set 00:29:45.343 [2024-12-06 01:07:17.821985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.343 [2024-12-06 01:07:17.822019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:29:45.343 [2024-12-06 01:07:17.822041] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:29:45.343 [2024-12-06 01:07:17.822079] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f6b00 (9): Bad file descriptor 00:29:45.343 [2024-12-06 01:07:17.822108] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7500 (9): Bad file descriptor 00:29:45.343 [2024-12-06 01:07:17.822136] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7f00 (9): Bad file descriptor 00:29:45.343 [2024-12-06 01:07:17.822161] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state 00:29:45.343 [2024-12-06 01:07:17.822182] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed 00:29:45.343 [2024-12-06 01:07:17.822206] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state. 00:29:45.343 [2024-12-06 01:07:17.822231] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed. 00:29:45.343 [2024-12-06 01:07:17.822254] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state 00:29:45.343 [2024-12-06 01:07:17.822272] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed 00:29:45.343 [2024-12-06 01:07:17.822291] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state. 00:29:45.343 [2024-12-06 01:07:17.822310] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed. 00:29:45.343 [2024-12-06 01:07:17.822330] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Ctrlr is in error state 00:29:45.343 [2024-12-06 01:07:17.822349] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] controller reinitialization failed 00:29:45.343 [2024-12-06 01:07:17.822367] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state. 00:29:45.343 [2024-12-06 01:07:17.822386] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Resetting controller failed. 
00:29:45.343 [2024-12-06 01:07:17.822411] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Ctrlr is in error state 00:29:45.343 [2024-12-06 01:07:17.822432] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] controller reinitialization failed 00:29:45.343 [2024-12-06 01:07:17.822450] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state. 00:29:45.343 [2024-12-06 01:07:17.822468] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Resetting controller failed. 00:29:45.343 [2024-12-06 01:07:17.822825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.343 [2024-12-06 01:07:17.822872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f4300 with addr=10.0.0.2, port=4420 00:29:45.343 [2024-12-06 01:07:17.822895] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f4300 is same with the state(6) to be set 00:29:45.343 [2024-12-06 01:07:17.822922] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2f00 (9): Bad file descriptor 00:29:45.343 [2024-12-06 01:07:17.822951] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:29:45.343 [2024-12-06 01:07:17.822975] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Ctrlr is in error state 00:29:45.343 [2024-12-06 01:07:17.822995] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] controller reinitialization failed 00:29:45.343 [2024-12-06 01:07:17.823014] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] in failed state. 00:29:45.343 [2024-12-06 01:07:17.823034] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Resetting controller failed. 00:29:45.343 [2024-12-06 01:07:17.823066] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Ctrlr is in error state 00:29:45.343 [2024-12-06 01:07:17.823084] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] controller reinitialization failed 00:29:45.343 [2024-12-06 01:07:17.823102] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] in failed state. 00:29:45.343 [2024-12-06 01:07:17.823120] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Resetting controller failed. 00:29:45.343 [2024-12-06 01:07:17.823140] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state 00:29:45.344 [2024-12-06 01:07:17.823158] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed 00:29:45.344 [2024-12-06 01:07:17.823177] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state. 00:29:45.344 [2024-12-06 01:07:17.823194] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed. 
00:29:45.344 [2024-12-06 01:07:17.823257] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f4300 (9): Bad file descriptor 00:29:45.344 [2024-12-06 01:07:17.823288] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state 00:29:45.344 [2024-12-06 01:07:17.823308] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed 00:29:45.344 [2024-12-06 01:07:17.823327] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state. 00:29:45.344 [2024-12-06 01:07:17.823346] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed. 00:29:45.344 [2024-12-06 01:07:17.823366] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:29:45.344 [2024-12-06 01:07:17.823385] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:29:45.344 [2024-12-06 01:07:17.823404] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:29:45.344 [2024-12-06 01:07:17.823429] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:29:45.344 [2024-12-06 01:07:17.823489] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state 00:29:45.344 [2024-12-06 01:07:17.823514] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed 00:29:45.344 [2024-12-06 01:07:17.823534] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state. 00:29:45.344 [2024-12-06 01:07:17.823553] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed. 
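The failure cascade above reduces to two raw error numbers: posix_sock_create reports connect() failing with errno = 111 once nothing is listening on 10.0.0.2:4420 any more, and the qpair flush failures report (9). A minimal shell sketch, not part of the captured run, that decodes those numbers on the build host (the strerr helper name is made up for illustration):

  # decode the errno values seen in the reset/failover cascade above
  strerr() { python3 -c 'import os, sys; print(os.strerror(int(sys.argv[1])))' "$1"; }
  strerr 111   # -> Connection refused   (connect() to 10.0.0.2:4420 after the target stopped)
  strerr 9     # -> Bad file descriptor  (flushing qpairs whose sockets are already closed)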
00:29:47.873 01:07:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@137 -- # sleep 1 00:29:48.812 01:07:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@138 -- # NOT wait 3057707 00:29:48.812 01:07:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@652 -- # local es=0 00:29:48.812 01:07:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 3057707 00:29:48.812 01:07:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@640 -- # local arg=wait 00:29:48.812 01:07:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:48.812 01:07:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # type -t wait 00:29:48.812 01:07:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:48.812 01:07:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # wait 3057707 00:29:48.812 01:07:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # es=255 00:29:48.812 01:07:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:48.812 01:07:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@664 -- # es=127 00:29:48.812 01:07:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@665 -- # case "$es" in 00:29:48.812 01:07:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@672 -- # es=1 00:29:48.812 01:07:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:29:48.812 01:07:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@140 -- # stoptarget 00:29:48.812 01:07:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:29:48.812 01:07:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:29:48.812 01:07:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:29:48.812 01:07:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@46 -- # nvmftestfini 00:29:48.812 01:07:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:48.812 01:07:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # sync 00:29:48.812 01:07:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:48.812 01:07:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set +e 00:29:48.812 01:07:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:48.812 01:07:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:48.812 rmmod nvme_tcp 00:29:48.812 
rmmod nvme_fabrics 00:29:48.812 rmmod nvme_keyring 00:29:49.071 01:07:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:49.071 01:07:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@128 -- # set -e 00:29:49.071 01:07:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@129 -- # return 0 00:29:49.071 01:07:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@517 -- # '[' -n 3057455 ']' 00:29:49.071 01:07:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@518 -- # killprocess 3057455 00:29:49.071 01:07:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 3057455 ']' 00:29:49.071 01:07:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 3057455 00:29:49.071 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (3057455) - No such process 00:29:49.071 01:07:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@981 -- # echo 'Process with pid 3057455 is not found' 00:29:49.071 Process with pid 3057455 is not found 00:29:49.071 01:07:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:49.071 01:07:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:49.071 01:07:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:49.071 01:07:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # iptr 00:29:49.071 01:07:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-save 00:29:49.071 01:07:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:49.071 01:07:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-restore 00:29:49.071 01:07:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:49.071 01:07:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:49.071 01:07:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:49.071 01:07:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:49.071 01:07:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:50.977 01:07:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:50.977 00:29:50.977 real 0m11.547s 00:29:50.977 user 0m34.258s 00:29:50.977 sys 0m1.994s 00:29:50.977 01:07:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:50.977 01:07:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:50.977 ************************************ 00:29:50.977 END TEST nvmf_shutdown_tc3 00:29:50.977 ************************************ 00:29:50.977 01:07:23 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ e810 == \e\8\1\0 ]] 00:29:50.977 01:07:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ tcp == \r\d\m\a ]] 00:29:50.977 01:07:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@167 -- # run_test nvmf_shutdown_tc4 nvmf_shutdown_tc4 00:29:50.977 01:07:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:29:50.977 01:07:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:50.977 01:07:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:29:50.977 ************************************ 00:29:50.977 START TEST nvmf_shutdown_tc4 00:29:50.977 ************************************ 00:29:50.977 01:07:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc4 00:29:50.977 01:07:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@145 -- # starttarget 00:29:50.977 01:07:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@16 -- # nvmftestinit 00:29:50.977 01:07:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:50.977 01:07:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:50.977 01:07:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:50.977 01:07:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:50.977 01:07:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:50.977 01:07:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:50.977 01:07:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:50.977 01:07:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:50.977 01:07:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:50.977 01:07:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:50.977 01:07:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@309 -- # xtrace_disable 00:29:50.977 01:07:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:29:50.977 01:07:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:50.977 01:07:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # pci_devs=() 00:29:50.977 01:07:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:50.977 01:07:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:50.977 01:07:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:50.977 01:07:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@317 -- # pci_drivers=() 00:29:50.977 01:07:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:50.977 01:07:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # net_devs=() 00:29:50.977 01:07:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:50.977 01:07:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # e810=() 00:29:50.977 01:07:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # local -ga e810 00:29:50.977 01:07:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # x722=() 00:29:50.977 01:07:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # local -ga x722 00:29:50.977 01:07:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # mlx=() 00:29:50.977 01:07:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # local -ga mlx 00:29:50.977 01:07:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:50.977 01:07:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:50.977 01:07:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:50.977 01:07:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:50.978 01:07:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:50.978 01:07:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:50.978 01:07:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:50.978 01:07:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:50.978 01:07:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:50.978 01:07:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:50.978 01:07:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:50.978 01:07:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:50.978 01:07:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:50.978 01:07:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:50.978 01:07:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:50.978 01:07:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:50.978 01:07:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:50.978 01:07:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:50.978 01:07:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:50.978 01:07:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:29:50.978 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:29:50.978 01:07:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:50.978 01:07:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:50.978 01:07:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:50.978 01:07:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:50.978 01:07:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:50.978 01:07:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:50.978 01:07:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:29:50.978 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:29:50.978 01:07:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:50.978 01:07:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:50.978 01:07:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:50.978 01:07:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:50.978 01:07:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:50.978 01:07:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:50.978 01:07:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:50.978 01:07:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:50.978 01:07:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:50.978 01:07:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:50.978 01:07:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:50.978 01:07:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:50.978 01:07:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:50.978 01:07:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:50.978 01:07:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:50.978 01:07:23 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:29:50.978 Found net devices under 0000:0a:00.0: cvl_0_0 00:29:50.978 01:07:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:50.978 01:07:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:50.978 01:07:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:50.978 01:07:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:50.978 01:07:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:50.978 01:07:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:50.978 01:07:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:50.978 01:07:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:50.978 01:07:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:29:50.978 Found net devices under 0000:0a:00.1: cvl_0_1 00:29:50.978 01:07:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:50.978 01:07:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:50.978 01:07:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # is_hw=yes 00:29:50.978 01:07:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:50.978 01:07:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:50.978 01:07:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:50.978 01:07:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:50.978 01:07:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:50.978 01:07:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:50.978 01:07:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:50.978 01:07:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:50.978 01:07:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:50.978 01:07:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:50.978 01:07:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:50.978 01:07:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:50.978 01:07:23 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:50.978 01:07:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:50.978 01:07:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:50.978 01:07:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:50.978 01:07:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:50.978 01:07:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:51.238 01:07:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:51.238 01:07:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:51.238 01:07:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:51.238 01:07:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:51.238 01:07:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:51.238 01:07:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:51.238 01:07:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:51.238 01:07:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:51.238 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:51.238 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.323 ms 00:29:51.238 00:29:51.238 --- 10.0.0.2 ping statistics --- 00:29:51.238 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:51.238 rtt min/avg/max/mdev = 0.323/0.323/0.323/0.000 ms 00:29:51.238 01:07:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:51.238 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:51.238 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.055 ms 00:29:51.238 00:29:51.238 --- 10.0.0.1 ping statistics --- 00:29:51.238 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:51.238 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:29:51.238 01:07:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:51.238 01:07:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@450 -- # return 0 00:29:51.238 01:07:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:51.238 01:07:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:51.238 01:07:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:51.238 01:07:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:51.238 01:07:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:51.238 01:07:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:51.238 01:07:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:51.238 01:07:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:29:51.238 01:07:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:51.238 01:07:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:51.238 01:07:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:29:51.238 01:07:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@509 -- # nvmfpid=3058945 00:29:51.238 01:07:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:29:51.238 01:07:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@510 -- # waitforlisten 3058945 00:29:51.238 01:07:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@835 -- # '[' -z 3058945 ']' 00:29:51.238 01:07:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:51.238 01:07:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:51.238 01:07:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:51.238 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
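nvmftestinit above builds the TCP test bed from the two discovered E810 ports: cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace and addressed as 10.0.0.2 (the target side), while cvl_0_1 stays in the root namespace as 10.0.0.1 (the initiator side), and the two pings confirm reachability in both directions. A condensed sketch of the equivalent manual setup, assuming root and reusing the interface and namespace names from this run:

  # mirror the ip/netns sequence recorded above (names taken from this run)
  NS=cvl_0_0_ns_spdk
  ip netns add "$NS"
  ip link set cvl_0_0 netns "$NS"                    # target-side port into its own namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator side, root namespace
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec "$NS" ip link set cvl_0_0 up
  ip netns exec "$NS" ip link set lo up
  ping -c 1 10.0.0.2                                 # initiator -> target
  ip netns exec "$NS" ping -c 1 10.0.0.1             # target namespace -> initiator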
00:29:51.238 01:07:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:51.238 01:07:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:29:51.496 [2024-12-06 01:07:23.955739] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 24.03.0 initialization... 00:29:51.496 [2024-12-06 01:07:23.955924] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:51.496 [2024-12-06 01:07:24.102854] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:51.755 [2024-12-06 01:07:24.232283] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:51.755 [2024-12-06 01:07:24.232371] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:51.755 [2024-12-06 01:07:24.232393] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:51.755 [2024-12-06 01:07:24.232413] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:51.755 [2024-12-06 01:07:24.232429] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:51.755 [2024-12-06 01:07:24.235118] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:51.755 [2024-12-06 01:07:24.235246] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:51.755 [2024-12-06 01:07:24.235291] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:51.755 [2024-12-06 01:07:24.235297] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:29:52.320 01:07:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:52.320 01:07:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@868 -- # return 0 00:29:52.320 01:07:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:52.320 01:07:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:52.320 01:07:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:29:52.320 01:07:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:52.320 01:07:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:52.320 01:07:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:52.320 01:07:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:29:52.320 [2024-12-06 01:07:25.026351] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:52.577 01:07:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:52.577 01:07:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:29:52.577 01:07:25 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:29:52.577 01:07:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:52.577 01:07:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:29:52.577 01:07:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:29:52.577 01:07:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:52.577 01:07:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:29:52.577 01:07:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:52.577 01:07:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:29:52.577 01:07:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:52.577 01:07:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:29:52.577 01:07:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:52.577 01:07:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:29:52.577 01:07:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:52.577 01:07:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:29:52.577 01:07:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:52.577 01:07:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:29:52.577 01:07:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:52.577 01:07:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:29:52.577 01:07:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:52.577 01:07:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:29:52.577 01:07:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:52.577 01:07:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:29:52.577 01:07:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:52.577 01:07:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:29:52.577 01:07:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@36 -- # rpc_cmd 00:29:52.577 01:07:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:52.577 01:07:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:29:52.577 Malloc1 
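The loop traced above (shutdown.sh lines 28-29) appends one block of RPCs per subsystem to rpcs.txt, and the single rpc_cmd call at line 36 replays the whole batch; Malloc1 here is the first bdev that batch creates, and the Malloc2 through Malloc10 lines that follow come from the same call. The generated blocks themselves are not echoed into the log, but a plausible per-subsystem equivalent for cnode1, using stock rpc.py verbs (the 64 MiB size, 512-byte block size and serial number are illustrative assumptions), would be:

  # one subsystem's worth of the generated RPC batch (a sketch, not the literal rpcs.txt contents)
  scripts/rpc.py bdev_malloc_create -b Malloc1 64 512
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420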
00:29:52.577 [2024-12-06 01:07:25.189534] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:52.577 Malloc2 00:29:52.835 Malloc3 00:29:52.835 Malloc4 00:29:53.093 Malloc5 00:29:53.093 Malloc6 00:29:53.093 Malloc7 00:29:53.351 Malloc8 00:29:53.351 Malloc9 00:29:53.609 Malloc10 00:29:53.609 01:07:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:53.609 01:07:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:29:53.609 01:07:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:53.609 01:07:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:29:53.609 01:07:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@149 -- # perfpid=3059265 00:29:53.609 01:07:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@148 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 45056 -O 4096 -w randwrite -t 20 -r 'trtype:tcp adrfam:IPV4 traddr:10.0.0.2 trsvcid:4420' -P 4 00:29:53.609 01:07:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@150 -- # sleep 5 00:29:53.609 [2024-12-06 01:07:26.236991] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:29:58.875 01:07:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@152 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:58.875 01:07:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@155 -- # killprocess 3058945 00:29:58.875 01:07:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 3058945 ']' 00:29:58.875 01:07:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 3058945 00:29:58.875 01:07:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # uname 00:29:58.875 01:07:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:58.875 01:07:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3058945 00:29:58.875 01:07:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:58.875 01:07:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:58.875 01:07:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3058945' 00:29:58.875 killing process with pid 3058945 00:29:58.875 01:07:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@973 -- # kill 3058945 00:29:58.875 01:07:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@978 -- # wait 3058945 00:29:58.875 Write completed with error (sct=0, 
sc=8) 00:29:58.875 starting I/O failed: -6 00:29:58.875 Write completed with error (sct=0, sc=8) 00:29:58.875 Write completed with error (sct=0, sc=8) 00:29:58.875 Write completed with error (sct=0, sc=8) 00:29:58.875 Write completed with error (sct=0, sc=8) 00:29:58.875 starting I/O failed: -6 00:29:58.875 Write completed with error (sct=0, sc=8) 00:29:58.875 Write completed with error (sct=0, sc=8) 00:29:58.875 Write completed with error (sct=0, sc=8) 00:29:58.875 Write completed with error (sct=0, sc=8) 00:29:58.875 starting I/O failed: -6 00:29:58.875 Write completed with error (sct=0, sc=8) 00:29:58.875 Write completed with error (sct=0, sc=8) 00:29:58.875 Write completed with error (sct=0, sc=8) 00:29:58.875 Write completed with error (sct=0, sc=8) 00:29:58.875 starting I/O failed: -6 00:29:58.875 Write completed with error (sct=0, sc=8) 00:29:58.875 Write completed with error (sct=0, sc=8) 00:29:58.875 Write completed with error (sct=0, sc=8) 00:29:58.875 Write completed with error (sct=0, sc=8) 00:29:58.875 starting I/O failed: -6 00:29:58.875 Write completed with error (sct=0, sc=8) 00:29:58.875 Write completed with error (sct=0, sc=8) 00:29:58.875 Write completed with error (sct=0, sc=8) 00:29:58.875 Write completed with error (sct=0, sc=8) 00:29:58.875 starting I/O failed: -6 00:29:58.875 Write completed with error (sct=0, sc=8) 00:29:58.875 Write completed with error (sct=0, sc=8) 00:29:58.875 Write completed with error (sct=0, sc=8) 00:29:58.875 Write completed with error (sct=0, sc=8) 00:29:58.875 starting I/O failed: -6 00:29:58.875 Write completed with error (sct=0, sc=8) 00:29:58.875 Write completed with error (sct=0, sc=8) 00:29:58.875 Write completed with error (sct=0, sc=8) 00:29:58.875 Write completed with error (sct=0, sc=8) 00:29:58.875 starting I/O failed: -6 00:29:58.875 Write completed with error (sct=0, sc=8) 00:29:58.875 Write completed with error (sct=0, sc=8) 00:29:58.875 Write completed with error (sct=0, sc=8) 00:29:58.875 Write completed with error (sct=0, sc=8) 00:29:58.875 starting I/O failed: -6 00:29:58.875 Write completed with error (sct=0, sc=8) 00:29:58.875 Write completed with error (sct=0, sc=8) 00:29:58.875 Write completed with error (sct=0, sc=8) 00:29:58.875 Write completed with error (sct=0, sc=8) 00:29:58.875 starting I/O failed: -6 00:29:58.875 Write completed with error (sct=0, sc=8) 00:29:58.875 [2024-12-06 01:07:31.190350] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:58.875 starting I/O failed: -6 00:29:58.875 starting I/O failed: -6 00:29:58.875 starting I/O failed: -6 00:29:58.875 Write completed with error (sct=0, sc=8) 00:29:58.875 Write completed with error (sct=0, sc=8) 00:29:58.875 Write completed with error (sct=0, sc=8) 00:29:58.875 starting I/O failed: -6 00:29:58.875 Write completed with error (sct=0, sc=8) 00:29:58.875 starting I/O failed: -6 00:29:58.875 Write completed with error (sct=0, sc=8) 00:29:58.875 Write completed with error (sct=0, sc=8) 00:29:58.875 Write completed with error (sct=0, sc=8) 00:29:58.875 starting I/O failed: -6 00:29:58.875 Write completed with error (sct=0, sc=8) 00:29:58.875 starting I/O failed: -6 00:29:58.875 Write completed with error (sct=0, sc=8) 00:29:58.875 Write completed with error (sct=0, sc=8) 00:29:58.875 Write completed with error (sct=0, sc=8) 00:29:58.875 starting I/O failed: -6 00:29:58.875 Write completed with error (sct=0, sc=8) 00:29:58.875 starting I/O failed: 
-6 00:29:58.875 Write completed with error (sct=0, sc=8) 00:29:58.875 Write completed with error (sct=0, sc=8) 00:29:58.875 Write completed with error (sct=0, sc=8) 00:29:58.875 starting I/O failed: -6 00:29:58.875 Write completed with error (sct=0, sc=8) 00:29:58.875 starting I/O failed: -6 00:29:58.875 Write completed with error (sct=0, sc=8) 00:29:58.875 Write completed with error (sct=0, sc=8) 00:29:58.875 Write completed with error (sct=0, sc=8) 00:29:58.875 starting I/O failed: -6 00:29:58.875 Write completed with error (sct=0, sc=8) 00:29:58.875 starting I/O failed: -6 00:29:58.875 Write completed with error (sct=0, sc=8) 00:29:58.875 Write completed with error (sct=0, sc=8) 00:29:58.875 Write completed with error (sct=0, sc=8) 00:29:58.875 starting I/O failed: -6 00:29:58.875 Write completed with error (sct=0, sc=8) 00:29:58.875 starting I/O failed: -6 00:29:58.875 Write completed with error (sct=0, sc=8) 00:29:58.875 Write completed with error (sct=0, sc=8) 00:29:58.875 Write completed with error (sct=0, sc=8) 00:29:58.875 starting I/O failed: -6 00:29:58.875 Write completed with error (sct=0, sc=8) 00:29:58.875 starting I/O failed: -6 00:29:58.875 Write completed with error (sct=0, sc=8) 00:29:58.875 Write completed with error (sct=0, sc=8) 00:29:58.875 Write completed with error (sct=0, sc=8) 00:29:58.875 starting I/O failed: -6 00:29:58.875 Write completed with error (sct=0, sc=8) 00:29:58.875 starting I/O failed: -6 00:29:58.875 Write completed with error (sct=0, sc=8) 00:29:58.875 Write completed with error (sct=0, sc=8) 00:29:58.875 Write completed with error (sct=0, sc=8) 00:29:58.875 starting I/O failed: -6 00:29:58.875 Write completed with error (sct=0, sc=8) 00:29:58.875 starting I/O failed: -6 00:29:58.875 [2024-12-06 01:07:31.192721] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:58.875 Write completed with error (sct=0, sc=8) 00:29:58.875 starting I/O failed: -6 00:29:58.875 Write completed with error (sct=0, sc=8) 00:29:58.875 Write completed with error (sct=0, sc=8) 00:29:58.875 starting I/O failed: -6 00:29:58.875 Write completed with error (sct=0, sc=8) 00:29:58.875 starting I/O failed: -6 00:29:58.875 Write completed with error (sct=0, sc=8) 00:29:58.875 starting I/O failed: -6 00:29:58.875 Write completed with error (sct=0, sc=8) 00:29:58.875 Write completed with error (sct=0, sc=8) 00:29:58.875 starting I/O failed: -6 00:29:58.875 Write completed with error (sct=0, sc=8) 00:29:58.875 starting I/O failed: -6 00:29:58.875 Write completed with error (sct=0, sc=8) 00:29:58.875 starting I/O failed: -6 00:29:58.875 Write completed with error (sct=0, sc=8) 00:29:58.875 Write completed with error (sct=0, sc=8) 00:29:58.875 starting I/O failed: -6 00:29:58.875 Write completed with error (sct=0, sc=8) 00:29:58.875 starting I/O failed: -6 00:29:58.875 Write completed with error (sct=0, sc=8) 00:29:58.875 starting I/O failed: -6 00:29:58.875 Write completed with error (sct=0, sc=8) 00:29:58.875 Write completed with error (sct=0, sc=8) 00:29:58.875 starting I/O failed: -6 00:29:58.875 Write completed with error (sct=0, sc=8) 00:29:58.875 starting I/O failed: -6 00:29:58.875 Write completed with error (sct=0, sc=8) 00:29:58.875 starting I/O failed: -6 00:29:58.875 Write completed with error (sct=0, sc=8) 00:29:58.875 Write completed with error (sct=0, sc=8) 00:29:58.875 starting I/O failed: -6 00:29:58.875 Write completed with error (sct=0, sc=8) 
00:29:58.875 starting I/O failed: -6 00:29:58.875 Write completed with error (sct=0, sc=8) 00:29:58.875 starting I/O failed: -6 00:29:58.875 Write completed with error (sct=0, sc=8) 00:29:58.875 Write completed with error (sct=0, sc=8) 00:29:58.875 starting I/O failed: -6 00:29:58.875 Write completed with error (sct=0, sc=8) 00:29:58.875 starting I/O failed: -6 00:29:58.875 Write completed with error (sct=0, sc=8) 00:29:58.875 starting I/O failed: -6 00:29:58.876 Write completed with error (sct=0, sc=8) 00:29:58.876 Write completed with error (sct=0, sc=8) 00:29:58.876 starting I/O failed: -6 00:29:58.876 Write completed with error (sct=0, sc=8) 00:29:58.876 starting I/O failed: -6 00:29:58.876 Write completed with error (sct=0, sc=8) 00:29:58.876 starting I/O failed: -6 00:29:58.876 Write completed with error (sct=0, sc=8) 00:29:58.876 Write completed with error (sct=0, sc=8) 00:29:58.876 starting I/O failed: -6 00:29:58.876 Write completed with error (sct=0, sc=8) 00:29:58.876 starting I/O failed: -6 00:29:58.876 Write completed with error (sct=0, sc=8) 00:29:58.876 starting I/O failed: -6 00:29:58.876 Write completed with error (sct=0, sc=8) 00:29:58.876 Write completed with error (sct=0, sc=8) 00:29:58.876 starting I/O failed: -6 00:29:58.876 Write completed with error (sct=0, sc=8) 00:29:58.876 starting I/O failed: -6 00:29:58.876 Write completed with error (sct=0, sc=8) 00:29:58.876 starting I/O failed: -6 00:29:58.876 Write completed with error (sct=0, sc=8) 00:29:58.876 Write completed with error (sct=0, sc=8) 00:29:58.876 starting I/O failed: -6 00:29:58.876 Write completed with error (sct=0, sc=8) 00:29:58.876 starting I/O failed: -6 00:29:58.876 Write completed with error (sct=0, sc=8) 00:29:58.876 starting I/O failed: -6 00:29:58.876 Write completed with error (sct=0, sc=8) 00:29:58.876 Write completed with error (sct=0, sc=8) 00:29:58.876 starting I/O failed: -6 00:29:58.876 Write completed with error (sct=0, sc=8) 00:29:58.876 starting I/O failed: -6 00:29:58.876 Write completed with error (sct=0, sc=8) 00:29:58.876 starting I/O failed: -6 00:29:58.876 Write completed with error (sct=0, sc=8) 00:29:58.876 Write completed with error (sct=0, sc=8) 00:29:58.876 starting I/O failed: -6 00:29:58.876 Write completed with error (sct=0, sc=8) 00:29:58.876 starting I/O failed: -6 00:29:58.876 [2024-12-06 01:07:31.195424] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:58.876 Write completed with error (sct=0, sc=8) 00:29:58.876 starting I/O failed: -6 00:29:58.876 Write completed with error (sct=0, sc=8) 00:29:58.876 starting I/O failed: -6 00:29:58.876 Write completed with error (sct=0, sc=8) 00:29:58.876 starting I/O failed: -6 00:29:58.876 Write completed with error (sct=0, sc=8) 00:29:58.876 starting I/O failed: -6 00:29:58.876 Write completed with error (sct=0, sc=8) 00:29:58.876 starting I/O failed: -6 00:29:58.876 Write completed with error (sct=0, sc=8) 00:29:58.876 starting I/O failed: -6 00:29:58.876 Write completed with error (sct=0, sc=8) 00:29:58.876 starting I/O failed: -6 00:29:58.876 Write completed with error (sct=0, sc=8) 00:29:58.876 starting I/O failed: -6 00:29:58.876 Write completed with error (sct=0, sc=8) 00:29:58.876 starting I/O failed: -6 00:29:58.876 Write completed with error (sct=0, sc=8) 00:29:58.876 starting I/O failed: -6 00:29:58.876 Write completed with error (sct=0, sc=8) 00:29:58.876 starting I/O failed: -6 00:29:58.876 Write 
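Each of these per-I/O lines reports one queued write that could not survive killing the target mid-run: sct=0, sc=8 is the generic NVMe status 0x08 (command aborted due to submission queue deletion), and "starting I/O failed: -6" is a submission rejected with -ENXIO, the same "No such device or address" value the qpair errors above spell out. For a run like this it is usually enough to count them rather than read them; a rough sketch, assuming the console output has been saved to a file named build.log:

  # summarise the shutdown-induced I/O errors instead of scrolling through them
  grep -c 'Write completed with error (sct=0, sc=8)' build.log   # writes aborted by SQ deletion
  grep -c 'starting I/O failed: -6' build.log                    # submissions refused with -ENXIO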
completed with error (sct=0, sc=8) 00:29:58.876 starting I/O failed: -6 00:29:58.876 Write completed with error (sct=0, sc=8) 00:29:58.876 starting I/O failed: -6 00:29:58.876 Write completed with error (sct=0, sc=8) 00:29:58.876 starting I/O failed: -6 00:29:58.876 Write completed with error (sct=0, sc=8) 00:29:58.876 starting I/O failed: -6 00:29:58.876 Write completed with error (sct=0, sc=8) 00:29:58.876 starting I/O failed: -6 00:29:58.876 Write completed with error (sct=0, sc=8) 00:29:58.876 starting I/O failed: -6 00:29:58.876 Write completed with error (sct=0, sc=8) 00:29:58.876 starting I/O failed: -6 00:29:58.876 Write completed with error (sct=0, sc=8) 00:29:58.876 starting I/O failed: -6 00:29:58.876 Write completed with error (sct=0, sc=8) 00:29:58.876 starting I/O failed: -6 00:29:58.876 Write completed with error (sct=0, sc=8) 00:29:58.876 starting I/O failed: -6 00:29:58.876 Write completed with error (sct=0, sc=8) 00:29:58.876 starting I/O failed: -6 00:29:58.876 Write completed with error (sct=0, sc=8) 00:29:58.876 starting I/O failed: -6 00:29:58.876 Write completed with error (sct=0, sc=8) 00:29:58.876 starting I/O failed: -6 00:29:58.876 Write completed with error (sct=0, sc=8) 00:29:58.876 starting I/O failed: -6 00:29:58.876 Write completed with error (sct=0, sc=8) 00:29:58.876 starting I/O failed: -6 00:29:58.876 Write completed with error (sct=0, sc=8) 00:29:58.876 starting I/O failed: -6 00:29:58.876 Write completed with error (sct=0, sc=8) 00:29:58.876 starting I/O failed: -6 00:29:58.876 Write completed with error (sct=0, sc=8) 00:29:58.876 starting I/O failed: -6 00:29:58.876 Write completed with error (sct=0, sc=8) 00:29:58.876 starting I/O failed: -6 00:29:58.876 Write completed with error (sct=0, sc=8) 00:29:58.876 starting I/O failed: -6 00:29:58.876 Write completed with error (sct=0, sc=8) 00:29:58.876 starting I/O failed: -6 00:29:58.876 Write completed with error (sct=0, sc=8) 00:29:58.876 starting I/O failed: -6 00:29:58.876 Write completed with error (sct=0, sc=8) 00:29:58.876 starting I/O failed: -6 00:29:58.876 Write completed with error (sct=0, sc=8) 00:29:58.876 starting I/O failed: -6 00:29:58.876 Write completed with error (sct=0, sc=8) 00:29:58.876 starting I/O failed: -6 00:29:58.876 Write completed with error (sct=0, sc=8) 00:29:58.876 starting I/O failed: -6 00:29:58.876 Write completed with error (sct=0, sc=8) 00:29:58.876 starting I/O failed: -6 00:29:58.876 Write completed with error (sct=0, sc=8) 00:29:58.876 starting I/O failed: -6 00:29:58.876 Write completed with error (sct=0, sc=8) 00:29:58.876 starting I/O failed: -6 00:29:58.876 Write completed with error (sct=0, sc=8) 00:29:58.876 starting I/O failed: -6 00:29:58.876 Write completed with error (sct=0, sc=8) 00:29:58.876 starting I/O failed: -6 00:29:58.876 Write completed with error (sct=0, sc=8) 00:29:58.876 starting I/O failed: -6 00:29:58.876 Write completed with error (sct=0, sc=8) 00:29:58.876 starting I/O failed: -6 00:29:58.876 Write completed with error (sct=0, sc=8) 00:29:58.876 starting I/O failed: -6 00:29:58.876 Write completed with error (sct=0, sc=8) 00:29:58.876 starting I/O failed: -6 00:29:58.876 Write completed with error (sct=0, sc=8) 00:29:58.876 starting I/O failed: -6 00:29:58.876 Write completed with error (sct=0, sc=8) 00:29:58.876 starting I/O failed: -6 00:29:58.876 Write completed with error (sct=0, sc=8) 00:29:58.876 starting I/O failed: -6 00:29:58.876 Write completed with error (sct=0, sc=8) 00:29:58.876 starting I/O failed: -6 00:29:58.876 Write 
completed with error (sct=0, sc=8) 00:29:58.876 starting I/O failed: -6 00:29:58.876 Write completed with error (sct=0, sc=8) 00:29:58.876 starting I/O failed: -6 00:29:58.876 Write completed with error (sct=0, sc=8) 00:29:58.876 starting I/O failed: -6 00:29:58.876 Write completed with error (sct=0, sc=8) 00:29:58.876 starting I/O failed: -6 00:29:58.876 Write completed with error (sct=0, sc=8) 00:29:58.876 starting I/O failed: -6 00:29:58.876 Write completed with error (sct=0, sc=8) 00:29:58.876 starting I/O failed: -6 00:29:58.876 Write completed with error (sct=0, sc=8) 00:29:58.876 starting I/O failed: -6 00:29:58.876 Write completed with error (sct=0, sc=8) 00:29:58.876 starting I/O failed: -6 00:29:58.876 Write completed with error (sct=0, sc=8) 00:29:58.876 starting I/O failed: -6 00:29:58.876 Write completed with error (sct=0, sc=8) 00:29:58.876 starting I/O failed: -6 00:29:58.876 Write completed with error (sct=0, sc=8) 00:29:58.876 starting I/O failed: -6 00:29:58.876 [2024-12-06 01:07:31.205144] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:58.876 NVMe io qpair process completion error 00:29:58.876 Write completed with error (sct=0, sc=8) 00:29:58.876 Write completed with error (sct=0, sc=8) 00:29:58.876 starting I/O failed: -6 00:29:58.876 Write completed with error (sct=0, sc=8) 00:29:58.876 Write completed with error (sct=0, sc=8) 00:29:58.876 Write completed with error (sct=0, sc=8) 00:29:58.876 Write completed with error (sct=0, sc=8) 00:29:58.876 starting I/O failed: -6 00:29:58.876 Write completed with error (sct=0, sc=8) 00:29:58.876 Write completed with error (sct=0, sc=8) 00:29:58.876 Write completed with error (sct=0, sc=8) 00:29:58.876 Write completed with error (sct=0, sc=8) 00:29:58.876 starting I/O failed: -6 00:29:58.876 Write completed with error (sct=0, sc=8) 00:29:58.876 Write completed with error (sct=0, sc=8) 00:29:58.876 Write completed with error (sct=0, sc=8) 00:29:58.876 Write completed with error (sct=0, sc=8) 00:29:58.876 starting I/O failed: -6 00:29:58.876 Write completed with error (sct=0, sc=8) 00:29:58.876 Write completed with error (sct=0, sc=8) 00:29:58.876 Write completed with error (sct=0, sc=8) 00:29:58.876 Write completed with error (sct=0, sc=8) 00:29:58.876 starting I/O failed: -6 00:29:58.876 Write completed with error (sct=0, sc=8) 00:29:58.876 Write completed with error (sct=0, sc=8) 00:29:58.876 Write completed with error (sct=0, sc=8) 00:29:58.876 Write completed with error (sct=0, sc=8) 00:29:58.876 starting I/O failed: -6 00:29:58.876 Write completed with error (sct=0, sc=8) 00:29:58.877 Write completed with error (sct=0, sc=8) 00:29:58.877 Write completed with error (sct=0, sc=8) 00:29:58.877 Write completed with error (sct=0, sc=8) 00:29:58.877 starting I/O failed: -6 00:29:58.877 Write completed with error (sct=0, sc=8) 00:29:58.877 Write completed with error (sct=0, sc=8) 00:29:58.877 Write completed with error (sct=0, sc=8) 00:29:58.877 Write completed with error (sct=0, sc=8) 00:29:58.877 starting I/O failed: -6 00:29:58.877 Write completed with error (sct=0, sc=8) 00:29:58.877 Write completed with error (sct=0, sc=8) 00:29:58.877 Write completed with error (sct=0, sc=8) 00:29:58.877 Write completed with error (sct=0, sc=8) 00:29:58.877 starting I/O failed: -6 00:29:58.877 Write completed with error (sct=0, sc=8) 00:29:58.877 Write completed with error (sct=0, sc=8) 00:29:58.877 Write completed with error 
(sct=0, sc=8) 00:29:58.877 Write completed with error (sct=0, sc=8) 00:29:58.877 starting I/O failed: -6 00:29:58.877 [2024-12-06 01:07:31.207128] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:58.877 Write completed with error (sct=0, sc=8) 00:29:58.877 Write completed with error (sct=0, sc=8) 00:29:58.877 starting I/O failed: -6 00:29:58.877 Write completed with error (sct=0, sc=8) 00:29:58.877 Write completed with error (sct=0, sc=8) 00:29:58.877 starting I/O failed: -6 00:29:58.877 Write completed with error (sct=0, sc=8) 00:29:58.877 Write completed with error (sct=0, sc=8) 00:29:58.877 starting I/O failed: -6 00:29:58.877 Write completed with error (sct=0, sc=8) 00:29:58.877 Write completed with error (sct=0, sc=8) 00:29:58.877 starting I/O failed: -6 00:29:58.877 Write completed with error (sct=0, sc=8) 00:29:58.877 Write completed with error (sct=0, sc=8) 00:29:58.877 starting I/O failed: -6 00:29:58.877 Write completed with error (sct=0, sc=8) 00:29:58.877 Write completed with error (sct=0, sc=8) 00:29:58.877 starting I/O failed: -6 00:29:58.877 Write completed with error (sct=0, sc=8) 00:29:58.877 Write completed with error (sct=0, sc=8) 00:29:58.877 starting I/O failed: -6 00:29:58.877 Write completed with error (sct=0, sc=8) 00:29:58.877 Write completed with error (sct=0, sc=8) 00:29:58.877 starting I/O failed: -6 00:29:58.877 Write completed with error (sct=0, sc=8) 00:29:58.877 Write completed with error (sct=0, sc=8) 00:29:58.877 starting I/O failed: -6 00:29:58.877 Write completed with error (sct=0, sc=8) 00:29:58.877 Write completed with error (sct=0, sc=8) 00:29:58.877 starting I/O failed: -6 00:29:58.877 Write completed with error (sct=0, sc=8) 00:29:58.877 Write completed with error (sct=0, sc=8) 00:29:58.877 starting I/O failed: -6 00:29:58.877 Write completed with error (sct=0, sc=8) 00:29:58.877 Write completed with error (sct=0, sc=8) 00:29:58.877 starting I/O failed: -6 00:29:58.877 Write completed with error (sct=0, sc=8) 00:29:58.877 Write completed with error (sct=0, sc=8) 00:29:58.877 starting I/O failed: -6 00:29:58.877 Write completed with error (sct=0, sc=8) 00:29:58.877 Write completed with error (sct=0, sc=8) 00:29:58.877 starting I/O failed: -6 00:29:58.877 Write completed with error (sct=0, sc=8) 00:29:58.877 Write completed with error (sct=0, sc=8) 00:29:58.877 starting I/O failed: -6 00:29:58.877 Write completed with error (sct=0, sc=8) 00:29:58.877 Write completed with error (sct=0, sc=8) 00:29:58.877 starting I/O failed: -6 00:29:58.877 Write completed with error (sct=0, sc=8) 00:29:58.877 Write completed with error (sct=0, sc=8) 00:29:58.877 starting I/O failed: -6 00:29:58.877 Write completed with error (sct=0, sc=8) 00:29:58.877 Write completed with error (sct=0, sc=8) 00:29:58.877 starting I/O failed: -6 00:29:58.877 Write completed with error (sct=0, sc=8) 00:29:58.877 Write completed with error (sct=0, sc=8) 00:29:58.877 starting I/O failed: -6 00:29:58.877 Write completed with error (sct=0, sc=8) 00:29:58.877 Write completed with error (sct=0, sc=8) 00:29:58.877 starting I/O failed: -6 00:29:58.877 Write completed with error (sct=0, sc=8) 00:29:58.877 Write completed with error (sct=0, sc=8) 00:29:58.877 starting I/O failed: -6 00:29:58.877 Write completed with error (sct=0, sc=8) 00:29:58.877 Write completed with error (sct=0, sc=8) 00:29:58.877 starting I/O failed: -6 00:29:58.877 Write completed with error (sct=0, sc=8) 
00:29:58.877 [2024-12-06 01:07:31.209314] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:58.877 starting I/O failed: -6 00:29:58.877 Write completed with error (sct=0, sc=8) 00:29:58.877 starting I/O failed: -6 00:29:58.877 Write completed with error (sct=0, sc=8) 00:29:58.877 Write completed with error (sct=0, sc=8) 00:29:58.877 starting I/O failed: -6 00:29:58.877 Write completed with error (sct=0, sc=8) 00:29:58.877 starting I/O failed: -6 00:29:58.877 Write completed with error (sct=0, sc=8) 00:29:58.877 starting I/O failed: -6 00:29:58.877 Write completed with error (sct=0, sc=8) 00:29:58.877 Write completed with error (sct=0, sc=8) 00:29:58.877 starting I/O failed: -6 00:29:58.877 Write completed with error (sct=0, sc=8) 00:29:58.877 starting I/O failed: -6 00:29:58.877 Write completed with error (sct=0, sc=8) 00:29:58.877 starting I/O failed: -6 00:29:58.877 Write completed with error (sct=0, sc=8) 00:29:58.877 Write completed with error (sct=0, sc=8) 00:29:58.877 starting I/O failed: -6 00:29:58.877 Write completed with error (sct=0, sc=8) 00:29:58.877 starting I/O failed: -6 00:29:58.877 Write completed with error (sct=0, sc=8) 00:29:58.877 starting I/O failed: -6 00:29:58.877 Write completed with error (sct=0, sc=8) 00:29:58.877 Write completed with error (sct=0, sc=8) 00:29:58.877 starting I/O failed: -6 00:29:58.877 Write completed with error (sct=0, sc=8) 00:29:58.877 starting I/O failed: -6 00:29:58.877 Write completed with error (sct=0, sc=8) 00:29:58.877 starting I/O failed: -6 00:29:58.877 Write completed with error (sct=0, sc=8) 00:29:58.877 Write completed with error (sct=0, sc=8) 00:29:58.877 starting I/O failed: -6 00:29:58.877 Write completed with error (sct=0, sc=8) 00:29:58.877 starting I/O failed: -6 00:29:58.877 Write completed with error (sct=0, sc=8) 00:29:58.877 starting I/O failed: -6 00:29:58.877 Write completed with error (sct=0, sc=8) 00:29:58.877 Write completed with error (sct=0, sc=8) 00:29:58.877 starting I/O failed: -6 00:29:58.877 Write completed with error (sct=0, sc=8) 00:29:58.877 starting I/O failed: -6 00:29:58.877 Write completed with error (sct=0, sc=8) 00:29:58.877 starting I/O failed: -6 00:29:58.877 Write completed with error (sct=0, sc=8) 00:29:58.877 Write completed with error (sct=0, sc=8) 00:29:58.877 starting I/O failed: -6 00:29:58.877 Write completed with error (sct=0, sc=8) 00:29:58.877 starting I/O failed: -6 00:29:58.877 Write completed with error (sct=0, sc=8) 00:29:58.877 starting I/O failed: -6 00:29:58.877 Write completed with error (sct=0, sc=8) 00:29:58.877 Write completed with error (sct=0, sc=8) 00:29:58.877 starting I/O failed: -6 00:29:58.877 Write completed with error (sct=0, sc=8) 00:29:58.877 starting I/O failed: -6 00:29:58.877 Write completed with error (sct=0, sc=8) 00:29:58.877 starting I/O failed: -6 00:29:58.877 Write completed with error (sct=0, sc=8) 00:29:58.877 Write completed with error (sct=0, sc=8) 00:29:58.877 starting I/O failed: -6 00:29:58.877 Write completed with error (sct=0, sc=8) 00:29:58.877 starting I/O failed: -6 00:29:58.877 Write completed with error (sct=0, sc=8) 00:29:58.877 starting I/O failed: -6 00:29:58.877 Write completed with error (sct=0, sc=8) 00:29:58.877 Write completed with error (sct=0, sc=8) 00:29:58.877 starting I/O failed: -6 00:29:58.877 Write completed with error (sct=0, sc=8) 00:29:58.877 starting I/O failed: -6 00:29:58.877 Write completed with error 
(sct=0, sc=8) 00:29:58.877 starting I/O failed: -6 00:29:58.877 Write completed with error (sct=0, sc=8) 00:29:58.877 Write completed with error (sct=0, sc=8) 00:29:58.877 starting I/O failed: -6 00:29:58.877 Write completed with error (sct=0, sc=8) 00:29:58.877 starting I/O failed: -6 00:29:58.877 Write completed with error (sct=0, sc=8) 00:29:58.877 starting I/O failed: -6 00:29:58.877 Write completed with error (sct=0, sc=8) 00:29:58.877 Write completed with error (sct=0, sc=8) 00:29:58.877 starting I/O failed: -6 00:29:58.877 Write completed with error (sct=0, sc=8) 00:29:58.877 starting I/O failed: -6 00:29:58.877 [2024-12-06 01:07:31.212063] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:58.877 Write completed with error (sct=0, sc=8) 00:29:58.877 starting I/O failed: -6 00:29:58.877 Write completed with error (sct=0, sc=8) 00:29:58.877 starting I/O failed: -6 00:29:58.877 Write completed with error (sct=0, sc=8) 00:29:58.877 starting I/O failed: -6 00:29:58.877 Write completed with error (sct=0, sc=8) 00:29:58.877 starting I/O failed: -6 00:29:58.877 Write completed with error (sct=0, sc=8) 00:29:58.877 starting I/O failed: -6 00:29:58.877 Write completed with error (sct=0, sc=8) 00:29:58.877 starting I/O failed: -6 00:29:58.877 Write completed with error (sct=0, sc=8) 00:29:58.877 starting I/O failed: -6 00:29:58.877 Write completed with error (sct=0, sc=8) 00:29:58.877 starting I/O failed: -6 00:29:58.877 Write completed with error (sct=0, sc=8) 00:29:58.878 starting I/O failed: -6 00:29:58.878 Write completed with error (sct=0, sc=8) 00:29:58.878 starting I/O failed: -6 00:29:58.878 Write completed with error (sct=0, sc=8) 00:29:58.878 starting I/O failed: -6 00:29:58.878 Write completed with error (sct=0, sc=8) 00:29:58.878 starting I/O failed: -6 00:29:58.878 Write completed with error (sct=0, sc=8) 00:29:58.878 starting I/O failed: -6 00:29:58.878 Write completed with error (sct=0, sc=8) 00:29:58.878 starting I/O failed: -6 00:29:58.878 Write completed with error (sct=0, sc=8) 00:29:58.878 starting I/O failed: -6 00:29:58.878 Write completed with error (sct=0, sc=8) 00:29:58.878 starting I/O failed: -6 00:29:58.878 Write completed with error (sct=0, sc=8) 00:29:58.878 starting I/O failed: -6 00:29:58.878 Write completed with error (sct=0, sc=8) 00:29:58.878 starting I/O failed: -6 00:29:58.878 Write completed with error (sct=0, sc=8) 00:29:58.878 starting I/O failed: -6 00:29:58.878 Write completed with error (sct=0, sc=8) 00:29:58.878 starting I/O failed: -6 00:29:58.878 Write completed with error (sct=0, sc=8) 00:29:58.878 starting I/O failed: -6 00:29:58.878 Write completed with error (sct=0, sc=8) 00:29:58.878 starting I/O failed: -6 00:29:58.878 Write completed with error (sct=0, sc=8) 00:29:58.878 starting I/O failed: -6 00:29:58.878 Write completed with error (sct=0, sc=8) 00:29:58.878 starting I/O failed: -6 00:29:58.878 Write completed with error (sct=0, sc=8) 00:29:58.878 starting I/O failed: -6 00:29:58.878 Write completed with error (sct=0, sc=8) 00:29:58.878 starting I/O failed: -6 00:29:58.878 Write completed with error (sct=0, sc=8) 00:29:58.878 starting I/O failed: -6 00:29:58.878 Write completed with error (sct=0, sc=8) 00:29:58.878 starting I/O failed: -6 00:29:58.878 Write completed with error (sct=0, sc=8) 00:29:58.878 starting I/O failed: -6 00:29:58.878 Write completed with error (sct=0, sc=8) 00:29:58.878 starting I/O failed: -6 
00:29:58.878 Write completed with error (sct=0, sc=8) 00:29:58.878 starting I/O failed: -6 00:29:58.878 Write completed with error (sct=0, sc=8) 00:29:58.878 starting I/O failed: -6 00:29:58.878 Write completed with error (sct=0, sc=8) 00:29:58.878 starting I/O failed: -6 00:29:58.878 Write completed with error (sct=0, sc=8) 00:29:58.878 starting I/O failed: -6 00:29:58.878 Write completed with error (sct=0, sc=8) 00:29:58.878 starting I/O failed: -6 00:29:58.878 Write completed with error (sct=0, sc=8) 00:29:58.878 starting I/O failed: -6 00:29:58.878 Write completed with error (sct=0, sc=8) 00:29:58.878 starting I/O failed: -6 00:29:58.878 Write completed with error (sct=0, sc=8) 00:29:58.878 starting I/O failed: -6 00:29:58.878 Write completed with error (sct=0, sc=8) 00:29:58.878 starting I/O failed: -6 00:29:58.878 Write completed with error (sct=0, sc=8) 00:29:58.878 starting I/O failed: -6 00:29:58.878 Write completed with error (sct=0, sc=8) 00:29:58.878 starting I/O failed: -6 00:29:58.878 Write completed with error (sct=0, sc=8) 00:29:58.878 starting I/O failed: -6 00:29:58.878 Write completed with error (sct=0, sc=8) 00:29:58.878 starting I/O failed: -6 00:29:58.878 Write completed with error (sct=0, sc=8) 00:29:58.878 starting I/O failed: -6 00:29:58.878 Write completed with error (sct=0, sc=8) 00:29:58.878 starting I/O failed: -6 00:29:58.878 Write completed with error (sct=0, sc=8) 00:29:58.878 starting I/O failed: -6 00:29:58.878 Write completed with error (sct=0, sc=8) 00:29:58.878 starting I/O failed: -6 00:29:58.878 Write completed with error (sct=0, sc=8) 00:29:58.878 starting I/O failed: -6 00:29:58.878 Write completed with error (sct=0, sc=8) 00:29:58.878 starting I/O failed: -6 00:29:58.878 Write completed with error (sct=0, sc=8) 00:29:58.878 starting I/O failed: -6 00:29:58.878 Write completed with error (sct=0, sc=8) 00:29:58.878 starting I/O failed: -6 00:29:58.878 Write completed with error (sct=0, sc=8) 00:29:58.878 starting I/O failed: -6 00:29:58.878 Write completed with error (sct=0, sc=8) 00:29:58.878 starting I/O failed: -6 00:29:58.878 Write completed with error (sct=0, sc=8) 00:29:58.878 starting I/O failed: -6 00:29:58.878 Write completed with error (sct=0, sc=8) 00:29:58.878 starting I/O failed: -6 00:29:58.878 Write completed with error (sct=0, sc=8) 00:29:58.878 starting I/O failed: -6 00:29:58.878 Write completed with error (sct=0, sc=8) 00:29:58.878 starting I/O failed: -6 00:29:58.878 Write completed with error (sct=0, sc=8) 00:29:58.878 starting I/O failed: -6 00:29:58.878 Write completed with error (sct=0, sc=8) 00:29:58.878 starting I/O failed: -6 00:29:58.878 [2024-12-06 01:07:31.221529] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:58.878 NVMe io qpair process completion error 00:29:58.878 Write completed with error (sct=0, sc=8) 00:29:58.878 Write completed with error (sct=0, sc=8) 00:29:58.878 Write completed with error (sct=0, sc=8) 00:29:58.878 starting I/O failed: -6 00:29:58.878 Write completed with error (sct=0, sc=8) 00:29:58.878 Write completed with error (sct=0, sc=8) 00:29:58.878 Write completed with error (sct=0, sc=8) 00:29:58.878 Write completed with error (sct=0, sc=8) 00:29:58.878 starting I/O failed: -6 00:29:58.878 Write completed with error (sct=0, sc=8) 00:29:58.878 Write completed with error (sct=0, sc=8) 00:29:58.878 Write completed with error (sct=0, sc=8) 00:29:58.878 Write completed with error (sct=0, 
sc=8) 00:29:58.878 starting I/O failed: -6 00:29:58.878 Write completed with error (sct=0, sc=8) 00:29:58.878 Write completed with error (sct=0, sc=8) 00:29:58.878 Write completed with error (sct=0, sc=8) 00:29:58.878 Write completed with error (sct=0, sc=8) 00:29:58.878 starting I/O failed: -6 00:29:58.878 Write completed with error (sct=0, sc=8) 00:29:58.878 Write completed with error (sct=0, sc=8) 00:29:58.878 Write completed with error (sct=0, sc=8) 00:29:58.878 Write completed with error (sct=0, sc=8) 00:29:58.878 starting I/O failed: -6 00:29:58.878 Write completed with error (sct=0, sc=8) 00:29:58.878 Write completed with error (sct=0, sc=8) 00:29:58.878 Write completed with error (sct=0, sc=8) 00:29:58.878 Write completed with error (sct=0, sc=8) 00:29:58.878 starting I/O failed: -6 00:29:58.878 Write completed with error (sct=0, sc=8) 00:29:58.878 Write completed with error (sct=0, sc=8) 00:29:58.878 Write completed with error (sct=0, sc=8) 00:29:58.878 Write completed with error (sct=0, sc=8) 00:29:58.878 starting I/O failed: -6 00:29:58.878 Write completed with error (sct=0, sc=8) 00:29:58.878 Write completed with error (sct=0, sc=8) 00:29:58.878 Write completed with error (sct=0, sc=8) 00:29:58.878 Write completed with error (sct=0, sc=8) 00:29:58.878 starting I/O failed: -6 00:29:58.878 Write completed with error (sct=0, sc=8) 00:29:58.878 Write completed with error (sct=0, sc=8) 00:29:58.878 Write completed with error (sct=0, sc=8) 00:29:58.878 Write completed with error (sct=0, sc=8) 00:29:58.878 starting I/O failed: -6 00:29:58.878 Write completed with error (sct=0, sc=8) 00:29:58.878 Write completed with error (sct=0, sc=8) 00:29:58.878 Write completed with error (sct=0, sc=8) 00:29:58.878 [2024-12-06 01:07:31.223555] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:58.878 Write completed with error (sct=0, sc=8) 00:29:58.878 starting I/O failed: -6 00:29:58.878 Write completed with error (sct=0, sc=8) 00:29:58.878 starting I/O failed: -6 00:29:58.878 Write completed with error (sct=0, sc=8) 00:29:58.878 Write completed with error (sct=0, sc=8) 00:29:58.878 Write completed with error (sct=0, sc=8) 00:29:58.878 starting I/O failed: -6 00:29:58.878 Write completed with error (sct=0, sc=8) 00:29:58.878 starting I/O failed: -6 00:29:58.878 Write completed with error (sct=0, sc=8) 00:29:58.878 Write completed with error (sct=0, sc=8) 00:29:58.878 Write completed with error (sct=0, sc=8) 00:29:58.878 starting I/O failed: -6 00:29:58.878 Write completed with error (sct=0, sc=8) 00:29:58.878 starting I/O failed: -6 00:29:58.878 Write completed with error (sct=0, sc=8) 00:29:58.878 Write completed with error (sct=0, sc=8) 00:29:58.878 Write completed with error (sct=0, sc=8) 00:29:58.878 starting I/O failed: -6 00:29:58.878 Write completed with error (sct=0, sc=8) 00:29:58.878 starting I/O failed: -6 00:29:58.878 Write completed with error (sct=0, sc=8) 00:29:58.878 Write completed with error (sct=0, sc=8) 00:29:58.878 Write completed with error (sct=0, sc=8) 00:29:58.878 starting I/O failed: -6 00:29:58.878 Write completed with error (sct=0, sc=8) 00:29:58.878 starting I/O failed: -6 00:29:58.878 Write completed with error (sct=0, sc=8) 00:29:58.878 Write completed with error (sct=0, sc=8) 00:29:58.878 Write completed with error (sct=0, sc=8) 00:29:58.878 starting I/O failed: -6 00:29:58.878 Write completed with error (sct=0, sc=8) 00:29:58.878 starting I/O failed: 
-6 00:29:58.878 Write completed with error (sct=0, sc=8) 00:29:58.879 Write completed with error (sct=0, sc=8) 00:29:58.879 Write completed with error (sct=0, sc=8) 00:29:58.879 starting I/O failed: -6 00:29:58.879 Write completed with error (sct=0, sc=8) 00:29:58.879 starting I/O failed: -6 00:29:58.879 Write completed with error (sct=0, sc=8) 00:29:58.879 Write completed with error (sct=0, sc=8) 00:29:58.879 Write completed with error (sct=0, sc=8) 00:29:58.879 starting I/O failed: -6 00:29:58.879 Write completed with error (sct=0, sc=8) 00:29:58.879 starting I/O failed: -6 00:29:58.879 Write completed with error (sct=0, sc=8) 00:29:58.879 Write completed with error (sct=0, sc=8) 00:29:58.879 Write completed with error (sct=0, sc=8) 00:29:58.879 starting I/O failed: -6 00:29:58.879 Write completed with error (sct=0, sc=8) 00:29:58.879 starting I/O failed: -6 00:29:58.879 Write completed with error (sct=0, sc=8) 00:29:58.879 Write completed with error (sct=0, sc=8) 00:29:58.879 Write completed with error (sct=0, sc=8) 00:29:58.879 starting I/O failed: -6 00:29:58.879 Write completed with error (sct=0, sc=8) 00:29:58.879 starting I/O failed: -6 00:29:58.879 Write completed with error (sct=0, sc=8) 00:29:58.879 Write completed with error (sct=0, sc=8) 00:29:58.879 Write completed with error (sct=0, sc=8) 00:29:58.879 starting I/O failed: -6 00:29:58.879 Write completed with error (sct=0, sc=8) 00:29:58.879 starting I/O failed: -6 00:29:58.879 Write completed with error (sct=0, sc=8) 00:29:58.879 Write completed with error (sct=0, sc=8) 00:29:58.879 [2024-12-06 01:07:31.225721] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:58.879 Write completed with error (sct=0, sc=8) 00:29:58.879 starting I/O failed: -6 00:29:58.879 Write completed with error (sct=0, sc=8) 00:29:58.879 starting I/O failed: -6 00:29:58.879 Write completed with error (sct=0, sc=8) 00:29:58.879 starting I/O failed: -6 00:29:58.879 Write completed with error (sct=0, sc=8) 00:29:58.879 Write completed with error (sct=0, sc=8) 00:29:58.879 starting I/O failed: -6 00:29:58.879 Write completed with error (sct=0, sc=8) 00:29:58.879 starting I/O failed: -6 00:29:58.879 Write completed with error (sct=0, sc=8) 00:29:58.879 starting I/O failed: -6 00:29:58.879 Write completed with error (sct=0, sc=8) 00:29:58.879 Write completed with error (sct=0, sc=8) 00:29:58.879 starting I/O failed: -6 00:29:58.879 Write completed with error (sct=0, sc=8) 00:29:58.879 starting I/O failed: -6 00:29:58.879 Write completed with error (sct=0, sc=8) 00:29:58.879 starting I/O failed: -6 00:29:58.879 Write completed with error (sct=0, sc=8) 00:29:58.879 Write completed with error (sct=0, sc=8) 00:29:58.879 starting I/O failed: -6 00:29:58.879 Write completed with error (sct=0, sc=8) 00:29:58.879 starting I/O failed: -6 00:29:58.879 Write completed with error (sct=0, sc=8) 00:29:58.879 starting I/O failed: -6 00:29:58.879 Write completed with error (sct=0, sc=8) 00:29:58.879 Write completed with error (sct=0, sc=8) 00:29:58.879 starting I/O failed: -6 00:29:58.879 Write completed with error (sct=0, sc=8) 00:29:58.879 starting I/O failed: -6 00:29:58.879 Write completed with error (sct=0, sc=8) 00:29:58.879 starting I/O failed: -6 00:29:58.879 Write completed with error (sct=0, sc=8) 00:29:58.879 Write completed with error (sct=0, sc=8) 00:29:58.879 starting I/O failed: -6 00:29:58.879 Write completed with error (sct=0, sc=8) 
00:29:58.879 starting I/O failed: -6 00:29:58.879 Write completed with error (sct=0, sc=8) 00:29:58.879 starting I/O failed: -6 00:29:58.879 Write completed with error (sct=0, sc=8) 00:29:58.879 Write completed with error (sct=0, sc=8) 00:29:58.879 starting I/O failed: -6 00:29:58.879 Write completed with error (sct=0, sc=8) 00:29:58.879 starting I/O failed: -6 00:29:58.879 Write completed with error (sct=0, sc=8) 00:29:58.879 starting I/O failed: -6 00:29:58.879 Write completed with error (sct=0, sc=8) 00:29:58.879 Write completed with error (sct=0, sc=8) 00:29:58.879 starting I/O failed: -6 00:29:58.879 Write completed with error (sct=0, sc=8) 00:29:58.879 starting I/O failed: -6 00:29:58.879 Write completed with error (sct=0, sc=8) 00:29:58.879 starting I/O failed: -6 00:29:58.879 Write completed with error (sct=0, sc=8) 00:29:58.879 Write completed with error (sct=0, sc=8) 00:29:58.879 starting I/O failed: -6 00:29:58.879 Write completed with error (sct=0, sc=8) 00:29:58.879 starting I/O failed: -6 00:29:58.879 Write completed with error (sct=0, sc=8) 00:29:58.879 starting I/O failed: -6 00:29:58.879 Write completed with error (sct=0, sc=8) 00:29:58.879 Write completed with error (sct=0, sc=8) 00:29:58.879 starting I/O failed: -6 00:29:58.879 Write completed with error (sct=0, sc=8) 00:29:58.879 starting I/O failed: -6 00:29:58.879 Write completed with error (sct=0, sc=8) 00:29:58.879 starting I/O failed: -6 00:29:58.879 Write completed with error (sct=0, sc=8) 00:29:58.879 Write completed with error (sct=0, sc=8) 00:29:58.879 starting I/O failed: -6 00:29:58.879 Write completed with error (sct=0, sc=8) 00:29:58.879 starting I/O failed: -6 00:29:58.879 Write completed with error (sct=0, sc=8) 00:29:58.879 starting I/O failed: -6 00:29:58.879 Write completed with error (sct=0, sc=8) 00:29:58.879 Write completed with error (sct=0, sc=8) 00:29:58.879 starting I/O failed: -6 00:29:58.879 Write completed with error (sct=0, sc=8) 00:29:58.879 starting I/O failed: -6 00:29:58.879 Write completed with error (sct=0, sc=8) 00:29:58.879 starting I/O failed: -6 00:29:58.879 Write completed with error (sct=0, sc=8) 00:29:58.879 [2024-12-06 01:07:31.228371] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:58.879 starting I/O failed: -6 00:29:58.879 Write completed with error (sct=0, sc=8) 00:29:58.879 starting I/O failed: -6 00:29:58.879 Write completed with error (sct=0, sc=8) 00:29:58.879 starting I/O failed: -6 00:29:58.879 Write completed with error (sct=0, sc=8) 00:29:58.879 starting I/O failed: -6 00:29:58.879 Write completed with error (sct=0, sc=8) 00:29:58.879 starting I/O failed: -6 00:29:58.879 Write completed with error (sct=0, sc=8) 00:29:58.879 starting I/O failed: -6 00:29:58.879 Write completed with error (sct=0, sc=8) 00:29:58.879 starting I/O failed: -6 00:29:58.879 Write completed with error (sct=0, sc=8) 00:29:58.879 starting I/O failed: -6 00:29:58.879 Write completed with error (sct=0, sc=8) 00:29:58.879 starting I/O failed: -6 00:29:58.879 Write completed with error (sct=0, sc=8) 00:29:58.879 starting I/O failed: -6 00:29:58.879 Write completed with error (sct=0, sc=8) 00:29:58.879 starting I/O failed: -6 00:29:58.879 Write completed with error (sct=0, sc=8) 00:29:58.879 starting I/O failed: -6 00:29:58.879 Write completed with error (sct=0, sc=8) 00:29:58.879 starting I/O failed: -6 00:29:58.879 Write completed with error (sct=0, sc=8) 00:29:58.879 starting 
I/O failed: -6 00:29:58.879 Write completed with error (sct=0, sc=8) 00:29:58.879 starting I/O failed: -6 00:29:58.879 Write completed with error (sct=0, sc=8) 00:29:58.879 starting I/O failed: -6 00:29:58.879 Write completed with error (sct=0, sc=8) 00:29:58.879 starting I/O failed: -6 00:29:58.879 Write completed with error (sct=0, sc=8) 00:29:58.879 starting I/O failed: -6 00:29:58.879 Write completed with error (sct=0, sc=8) 00:29:58.879 starting I/O failed: -6 00:29:58.879 Write completed with error (sct=0, sc=8) 00:29:58.879 starting I/O failed: -6 00:29:58.879 Write completed with error (sct=0, sc=8) 00:29:58.879 starting I/O failed: -6 00:29:58.879 Write completed with error (sct=0, sc=8) 00:29:58.879 starting I/O failed: -6 00:29:58.879 Write completed with error (sct=0, sc=8) 00:29:58.879 starting I/O failed: -6 00:29:58.879 Write completed with error (sct=0, sc=8) 00:29:58.879 starting I/O failed: -6 00:29:58.879 Write completed with error (sct=0, sc=8) 00:29:58.879 starting I/O failed: -6 00:29:58.879 Write completed with error (sct=0, sc=8) 00:29:58.879 starting I/O failed: -6 00:29:58.879 Write completed with error (sct=0, sc=8) 00:29:58.879 starting I/O failed: -6 00:29:58.879 Write completed with error (sct=0, sc=8) 00:29:58.879 starting I/O failed: -6 00:29:58.879 Write completed with error (sct=0, sc=8) 00:29:58.879 starting I/O failed: -6 00:29:58.879 Write completed with error (sct=0, sc=8) 00:29:58.879 starting I/O failed: -6 00:29:58.879 Write completed with error (sct=0, sc=8) 00:29:58.879 starting I/O failed: -6 00:29:58.879 Write completed with error (sct=0, sc=8) 00:29:58.879 starting I/O failed: -6 00:29:58.879 Write completed with error (sct=0, sc=8) 00:29:58.879 starting I/O failed: -6 00:29:58.879 Write completed with error (sct=0, sc=8) 00:29:58.879 starting I/O failed: -6 00:29:58.879 Write completed with error (sct=0, sc=8) 00:29:58.879 starting I/O failed: -6 00:29:58.879 Write completed with error (sct=0, sc=8) 00:29:58.879 starting I/O failed: -6 00:29:58.879 Write completed with error (sct=0, sc=8) 00:29:58.879 starting I/O failed: -6 00:29:58.879 Write completed with error (sct=0, sc=8) 00:29:58.879 starting I/O failed: -6 00:29:58.879 Write completed with error (sct=0, sc=8) 00:29:58.879 starting I/O failed: -6 00:29:58.879 Write completed with error (sct=0, sc=8) 00:29:58.879 starting I/O failed: -6 00:29:58.879 Write completed with error (sct=0, sc=8) 00:29:58.879 starting I/O failed: -6 00:29:58.879 Write completed with error (sct=0, sc=8) 00:29:58.879 starting I/O failed: -6 00:29:58.880 Write completed with error (sct=0, sc=8) 00:29:58.880 starting I/O failed: -6 00:29:58.880 Write completed with error (sct=0, sc=8) 00:29:58.880 starting I/O failed: -6 00:29:58.880 Write completed with error (sct=0, sc=8) 00:29:58.880 starting I/O failed: -6 00:29:58.880 Write completed with error (sct=0, sc=8) 00:29:58.880 starting I/O failed: -6 00:29:58.880 Write completed with error (sct=0, sc=8) 00:29:58.880 starting I/O failed: -6 00:29:58.880 Write completed with error (sct=0, sc=8) 00:29:58.880 starting I/O failed: -6 00:29:58.880 Write completed with error (sct=0, sc=8) 00:29:58.880 starting I/O failed: -6 00:29:58.880 Write completed with error (sct=0, sc=8) 00:29:58.880 starting I/O failed: -6 00:29:58.880 Write completed with error (sct=0, sc=8) 00:29:58.880 starting I/O failed: -6 00:29:58.880 Write completed with error (sct=0, sc=8) 00:29:58.880 starting I/O failed: -6 00:29:58.880 Write completed with error (sct=0, sc=8) 00:29:58.880 starting I/O 
failed: -6 00:29:58.880 Write completed with error (sct=0, sc=8) 00:29:58.880 starting I/O failed: -6 00:29:58.880 Write completed with error (sct=0, sc=8) 00:29:58.880 starting I/O failed: -6 00:29:58.880 Write completed with error (sct=0, sc=8) 00:29:58.880 starting I/O failed: -6 00:29:58.880 Write completed with error (sct=0, sc=8) 00:29:58.880 starting I/O failed: -6 00:29:58.880 Write completed with error (sct=0, sc=8) 00:29:58.880 starting I/O failed: -6 00:29:58.880 Write completed with error (sct=0, sc=8) 00:29:58.880 starting I/O failed: -6 00:29:58.880 Write completed with error (sct=0, sc=8) 00:29:58.880 starting I/O failed: -6 00:29:58.880 Write completed with error (sct=0, sc=8) 00:29:58.880 starting I/O failed: -6 00:29:58.880 [2024-12-06 01:07:31.241892] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:58.880 NVMe io qpair process completion error 00:29:58.880 Write completed with error (sct=0, sc=8) 00:29:58.880 Write completed with error (sct=0, sc=8) 00:29:58.880 starting I/O failed: -6 00:29:58.880 Write completed with error (sct=0, sc=8) 00:29:58.880 Write completed with error (sct=0, sc=8) 00:29:58.880 Write completed with error (sct=0, sc=8) 00:29:58.880 Write completed with error (sct=0, sc=8) 00:29:58.880 starting I/O failed: -6 00:29:58.880 Write completed with error (sct=0, sc=8) 00:29:58.880 Write completed with error (sct=0, sc=8) 00:29:58.880 Write completed with error (sct=0, sc=8) 00:29:58.880 Write completed with error (sct=0, sc=8) 00:29:58.880 starting I/O failed: -6 00:29:58.880 Write completed with error (sct=0, sc=8) 00:29:58.880 Write completed with error (sct=0, sc=8) 00:29:58.880 Write completed with error (sct=0, sc=8) 00:29:58.880 Write completed with error (sct=0, sc=8) 00:29:58.880 starting I/O failed: -6 00:29:58.880 Write completed with error (sct=0, sc=8) 00:29:58.880 Write completed with error (sct=0, sc=8) 00:29:58.880 Write completed with error (sct=0, sc=8) 00:29:58.880 Write completed with error (sct=0, sc=8) 00:29:58.880 starting I/O failed: -6 00:29:58.880 Write completed with error (sct=0, sc=8) 00:29:58.880 Write completed with error (sct=0, sc=8) 00:29:58.880 Write completed with error (sct=0, sc=8) 00:29:58.880 Write completed with error (sct=0, sc=8) 00:29:58.880 starting I/O failed: -6 00:29:58.880 Write completed with error (sct=0, sc=8) 00:29:58.880 Write completed with error (sct=0, sc=8) 00:29:58.880 Write completed with error (sct=0, sc=8) 00:29:58.880 Write completed with error (sct=0, sc=8) 00:29:58.880 starting I/O failed: -6 00:29:58.880 Write completed with error (sct=0, sc=8) 00:29:58.880 Write completed with error (sct=0, sc=8) 00:29:58.880 Write completed with error (sct=0, sc=8) 00:29:58.880 Write completed with error (sct=0, sc=8) 00:29:58.880 starting I/O failed: -6 00:29:58.880 Write completed with error (sct=0, sc=8) 00:29:58.880 Write completed with error (sct=0, sc=8) 00:29:58.880 Write completed with error (sct=0, sc=8) 00:29:58.880 Write completed with error (sct=0, sc=8) 00:29:58.880 starting I/O failed: -6 00:29:58.880 Write completed with error (sct=0, sc=8) 00:29:58.880 Write completed with error (sct=0, sc=8) 00:29:58.880 Write completed with error (sct=0, sc=8) 00:29:58.880 Write completed with error (sct=0, sc=8) 00:29:58.880 starting I/O failed: -6 00:29:58.880 Write completed with error (sct=0, sc=8) 00:29:58.880 Write completed with error (sct=0, sc=8) 00:29:58.880 Write completed 
with error (sct=0, sc=8) 00:29:58.880 [2024-12-06 01:07:31.244131] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:58.880 starting I/O failed: -6 00:29:58.880 Write completed with error (sct=0, sc=8) 00:29:58.880 Write completed with error (sct=0, sc=8) 00:29:58.880 Write completed with error (sct=0, sc=8) 00:29:58.880 starting I/O failed: -6 00:29:58.880 Write completed with error (sct=0, sc=8) 00:29:58.880 starting I/O failed: -6 00:29:58.880 Write completed with error (sct=0, sc=8) 00:29:58.880 Write completed with error (sct=0, sc=8) 00:29:58.880 Write completed with error (sct=0, sc=8) 00:29:58.880 starting I/O failed: -6 00:29:58.880 Write completed with error (sct=0, sc=8) 00:29:58.880 starting I/O failed: -6 00:29:58.880 Write completed with error (sct=0, sc=8) 00:29:58.880 Write completed with error (sct=0, sc=8) 00:29:58.880 Write completed with error (sct=0, sc=8) 00:29:58.880 starting I/O failed: -6 00:29:58.880 Write completed with error (sct=0, sc=8) 00:29:58.880 starting I/O failed: -6 00:29:58.880 Write completed with error (sct=0, sc=8) 00:29:58.880 Write completed with error (sct=0, sc=8) 00:29:58.880 Write completed with error (sct=0, sc=8) 00:29:58.880 starting I/O failed: -6 00:29:58.880 Write completed with error (sct=0, sc=8) 00:29:58.880 starting I/O failed: -6 00:29:58.880 Write completed with error (sct=0, sc=8) 00:29:58.880 Write completed with error (sct=0, sc=8) 00:29:58.880 Write completed with error (sct=0, sc=8) 00:29:58.880 starting I/O failed: -6 00:29:58.880 Write completed with error (sct=0, sc=8) 00:29:58.880 starting I/O failed: -6 00:29:58.880 Write completed with error (sct=0, sc=8) 00:29:58.880 Write completed with error (sct=0, sc=8) 00:29:58.880 Write completed with error (sct=0, sc=8) 00:29:58.880 starting I/O failed: -6 00:29:58.880 Write completed with error (sct=0, sc=8) 00:29:58.880 starting I/O failed: -6 00:29:58.880 Write completed with error (sct=0, sc=8) 00:29:58.880 Write completed with error (sct=0, sc=8) 00:29:58.880 Write completed with error (sct=0, sc=8) 00:29:58.880 starting I/O failed: -6 00:29:58.880 Write completed with error (sct=0, sc=8) 00:29:58.880 starting I/O failed: -6 00:29:58.880 Write completed with error (sct=0, sc=8) 00:29:58.880 Write completed with error (sct=0, sc=8) 00:29:58.880 Write completed with error (sct=0, sc=8) 00:29:58.880 starting I/O failed: -6 00:29:58.880 Write completed with error (sct=0, sc=8) 00:29:58.880 starting I/O failed: -6 00:29:58.880 Write completed with error (sct=0, sc=8) 00:29:58.880 Write completed with error (sct=0, sc=8) 00:29:58.880 Write completed with error (sct=0, sc=8) 00:29:58.880 starting I/O failed: -6 00:29:58.880 Write completed with error (sct=0, sc=8) 00:29:58.880 starting I/O failed: -6 00:29:58.880 Write completed with error (sct=0, sc=8) 00:29:58.880 [2024-12-06 01:07:31.246086] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:58.880 starting I/O failed: -6 00:29:58.880 starting I/O failed: -6 00:29:58.880 starting I/O failed: -6 00:29:58.880 starting I/O failed: -6 00:29:58.880 Write completed with error (sct=0, sc=8) 00:29:58.880 starting I/O failed: -6 00:29:58.880 Write completed with error (sct=0, sc=8) 00:29:58.880 Write completed with error (sct=0, sc=8) 00:29:58.880 starting I/O failed: -6 00:29:58.880 Write completed with error 
(sct=0, sc=8) 00:29:58.880 starting I/O failed: -6 00:29:58.880 Write completed with error (sct=0, sc=8) 00:29:58.880 starting I/O failed: -6 00:29:58.880 Write completed with error (sct=0, sc=8) 00:29:58.880 Write completed with error (sct=0, sc=8) 00:29:58.881 starting I/O failed: -6 00:29:58.881 Write completed with error (sct=0, sc=8) 00:29:58.881 starting I/O failed: -6 00:29:58.881 Write completed with error (sct=0, sc=8) 00:29:58.881 starting I/O failed: -6 00:29:58.881 Write completed with error (sct=0, sc=8) 00:29:58.881 Write completed with error (sct=0, sc=8) 00:29:58.881 starting I/O failed: -6 00:29:58.881 Write completed with error (sct=0, sc=8) 00:29:58.881 starting I/O failed: -6 00:29:58.881 Write completed with error (sct=0, sc=8) 00:29:58.881 starting I/O failed: -6 00:29:58.881 Write completed with error (sct=0, sc=8) 00:29:58.881 Write completed with error (sct=0, sc=8) 00:29:58.881 starting I/O failed: -6 00:29:58.881 Write completed with error (sct=0, sc=8) 00:29:58.881 starting I/O failed: -6 00:29:58.881 Write completed with error (sct=0, sc=8) 00:29:58.881 starting I/O failed: -6 00:29:58.881 Write completed with error (sct=0, sc=8) 00:29:58.881 Write completed with error (sct=0, sc=8) 00:29:58.881 starting I/O failed: -6 00:29:58.881 Write completed with error (sct=0, sc=8) 00:29:58.881 starting I/O failed: -6 00:29:58.881 Write completed with error (sct=0, sc=8) 00:29:58.881 starting I/O failed: -6 00:29:58.881 Write completed with error (sct=0, sc=8) 00:29:58.881 Write completed with error (sct=0, sc=8) 00:29:58.881 starting I/O failed: -6 00:29:58.881 Write completed with error (sct=0, sc=8) 00:29:58.881 starting I/O failed: -6 00:29:58.881 Write completed with error (sct=0, sc=8) 00:29:58.881 starting I/O failed: -6 00:29:58.881 Write completed with error (sct=0, sc=8) 00:29:58.881 Write completed with error (sct=0, sc=8) 00:29:58.881 starting I/O failed: -6 00:29:58.881 Write completed with error (sct=0, sc=8) 00:29:58.881 starting I/O failed: -6 00:29:58.881 Write completed with error (sct=0, sc=8) 00:29:58.881 starting I/O failed: -6 00:29:58.881 Write completed with error (sct=0, sc=8) 00:29:58.881 Write completed with error (sct=0, sc=8) 00:29:58.881 starting I/O failed: -6 00:29:58.881 Write completed with error (sct=0, sc=8) 00:29:58.881 starting I/O failed: -6 00:29:58.881 Write completed with error (sct=0, sc=8) 00:29:58.881 starting I/O failed: -6 00:29:58.881 Write completed with error (sct=0, sc=8) 00:29:58.881 Write completed with error (sct=0, sc=8) 00:29:58.881 starting I/O failed: -6 00:29:58.881 Write completed with error (sct=0, sc=8) 00:29:58.881 starting I/O failed: -6 00:29:58.881 Write completed with error (sct=0, sc=8) 00:29:58.881 starting I/O failed: -6 00:29:58.881 Write completed with error (sct=0, sc=8) 00:29:58.881 Write completed with error (sct=0, sc=8) 00:29:58.881 starting I/O failed: -6 00:29:58.881 Write completed with error (sct=0, sc=8) 00:29:58.881 starting I/O failed: -6 00:29:58.881 Write completed with error (sct=0, sc=8) 00:29:58.881 starting I/O failed: -6 00:29:58.881 Write completed with error (sct=0, sc=8) 00:29:58.881 Write completed with error (sct=0, sc=8) 00:29:58.881 starting I/O failed: -6 00:29:58.881 [2024-12-06 01:07:31.249076] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:58.881 starting I/O failed: -6 00:29:58.881 Write completed with error (sct=0, sc=8) 00:29:58.881 starting I/O 
failed: -6 00:29:58.881 Write completed with error (sct=0, sc=8) 00:29:58.881 starting I/O failed: -6 00:29:58.881 Write completed with error (sct=0, sc=8) 00:29:58.881 starting I/O failed: -6 00:29:58.881 Write completed with error (sct=0, sc=8) 00:29:58.881 starting I/O failed: -6 00:29:58.881 Write completed with error (sct=0, sc=8) 00:29:58.881 starting I/O failed: -6 00:29:58.881 Write completed with error (sct=0, sc=8) 00:29:58.881 starting I/O failed: -6 00:29:58.881 Write completed with error (sct=0, sc=8) 00:29:58.881 starting I/O failed: -6 00:29:58.881 Write completed with error (sct=0, sc=8) 00:29:58.881 starting I/O failed: -6 00:29:58.881 Write completed with error (sct=0, sc=8) 00:29:58.881 starting I/O failed: -6 00:29:58.881 Write completed with error (sct=0, sc=8) 00:29:58.881 starting I/O failed: -6 00:29:58.881 Write completed with error (sct=0, sc=8) 00:29:58.881 starting I/O failed: -6 00:29:58.881 Write completed with error (sct=0, sc=8) 00:29:58.881 starting I/O failed: -6 00:29:58.881 Write completed with error (sct=0, sc=8) 00:29:58.881 starting I/O failed: -6 00:29:58.881 Write completed with error (sct=0, sc=8) 00:29:58.881 starting I/O failed: -6 00:29:58.881 Write completed with error (sct=0, sc=8) 00:29:58.881 starting I/O failed: -6 00:29:58.881 Write completed with error (sct=0, sc=8) 00:29:58.881 starting I/O failed: -6 00:29:58.881 Write completed with error (sct=0, sc=8) 00:29:58.881 starting I/O failed: -6 00:29:58.881 Write completed with error (sct=0, sc=8) 00:29:58.881 starting I/O failed: -6 00:29:58.881 Write completed with error (sct=0, sc=8) 00:29:58.881 starting I/O failed: -6 00:29:58.881 Write completed with error (sct=0, sc=8) 00:29:58.881 starting I/O failed: -6 00:29:58.881 Write completed with error (sct=0, sc=8) 00:29:58.881 starting I/O failed: -6 00:29:58.881 Write completed with error (sct=0, sc=8) 00:29:58.881 starting I/O failed: -6 00:29:58.881 Write completed with error (sct=0, sc=8) 00:29:58.881 starting I/O failed: -6 00:29:58.881 Write completed with error (sct=0, sc=8) 00:29:58.881 starting I/O failed: -6 00:29:58.881 Write completed with error (sct=0, sc=8) 00:29:58.881 starting I/O failed: -6 00:29:58.881 Write completed with error (sct=0, sc=8) 00:29:58.881 starting I/O failed: -6 00:29:58.881 Write completed with error (sct=0, sc=8) 00:29:58.881 starting I/O failed: -6 00:29:58.881 Write completed with error (sct=0, sc=8) 00:29:58.881 starting I/O failed: -6 00:29:58.881 Write completed with error (sct=0, sc=8) 00:29:58.881 starting I/O failed: -6 00:29:58.881 Write completed with error (sct=0, sc=8) 00:29:58.881 starting I/O failed: -6 00:29:58.881 Write completed with error (sct=0, sc=8) 00:29:58.881 starting I/O failed: -6 00:29:58.881 Write completed with error (sct=0, sc=8) 00:29:58.881 starting I/O failed: -6 00:29:58.881 Write completed with error (sct=0, sc=8) 00:29:58.881 starting I/O failed: -6 00:29:58.881 Write completed with error (sct=0, sc=8) 00:29:58.881 starting I/O failed: -6 00:29:58.881 Write completed with error (sct=0, sc=8) 00:29:58.881 starting I/O failed: -6 00:29:58.881 Write completed with error (sct=0, sc=8) 00:29:58.881 starting I/O failed: -6 00:29:58.881 Write completed with error (sct=0, sc=8) 00:29:58.881 starting I/O failed: -6 00:29:58.881 Write completed with error (sct=0, sc=8) 00:29:58.881 starting I/O failed: -6 00:29:58.881 Write completed with error (sct=0, sc=8) 00:29:58.881 starting I/O failed: -6 00:29:58.881 Write completed with error (sct=0, sc=8) 00:29:58.881 starting I/O 
failed: -6 00:29:58.881 Write completed with error (sct=0, sc=8) 00:29:58.881 starting I/O failed: -6 00:29:58.881 Write completed with error (sct=0, sc=8) 00:29:58.881 starting I/O failed: -6 00:29:58.881 Write completed with error (sct=0, sc=8) 00:29:58.881 starting I/O failed: -6 00:29:58.881 Write completed with error (sct=0, sc=8) 00:29:58.881 starting I/O failed: -6 00:29:58.881 Write completed with error (sct=0, sc=8) 00:29:58.881 starting I/O failed: -6 00:29:58.881 Write completed with error (sct=0, sc=8) 00:29:58.881 starting I/O failed: -6 00:29:58.881 Write completed with error (sct=0, sc=8) 00:29:58.881 starting I/O failed: -6 00:29:58.881 Write completed with error (sct=0, sc=8) 00:29:58.881 starting I/O failed: -6 00:29:58.881 Write completed with error (sct=0, sc=8) 00:29:58.881 starting I/O failed: -6 00:29:58.881 Write completed with error (sct=0, sc=8) 00:29:58.881 starting I/O failed: -6 00:29:58.881 Write completed with error (sct=0, sc=8) 00:29:58.881 starting I/O failed: -6 00:29:58.881 Write completed with error (sct=0, sc=8) 00:29:58.881 starting I/O failed: -6 00:29:58.881 Write completed with error (sct=0, sc=8) 00:29:58.881 starting I/O failed: -6 00:29:58.881 Write completed with error (sct=0, sc=8) 00:29:58.881 starting I/O failed: -6 00:29:58.881 Write completed with error (sct=0, sc=8) 00:29:58.881 starting I/O failed: -6 00:29:58.881 Write completed with error (sct=0, sc=8) 00:29:58.881 starting I/O failed: -6 00:29:58.881 Write completed with error (sct=0, sc=8) 00:29:58.881 starting I/O failed: -6 00:29:58.881 Write completed with error (sct=0, sc=8) 00:29:58.881 starting I/O failed: -6 00:29:58.881 Write completed with error (sct=0, sc=8) 00:29:58.881 starting I/O failed: -6 00:29:58.881 Write completed with error (sct=0, sc=8) 00:29:58.881 starting I/O failed: -6 00:29:58.881 Write completed with error (sct=0, sc=8) 00:29:58.881 starting I/O failed: -6 00:29:58.881 Write completed with error (sct=0, sc=8) 00:29:58.881 starting I/O failed: -6 00:29:58.881 [2024-12-06 01:07:31.261810] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:58.881 NVMe io qpair process completion error 00:29:58.881 Write completed with error (sct=0, sc=8) 00:29:58.881 Write completed with error (sct=0, sc=8) 00:29:58.881 Write completed with error (sct=0, sc=8) 00:29:58.881 starting I/O failed: -6 00:29:58.881 Write completed with error (sct=0, sc=8) 00:29:58.881 Write completed with error (sct=0, sc=8) 00:29:58.881 Write completed with error (sct=0, sc=8) 00:29:58.881 Write completed with error (sct=0, sc=8) 00:29:58.881 starting I/O failed: -6 00:29:58.881 Write completed with error (sct=0, sc=8) 00:29:58.882 Write completed with error (sct=0, sc=8) 00:29:58.882 Write completed with error (sct=0, sc=8) 00:29:58.882 Write completed with error (sct=0, sc=8) 00:29:58.882 starting I/O failed: -6 00:29:58.882 Write completed with error (sct=0, sc=8) 00:29:58.882 Write completed with error (sct=0, sc=8) 00:29:58.882 Write completed with error (sct=0, sc=8) 00:29:58.882 Write completed with error (sct=0, sc=8) 00:29:58.882 starting I/O failed: -6 00:29:58.882 Write completed with error (sct=0, sc=8) 00:29:58.882 Write completed with error (sct=0, sc=8) 00:29:58.882 Write completed with error (sct=0, sc=8) 00:29:58.882 Write completed with error (sct=0, sc=8) 00:29:58.882 starting I/O failed: -6 00:29:58.882 Write completed with error (sct=0, sc=8) 00:29:58.882 Write 
completed with error (sct=0, sc=8) 00:29:58.882 Write completed with error (sct=0, sc=8) 00:29:58.882 Write completed with error (sct=0, sc=8) 00:29:58.882 starting I/O failed: -6 00:29:58.882 Write completed with error (sct=0, sc=8) 00:29:58.882 Write completed with error (sct=0, sc=8) 00:29:58.882 Write completed with error (sct=0, sc=8) 00:29:58.882 Write completed with error (sct=0, sc=8) 00:29:58.882 starting I/O failed: -6 00:29:58.882 Write completed with error (sct=0, sc=8) 00:29:58.882 Write completed with error (sct=0, sc=8) 00:29:58.882 Write completed with error (sct=0, sc=8) 00:29:58.882 Write completed with error (sct=0, sc=8) 00:29:58.882 starting I/O failed: -6 00:29:58.882 Write completed with error (sct=0, sc=8) 00:29:58.882 Write completed with error (sct=0, sc=8) 00:29:58.882 Write completed with error (sct=0, sc=8) 00:29:58.882 Write completed with error (sct=0, sc=8) 00:29:58.882 starting I/O failed: -6 00:29:58.882 Write completed with error (sct=0, sc=8) 00:29:58.882 Write completed with error (sct=0, sc=8) 00:29:58.882 Write completed with error (sct=0, sc=8) 00:29:58.882 [2024-12-06 01:07:31.263712] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:58.882 Write completed with error (sct=0, sc=8) 00:29:58.882 starting I/O failed: -6 00:29:58.882 Write completed with error (sct=0, sc=8) 00:29:58.882 starting I/O failed: -6 00:29:58.882 Write completed with error (sct=0, sc=8) 00:29:58.882 Write completed with error (sct=0, sc=8) 00:29:58.882 Write completed with error (sct=0, sc=8) 00:29:58.882 starting I/O failed: -6 00:29:58.882 Write completed with error (sct=0, sc=8) 00:29:58.882 starting I/O failed: -6 00:29:58.882 Write completed with error (sct=0, sc=8) 00:29:58.882 Write completed with error (sct=0, sc=8) 00:29:58.882 Write completed with error (sct=0, sc=8) 00:29:58.882 starting I/O failed: -6 00:29:58.882 Write completed with error (sct=0, sc=8) 00:29:58.882 starting I/O failed: -6 00:29:58.882 Write completed with error (sct=0, sc=8) 00:29:58.882 Write completed with error (sct=0, sc=8) 00:29:58.882 Write completed with error (sct=0, sc=8) 00:29:58.882 starting I/O failed: -6 00:29:58.882 Write completed with error (sct=0, sc=8) 00:29:58.882 starting I/O failed: -6 00:29:58.882 Write completed with error (sct=0, sc=8) 00:29:58.882 Write completed with error (sct=0, sc=8) 00:29:58.882 Write completed with error (sct=0, sc=8) 00:29:58.882 starting I/O failed: -6 00:29:58.882 Write completed with error (sct=0, sc=8) 00:29:58.882 starting I/O failed: -6 00:29:58.882 Write completed with error (sct=0, sc=8) 00:29:58.882 Write completed with error (sct=0, sc=8) 00:29:58.882 Write completed with error (sct=0, sc=8) 00:29:58.882 starting I/O failed: -6 00:29:58.882 Write completed with error (sct=0, sc=8) 00:29:58.882 starting I/O failed: -6 00:29:58.882 Write completed with error (sct=0, sc=8) 00:29:58.882 Write completed with error (sct=0, sc=8) 00:29:58.882 Write completed with error (sct=0, sc=8) 00:29:58.882 starting I/O failed: -6 00:29:58.882 Write completed with error (sct=0, sc=8) 00:29:58.882 starting I/O failed: -6 00:29:58.882 Write completed with error (sct=0, sc=8) 00:29:58.882 Write completed with error (sct=0, sc=8) 00:29:58.882 Write completed with error (sct=0, sc=8) 00:29:58.882 starting I/O failed: -6 00:29:58.882 Write completed with error (sct=0, sc=8) 00:29:58.882 starting I/O failed: -6 00:29:58.882 Write completed with error 
(sct=0, sc=8)
00:29:58.882 Write completed with error (sct=0, sc=8) [repeated; interleaved with: starting I/O failed: -6]
00:29:58.882 [2024-12-06 01:07:31.265938] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:58.882 Write completed with error (sct=0, sc=8) [repeated; interleaved with: starting I/O failed: -6]
00:29:58.882 [2024-12-06 01:07:31.268589] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:58.883 Write completed with error (sct=0, sc=8) [repeated; interleaved with: starting I/O failed: -6]
00:29:58.883 [2024-12-06 01:07:31.284336] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:29:58.883 NVMe io qpair process completion error
00:29:58.883 Write completed with error (sct=0, sc=8) [repeated; interleaved with: starting I/O failed: -6]
00:29:58.883 [2024-12-06 01:07:31.286448] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:58.883 Write completed with error (sct=0, sc=8) [repeated; interleaved with: starting I/O failed: -6]
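For context on the *ERROR* lines above: they are printed from spdk_nvme_qpair_process_completions() (nvme_qpair.c:812) when the TCP connection backing a qpair goes away, and the host then sees that call fail with -6 (ENXIO, "No such device or address"). The following is only an illustrative sketch of how a host-side poller might react to that return value; the poll_qpair() helper and its policy are assumptions, not code taken from this test.

/*
 * Illustrative sketch (not part of this log): polling an SPDK NVMe I/O qpair
 * and detecting the condition reported above, where
 * spdk_nvme_qpair_process_completions() returns a negative errno such as
 * -ENXIO (-6, "No such device or address") after the transport connection drops.
 */
#include <stdio.h>
#include "spdk/nvme.h"

static int
poll_qpair(struct spdk_nvme_qpair *qpair)
{
        /* max_completions == 0 means "reap everything currently available". */
        int32_t rc = spdk_nvme_qpair_process_completions(qpair, 0);

        if (rc < 0) {
                /* Mirrors the log's "CQ transport error -6" case. */
                fprintf(stderr, "qpair polling failed: %d\n", rc);
                return rc; /* caller can fail outstanding I/O or reconnect */
        }
        return 0; /* rc >= 0 is the number of completions processed */
}

Once the call starts returning an error, writes still queued on that qpair complete with an aborted status, which is consistent with the surrounding "Write completed with error" lines.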
00:29:58.884 Write completed with error (sct=0, sc=8) [repeated; interleaved with: starting I/O failed: -6]
00:29:58.884 [2024-12-06 01:07:31.288724] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:58.884 Write completed with error (sct=0, sc=8) [repeated; interleaved with: starting I/O failed: -6]
00:29:58.884 [2024-12-06 01:07:31.291413] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:58.885 Write completed with error (sct=0, sc=8) [repeated; interleaved with: starting I/O failed: -6]
00:29:58.885 [2024-12-06 01:07:31.302617] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:29:58.885 NVMe io qpair process completion error
00:29:58.885 Write completed with error (sct=0, sc=8) [repeated; interleaved with: starting I/O failed: -6]
00:29:58.885 [2024-12-06 01:07:31.304733] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:58.885 Write completed with error (sct=0, sc=8) [repeated; interleaved with: starting I/O failed: -6]
00:29:58.885 [2024-12-06 01:07:31.306709] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:58.886 Write completed with error (sct=0, sc=8) [repeated; interleaved with: starting I/O failed: -6]
00:29:58.886 [2024-12-06 01:07:31.309461] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:58.886 Write completed with error (sct=0, sc=8) [repeated; interleaved with: starting I/O failed: -6]
00:29:58.886 [2024-12-06 01:07:31.319292] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:29:58.886 NVMe io qpair process completion error
00:29:58.887 Write completed with error (sct=0, sc=8) [repeated; interleaved with: starting I/O failed: -6]
00:29:58.887 [2024-12-06 01:07:31.321208] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:58.887 Write completed with error (sct=0, sc=8) [repeated; interleaved with: starting I/O failed: -6]
00:29:58.887 [2024-12-06 01:07:31.323390] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:58.887 Write completed with error (sct=0, sc=8) [repeated; interleaved with: starting I/O failed: -6]
00:29:58.888 [2024-12-06 01:07:31.326219] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:58.888 Write completed with error (sct=0, sc=8) [repeated; interleaved with: starting I/O failed: -6]
00:29:58.888 [2024-12-06 01:07:31.338570] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:29:58.888 NVMe io qpair process completion error
00:29:58.888 Write completed with error (sct=0, sc=8) [repeated; interleaved with: starting I/O failed: -6]
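The repeated (sct=0, sc=8) status decodes, under the NVMe generic status code type (SCT 0), to SC 0x08, "command aborted due to SQ deletion": the writes were aborted because their submission queues were deleted, consistent with the qpair teardown above. Below is a short illustrative sketch of checking for that status in a write completion callback; write_complete_cb and its cb_arg usage are assumptions, not code from this test.

/*
 * Illustrative sketch (not part of this log): decoding the (sct=0, sc=8)
 * status seen above inside an I/O completion callback, using the SPDK
 * status-code enums. SCT 0 / SC 0x08 is the generic "aborted due to SQ
 * deletion" status.
 */
#include <stdbool.h>
#include "spdk/nvme.h"

static void
write_complete_cb(void *cb_arg, const struct spdk_nvme_cpl *cpl)
{
        bool *aborted_by_sq_deletion = cb_arg; /* illustrative context pointer */

        if (spdk_nvme_cpl_is_error(cpl) &&
            cpl->status.sct == SPDK_NVME_SCT_GENERIC &&
            cpl->status.sc == SPDK_NVME_SC_ABORTED_SQ_DELETION) {
                /* This is the "Write completed with error (sct=0, sc=8)" case. */
                *aborted_by_sq_deletion = true;
        }
}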
00:29:58.888 Write completed with error (sct=0, sc=8) [repeated; interleaved with: starting I/O failed: -6]
00:29:58.888 [2024-12-06 01:07:31.340583] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:58.889 Write completed with error (sct=0, sc=8) [repeated; interleaved with: starting I/O failed: -6]
00:29:58.889 [2024-12-06 01:07:31.342701] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:58.889 Write completed with error (sct=0, sc=8) [repeated; interleaved with: starting I/O failed: -6]
00:29:58.889 [2024-12-06 01:07:31.345561] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:58.889 Write completed with error (sct=0, sc=8) [repeated; interleaved with: starting I/O failed: -6]
00:29:58.890 Write
completed with error (sct=0, sc=8) 00:29:58.890 starting I/O failed: -6 00:29:58.890 Write completed with error (sct=0, sc=8) 00:29:58.890 starting I/O failed: -6 00:29:58.890 Write completed with error (sct=0, sc=8) 00:29:58.890 starting I/O failed: -6 00:29:58.890 Write completed with error (sct=0, sc=8) 00:29:58.890 starting I/O failed: -6 00:29:58.890 Write completed with error (sct=0, sc=8) 00:29:58.890 starting I/O failed: -6 00:29:58.890 Write completed with error (sct=0, sc=8) 00:29:58.890 starting I/O failed: -6 00:29:58.890 Write completed with error (sct=0, sc=8) 00:29:58.890 starting I/O failed: -6 00:29:58.890 Write completed with error (sct=0, sc=8) 00:29:58.890 starting I/O failed: -6 00:29:58.890 Write completed with error (sct=0, sc=8) 00:29:58.890 starting I/O failed: -6 00:29:58.890 Write completed with error (sct=0, sc=8) 00:29:58.890 starting I/O failed: -6 00:29:58.890 Write completed with error (sct=0, sc=8) 00:29:58.890 starting I/O failed: -6 00:29:58.890 Write completed with error (sct=0, sc=8) 00:29:58.890 starting I/O failed: -6 00:29:58.890 Write completed with error (sct=0, sc=8) 00:29:58.890 starting I/O failed: -6 00:29:58.890 Write completed with error (sct=0, sc=8) 00:29:58.890 starting I/O failed: -6 00:29:58.890 Write completed with error (sct=0, sc=8) 00:29:58.890 starting I/O failed: -6 00:29:58.890 Write completed with error (sct=0, sc=8) 00:29:58.890 starting I/O failed: -6 00:29:58.890 Write completed with error (sct=0, sc=8) 00:29:58.890 starting I/O failed: -6 00:29:58.890 Write completed with error (sct=0, sc=8) 00:29:58.890 starting I/O failed: -6 00:29:58.890 Write completed with error (sct=0, sc=8) 00:29:58.890 starting I/O failed: -6 00:29:58.890 Write completed with error (sct=0, sc=8) 00:29:58.890 starting I/O failed: -6 00:29:58.890 Write completed with error (sct=0, sc=8) 00:29:58.890 starting I/O failed: -6 00:29:58.890 Write completed with error (sct=0, sc=8) 00:29:58.890 starting I/O failed: -6 00:29:58.890 Write completed with error (sct=0, sc=8) 00:29:58.890 starting I/O failed: -6 00:29:58.890 Write completed with error (sct=0, sc=8) 00:29:58.890 starting I/O failed: -6 00:29:58.890 Write completed with error (sct=0, sc=8) 00:29:58.890 starting I/O failed: -6 00:29:58.890 Write completed with error (sct=0, sc=8) 00:29:58.890 starting I/O failed: -6 00:29:58.890 Write completed with error (sct=0, sc=8) 00:29:58.890 starting I/O failed: -6 00:29:58.890 Write completed with error (sct=0, sc=8) 00:29:58.890 starting I/O failed: -6 00:29:58.890 Write completed with error (sct=0, sc=8) 00:29:58.890 starting I/O failed: -6 00:29:58.890 Write completed with error (sct=0, sc=8) 00:29:58.890 starting I/O failed: -6 00:29:58.890 Write completed with error (sct=0, sc=8) 00:29:58.890 starting I/O failed: -6 00:29:58.890 [2024-12-06 01:07:31.358032] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:58.890 NVMe io qpair process completion error 00:29:58.890 Write completed with error (sct=0, sc=8) 00:29:58.890 starting I/O failed: -6 00:29:58.890 Write completed with error (sct=0, sc=8) 00:29:58.890 Write completed with error (sct=0, sc=8) 00:29:58.890 Write completed with error (sct=0, sc=8) 00:29:58.890 Write completed with error (sct=0, sc=8) 00:29:58.890 starting I/O failed: -6 00:29:58.890 Write completed with error (sct=0, sc=8) 00:29:58.890 Write completed with error (sct=0, sc=8) 00:29:58.890 Write completed with error 
(sct=0, sc=8) 00:29:58.890 Write completed with error (sct=0, sc=8) 00:29:58.890 starting I/O failed: -6 00:29:58.890 Write completed with error (sct=0, sc=8) 00:29:58.890 Write completed with error (sct=0, sc=8) 00:29:58.890 Write completed with error (sct=0, sc=8) 00:29:58.890 Write completed with error (sct=0, sc=8) 00:29:58.890 starting I/O failed: -6 00:29:58.890 Write completed with error (sct=0, sc=8) 00:29:58.890 Write completed with error (sct=0, sc=8) 00:29:58.890 Write completed with error (sct=0, sc=8) 00:29:58.890 Write completed with error (sct=0, sc=8) 00:29:58.890 starting I/O failed: -6 00:29:58.890 Write completed with error (sct=0, sc=8) 00:29:58.890 Write completed with error (sct=0, sc=8) 00:29:58.890 Write completed with error (sct=0, sc=8) 00:29:58.890 Write completed with error (sct=0, sc=8) 00:29:58.890 starting I/O failed: -6 00:29:58.890 Write completed with error (sct=0, sc=8) 00:29:58.890 Write completed with error (sct=0, sc=8) 00:29:58.890 Write completed with error (sct=0, sc=8) 00:29:58.890 Write completed with error (sct=0, sc=8) 00:29:58.890 starting I/O failed: -6 00:29:58.890 Write completed with error (sct=0, sc=8) 00:29:58.890 Write completed with error (sct=0, sc=8) 00:29:58.890 Write completed with error (sct=0, sc=8) 00:29:58.890 Write completed with error (sct=0, sc=8) 00:29:58.890 starting I/O failed: -6 00:29:58.890 Write completed with error (sct=0, sc=8) 00:29:58.890 Write completed with error (sct=0, sc=8) 00:29:58.890 Write completed with error (sct=0, sc=8) 00:29:58.890 Write completed with error (sct=0, sc=8) 00:29:58.890 starting I/O failed: -6 00:29:58.890 Write completed with error (sct=0, sc=8) 00:29:58.890 Write completed with error (sct=0, sc=8) 00:29:58.890 Write completed with error (sct=0, sc=8) 00:29:58.890 Write completed with error (sct=0, sc=8) 00:29:58.890 starting I/O failed: -6 00:29:58.890 Write completed with error (sct=0, sc=8) 00:29:58.890 Write completed with error (sct=0, sc=8) 00:29:58.890 Write completed with error (sct=0, sc=8) 00:29:58.890 [2024-12-06 01:07:31.360060] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:58.890 starting I/O failed: -6 00:29:58.890 Write completed with error (sct=0, sc=8) 00:29:58.890 Write completed with error (sct=0, sc=8) 00:29:58.890 starting I/O failed: -6 00:29:58.890 Write completed with error (sct=0, sc=8) 00:29:58.890 starting I/O failed: -6 00:29:58.890 Write completed with error (sct=0, sc=8) 00:29:58.890 Write completed with error (sct=0, sc=8) 00:29:58.890 Write completed with error (sct=0, sc=8) 00:29:58.890 starting I/O failed: -6 00:29:58.890 Write completed with error (sct=0, sc=8) 00:29:58.890 starting I/O failed: -6 00:29:58.890 Write completed with error (sct=0, sc=8) 00:29:58.890 Write completed with error (sct=0, sc=8) 00:29:58.890 Write completed with error (sct=0, sc=8) 00:29:58.890 starting I/O failed: -6 00:29:58.890 Write completed with error (sct=0, sc=8) 00:29:58.890 starting I/O failed: -6 00:29:58.890 Write completed with error (sct=0, sc=8) 00:29:58.890 Write completed with error (sct=0, sc=8) 00:29:58.890 Write completed with error (sct=0, sc=8) 00:29:58.890 starting I/O failed: -6 00:29:58.890 Write completed with error (sct=0, sc=8) 00:29:58.890 starting I/O failed: -6 00:29:58.890 Write completed with error (sct=0, sc=8) 00:29:58.890 Write completed with error (sct=0, sc=8) 00:29:58.890 Write completed with error (sct=0, sc=8) 
00:29:58.890 starting I/O failed: -6 00:29:58.890 Write completed with error (sct=0, sc=8) 00:29:58.890 starting I/O failed: -6 00:29:58.890 Write completed with error (sct=0, sc=8) 00:29:58.890 Write completed with error (sct=0, sc=8) 00:29:58.890 Write completed with error (sct=0, sc=8) 00:29:58.890 starting I/O failed: -6 00:29:58.890 Write completed with error (sct=0, sc=8) 00:29:58.890 starting I/O failed: -6 00:29:58.890 Write completed with error (sct=0, sc=8) 00:29:58.890 Write completed with error (sct=0, sc=8) 00:29:58.890 Write completed with error (sct=0, sc=8) 00:29:58.890 starting I/O failed: -6 00:29:58.890 Write completed with error (sct=0, sc=8) 00:29:58.890 starting I/O failed: -6 00:29:58.890 Write completed with error (sct=0, sc=8) 00:29:58.890 Write completed with error (sct=0, sc=8) 00:29:58.890 Write completed with error (sct=0, sc=8) 00:29:58.890 starting I/O failed: -6 00:29:58.890 Write completed with error (sct=0, sc=8) 00:29:58.890 starting I/O failed: -6 00:29:58.890 Write completed with error (sct=0, sc=8) 00:29:58.890 Write completed with error (sct=0, sc=8) 00:29:58.890 Write completed with error (sct=0, sc=8) 00:29:58.890 starting I/O failed: -6 00:29:58.890 Write completed with error (sct=0, sc=8) 00:29:58.890 starting I/O failed: -6 00:29:58.890 Write completed with error (sct=0, sc=8) 00:29:58.890 Write completed with error (sct=0, sc=8) 00:29:58.890 Write completed with error (sct=0, sc=8) 00:29:58.890 starting I/O failed: -6 00:29:58.890 Write completed with error (sct=0, sc=8) 00:29:58.890 starting I/O failed: -6 00:29:58.890 Write completed with error (sct=0, sc=8) 00:29:58.890 Write completed with error (sct=0, sc=8) 00:29:58.890 Write completed with error (sct=0, sc=8) 00:29:58.890 starting I/O failed: -6 00:29:58.890 Write completed with error (sct=0, sc=8) 00:29:58.890 starting I/O failed: -6 00:29:58.890 Write completed with error (sct=0, sc=8) 00:29:58.890 [2024-12-06 01:07:31.362317] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:58.890 starting I/O failed: -6 00:29:58.890 starting I/O failed: -6 00:29:58.890 Write completed with error (sct=0, sc=8) 00:29:58.890 Write completed with error (sct=0, sc=8) 00:29:58.890 starting I/O failed: -6 00:29:58.890 Write completed with error (sct=0, sc=8) 00:29:58.890 starting I/O failed: -6 00:29:58.890 Write completed with error (sct=0, sc=8) 00:29:58.890 starting I/O failed: -6 00:29:58.890 Write completed with error (sct=0, sc=8) 00:29:58.890 Write completed with error (sct=0, sc=8) 00:29:58.890 starting I/O failed: -6 00:29:58.890 Write completed with error (sct=0, sc=8) 00:29:58.890 starting I/O failed: -6 00:29:58.890 Write completed with error (sct=0, sc=8) 00:29:58.890 starting I/O failed: -6 00:29:58.890 Write completed with error (sct=0, sc=8) 00:29:58.890 Write completed with error (sct=0, sc=8) 00:29:58.890 starting I/O failed: -6 00:29:58.890 Write completed with error (sct=0, sc=8) 00:29:58.890 starting I/O failed: -6 00:29:58.890 Write completed with error (sct=0, sc=8) 00:29:58.890 starting I/O failed: -6 00:29:58.891 Write completed with error (sct=0, sc=8) 00:29:58.891 Write completed with error (sct=0, sc=8) 00:29:58.891 starting I/O failed: -6 00:29:58.891 Write completed with error (sct=0, sc=8) 00:29:58.891 starting I/O failed: -6 00:29:58.891 Write completed with error (sct=0, sc=8) 00:29:58.891 starting I/O failed: -6 00:29:58.891 Write completed with error (sct=0, 
sc=8) 00:29:58.891 Write completed with error (sct=0, sc=8) 00:29:58.891 starting I/O failed: -6 00:29:58.891 Write completed with error (sct=0, sc=8) 00:29:58.891 starting I/O failed: -6 00:29:58.891 Write completed with error (sct=0, sc=8) 00:29:58.891 starting I/O failed: -6 00:29:58.891 Write completed with error (sct=0, sc=8) 00:29:58.891 Write completed with error (sct=0, sc=8) 00:29:58.891 starting I/O failed: -6 00:29:58.891 Write completed with error (sct=0, sc=8) 00:29:58.891 starting I/O failed: -6 00:29:58.891 Write completed with error (sct=0, sc=8) 00:29:58.891 starting I/O failed: -6 00:29:58.891 Write completed with error (sct=0, sc=8) 00:29:58.891 Write completed with error (sct=0, sc=8) 00:29:58.891 starting I/O failed: -6 00:29:58.891 Write completed with error (sct=0, sc=8) 00:29:58.891 starting I/O failed: -6 00:29:58.891 Write completed with error (sct=0, sc=8) 00:29:58.891 starting I/O failed: -6 00:29:58.891 Write completed with error (sct=0, sc=8) 00:29:58.891 Write completed with error (sct=0, sc=8) 00:29:58.891 starting I/O failed: -6 00:29:58.891 Write completed with error (sct=0, sc=8) 00:29:58.891 starting I/O failed: -6 00:29:58.891 Write completed with error (sct=0, sc=8) 00:29:58.891 starting I/O failed: -6 00:29:58.891 Write completed with error (sct=0, sc=8) 00:29:58.891 Write completed with error (sct=0, sc=8) 00:29:58.891 starting I/O failed: -6 00:29:58.891 Write completed with error (sct=0, sc=8) 00:29:58.891 starting I/O failed: -6 00:29:58.891 Write completed with error (sct=0, sc=8) 00:29:58.891 starting I/O failed: -6 00:29:58.891 Write completed with error (sct=0, sc=8) 00:29:58.891 Write completed with error (sct=0, sc=8) 00:29:58.891 starting I/O failed: -6 00:29:58.891 Write completed with error (sct=0, sc=8) 00:29:58.891 starting I/O failed: -6 00:29:58.891 Write completed with error (sct=0, sc=8) 00:29:58.891 starting I/O failed: -6 00:29:58.891 Write completed with error (sct=0, sc=8) 00:29:58.891 Write completed with error (sct=0, sc=8) 00:29:58.891 starting I/O failed: -6 00:29:58.891 Write completed with error (sct=0, sc=8) 00:29:58.891 starting I/O failed: -6 00:29:58.891 Write completed with error (sct=0, sc=8) 00:29:58.891 starting I/O failed: -6 00:29:58.891 Write completed with error (sct=0, sc=8) 00:29:58.891 Write completed with error (sct=0, sc=8) 00:29:58.891 starting I/O failed: -6 00:29:58.891 Write completed with error (sct=0, sc=8) 00:29:58.891 starting I/O failed: -6 00:29:58.891 [2024-12-06 01:07:31.365178] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:58.891 starting I/O failed: -6 00:29:58.891 Write completed with error (sct=0, sc=8) 00:29:58.891 starting I/O failed: -6 00:29:58.891 Write completed with error (sct=0, sc=8) 00:29:58.891 starting I/O failed: -6 00:29:58.891 Write completed with error (sct=0, sc=8) 00:29:58.891 starting I/O failed: -6 00:29:58.891 Write completed with error (sct=0, sc=8) 00:29:58.891 starting I/O failed: -6 00:29:58.891 Write completed with error (sct=0, sc=8) 00:29:58.891 starting I/O failed: -6 00:29:58.891 Write completed with error (sct=0, sc=8) 00:29:58.891 starting I/O failed: -6 00:29:58.891 Write completed with error (sct=0, sc=8) 00:29:58.891 starting I/O failed: -6 00:29:58.891 Write completed with error (sct=0, sc=8) 00:29:58.891 starting I/O failed: -6 00:29:58.891 Write completed with error (sct=0, sc=8) 00:29:58.891 starting I/O failed: -6 00:29:58.891 Write 
completed with error (sct=0, sc=8) 00:29:58.891 starting I/O failed: -6 00:29:58.891 Write completed with error (sct=0, sc=8) 00:29:58.891 starting I/O failed: -6 00:29:58.891 Write completed with error (sct=0, sc=8) 00:29:58.891 starting I/O failed: -6 00:29:58.891 Write completed with error (sct=0, sc=8) 00:29:58.891 starting I/O failed: -6 00:29:58.891 Write completed with error (sct=0, sc=8) 00:29:58.891 starting I/O failed: -6 00:29:58.891 Write completed with error (sct=0, sc=8) 00:29:58.891 starting I/O failed: -6 00:29:58.891 Write completed with error (sct=0, sc=8) 00:29:58.891 starting I/O failed: -6 00:29:58.891 Write completed with error (sct=0, sc=8) 00:29:58.891 starting I/O failed: -6 00:29:58.891 Write completed with error (sct=0, sc=8) 00:29:58.891 starting I/O failed: -6 00:29:58.891 Write completed with error (sct=0, sc=8) 00:29:58.891 starting I/O failed: -6 00:29:58.891 Write completed with error (sct=0, sc=8) 00:29:58.891 starting I/O failed: -6 00:29:58.891 Write completed with error (sct=0, sc=8) 00:29:58.891 starting I/O failed: -6 00:29:58.891 Write completed with error (sct=0, sc=8) 00:29:58.891 starting I/O failed: -6 00:29:58.891 Write completed with error (sct=0, sc=8) 00:29:58.891 starting I/O failed: -6 00:29:58.891 Write completed with error (sct=0, sc=8) 00:29:58.891 starting I/O failed: -6 00:29:58.891 Write completed with error (sct=0, sc=8) 00:29:58.891 starting I/O failed: -6 00:29:58.891 Write completed with error (sct=0, sc=8) 00:29:58.891 starting I/O failed: -6 00:29:58.891 Write completed with error (sct=0, sc=8) 00:29:58.891 starting I/O failed: -6 00:29:58.891 Write completed with error (sct=0, sc=8) 00:29:58.891 starting I/O failed: -6 00:29:58.891 Write completed with error (sct=0, sc=8) 00:29:58.891 starting I/O failed: -6 00:29:58.891 Write completed with error (sct=0, sc=8) 00:29:58.891 starting I/O failed: -6 00:29:58.891 Write completed with error (sct=0, sc=8) 00:29:58.891 starting I/O failed: -6 00:29:58.891 Write completed with error (sct=0, sc=8) 00:29:58.891 starting I/O failed: -6 00:29:58.891 Write completed with error (sct=0, sc=8) 00:29:58.891 starting I/O failed: -6 00:29:58.891 Write completed with error (sct=0, sc=8) 00:29:58.891 starting I/O failed: -6 00:29:58.891 Write completed with error (sct=0, sc=8) 00:29:58.891 starting I/O failed: -6 00:29:58.891 Write completed with error (sct=0, sc=8) 00:29:58.891 starting I/O failed: -6 00:29:58.891 Write completed with error (sct=0, sc=8) 00:29:58.891 starting I/O failed: -6 00:29:58.891 Write completed with error (sct=0, sc=8) 00:29:58.891 starting I/O failed: -6 00:29:58.891 Write completed with error (sct=0, sc=8) 00:29:58.891 starting I/O failed: -6 00:29:58.891 Write completed with error (sct=0, sc=8) 00:29:58.891 starting I/O failed: -6 00:29:58.891 Write completed with error (sct=0, sc=8) 00:29:58.891 starting I/O failed: -6 00:29:58.891 Write completed with error (sct=0, sc=8) 00:29:58.891 starting I/O failed: -6 00:29:58.891 Write completed with error (sct=0, sc=8) 00:29:58.891 starting I/O failed: -6 00:29:58.891 Write completed with error (sct=0, sc=8) 00:29:58.891 starting I/O failed: -6 00:29:58.891 Write completed with error (sct=0, sc=8) 00:29:58.891 starting I/O failed: -6 00:29:58.891 Write completed with error (sct=0, sc=8) 00:29:58.891 starting I/O failed: -6 00:29:58.891 Write completed with error (sct=0, sc=8) 00:29:58.891 starting I/O failed: -6 00:29:58.891 Write completed with error (sct=0, sc=8) 00:29:58.891 starting I/O failed: -6 00:29:58.891 Write 
completed with error (sct=0, sc=8) 00:29:58.891 starting I/O failed: -6 00:29:58.891 Write completed with error (sct=0, sc=8) 00:29:58.891 starting I/O failed: -6 00:29:58.891 Write completed with error (sct=0, sc=8) 00:29:58.891 starting I/O failed: -6 00:29:58.891 Write completed with error (sct=0, sc=8) 00:29:58.891 starting I/O failed: -6 00:29:58.891 Write completed with error (sct=0, sc=8) 00:29:58.891 starting I/O failed: -6 00:29:58.891 Write completed with error (sct=0, sc=8) 00:29:58.891 starting I/O failed: -6 00:29:58.891 Write completed with error (sct=0, sc=8) 00:29:58.891 starting I/O failed: -6 00:29:58.891 Write completed with error (sct=0, sc=8) 00:29:58.891 starting I/O failed: -6 00:29:58.891 Write completed with error (sct=0, sc=8) 00:29:58.891 starting I/O failed: -6 00:29:58.891 [2024-12-06 01:07:31.380678] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:58.891 NVMe io qpair process completion error 00:29:58.891 Initializing NVMe Controllers 00:29:58.892 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode2 00:29:58.892 Controller IO queue size 128, less than required. 00:29:58.892 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:58.892 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode6 00:29:58.892 Controller IO queue size 128, less than required. 00:29:58.892 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:58.892 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode8 00:29:58.892 Controller IO queue size 128, less than required. 00:29:58.892 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:58.892 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode9 00:29:58.892 Controller IO queue size 128, less than required. 00:29:58.892 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:58.892 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode4 00:29:58.892 Controller IO queue size 128, less than required. 00:29:58.892 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:58.892 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:58.892 Controller IO queue size 128, less than required. 00:29:58.892 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:58.892 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode3 00:29:58.892 Controller IO queue size 128, less than required. 00:29:58.892 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:58.892 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode10 00:29:58.892 Controller IO queue size 128, less than required. 00:29:58.892 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:58.892 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode7 00:29:58.892 Controller IO queue size 128, less than required. 
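The "Controller IO queue size 128, less than required" advisory above means the attached controllers expose 128-entry I/O queues, which is smaller than what the perf workload asked for, so the excess requests sit queued inside the NVMe driver until slots free up. As a rough illustration only (not what this test script runs), a rerun of the same binary with a lower queue depth and smaller I/O size would avoid that queuing; the flags below are the commonly used spdk_nvme_perf options and should be checked against `spdk_nvme_perf --help` for the build in this job:

    # Illustrative only: keep the per-qpair queue depth (-q) below the controller's
    # 128-entry limit and use a small I/O size (-o, in bytes); -r points at the same
    # NVMe/TCP listener this log connects to.
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf \
        -q 64 -o 4096 -w write -t 10 \
        -r 'trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
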
00:29:58.892 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode5
00:29:58.892 Controller IO queue size 128, less than required.
00:29:58.892 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:29:58.892 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 with lcore 0
00:29:58.892 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 with lcore 0
00:29:58.892 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 with lcore 0
00:29:58.892 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 with lcore 0
00:29:58.892 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 with lcore 0
00:29:58.892 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:29:58.892 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 with lcore 0
00:29:58.892 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 with lcore 0
00:29:58.892 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 with lcore 0
00:29:58.892 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 with lcore 0
00:29:58.892 Initialization complete. Launching workers.
00:29:58.892 ========================================================
00:29:58.892 Latency(us)
00:29:58.892 Device Information : IOPS MiB/s Average min max
00:29:58.892 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 from core 0: 1428.35 61.37 89652.99 1580.88 188982.85
00:29:58.892 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 from core 0: 1495.99 64.28 85769.46 1603.90 203577.64
00:29:58.892 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 from core 0: 1474.39 63.35 87180.37 2276.64 215272.87
00:29:58.892 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 from core 0: 1480.28 63.61 87012.07 1587.58 230925.16
00:29:58.892 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 from core 0: 1452.35 62.41 88891.61 2254.11 278085.48
00:29:58.892 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1465.22 62.96 84706.53 1695.57 159724.19
00:29:58.892 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 from core 0: 1425.95 61.27 87185.17 2268.06 153577.29
00:29:58.892 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 from core 0: 1422.89 61.14 87521.28 2215.11 149677.40
00:29:58.892 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 from core 0: 1417.44 60.91 88089.74 1680.94 183609.52
00:29:58.892 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 from core 0: 1418.53 60.95 88226.74 1560.74 200053.49
00:29:58.892 ========================================================
00:29:58.892 Total : 14481.36 622.25 87408.09 1560.74 278085.48
00:29:58.892
00:29:58.892 [2024-12-06 01:07:31.409908] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000016380 is same with the state(6) to be set
00:29:58.892 [2024-12-06 01:07:31.410062] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000017780 is same with the state(6) to be set
00:29:58.892 [2024-12-06 01:07:31.410147] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000018180 is same with the state(6) to be 
set 00:29:58.892 [2024-12-06 01:07:31.410229] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000018680 is same with the state(6) to be set 00:29:58.892 [2024-12-06 01:07:31.410310] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000016d80 is same with the state(6) to be set 00:29:58.892 [2024-12-06 01:07:31.410395] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000015980 is same with the state(6) to be set 00:29:58.892 [2024-12-06 01:07:31.410479] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000016880 is same with the state(6) to be set 00:29:58.892 [2024-12-06 01:07:31.410563] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000015e80 is same with the state(6) to be set 00:29:58.892 [2024-12-06 01:07:31.410659] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000017c80 is same with the state(6) to be set 00:29:58.892 [2024-12-06 01:07:31.410754] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000017280 is same with the state(6) to be set 00:29:58.892 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:30:01.421 01:07:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@156 -- # sleep 1 00:30:02.356 01:07:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@158 -- # NOT wait 3059265 00:30:02.356 01:07:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@652 -- # local es=0 00:30:02.356 01:07:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 3059265 00:30:02.356 01:07:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@640 -- # local arg=wait 00:30:02.356 01:07:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:02.356 01:07:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # type -t wait 00:30:02.356 01:07:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:02.356 01:07:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # wait 3059265 00:30:02.356 01:07:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # es=1 00:30:02.356 01:07:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:30:02.356 01:07:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:30:02.356 01:07:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:30:02.356 01:07:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@159 -- # stoptarget 00:30:02.356 01:07:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:30:02.356 01:07:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:30:02.356 
01:07:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:30:02.357 01:07:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@46 -- # nvmftestfini 00:30:02.357 01:07:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:02.357 01:07:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@121 -- # sync 00:30:02.357 01:07:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:02.357 01:07:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@124 -- # set +e 00:30:02.357 01:07:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:02.357 01:07:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:02.357 rmmod nvme_tcp 00:30:02.357 rmmod nvme_fabrics 00:30:02.357 rmmod nvme_keyring 00:30:02.357 01:07:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:02.357 01:07:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@128 -- # set -e 00:30:02.357 01:07:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@129 -- # return 0 00:30:02.357 01:07:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@517 -- # '[' -n 3058945 ']' 00:30:02.357 01:07:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@518 -- # killprocess 3058945 00:30:02.357 01:07:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 3058945 ']' 00:30:02.357 01:07:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 3058945 00:30:02.357 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (3058945) - No such process 00:30:02.357 01:07:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@981 -- # echo 'Process with pid 3058945 is not found' 00:30:02.357 Process with pid 3058945 is not found 00:30:02.357 01:07:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:02.357 01:07:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:02.357 01:07:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:02.357 01:07:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@297 -- # iptr 00:30:02.357 01:07:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-save 00:30:02.357 01:07:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-restore 00:30:02.357 01:07:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:02.357 01:07:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:02.357 01:07:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:02.357 01:07:35 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:02.357 01:07:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:02.357 01:07:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:04.893 01:07:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:04.893 00:30:04.893 real 0m13.444s 00:30:04.893 user 0m38.189s 00:30:04.893 sys 0m4.966s 00:30:04.893 01:07:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:04.893 01:07:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:30:04.893 ************************************ 00:30:04.893 END TEST nvmf_shutdown_tc4 00:30:04.893 ************************************ 00:30:04.893 01:07:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@170 -- # trap - SIGINT SIGTERM EXIT 00:30:04.893 00:30:04.893 real 0m54.925s 00:30:04.893 user 2m49.633s 00:30:04.893 sys 0m13.063s 00:30:04.893 01:07:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:04.893 01:07:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:30:04.893 ************************************ 00:30:04.893 END TEST nvmf_shutdown 00:30:04.893 ************************************ 00:30:04.893 01:07:37 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@67 -- # run_test nvmf_nsid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:30:04.893 01:07:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:30:04.893 01:07:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:04.893 01:07:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:30:04.893 ************************************ 00:30:04.893 START TEST nvmf_nsid 00:30:04.893 ************************************ 00:30:04.893 01:07:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:30:04.893 * Looking for test storage... 
00:30:04.893 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:04.893 01:07:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:30:04.893 01:07:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # lcov --version 00:30:04.893 01:07:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:30:04.893 01:07:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:30:04.893 01:07:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:04.893 01:07:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:04.893 01:07:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:04.893 01:07:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # IFS=.-: 00:30:04.893 01:07:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # read -ra ver1 00:30:04.893 01:07:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # IFS=.-: 00:30:04.893 01:07:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # read -ra ver2 00:30:04.893 01:07:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@338 -- # local 'op=<' 00:30:04.893 01:07:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@340 -- # ver1_l=2 00:30:04.893 01:07:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@341 -- # ver2_l=1 00:30:04.893 01:07:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:04.893 01:07:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@344 -- # case "$op" in 00:30:04.893 01:07:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@345 -- # : 1 00:30:04.893 01:07:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:04.893 01:07:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:04.893 01:07:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # decimal 1 00:30:04.893 01:07:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=1 00:30:04.893 01:07:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:04.893 01:07:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 1 00:30:04.893 01:07:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # ver1[v]=1 00:30:04.893 01:07:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # decimal 2 00:30:04.893 01:07:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=2 00:30:04.893 01:07:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:04.893 01:07:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 2 00:30:04.893 01:07:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # ver2[v]=2 00:30:04.893 01:07:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:04.893 01:07:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:04.893 01:07:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # return 0 00:30:04.893 01:07:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:04.894 01:07:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:30:04.894 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:04.894 --rc genhtml_branch_coverage=1 00:30:04.894 --rc genhtml_function_coverage=1 00:30:04.894 --rc genhtml_legend=1 00:30:04.894 --rc geninfo_all_blocks=1 00:30:04.894 --rc geninfo_unexecuted_blocks=1 00:30:04.894 00:30:04.894 ' 00:30:04.894 01:07:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:30:04.894 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:04.894 --rc genhtml_branch_coverage=1 00:30:04.894 --rc genhtml_function_coverage=1 00:30:04.894 --rc genhtml_legend=1 00:30:04.894 --rc geninfo_all_blocks=1 00:30:04.894 --rc geninfo_unexecuted_blocks=1 00:30:04.894 00:30:04.894 ' 00:30:04.894 01:07:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:30:04.894 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:04.894 --rc genhtml_branch_coverage=1 00:30:04.894 --rc genhtml_function_coverage=1 00:30:04.894 --rc genhtml_legend=1 00:30:04.894 --rc geninfo_all_blocks=1 00:30:04.894 --rc geninfo_unexecuted_blocks=1 00:30:04.894 00:30:04.894 ' 00:30:04.894 01:07:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:30:04.894 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:04.894 --rc genhtml_branch_coverage=1 00:30:04.894 --rc genhtml_function_coverage=1 00:30:04.894 --rc genhtml_legend=1 00:30:04.894 --rc geninfo_all_blocks=1 00:30:04.894 --rc geninfo_unexecuted_blocks=1 00:30:04.894 00:30:04.894 ' 00:30:04.894 01:07:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:04.894 01:07:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # uname -s 00:30:04.894 01:07:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:30:04.894 01:07:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:04.894 01:07:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:04.894 01:07:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:04.894 01:07:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:04.894 01:07:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:04.894 01:07:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:04.894 01:07:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:04.894 01:07:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:04.894 01:07:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:04.894 01:07:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:30:04.894 01:07:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:30:04.894 01:07:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:04.894 01:07:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:04.894 01:07:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:04.894 01:07:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:04.894 01:07:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:04.894 01:07:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@15 -- # shopt -s extglob 00:30:04.894 01:07:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:04.894 01:07:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:04.894 01:07:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:04.894 01:07:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:04.894 01:07:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:04.894 01:07:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:04.894 01:07:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@5 -- # export PATH 00:30:04.894 01:07:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:04.894 01:07:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@51 -- # : 0 00:30:04.894 01:07:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:04.894 01:07:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:04.894 01:07:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:04.894 01:07:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:04.894 01:07:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:04.894 01:07:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:30:04.894 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:30:04.894 01:07:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:04.894 01:07:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:04.894 01:07:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:04.894 01:07:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@11 -- # subnqn1=nqn.2024-10.io.spdk:cnode0 00:30:04.894 01:07:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@12 -- # subnqn2=nqn.2024-10.io.spdk:cnode1 00:30:04.894 01:07:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
target/nsid.sh@13 -- # subnqn3=nqn.2024-10.io.spdk:cnode2 00:30:04.894 01:07:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@14 -- # tgt2sock=/var/tmp/tgt2.sock 00:30:04.894 01:07:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@15 -- # tgt2pid= 00:30:04.894 01:07:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@46 -- # nvmftestinit 00:30:04.894 01:07:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:04.894 01:07:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:04.894 01:07:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:04.894 01:07:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:04.894 01:07:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:04.894 01:07:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:04.894 01:07:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:04.894 01:07:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:04.894 01:07:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:04.894 01:07:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:04.894 01:07:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@309 -- # xtrace_disable 00:30:04.894 01:07:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:30:06.798 01:07:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:06.798 01:07:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # pci_devs=() 00:30:06.798 01:07:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:06.798 01:07:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:06.798 01:07:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:06.798 01:07:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:06.798 01:07:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:06.798 01:07:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # net_devs=() 00:30:06.798 01:07:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:06.798 01:07:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # e810=() 00:30:06.798 01:07:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # local -ga e810 00:30:06.798 01:07:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # x722=() 00:30:06.798 01:07:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # local -ga x722 00:30:06.798 01:07:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@322 -- # mlx=() 00:30:06.798 01:07:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@322 -- # local -ga mlx 00:30:06.798 01:07:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:06.798 01:07:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:06.798 01:07:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:06.798 01:07:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:06.798 01:07:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:06.798 01:07:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:06.798 01:07:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:06.798 01:07:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:06.798 01:07:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:06.798 01:07:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:06.798 01:07:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:06.798 01:07:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:06.798 01:07:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:06.798 01:07:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:06.798 01:07:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:06.798 01:07:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:06.798 01:07:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:06.798 01:07:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:06.798 01:07:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:06.798 01:07:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:30:06.798 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:30:06.799 01:07:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:06.799 01:07:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:06.799 01:07:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:06.799 01:07:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:06.799 01:07:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:06.799 01:07:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:06.799 01:07:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:30:06.799 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:30:06.799 01:07:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:06.799 01:07:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:06.799 01:07:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:06.799 01:07:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:06.799 01:07:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 
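The trace above is nvmf/common.sh classifying the two Intel E810 ports (vendor 0x8086, device 0x159b) into its e810 array before it walks /sys/bus/pci/devices/$pci/net/ for network interfaces. As a rough sketch of the same lookup done directly against sysfs (not the SPDK helper itself; variable names here are illustrative only), one could write:

#!/usr/bin/env bash
# Sketch: list PCI functions matching the Intel E810 IDs reported in the
# trace (vendor 0x8086, device 0x159b) and print their net devices.
# Assumes the standard sysfs layout /sys/bus/pci/devices/<BDF>/{vendor,device,net}.
intel=0x8086
e810_dev=0x159b
for pci in /sys/bus/pci/devices/*; do
  [[ $(cat "$pci/vendor" 2>/dev/null) == "$intel" ]] || continue
  [[ $(cat "$pci/device" 2>/dev/null) == "$e810_dev" ]] || continue
  # Each matching function may expose one or more interfaces under net/
  for net in "$pci"/net/*; do
    [[ -e $net ]] && echo "Found ${pci##*/}: ${net##*/}"
  done
done
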
00:30:06.799 01:07:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:06.799 01:07:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:06.799 01:07:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:06.799 01:07:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:06.799 01:07:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:06.799 01:07:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:06.799 01:07:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:06.799 01:07:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:06.799 01:07:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:06.799 01:07:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:06.799 01:07:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:30:06.799 Found net devices under 0000:0a:00.0: cvl_0_0 00:30:06.799 01:07:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:06.799 01:07:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:06.799 01:07:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:06.799 01:07:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:06.799 01:07:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:06.799 01:07:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:06.799 01:07:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:06.799 01:07:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:06.799 01:07:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:30:06.799 Found net devices under 0000:0a:00.1: cvl_0_1 00:30:06.799 01:07:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:06.799 01:07:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:06.799 01:07:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # is_hw=yes 00:30:06.799 01:07:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:06.799 01:07:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:06.799 01:07:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:06.799 01:07:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:06.799 01:07:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:06.799 01:07:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:06.799 01:07:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:06.799 01:07:39 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:06.799 01:07:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:06.799 01:07:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:06.799 01:07:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:06.799 01:07:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:06.799 01:07:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:06.799 01:07:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:06.799 01:07:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:06.799 01:07:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:06.799 01:07:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:06.799 01:07:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:06.799 01:07:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:06.799 01:07:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:06.799 01:07:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:06.799 01:07:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:06.799 01:07:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:06.799 01:07:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:06.799 01:07:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:06.799 01:07:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:06.799 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:06.799 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.255 ms 00:30:06.799 00:30:06.799 --- 10.0.0.2 ping statistics --- 00:30:06.799 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:06.799 rtt min/avg/max/mdev = 0.255/0.255/0.255/0.000 ms 00:30:06.799 01:07:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:06.799 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:06.799 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.072 ms 00:30:06.799 00:30:06.799 --- 10.0.0.1 ping statistics --- 00:30:06.799 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:06.799 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:30:06.799 01:07:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:06.799 01:07:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@450 -- # return 0 00:30:06.799 01:07:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:06.799 01:07:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:06.799 01:07:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:06.799 01:07:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:06.799 01:07:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:06.799 01:07:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:06.799 01:07:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:06.799 01:07:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@47 -- # nvmfappstart -m 1 00:30:06.799 01:07:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:06.799 01:07:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:06.799 01:07:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:30:06.799 01:07:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@509 -- # nvmfpid=3062144 00:30:06.799 01:07:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1 00:30:06.799 01:07:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@510 -- # waitforlisten 3062144 00:30:06.799 01:07:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 3062144 ']' 00:30:06.799 01:07:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:06.799 01:07:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:06.799 01:07:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:06.799 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:06.799 01:07:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:06.799 01:07:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:30:07.058 [2024-12-06 01:07:39.545803] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 24.03.0 initialization... 
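At this point the test has built its point-to-point NVMe/TCP topology: the cvl_0_0 port is moved into the cvl_0_0_ns_spdk namespace with 10.0.0.2/24, the cvl_0_1 peer stays in the root namespace with 10.0.0.1/24, TCP port 4420 is opened in iptables, both directions are verified with ping, and nvmf_tgt is then launched inside the namespace. A minimal sketch of that shape, using a veth pair as a stand-in for the physical E810 ports (an assumption so the sketch runs anywhere; the log uses real NICs), run as root:

#!/usr/bin/env bash
# Sketch of the namespace-based loopback the trace sets up: target side in
# its own netns with 10.0.0.2/24, initiator in the root namespace with
# 10.0.0.1/24, NVMe/TCP port 4420 allowed in. Names are placeholders.
set -e
NS=demo_ns_spdk
ip netns add "$NS"
ip link add veth_host type veth peer name veth_tgt
ip link set veth_tgt netns "$NS"
ip addr add 10.0.0.1/24 dev veth_host
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev veth_tgt
ip link set veth_host up
ip netns exec "$NS" ip link set veth_tgt up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i veth_host -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                        # root ns -> target ns
ip netns exec "$NS" ping -c 1 10.0.0.1    # target ns -> root ns

Keeping target and initiator in separate namespaces lets a single host exercise a real interface-to-interface TCP path instead of looping back over lo, which is why the trace prepends the ip netns exec prefix to every nvmf_tgt invocation that follows.
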
00:30:07.058 [2024-12-06 01:07:39.545964] [ DPDK EAL parameters: nvmf -c 1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:07.058 [2024-12-06 01:07:39.699592] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:07.317 [2024-12-06 01:07:39.836040] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:07.317 [2024-12-06 01:07:39.836149] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:07.317 [2024-12-06 01:07:39.836175] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:07.317 [2024-12-06 01:07:39.836199] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:07.317 [2024-12-06 01:07:39.836218] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:07.317 [2024-12-06 01:07:39.837858] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:07.883 01:07:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:07.883 01:07:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:30:07.883 01:07:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:07.883 01:07:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:07.883 01:07:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:30:07.883 01:07:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:07.883 01:07:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@49 -- # trap cleanup SIGINT SIGTERM EXIT 00:30:07.883 01:07:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@52 -- # tgt2pid=3062293 00:30:07.883 01:07:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/tgt2.sock 00:30:07.883 01:07:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@54 -- # tgt1addr=10.0.0.2 00:30:07.883 01:07:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # get_main_ns_ip 00:30:07.883 01:07:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@769 -- # local ip 00:30:07.883 01:07:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:07.883 01:07:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:07.883 01:07:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:07.883 01:07:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:07.883 01:07:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:07.883 01:07:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:07.883 01:07:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:07.883 01:07:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:07.883 01:07:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@783 -- # echo 
10.0.0.1 00:30:07.883 01:07:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # tgt2addr=10.0.0.1 00:30:07.883 01:07:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # uuidgen 00:30:07.883 01:07:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # ns1uuid=ba55b934-ce58-40a7-8d00-cbb02f3d1898 00:30:07.883 01:07:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # uuidgen 00:30:07.883 01:07:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # ns2uuid=a6fe05e0-deee-4877-bbbb-f4ffa8152ecb 00:30:07.883 01:07:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # uuidgen 00:30:07.883 01:07:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # ns3uuid=aeb40421-381d-48b8-8943-50b72e589035 00:30:07.883 01:07:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@63 -- # rpc_cmd 00:30:07.883 01:07:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:07.883 01:07:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:30:07.883 null0 00:30:07.883 null1 00:30:08.141 null2 00:30:08.141 [2024-12-06 01:07:40.601540] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:08.141 [2024-12-06 01:07:40.625905] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:08.141 [2024-12-06 01:07:40.661140] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 24.03.0 initialization... 00:30:08.141 [2024-12-06 01:07:40.661286] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3062293 ] 00:30:08.141 01:07:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:08.141 01:07:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@79 -- # waitforlisten 3062293 /var/tmp/tgt2.sock 00:30:08.141 01:07:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 3062293 ']' 00:30:08.141 01:07:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/tgt2.sock 00:30:08.141 01:07:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:08.141 01:07:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock...' 00:30:08.141 Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock... 
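tgt2 here is a second SPDK application started with its own RPC socket (-r /var/tmp/tgt2.sock) so target/nsid.sh can configure it independently of the first target, whose rpc_cmd traffic goes to the default /var/tmp/spdk.sock. A sketch of that pattern, assuming the commonly used rpc.py method names and arguments; the exact RPC sequence nsid.sh issues is collapsed into rpc_cmd in the trace and is not reproduced verbatim here:

#!/usr/bin/env bash
# Sketch: run a second SPDK target on a private RPC socket and configure a
# minimal NVMe-oF/TCP subsystem on it. SPDK_DIR, the socket path, and the
# NQN mirror the trace; the rpc.py calls are illustrative, not a copy of
# what target/nsid.sh actually runs.
SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
SOCK=/var/tmp/tgt2.sock

"$SPDK_DIR/build/bin/spdk_tgt" -m 2 -r "$SOCK" &
tgt2pid=$!

# Poll until the UNIX-domain RPC socket answers (the role waitforlisten
# plays in the trace).
for _ in $(seq 1 30); do
  "$SPDK_DIR/scripts/rpc.py" -s "$SOCK" rpc_get_methods >/dev/null 2>&1 && break
  sleep 1
done

rpc() { "$SPDK_DIR/scripts/rpc.py" -s "$SOCK" "$@"; }
rpc nvmf_create_transport -t tcp
rpc bdev_null_create null0 64 512
rpc nvmf_create_subsystem nqn.2024-10.io.spdk:cnode2 --allow-any-host
rpc nvmf_subsystem_add_ns nqn.2024-10.io.spdk:cnode2 null0
rpc nvmf_subsystem_add_listener nqn.2024-10.io.spdk:cnode2 -t tcp -a 10.0.0.1 -s 4421

echo "tgt2 running as pid $tgt2pid"

In the trace, tgt2pid is captured the same way so that the cleanup trap registered on line 49 of nsid.sh can kill the second target when the test exits.
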
00:30:08.141 01:07:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:08.141 01:07:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:30:08.141 [2024-12-06 01:07:40.807681] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:08.400 [2024-12-06 01:07:40.936286] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:09.334 01:07:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:09.334 01:07:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:30:09.334 01:07:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/tgt2.sock 00:30:09.592 [2024-12-06 01:07:42.295520] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:09.851 [2024-12-06 01:07:42.311894] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.1 port 4421 *** 00:30:09.851 nvme0n1 nvme0n2 00:30:09.851 nvme1n1 00:30:09.851 01:07:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # nvme_connect 00:30:09.851 01:07:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@23 -- # local ctrlr 00:30:09.851 01:07:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@25 -- # nvme connect -t tcp -a 10.0.0.1 -s 4421 -n nqn.2024-10.io.spdk:cnode2 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 00:30:10.418 01:07:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@28 -- # for ctrlr in /sys/class/nvme/nvme* 00:30:10.418 01:07:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ -e /sys/class/nvme/nvme0/subsysnqn ]] 00:30:10.418 01:07:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ nqn.2024-10.io.spdk:cnode2 == \n\q\n\.\2\0\2\4\-\1\0\.\i\o\.\s\p\d\k\:\c\n\o\d\e\2 ]] 00:30:10.418 01:07:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@31 -- # echo nvme0 00:30:10.418 01:07:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@32 -- # return 0 00:30:10.418 01:07:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # ctrlr=nvme0 00:30:10.418 01:07:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@95 -- # waitforblk nvme0n1 00:30:10.418 01:07:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:30:10.418 01:07:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:30:10.418 01:07:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:30:10.418 01:07:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1241 -- # '[' 0 -lt 15 ']' 00:30:10.418 01:07:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1242 -- # i=1 00:30:10.418 01:07:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1243 -- # sleep 1 00:30:11.354 01:07:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:30:11.354 01:07:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:30:11.354 01:07:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:30:11.354 01:07:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:30:11.354 01:07:43 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:30:11.354 01:07:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # uuid2nguid ba55b934-ce58-40a7-8d00-cbb02f3d1898 00:30:11.354 01:07:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:30:11.354 01:07:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # nvme_get_nguid nvme0 1 00:30:11.354 01:07:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=1 nguid 00:30:11.354 01:07:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n1 -o json 00:30:11.354 01:07:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:30:11.354 01:07:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=ba55b934ce5840a78d00cbb02f3d1898 00:30:11.354 01:07:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo BA55B934CE5840A78D00CBB02F3D1898 00:30:11.354 01:07:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # [[ BA55B934CE5840A78D00CBB02F3D1898 == \B\A\5\5\B\9\3\4\C\E\5\8\4\0\A\7\8\D\0\0\C\B\B\0\2\F\3\D\1\8\9\8 ]] 00:30:11.354 01:07:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@97 -- # waitforblk nvme0n2 00:30:11.354 01:07:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:30:11.354 01:07:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:30:11.354 01:07:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n2 00:30:11.354 01:07:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:30:11.354 01:07:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n2 00:30:11.354 01:07:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:30:11.354 01:07:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # uuid2nguid a6fe05e0-deee-4877-bbbb-f4ffa8152ecb 00:30:11.354 01:07:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:30:11.354 01:07:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # nvme_get_nguid nvme0 2 00:30:11.354 01:07:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=2 nguid 00:30:11.354 01:07:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n2 -o json 00:30:11.354 01:07:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:30:11.643 01:07:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=a6fe05e0deee4877bbbbf4ffa8152ecb 00:30:11.643 01:07:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo A6FE05E0DEEE4877BBBBF4FFA8152ECB 00:30:11.643 01:07:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # [[ A6FE05E0DEEE4877BBBBF4FFA8152ECB == \A\6\F\E\0\5\E\0\D\E\E\E\4\8\7\7\B\B\B\B\F\4\F\F\A\8\1\5\2\E\C\B ]] 00:30:11.643 01:07:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@99 -- # waitforblk nvme0n3 00:30:11.643 01:07:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:30:11.643 01:07:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:30:11.643 01:07:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n3 00:30:11.643 01:07:44 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:30:11.643 01:07:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n3 00:30:11.643 01:07:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:30:11.643 01:07:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # uuid2nguid aeb40421-381d-48b8-8943-50b72e589035 00:30:11.643 01:07:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:30:11.643 01:07:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # nvme_get_nguid nvme0 3 00:30:11.643 01:07:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=3 nguid 00:30:11.643 01:07:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n3 -o json 00:30:11.643 01:07:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:30:11.643 01:07:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=aeb40421381d48b8894350b72e589035 00:30:11.644 01:07:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo AEB40421381D48B8894350B72E589035 00:30:11.644 01:07:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # [[ AEB40421381D48B8894350B72E589035 == \A\E\B\4\0\4\2\1\3\8\1\D\4\8\B\8\8\9\4\3\5\0\B\7\2\E\5\8\9\0\3\5 ]] 00:30:11.644 01:07:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@101 -- # nvme disconnect -d /dev/nvme0 00:30:11.927 01:07:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:30:11.927 01:07:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@104 -- # cleanup 00:30:11.927 01:07:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@18 -- # killprocess 3062293 00:30:11.927 01:07:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 3062293 ']' 00:30:11.927 01:07:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 3062293 00:30:11.927 01:07:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:30:11.927 01:07:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:11.927 01:07:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3062293 00:30:11.927 01:07:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:30:11.927 01:07:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:30:11.927 01:07:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3062293' 00:30:11.927 killing process with pid 3062293 00:30:11.927 01:07:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 3062293 00:30:11.927 01:07:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 3062293 00:30:14.469 01:07:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@19 -- # nvmftestfini 00:30:14.469 01:07:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:14.469 01:07:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@121 -- # sync 00:30:14.469 01:07:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:14.469 01:07:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@124 -- 
# set +e 00:30:14.469 01:07:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:14.469 01:07:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:14.469 rmmod nvme_tcp 00:30:14.469 rmmod nvme_fabrics 00:30:14.469 rmmod nvme_keyring 00:30:14.469 01:07:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:14.469 01:07:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@128 -- # set -e 00:30:14.469 01:07:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@129 -- # return 0 00:30:14.469 01:07:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@517 -- # '[' -n 3062144 ']' 00:30:14.469 01:07:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@518 -- # killprocess 3062144 00:30:14.469 01:07:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 3062144 ']' 00:30:14.469 01:07:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 3062144 00:30:14.469 01:07:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:30:14.469 01:07:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:14.469 01:07:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3062144 00:30:14.469 01:07:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:14.469 01:07:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:14.469 01:07:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3062144' 00:30:14.469 killing process with pid 3062144 00:30:14.469 01:07:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 3062144 00:30:14.469 01:07:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 3062144 00:30:15.406 01:07:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:15.406 01:07:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:15.406 01:07:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:15.407 01:07:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@297 -- # iptr 00:30:15.407 01:07:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-save 00:30:15.407 01:07:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:15.407 01:07:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-restore 00:30:15.407 01:07:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:15.407 01:07:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:15.407 01:07:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:15.407 01:07:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:15.407 01:07:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:17.314 01:07:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:17.314 00:30:17.314 real 0m12.663s 00:30:17.314 user 
0m15.495s 00:30:17.314 sys 0m3.073s 00:30:17.314 01:07:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:17.314 01:07:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:30:17.314 ************************************ 00:30:17.314 END TEST nvmf_nsid 00:30:17.314 ************************************ 00:30:17.314 01:07:49 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:30:17.314 00:30:17.314 real 18m38.327s 00:30:17.314 user 51m9.308s 00:30:17.314 sys 3m38.317s 00:30:17.314 01:07:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:17.314 01:07:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:30:17.314 ************************************ 00:30:17.314 END TEST nvmf_target_extra 00:30:17.314 ************************************ 00:30:17.314 01:07:49 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:30:17.314 01:07:49 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:30:17.314 01:07:49 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:17.314 01:07:49 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:17.314 ************************************ 00:30:17.314 START TEST nvmf_host 00:30:17.314 ************************************ 00:30:17.314 01:07:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:30:17.314 * Looking for test storage... 00:30:17.314 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:30:17.314 01:07:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:30:17.314 01:07:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1711 -- # lcov --version 00:30:17.314 01:07:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:30:17.575 01:07:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:30:17.575 01:07:50 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:17.575 01:07:50 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:17.575 01:07:50 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:17.575 01:07:50 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:30:17.575 01:07:50 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:30:17.575 01:07:50 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:30:17.575 01:07:50 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:30:17.575 01:07:50 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:30:17.575 01:07:50 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:30:17.575 01:07:50 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:30:17.575 01:07:50 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:17.575 01:07:50 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:30:17.575 01:07:50 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:30:17.575 01:07:50 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:17.575 01:07:50 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:17.575 01:07:50 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:30:17.575 01:07:50 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:30:17.575 01:07:50 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:17.575 01:07:50 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:30:17.575 01:07:50 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:30:17.575 01:07:50 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:30:17.575 01:07:50 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:30:17.575 01:07:50 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:17.575 01:07:50 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:30:17.575 01:07:50 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:30:17.575 01:07:50 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:17.575 01:07:50 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:17.575 01:07:50 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:30:17.575 01:07:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:17.575 01:07:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:30:17.575 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:17.575 --rc genhtml_branch_coverage=1 00:30:17.575 --rc genhtml_function_coverage=1 00:30:17.575 --rc genhtml_legend=1 00:30:17.575 --rc geninfo_all_blocks=1 00:30:17.575 --rc geninfo_unexecuted_blocks=1 00:30:17.575 00:30:17.575 ' 00:30:17.575 01:07:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:30:17.575 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:17.575 --rc genhtml_branch_coverage=1 00:30:17.575 --rc genhtml_function_coverage=1 00:30:17.575 --rc genhtml_legend=1 00:30:17.575 --rc geninfo_all_blocks=1 00:30:17.575 --rc geninfo_unexecuted_blocks=1 00:30:17.575 00:30:17.575 ' 00:30:17.575 01:07:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:30:17.575 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:17.575 --rc genhtml_branch_coverage=1 00:30:17.575 --rc genhtml_function_coverage=1 00:30:17.575 --rc genhtml_legend=1 00:30:17.575 --rc geninfo_all_blocks=1 00:30:17.575 --rc geninfo_unexecuted_blocks=1 00:30:17.575 00:30:17.575 ' 00:30:17.575 01:07:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:30:17.575 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:17.575 --rc genhtml_branch_coverage=1 00:30:17.575 --rc genhtml_function_coverage=1 00:30:17.575 --rc genhtml_legend=1 00:30:17.575 --rc geninfo_all_blocks=1 00:30:17.575 --rc geninfo_unexecuted_blocks=1 00:30:17.575 00:30:17.575 ' 00:30:17.575 01:07:50 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:17.575 01:07:50 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:30:17.575 01:07:50 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:17.575 01:07:50 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:17.575 01:07:50 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:17.575 01:07:50 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:17.575 01:07:50 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
00:30:17.575 01:07:50 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:17.575 01:07:50 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:17.575 01:07:50 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:17.575 01:07:50 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:17.575 01:07:50 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:17.575 01:07:50 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:30:17.575 01:07:50 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:30:17.575 01:07:50 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:17.575 01:07:50 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:17.575 01:07:50 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:17.575 01:07:50 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:17.575 01:07:50 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:17.575 01:07:50 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:30:17.575 01:07:50 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:17.575 01:07:50 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:17.575 01:07:50 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:17.575 01:07:50 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:17.575 01:07:50 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:17.575 01:07:50 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:17.575 01:07:50 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:30:17.575 01:07:50 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:17.575 01:07:50 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:30:17.575 01:07:50 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:17.575 01:07:50 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:17.575 01:07:50 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:17.575 01:07:50 nvmf_tcp.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:17.575 01:07:50 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:17.575 01:07:50 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:30:17.575 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:30:17.575 01:07:50 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:17.575 01:07:50 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:17.575 01:07:50 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:17.575 01:07:50 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:30:17.575 01:07:50 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:30:17.575 01:07:50 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:30:17.575 01:07:50 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:30:17.575 01:07:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:30:17.575 01:07:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:17.575 01:07:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:30:17.575 ************************************ 00:30:17.575 START TEST nvmf_multicontroller 00:30:17.575 ************************************ 00:30:17.576 01:07:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:30:17.576 * Looking for test storage... 
00:30:17.576 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:17.576 01:07:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:30:17.576 01:07:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1711 -- # lcov --version 00:30:17.576 01:07:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:30:17.576 01:07:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:30:17.576 01:07:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:17.576 01:07:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:17.576 01:07:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:17.576 01:07:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # IFS=.-: 00:30:17.576 01:07:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # read -ra ver1 00:30:17.576 01:07:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # IFS=.-: 00:30:17.576 01:07:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # read -ra ver2 00:30:17.576 01:07:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@338 -- # local 'op=<' 00:30:17.576 01:07:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@340 -- # ver1_l=2 00:30:17.576 01:07:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@341 -- # ver2_l=1 00:30:17.576 01:07:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:17.576 01:07:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@344 -- # case "$op" in 00:30:17.576 01:07:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@345 -- # : 1 00:30:17.576 01:07:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:17.576 01:07:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:17.576 01:07:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # decimal 1 00:30:17.576 01:07:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=1 00:30:17.576 01:07:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:17.576 01:07:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 1 00:30:17.576 01:07:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # ver1[v]=1 00:30:17.576 01:07:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # decimal 2 00:30:17.576 01:07:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=2 00:30:17.576 01:07:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:17.576 01:07:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 2 00:30:17.576 01:07:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # ver2[v]=2 00:30:17.576 01:07:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:17.576 01:07:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:17.576 01:07:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # return 0 00:30:17.576 01:07:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:17.576 01:07:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:30:17.576 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:17.576 --rc genhtml_branch_coverage=1 00:30:17.576 --rc genhtml_function_coverage=1 00:30:17.576 --rc genhtml_legend=1 00:30:17.576 --rc geninfo_all_blocks=1 00:30:17.576 --rc geninfo_unexecuted_blocks=1 00:30:17.576 00:30:17.576 ' 00:30:17.576 01:07:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:30:17.576 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:17.576 --rc genhtml_branch_coverage=1 00:30:17.576 --rc genhtml_function_coverage=1 00:30:17.576 --rc genhtml_legend=1 00:30:17.576 --rc geninfo_all_blocks=1 00:30:17.576 --rc geninfo_unexecuted_blocks=1 00:30:17.576 00:30:17.576 ' 00:30:17.576 01:07:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:30:17.576 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:17.576 --rc genhtml_branch_coverage=1 00:30:17.576 --rc genhtml_function_coverage=1 00:30:17.576 --rc genhtml_legend=1 00:30:17.576 --rc geninfo_all_blocks=1 00:30:17.576 --rc geninfo_unexecuted_blocks=1 00:30:17.576 00:30:17.576 ' 00:30:17.576 01:07:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:30:17.576 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:17.576 --rc genhtml_branch_coverage=1 00:30:17.576 --rc genhtml_function_coverage=1 00:30:17.576 --rc genhtml_legend=1 00:30:17.576 --rc geninfo_all_blocks=1 00:30:17.576 --rc geninfo_unexecuted_blocks=1 00:30:17.576 00:30:17.576 ' 00:30:17.576 01:07:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:17.576 01:07:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:30:17.576 01:07:50 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:17.576 01:07:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:17.576 01:07:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:17.576 01:07:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:17.576 01:07:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:17.576 01:07:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:17.576 01:07:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:17.576 01:07:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:17.576 01:07:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:17.576 01:07:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:17.576 01:07:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:30:17.576 01:07:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:30:17.576 01:07:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:17.576 01:07:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:17.576 01:07:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:17.576 01:07:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:17.576 01:07:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:17.576 01:07:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@15 -- # shopt -s extglob 00:30:17.576 01:07:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:17.576 01:07:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:17.576 01:07:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:17.576 01:07:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:17.576 01:07:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:17.576 01:07:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:17.576 01:07:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:30:17.576 01:07:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:17.576 01:07:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # : 0 00:30:17.576 01:07:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:17.576 01:07:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:17.576 01:07:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:17.576 01:07:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:17.576 01:07:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:17.576 01:07:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:30:17.576 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:30:17.576 01:07:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:17.576 01:07:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:17.576 01:07:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:17.576 01:07:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:17.576 01:07:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:30:17.577 01:07:50 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:30:17.577 01:07:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:30:17.577 01:07:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:30:17.577 01:07:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:30:17.577 01:07:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:30:17.577 01:07:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:17.577 01:07:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:17.577 01:07:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:17.577 01:07:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:17.577 01:07:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:17.577 01:07:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:17.577 01:07:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:17.577 01:07:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:17.577 01:07:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:17.577 01:07:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:17.577 01:07:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@309 -- # xtrace_disable 00:30:17.577 01:07:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:20.107 01:07:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:20.107 01:07:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # pci_devs=() 00:30:20.107 01:07:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:20.107 01:07:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:20.107 01:07:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:20.107 01:07:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:20.107 01:07:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:20.107 01:07:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # net_devs=() 00:30:20.107 01:07:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:20.107 01:07:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # e810=() 00:30:20.107 01:07:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # local -ga e810 00:30:20.107 01:07:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # x722=() 00:30:20.107 01:07:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # local -ga x722 00:30:20.107 01:07:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # mlx=() 00:30:20.107 01:07:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # local -ga mlx 00:30:20.107 
01:07:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:20.107 01:07:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:20.107 01:07:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:20.107 01:07:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:20.107 01:07:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:20.107 01:07:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:20.107 01:07:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:20.107 01:07:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:20.107 01:07:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:20.107 01:07:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:20.107 01:07:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:20.107 01:07:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:20.107 01:07:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:20.107 01:07:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:20.107 01:07:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:20.107 01:07:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:20.107 01:07:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:20.107 01:07:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:20.107 01:07:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:20.107 01:07:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:30:20.107 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:30:20.107 01:07:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:20.107 01:07:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:20.107 01:07:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:20.107 01:07:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:20.107 01:07:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:20.107 01:07:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:20.107 01:07:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:30:20.107 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:30:20.107 01:07:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:20.107 01:07:52 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:20.107 01:07:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:20.107 01:07:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:20.107 01:07:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:20.107 01:07:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:20.107 01:07:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:20.107 01:07:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:20.107 01:07:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:20.107 01:07:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:20.107 01:07:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:20.107 01:07:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:20.107 01:07:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:20.107 01:07:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:20.107 01:07:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:20.107 01:07:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:30:20.107 Found net devices under 0000:0a:00.0: cvl_0_0 00:30:20.107 01:07:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:20.107 01:07:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:20.107 01:07:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:20.107 01:07:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:20.107 01:07:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:20.107 01:07:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:20.107 01:07:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:20.107 01:07:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:20.107 01:07:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:30:20.107 Found net devices under 0000:0a:00.1: cvl_0_1 00:30:20.107 01:07:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:20.107 01:07:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:20.107 01:07:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # is_hw=yes 00:30:20.107 01:07:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:20.107 01:07:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:20.107 01:07:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@446 -- # nvmf_tcp_init 
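The discovery loop above resolves each matching E810 PCI function to its kernel net device through sysfs, which is how 0000:0a:00.0 and 0000:0a:00.1 end up as cvl_0_0 and cvl_0_1. A minimal standalone sketch of that lookup (not the SPDK helper itself; the PCI addresses are the ones from this run):

  for pci in 0000:0a:00.0 0000:0a:00.1; do
      # every netdev bound to the function appears as a directory under net/
      for path in /sys/bus/pci/devices/"$pci"/net/*; do
          [ -e "$path" ] || continue        # port present but no netdev bound
          echo "Found net devices under $pci: ${path##*/}"
      done
  done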
00:30:20.107 01:07:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:20.107 01:07:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:20.107 01:07:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:20.107 01:07:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:20.107 01:07:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:20.108 01:07:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:20.108 01:07:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:20.108 01:07:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:20.108 01:07:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:20.108 01:07:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:20.108 01:07:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:20.108 01:07:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:20.108 01:07:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:20.108 01:07:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:20.108 01:07:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:20.108 01:07:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:20.108 01:07:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:20.108 01:07:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:20.108 01:07:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:20.108 01:07:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:20.108 01:07:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:20.108 01:07:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:20.108 01:07:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:20.108 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:20.108 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.229 ms 00:30:20.108 00:30:20.108 --- 10.0.0.2 ping statistics --- 00:30:20.108 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:20.108 rtt min/avg/max/mdev = 0.229/0.229/0.229/0.000 ms 00:30:20.108 01:07:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:20.108 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:20.108 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.087 ms 00:30:20.108 00:30:20.108 --- 10.0.0.1 ping statistics --- 00:30:20.108 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:20.108 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 00:30:20.108 01:07:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:20.108 01:07:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@450 -- # return 0 00:30:20.108 01:07:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:20.108 01:07:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:20.108 01:07:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:20.108 01:07:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:20.108 01:07:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:20.108 01:07:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:20.108 01:07:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:20.108 01:07:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:30:20.108 01:07:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:20.108 01:07:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:20.108 01:07:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:20.108 01:07:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@509 -- # nvmfpid=3065249 00:30:20.108 01:07:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:30:20.108 01:07:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@510 -- # waitforlisten 3065249 00:30:20.108 01:07:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 3065249 ']' 00:30:20.108 01:07:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:20.108 01:07:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:20.108 01:07:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:20.108 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:20.108 01:07:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:20.108 01:07:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:20.108 [2024-12-06 01:07:52.491716] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 24.03.0 initialization... 
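nvmf_tcp_init above moves one port into a dedicated network namespace so that the target (10.0.0.2) and the initiator (10.0.0.1) sit on opposite ends of the physical link, opens TCP port 4420 in the firewall, and pings in both directions before the target is launched inside that namespace. A condensed sketch of the rig, assuming the interface names and addresses shown in the log:

  ip netns add cvl_0_0_ns_spdk                        # target side lives in its own namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator address, host namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # allow NVMe/TCP in
  ping -c 1 10.0.0.2                                  # host -> namespace reachability
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # namespace -> host reachability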
00:30:20.108 [2024-12-06 01:07:52.491859] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:20.108 [2024-12-06 01:07:52.644453] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:30:20.108 [2024-12-06 01:07:52.784813] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:20.108 [2024-12-06 01:07:52.784926] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:20.108 [2024-12-06 01:07:52.784952] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:20.108 [2024-12-06 01:07:52.784976] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:20.108 [2024-12-06 01:07:52.784996] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:20.108 [2024-12-06 01:07:52.787735] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:20.108 [2024-12-06 01:07:52.787849] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:20.108 [2024-12-06 01:07:52.787855] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:30:21.041 01:07:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:21.041 01:07:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:30:21.041 01:07:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:21.041 01:07:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:21.041 01:07:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:21.041 01:07:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:21.041 01:07:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:21.041 01:07:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:21.041 01:07:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:21.041 [2024-12-06 01:07:53.536202] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:21.041 01:07:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:21.041 01:07:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:30:21.041 01:07:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:21.041 01:07:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:21.041 Malloc0 00:30:21.041 01:07:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:21.041 01:07:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:21.041 01:07:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:21.041 01:07:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@10 -- # set +x 00:30:21.041 01:07:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:21.041 01:07:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:21.041 01:07:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:21.041 01:07:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:21.041 01:07:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:21.041 01:07:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:21.041 01:07:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:21.041 01:07:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:21.041 [2024-12-06 01:07:53.663428] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:21.041 01:07:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:21.041 01:07:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:30:21.041 01:07:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:21.041 01:07:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:21.041 [2024-12-06 01:07:53.671282] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:30:21.041 01:07:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:21.041 01:07:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:30:21.041 01:07:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:21.041 01:07:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:21.298 Malloc1 00:30:21.298 01:07:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:21.298 01:07:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:30:21.298 01:07:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:21.298 01:07:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:21.298 01:07:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:21.298 01:07:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:30:21.298 01:07:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:21.298 01:07:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:21.298 01:07:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:21.298 01:07:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:30:21.298 01:07:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:21.298 01:07:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:21.298 01:07:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:21.298 01:07:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:30:21.298 01:07:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:21.298 01:07:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:21.298 01:07:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:21.298 01:07:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=3065404 00:30:21.298 01:07:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:21.298 01:07:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:30:21.298 01:07:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 3065404 /var/tmp/bdevperf.sock 00:30:21.298 01:07:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 3065404 ']' 00:30:21.298 01:07:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:30:21.298 01:07:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:21.298 01:07:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:30:21.298 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
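At this point the target has been provisioned over its RPC socket: a TCP transport, two 64 MiB / 512 B malloc bdevs, subsystems cnode1 and cnode2 each exposing one namespace, and listeners on ports 4420 and 4421 for both; bdevperf is then started with -z so it idles on /var/tmp/bdevperf.sock until a controller is attached. A sketch of the same provisioning with the stock rpc.py client, assuming the target's default /var/tmp/spdk.sock socket rather than the test's rpc_cmd wrapper:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192
  for i in 1 2; do
      $rpc bdev_malloc_create 64 512 -b Malloc$((i-1))
      $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK0000000000000$i
      $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$((i-1))
      $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
      $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4421
  done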
00:30:21.298 01:07:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:21.298 01:07:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:22.230 01:07:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:22.230 01:07:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:30:22.230 01:07:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:30:22.230 01:07:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:22.230 01:07:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:22.493 NVMe0n1 00:30:22.493 01:07:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:22.493 01:07:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:30:22.493 01:07:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:30:22.493 01:07:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:22.493 01:07:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:22.493 01:07:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:22.493 1 00:30:22.493 01:07:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:30:22.493 01:07:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:30:22.493 01:07:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:30:22.493 01:07:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:30:22.493 01:07:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:22.493 01:07:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:30:22.493 01:07:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:22.493 01:07:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:30:22.493 01:07:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:22.493 01:07:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:22.493 request: 00:30:22.493 { 00:30:22.493 "name": "NVMe0", 00:30:22.493 "trtype": "tcp", 00:30:22.493 "traddr": "10.0.0.2", 00:30:22.493 "adrfam": "ipv4", 00:30:22.493 "trsvcid": "4420", 00:30:22.493 "subnqn": 
"nqn.2016-06.io.spdk:cnode1", 00:30:22.493 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:30:22.493 "hostaddr": "10.0.0.1", 00:30:22.493 "prchk_reftag": false, 00:30:22.493 "prchk_guard": false, 00:30:22.493 "hdgst": false, 00:30:22.493 "ddgst": false, 00:30:22.493 "allow_unrecognized_csi": false, 00:30:22.493 "method": "bdev_nvme_attach_controller", 00:30:22.493 "req_id": 1 00:30:22.493 } 00:30:22.493 Got JSON-RPC error response 00:30:22.493 response: 00:30:22.493 { 00:30:22.493 "code": -114, 00:30:22.493 "message": "A controller named NVMe0 already exists with the specified network path" 00:30:22.493 } 00:30:22.493 01:07:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:30:22.493 01:07:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:30:22.493 01:07:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:30:22.493 01:07:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:30:22.494 01:07:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:30:22.494 01:07:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:30:22.494 01:07:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:30:22.494 01:07:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:30:22.494 01:07:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:30:22.494 01:07:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:22.494 01:07:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:30:22.494 01:07:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:22.494 01:07:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:30:22.494 01:07:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:22.494 01:07:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:22.494 request: 00:30:22.494 { 00:30:22.494 "name": "NVMe0", 00:30:22.494 "trtype": "tcp", 00:30:22.494 "traddr": "10.0.0.2", 00:30:22.494 "adrfam": "ipv4", 00:30:22.494 "trsvcid": "4420", 00:30:22.494 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:30:22.494 "hostaddr": "10.0.0.1", 00:30:22.494 "prchk_reftag": false, 00:30:22.494 "prchk_guard": false, 00:30:22.494 "hdgst": false, 00:30:22.494 "ddgst": false, 00:30:22.494 "allow_unrecognized_csi": false, 00:30:22.494 "method": "bdev_nvme_attach_controller", 00:30:22.494 "req_id": 1 00:30:22.494 } 00:30:22.494 Got JSON-RPC error response 00:30:22.494 response: 00:30:22.494 { 00:30:22.494 "code": -114, 00:30:22.494 "message": "A controller named NVMe0 already exists with the specified network path" 00:30:22.494 } 00:30:22.494 01:07:55 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:30:22.494 01:07:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:30:22.494 01:07:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:30:22.494 01:07:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:30:22.494 01:07:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:30:22.494 01:07:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:30:22.494 01:07:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:30:22.494 01:07:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:30:22.494 01:07:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:30:22.494 01:07:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:22.494 01:07:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:30:22.494 01:07:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:22.494 01:07:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:30:22.494 01:07:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:22.494 01:07:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:22.494 request: 00:30:22.494 { 00:30:22.494 "name": "NVMe0", 00:30:22.494 "trtype": "tcp", 00:30:22.494 "traddr": "10.0.0.2", 00:30:22.494 "adrfam": "ipv4", 00:30:22.494 "trsvcid": "4420", 00:30:22.494 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:22.494 "hostaddr": "10.0.0.1", 00:30:22.494 "prchk_reftag": false, 00:30:22.494 "prchk_guard": false, 00:30:22.494 "hdgst": false, 00:30:22.494 "ddgst": false, 00:30:22.494 "multipath": "disable", 00:30:22.494 "allow_unrecognized_csi": false, 00:30:22.494 "method": "bdev_nvme_attach_controller", 00:30:22.494 "req_id": 1 00:30:22.494 } 00:30:22.494 Got JSON-RPC error response 00:30:22.494 response: 00:30:22.494 { 00:30:22.494 "code": -114, 00:30:22.494 "message": "A controller named NVMe0 already exists and multipath is disabled" 00:30:22.494 } 00:30:22.494 01:07:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:30:22.494 01:07:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:30:22.494 01:07:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:30:22.494 01:07:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:30:22.494 01:07:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:30:22.494 01:07:55 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:30:22.494 01:07:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:30:22.494 01:07:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:30:22.494 01:07:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:30:22.494 01:07:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:22.494 01:07:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:30:22.494 01:07:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:22.494 01:07:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:30:22.494 01:07:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:22.494 01:07:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:22.494 request: 00:30:22.494 { 00:30:22.494 "name": "NVMe0", 00:30:22.494 "trtype": "tcp", 00:30:22.494 "traddr": "10.0.0.2", 00:30:22.494 "adrfam": "ipv4", 00:30:22.494 "trsvcid": "4420", 00:30:22.494 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:22.494 "hostaddr": "10.0.0.1", 00:30:22.494 "prchk_reftag": false, 00:30:22.494 "prchk_guard": false, 00:30:22.494 "hdgst": false, 00:30:22.494 "ddgst": false, 00:30:22.494 "multipath": "failover", 00:30:22.494 "allow_unrecognized_csi": false, 00:30:22.494 "method": "bdev_nvme_attach_controller", 00:30:22.494 "req_id": 1 00:30:22.494 } 00:30:22.494 Got JSON-RPC error response 00:30:22.494 response: 00:30:22.494 { 00:30:22.494 "code": -114, 00:30:22.494 "message": "A controller named NVMe0 already exists with the specified network path" 00:30:22.494 } 00:30:22.494 01:07:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:30:22.494 01:07:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:30:22.494 01:07:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:30:22.494 01:07:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:30:22.494 01:07:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:30:22.494 01:07:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:22.494 01:07:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:22.494 01:07:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:22.751 NVMe0n1 00:30:22.751 01:07:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
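After the three rejected variants, attaching NVMe0 to the second listener port 4421 succeeds: reusing the bdev name against another path of the same subsystem registers an extra I/O path rather than a new controller. The steps that follow, sketched with the same rpc_cmd conventions and paths as in the log:

  rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 \
      -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1   # drop the extra path again
  rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 \
      -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1
  rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers | grep -c NVMe   # expect 2 controllers
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
      -s /var/tmp/bdevperf.sock perform_tests                                  # run the queued write job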
00:30:22.751 01:07:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:22.751 01:07:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:22.751 01:07:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:22.751 01:07:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:22.751 01:07:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:30:22.751 01:07:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:22.751 01:07:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:23.008 00:30:23.008 01:07:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:23.008 01:07:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:30:23.008 01:07:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:30:23.008 01:07:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:23.008 01:07:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:23.008 01:07:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:23.008 01:07:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:30:23.008 01:07:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:30:23.937 { 00:30:23.937 "results": [ 00:30:23.937 { 00:30:23.937 "job": "NVMe0n1", 00:30:23.937 "core_mask": "0x1", 00:30:23.937 "workload": "write", 00:30:23.937 "status": "finished", 00:30:23.938 "queue_depth": 128, 00:30:23.938 "io_size": 4096, 00:30:23.938 "runtime": 1.005781, 00:30:23.938 "iops": 13134.071930171678, 00:30:23.938 "mibps": 51.30496847723312, 00:30:23.938 "io_failed": 0, 00:30:23.938 "io_timeout": 0, 00:30:23.938 "avg_latency_us": 9728.860266801245, 00:30:23.938 "min_latency_us": 4320.521481481482, 00:30:23.938 "max_latency_us": 19320.983703703703 00:30:23.938 } 00:30:23.938 ], 00:30:23.938 "core_count": 1 00:30:23.938 } 00:30:23.938 01:07:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:30:23.938 01:07:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:23.938 01:07:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:24.195 01:07:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:24.195 01:07:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@100 -- # [[ -n '' ]] 00:30:24.195 01:07:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@116 -- # killprocess 3065404 00:30:24.195 01:07:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@954 -- # '[' -z 3065404 ']' 00:30:24.195 01:07:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 3065404 00:30:24.195 01:07:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:30:24.195 01:07:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:24.195 01:07:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3065404 00:30:24.195 01:07:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:24.195 01:07:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:24.195 01:07:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3065404' 00:30:24.195 killing process with pid 3065404 00:30:24.195 01:07:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 3065404 00:30:24.195 01:07:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 3065404 00:30:25.130 01:07:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@118 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:25.130 01:07:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:25.130 01:07:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:25.130 01:07:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:25.130 01:07:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@119 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:30:25.130 01:07:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:25.130 01:07:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:25.130 01:07:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:25.130 01:07:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:30:25.130 01:07:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@123 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:30:25.130 01:07:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 00:30:25.130 01:07:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:30:25.130 01:07:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # sort -u 00:30:25.130 01:07:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1600 -- # cat 00:30:25.130 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:30:25.130 [2024-12-06 01:07:53.866437] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 24.03.0 initialization... 
00:30:25.130 [2024-12-06 01:07:53.866583] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3065404 ] 00:30:25.130 [2024-12-06 01:07:54.002232] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:25.130 [2024-12-06 01:07:54.128498] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:25.130 [2024-12-06 01:07:55.479709] bdev.c:4934:bdev_name_add: *ERROR*: Bdev name 467d4cd6-1734-44ef-8d48-b889e4ba15b9 already exists 00:30:25.130 [2024-12-06 01:07:55.479776] bdev.c:8154:bdev_register: *ERROR*: Unable to add uuid:467d4cd6-1734-44ef-8d48-b889e4ba15b9 alias for bdev NVMe1n1 00:30:25.130 [2024-12-06 01:07:55.479823] bdev_nvme.c:4665:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:30:25.130 Running I/O for 1 seconds... 00:30:25.130 13082.00 IOPS, 51.10 MiB/s 00:30:25.130 Latency(us) 00:30:25.130 [2024-12-06T00:07:57.839Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:25.130 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:30:25.130 NVMe0n1 : 1.01 13134.07 51.30 0.00 0.00 9728.86 4320.52 19320.98 00:30:25.130 [2024-12-06T00:07:57.839Z] =================================================================================================================== 00:30:25.130 [2024-12-06T00:07:57.839Z] Total : 13134.07 51.30 0.00 0.00 9728.86 4320.52 19320.98 00:30:25.130 Received shutdown signal, test time was about 1.000000 seconds 00:30:25.130 00:30:25.130 Latency(us) 00:30:25.130 [2024-12-06T00:07:57.839Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:25.130 [2024-12-06T00:07:57.839Z] =================================================================================================================== 00:30:25.130 [2024-12-06T00:07:57.839Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:25.130 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:30:25.131 01:07:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1605 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:30:25.131 01:07:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 00:30:25.131 01:07:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@124 -- # nvmftestfini 00:30:25.131 01:07:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:25.131 01:07:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@121 -- # sync 00:30:25.131 01:07:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:25.131 01:07:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@124 -- # set +e 00:30:25.131 01:07:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:25.131 01:07:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:25.131 rmmod nvme_tcp 00:30:25.131 rmmod nvme_fabrics 00:30:25.131 rmmod nvme_keyring 00:30:25.131 01:07:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:25.131 01:07:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@128 -- # set -e 00:30:25.131 01:07:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@129 -- # return 0 00:30:25.131 
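The module unload above is the first step of nvmftestfini; the remainder of the teardown kills the target started at the top of the test, strips only the iptables rules the test tagged, and removes the namespace. A condensed sketch, assuming the pid variable and interface names used earlier in this run:

  modprobe -v -r nvme-tcp nvme-fabrics nvme-keyring     # unload kernel initiator modules
  kill "$nvmfpid" && wait "$nvmfpid"                     # stop nvmf_tgt (pid 3065249 in this run)
  iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop only the SPDK_NVMF-tagged rules
  ip netns delete cvl_0_0_ns_spdk                        # remove_spdk_ns
  ip -4 addr flush cvl_0_1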
01:07:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@517 -- # '[' -n 3065249 ']' 00:30:25.131 01:07:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@518 -- # killprocess 3065249 00:30:25.131 01:07:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # '[' -z 3065249 ']' 00:30:25.131 01:07:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 3065249 00:30:25.131 01:07:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:30:25.131 01:07:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:25.131 01:07:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3065249 00:30:25.131 01:07:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:30:25.131 01:07:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:30:25.131 01:07:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3065249' 00:30:25.131 killing process with pid 3065249 00:30:25.131 01:07:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 3065249 00:30:25.131 01:07:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 3065249 00:30:26.507 01:07:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:26.507 01:07:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:26.507 01:07:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:26.507 01:07:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # iptr 00:30:26.507 01:07:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-save 00:30:26.507 01:07:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:26.507 01:07:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-restore 00:30:26.507 01:07:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:26.507 01:07:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:26.507 01:07:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:26.507 01:07:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:26.507 01:07:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:28.412 01:08:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:28.412 00:30:28.412 real 0m11.020s 00:30:28.412 user 0m23.122s 00:30:28.412 sys 0m2.669s 00:30:28.412 01:08:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:28.412 01:08:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:28.412 ************************************ 00:30:28.412 END TEST nvmf_multicontroller 00:30:28.412 ************************************ 00:30:28.670 01:08:01 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh 
--transport=tcp 00:30:28.670 01:08:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:30:28.670 01:08:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:28.670 01:08:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:30:28.670 ************************************ 00:30:28.670 START TEST nvmf_aer 00:30:28.670 ************************************ 00:30:28.670 01:08:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:30:28.670 * Looking for test storage... 00:30:28.670 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:28.670 01:08:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:30:28.670 01:08:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1711 -- # lcov --version 00:30:28.670 01:08:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:30:28.670 01:08:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:30:28.670 01:08:01 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:28.670 01:08:01 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:28.670 01:08:01 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:28.670 01:08:01 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # IFS=.-: 00:30:28.670 01:08:01 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # read -ra ver1 00:30:28.670 01:08:01 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # IFS=.-: 00:30:28.670 01:08:01 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # read -ra ver2 00:30:28.670 01:08:01 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@338 -- # local 'op=<' 00:30:28.670 01:08:01 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@340 -- # ver1_l=2 00:30:28.670 01:08:01 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@341 -- # ver2_l=1 00:30:28.670 01:08:01 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:28.670 01:08:01 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@344 -- # case "$op" in 00:30:28.670 01:08:01 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@345 -- # : 1 00:30:28.670 01:08:01 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:28.670 01:08:01 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:28.670 01:08:01 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # decimal 1 00:30:28.670 01:08:01 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=1 00:30:28.671 01:08:01 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:28.671 01:08:01 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 1 00:30:28.671 01:08:01 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # ver1[v]=1 00:30:28.671 01:08:01 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # decimal 2 00:30:28.671 01:08:01 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=2 00:30:28.671 01:08:01 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:28.671 01:08:01 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 2 00:30:28.671 01:08:01 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # ver2[v]=2 00:30:28.671 01:08:01 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:28.671 01:08:01 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:28.671 01:08:01 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # return 0 00:30:28.671 01:08:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:28.671 01:08:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:30:28.671 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:28.671 --rc genhtml_branch_coverage=1 00:30:28.671 --rc genhtml_function_coverage=1 00:30:28.671 --rc genhtml_legend=1 00:30:28.671 --rc geninfo_all_blocks=1 00:30:28.671 --rc geninfo_unexecuted_blocks=1 00:30:28.671 00:30:28.671 ' 00:30:28.671 01:08:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:30:28.671 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:28.671 --rc genhtml_branch_coverage=1 00:30:28.671 --rc genhtml_function_coverage=1 00:30:28.671 --rc genhtml_legend=1 00:30:28.671 --rc geninfo_all_blocks=1 00:30:28.671 --rc geninfo_unexecuted_blocks=1 00:30:28.671 00:30:28.671 ' 00:30:28.671 01:08:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:30:28.671 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:28.671 --rc genhtml_branch_coverage=1 00:30:28.671 --rc genhtml_function_coverage=1 00:30:28.671 --rc genhtml_legend=1 00:30:28.671 --rc geninfo_all_blocks=1 00:30:28.671 --rc geninfo_unexecuted_blocks=1 00:30:28.671 00:30:28.671 ' 00:30:28.671 01:08:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:30:28.671 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:28.671 --rc genhtml_branch_coverage=1 00:30:28.671 --rc genhtml_function_coverage=1 00:30:28.671 --rc genhtml_legend=1 00:30:28.671 --rc geninfo_all_blocks=1 00:30:28.671 --rc geninfo_unexecuted_blocks=1 00:30:28.671 00:30:28.671 ' 00:30:28.671 01:08:01 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:28.671 01:08:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:30:28.671 01:08:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:28.671 01:08:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:28.671 01:08:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:30:28.671 01:08:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:28.671 01:08:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:28.671 01:08:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:28.671 01:08:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:28.671 01:08:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:28.671 01:08:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:28.671 01:08:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:28.671 01:08:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:30:28.671 01:08:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:30:28.671 01:08:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:28.671 01:08:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:28.671 01:08:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:28.671 01:08:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:28.671 01:08:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:28.671 01:08:01 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@15 -- # shopt -s extglob 00:30:28.671 01:08:01 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:28.671 01:08:01 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:28.671 01:08:01 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:28.671 01:08:01 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:28.671 01:08:01 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:28.671 01:08:01 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:28.671 01:08:01 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:30:28.671 01:08:01 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:28.671 01:08:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # : 0 00:30:28.671 01:08:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:28.671 01:08:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:28.671 01:08:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:28.671 01:08:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:28.671 01:08:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:28.671 01:08:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:30:28.671 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:30:28.671 01:08:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:28.671 01:08:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:28.671 01:08:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:28.671 01:08:01 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:30:28.671 01:08:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:28.671 01:08:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:28.671 01:08:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:28.671 01:08:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:28.671 01:08:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:28.671 01:08:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:28.671 01:08:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:28.671 01:08:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:28.671 01:08:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:28.671 01:08:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:30:28.671 01:08:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@309 -- # xtrace_disable 00:30:28.671 01:08:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:31.199 01:08:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:31.199 01:08:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # pci_devs=() 00:30:31.199 01:08:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:31.199 01:08:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:31.199 01:08:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:31.199 01:08:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:31.199 01:08:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:31.199 01:08:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # net_devs=() 00:30:31.199 01:08:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:31.199 01:08:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # e810=() 00:30:31.199 01:08:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # local -ga e810 00:30:31.199 01:08:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # x722=() 00:30:31.199 01:08:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # local -ga x722 00:30:31.199 01:08:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # mlx=() 00:30:31.199 01:08:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # local -ga mlx 00:30:31.199 01:08:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:31.199 01:08:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:31.199 01:08:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:31.199 01:08:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:31.199 01:08:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:31.199 01:08:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:31.199 01:08:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:31.199 01:08:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:31.199 01:08:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:31.199 01:08:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:31.199 01:08:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:31.199 01:08:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:31.199 01:08:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:31.199 01:08:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:31.199 01:08:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:31.199 01:08:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:31.199 01:08:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@356 -- # 
pci_devs=("${e810[@]}") 00:30:31.199 01:08:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:31.199 01:08:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:31.199 01:08:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:30:31.199 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:30:31.199 01:08:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:31.199 01:08:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:31.199 01:08:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:31.199 01:08:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:31.199 01:08:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:31.199 01:08:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:31.199 01:08:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:30:31.199 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:30:31.199 01:08:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:31.199 01:08:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:31.199 01:08:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:31.199 01:08:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:31.199 01:08:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:31.199 01:08:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:31.199 01:08:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:31.199 01:08:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:31.199 01:08:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:31.199 01:08:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:31.200 01:08:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:31.200 01:08:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:31.200 01:08:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:31.200 01:08:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:31.200 01:08:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:31.200 01:08:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:30:31.200 Found net devices under 0000:0a:00.0: cvl_0_0 00:30:31.200 01:08:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:31.200 01:08:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:31.200 01:08:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:31.200 01:08:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:31.200 01:08:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:31.200 01:08:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:31.200 01:08:03 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:31.200 01:08:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:31.200 01:08:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:30:31.200 Found net devices under 0000:0a:00.1: cvl_0_1 00:30:31.200 01:08:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:31.200 01:08:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:31.200 01:08:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # is_hw=yes 00:30:31.200 01:08:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:31.200 01:08:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:31.200 01:08:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:31.200 01:08:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:31.200 01:08:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:31.200 01:08:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:31.200 01:08:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:31.200 01:08:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:31.200 01:08:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:31.200 01:08:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:31.200 01:08:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:31.200 01:08:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:31.200 01:08:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:31.200 01:08:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:31.200 01:08:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:31.200 01:08:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:31.200 01:08:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:31.200 01:08:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:31.200 01:08:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:31.200 01:08:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:31.200 01:08:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:31.200 01:08:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:31.200 01:08:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:31.200 01:08:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:31.200 01:08:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:31.200 
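The trace above is the network plumbing nvmf_tcp_init performs before any NVMe-oF work starts. A minimal sketch of what it does in this run (the cvl_0_0/cvl_0_1 interface names and the 10.0.0.0/24 addressing are specific to this host; this is a paraphrase of the traced commands, not the helper itself):

  # Target-side port goes into its own namespace; the initiator side stays in the root namespace.
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator address
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # Open the NVMe/TCP port; the SPDK_NVMF comment tag lets teardown strip only this rule later.
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment SPDK_NVMF

The ping in both directions that follows is just a reachability check before the target application is started.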
01:08:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:31.200 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:31.200 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.275 ms 00:30:31.200 00:30:31.200 --- 10.0.0.2 ping statistics --- 00:30:31.200 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:31.200 rtt min/avg/max/mdev = 0.275/0.275/0.275/0.000 ms 00:30:31.200 01:08:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:31.200 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:31.200 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.143 ms 00:30:31.200 00:30:31.200 --- 10.0.0.1 ping statistics --- 00:30:31.200 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:31.200 rtt min/avg/max/mdev = 0.143/0.143/0.143/0.000 ms 00:30:31.200 01:08:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:31.200 01:08:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@450 -- # return 0 00:30:31.200 01:08:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:31.200 01:08:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:31.200 01:08:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:31.200 01:08:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:31.200 01:08:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:31.200 01:08:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:31.200 01:08:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:31.200 01:08:03 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:30:31.200 01:08:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:31.200 01:08:03 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:31.200 01:08:03 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:31.200 01:08:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@509 -- # nvmfpid=3067900 00:30:31.200 01:08:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:30:31.200 01:08:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@510 -- # waitforlisten 3067900 00:30:31.200 01:08:03 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@835 -- # '[' -z 3067900 ']' 00:30:31.200 01:08:03 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:31.200 01:08:03 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:31.200 01:08:03 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:31.200 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:31.200 01:08:03 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:31.200 01:08:03 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:31.200 [2024-12-06 01:08:03.609761] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 24.03.0 initialization... 
00:30:31.200 [2024-12-06 01:08:03.609932] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:31.200 [2024-12-06 01:08:03.759455] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:31.200 [2024-12-06 01:08:03.885625] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:31.200 [2024-12-06 01:08:03.885726] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:31.200 [2024-12-06 01:08:03.885749] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:31.200 [2024-12-06 01:08:03.885770] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:31.200 [2024-12-06 01:08:03.885787] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:31.200 [2024-12-06 01:08:03.888414] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:31.200 [2024-12-06 01:08:03.888480] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:31.200 [2024-12-06 01:08:03.888526] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:31.200 [2024-12-06 01:08:03.888547] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:30:32.136 01:08:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:32.136 01:08:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@868 -- # return 0 00:30:32.136 01:08:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:32.136 01:08:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:32.136 01:08:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:32.136 01:08:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:32.136 01:08:04 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:32.136 01:08:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:32.136 01:08:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:32.136 [2024-12-06 01:08:04.605737] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:32.136 01:08:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:32.136 01:08:04 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:30:32.136 01:08:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:32.136 01:08:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:32.136 Malloc0 00:30:32.136 01:08:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:32.136 01:08:04 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:30:32.136 01:08:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:32.136 01:08:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:32.136 01:08:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:30:32.136 01:08:04 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:32.136 01:08:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:32.136 01:08:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:32.136 01:08:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:32.136 01:08:04 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:32.136 01:08:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:32.136 01:08:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:32.136 [2024-12-06 01:08:04.723948] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:32.136 01:08:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:32.136 01:08:04 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:30:32.136 01:08:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:32.136 01:08:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:32.136 [ 00:30:32.136 { 00:30:32.136 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:30:32.136 "subtype": "Discovery", 00:30:32.136 "listen_addresses": [], 00:30:32.136 "allow_any_host": true, 00:30:32.136 "hosts": [] 00:30:32.136 }, 00:30:32.136 { 00:30:32.136 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:30:32.136 "subtype": "NVMe", 00:30:32.136 "listen_addresses": [ 00:30:32.136 { 00:30:32.136 "trtype": "TCP", 00:30:32.136 "adrfam": "IPv4", 00:30:32.136 "traddr": "10.0.0.2", 00:30:32.136 "trsvcid": "4420" 00:30:32.136 } 00:30:32.136 ], 00:30:32.136 "allow_any_host": true, 00:30:32.136 "hosts": [], 00:30:32.136 "serial_number": "SPDK00000000000001", 00:30:32.136 "model_number": "SPDK bdev Controller", 00:30:32.136 "max_namespaces": 2, 00:30:32.136 "min_cntlid": 1, 00:30:32.136 "max_cntlid": 65519, 00:30:32.136 "namespaces": [ 00:30:32.136 { 00:30:32.136 "nsid": 1, 00:30:32.136 "bdev_name": "Malloc0", 00:30:32.136 "name": "Malloc0", 00:30:32.136 "nguid": "066D5D36204A419A8C6722EED09E0EF0", 00:30:32.136 "uuid": "066d5d36-204a-419a-8c67-22eed09e0ef0" 00:30:32.136 } 00:30:32.136 ] 00:30:32.136 } 00:30:32.136 ] 00:30:32.136 01:08:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:32.136 01:08:04 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:30:32.136 01:08:04 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:30:32.136 01:08:04 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=3068124 00:30:32.137 01:08:04 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:30:32.137 01:08:04 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:30:32.137 01:08:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # local i=0 00:30:32.137 01:08:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:30:32.137 01:08:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 0 -lt 200 ']' 00:30:32.137 01:08:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=1 00:30:32.137 01:08:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:30:32.395 01:08:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:30:32.396 01:08:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 1 -lt 200 ']' 00:30:32.396 01:08:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=2 00:30:32.396 01:08:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:30:32.396 01:08:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:30:32.396 01:08:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 2 -lt 200 ']' 00:30:32.396 01:08:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=3 00:30:32.396 01:08:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:30:32.396 01:08:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:30:32.396 01:08:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 3 -lt 200 ']' 00:30:32.396 01:08:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=4 00:30:32.396 01:08:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:30:32.653 01:08:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:30:32.653 01:08:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1276 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:30:32.653 01:08:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1280 -- # return 0 00:30:32.653 01:08:05 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:30:32.653 01:08:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:32.653 01:08:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:32.653 Malloc1 00:30:32.653 01:08:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:32.653 01:08:05 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:30:32.653 01:08:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:32.653 01:08:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:32.653 01:08:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:32.653 01:08:05 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:30:32.653 01:08:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:32.653 01:08:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:32.653 [ 00:30:32.653 { 00:30:32.653 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:30:32.653 "subtype": "Discovery", 00:30:32.653 "listen_addresses": [], 00:30:32.653 "allow_any_host": true, 00:30:32.653 "hosts": [] 00:30:32.653 }, 00:30:32.653 { 00:30:32.653 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:30:32.653 "subtype": "NVMe", 00:30:32.653 "listen_addresses": [ 00:30:32.653 { 00:30:32.653 "trtype": "TCP", 00:30:32.653 "adrfam": "IPv4", 00:30:32.653 "traddr": "10.0.0.2", 00:30:32.653 "trsvcid": "4420" 00:30:32.653 } 00:30:32.653 ], 00:30:32.653 "allow_any_host": true, 00:30:32.653 "hosts": [], 00:30:32.653 "serial_number": "SPDK00000000000001", 00:30:32.653 "model_number": "SPDK bdev Controller", 00:30:32.653 "max_namespaces": 2, 00:30:32.653 "min_cntlid": 1, 00:30:32.653 "max_cntlid": 65519, 00:30:32.653 "namespaces": [ 00:30:32.653 { 00:30:32.653 "nsid": 1, 00:30:32.653 "bdev_name": "Malloc0", 00:30:32.653 "name": "Malloc0", 00:30:32.653 "nguid": "066D5D36204A419A8C6722EED09E0EF0", 00:30:32.653 "uuid": "066d5d36-204a-419a-8c67-22eed09e0ef0" 00:30:32.653 }, 00:30:32.654 { 00:30:32.654 "nsid": 2, 00:30:32.654 "bdev_name": "Malloc1", 00:30:32.654 "name": "Malloc1", 00:30:32.654 "nguid": "4BA07002E4EB4611952AF783DDC6BB90", 00:30:32.654 "uuid": "4ba07002-e4eb-4611-952a-f783ddc6bb90" 00:30:32.654 } 00:30:32.654 ] 00:30:32.654 } 00:30:32.654 ] 00:30:32.654 01:08:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:32.654 01:08:05 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 3068124 00:30:32.911 Asynchronous Event Request test 00:30:32.911 Attaching to 10.0.0.2 00:30:32.911 Attached to 10.0.0.2 00:30:32.911 Registering asynchronous event callbacks... 00:30:32.911 Starting namespace attribute notice tests for all controllers... 00:30:32.911 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:30:32.911 aer_cb - Changed Namespace 00:30:32.911 Cleaning up... 
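Distilled from the rpc_cmd trace above, the whole AER scenario reduces to a short provisioning sequence plus one hot-added namespace. A sketch with the flags copied from the trace (rpc_cmd is the test suite's wrapper around scripts/rpc.py, so the same arguments should work against rpc.py directly; paths shortened for readability):

  rpc_cmd nvmf_create_transport -t tcp -o -u 8192
  rpc_cmd bdev_malloc_create 64 512 --name Malloc0         # 64 MiB malloc bdev, 512 B blocks
  rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # Start the AER tool, which connects and waits for a namespace-attribute-changed event,
  # then hot-add a second namespace to trigger that event.
  test/nvme/aer/aer -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file &
  aerpid=$!
  rpc_cmd bdev_malloc_create 64 4096 --name Malloc1        # second namespace, 4 KiB blocks
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2
  wait "$aerpid"

The /tmp/aer_touch_file polling loop visible in the trace (sleep 0.1, up to 200 iterations) only makes sure the tool has attached and registered its callbacks before the second namespace is added.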
00:30:32.911 01:08:05 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:30:32.911 01:08:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:32.911 01:08:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:32.911 01:08:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:32.911 01:08:05 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:30:32.911 01:08:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:32.911 01:08:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:33.168 01:08:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:33.168 01:08:05 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:33.168 01:08:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:33.168 01:08:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:33.168 01:08:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:33.168 01:08:05 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:30:33.168 01:08:05 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:30:33.168 01:08:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:33.168 01:08:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # sync 00:30:33.168 01:08:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:33.168 01:08:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set +e 00:30:33.168 01:08:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:33.168 01:08:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:33.169 rmmod nvme_tcp 00:30:33.169 rmmod nvme_fabrics 00:30:33.169 rmmod nvme_keyring 00:30:33.169 01:08:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:33.169 01:08:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@128 -- # set -e 00:30:33.169 01:08:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@129 -- # return 0 00:30:33.169 01:08:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@517 -- # '[' -n 3067900 ']' 00:30:33.169 01:08:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@518 -- # killprocess 3067900 00:30:33.169 01:08:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # '[' -z 3067900 ']' 00:30:33.169 01:08:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@958 -- # kill -0 3067900 00:30:33.169 01:08:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # uname 00:30:33.169 01:08:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:33.169 01:08:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3067900 00:30:33.169 01:08:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:33.169 01:08:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:33.169 01:08:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3067900' 00:30:33.169 killing process with pid 3067900 00:30:33.169 01:08:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@973 -- # 
kill 3067900 00:30:33.169 01:08:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@978 -- # wait 3067900 00:30:34.543 01:08:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:34.543 01:08:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:34.543 01:08:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:34.543 01:08:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # iptr 00:30:34.543 01:08:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-save 00:30:34.543 01:08:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:34.543 01:08:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-restore 00:30:34.543 01:08:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:34.543 01:08:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:34.543 01:08:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:34.543 01:08:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:34.543 01:08:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:36.446 01:08:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:36.446 00:30:36.446 real 0m7.851s 00:30:36.446 user 0m11.984s 00:30:36.446 sys 0m2.339s 00:30:36.446 01:08:09 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:36.446 01:08:09 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:36.446 ************************************ 00:30:36.446 END TEST nvmf_aer 00:30:36.446 ************************************ 00:30:36.446 01:08:09 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:30:36.446 01:08:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:30:36.446 01:08:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:36.446 01:08:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:30:36.446 ************************************ 00:30:36.446 START TEST nvmf_async_init 00:30:36.446 ************************************ 00:30:36.446 01:08:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:30:36.446 * Looking for test storage... 
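Before nvmf_async_init repeats the same bring-up below, the teardown just traced for nvmf_aer reduces to roughly the following (interface and namespace names as in this run; the _remove_spdk_ns step is an assumption, since its trace is suppressed):

  modprobe -v -r nvme-tcp                                  # also unloads nvme_fabrics and nvme_keyring, as logged
  kill "$nvmfpid" && wait "$nvmfpid"                       # stop the nvmf_tgt started for this test
  iptables-save | grep -v SPDK_NVMF | iptables-restore     # drop only the SPDK-tagged ACCEPT rule
  ip netns delete cvl_0_0_ns_spdk                          # assumed content of _remove_spdk_ns
  ip -4 addr flush cvl_0_1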
00:30:36.446 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:36.446 01:08:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:30:36.446 01:08:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1711 -- # lcov --version 00:30:36.446 01:08:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:30:36.705 01:08:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:30:36.705 01:08:09 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:36.705 01:08:09 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:36.705 01:08:09 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:36.705 01:08:09 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # IFS=.-: 00:30:36.705 01:08:09 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # read -ra ver1 00:30:36.705 01:08:09 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # IFS=.-: 00:30:36.705 01:08:09 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # read -ra ver2 00:30:36.705 01:08:09 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@338 -- # local 'op=<' 00:30:36.705 01:08:09 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@340 -- # ver1_l=2 00:30:36.705 01:08:09 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@341 -- # ver2_l=1 00:30:36.705 01:08:09 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:36.705 01:08:09 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@344 -- # case "$op" in 00:30:36.705 01:08:09 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@345 -- # : 1 00:30:36.705 01:08:09 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:36.705 01:08:09 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:36.705 01:08:09 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # decimal 1 00:30:36.705 01:08:09 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=1 00:30:36.705 01:08:09 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:36.705 01:08:09 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 1 00:30:36.705 01:08:09 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # ver1[v]=1 00:30:36.705 01:08:09 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # decimal 2 00:30:36.705 01:08:09 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=2 00:30:36.705 01:08:09 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:36.706 01:08:09 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 2 00:30:36.706 01:08:09 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # ver2[v]=2 00:30:36.706 01:08:09 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:36.706 01:08:09 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:36.706 01:08:09 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # return 0 00:30:36.706 01:08:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:36.706 01:08:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:30:36.706 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:36.706 --rc genhtml_branch_coverage=1 00:30:36.706 --rc genhtml_function_coverage=1 00:30:36.706 --rc genhtml_legend=1 00:30:36.706 --rc geninfo_all_blocks=1 00:30:36.706 --rc geninfo_unexecuted_blocks=1 00:30:36.706 00:30:36.706 ' 00:30:36.706 01:08:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:30:36.706 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:36.706 --rc genhtml_branch_coverage=1 00:30:36.706 --rc genhtml_function_coverage=1 00:30:36.706 --rc genhtml_legend=1 00:30:36.706 --rc geninfo_all_blocks=1 00:30:36.706 --rc geninfo_unexecuted_blocks=1 00:30:36.706 00:30:36.706 ' 00:30:36.706 01:08:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:30:36.706 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:36.706 --rc genhtml_branch_coverage=1 00:30:36.706 --rc genhtml_function_coverage=1 00:30:36.706 --rc genhtml_legend=1 00:30:36.706 --rc geninfo_all_blocks=1 00:30:36.706 --rc geninfo_unexecuted_blocks=1 00:30:36.706 00:30:36.706 ' 00:30:36.706 01:08:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:30:36.706 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:36.706 --rc genhtml_branch_coverage=1 00:30:36.706 --rc genhtml_function_coverage=1 00:30:36.706 --rc genhtml_legend=1 00:30:36.706 --rc geninfo_all_blocks=1 00:30:36.706 --rc geninfo_unexecuted_blocks=1 00:30:36.706 00:30:36.706 ' 00:30:36.706 01:08:09 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:36.706 01:08:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:30:36.706 01:08:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:36.706 01:08:09 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:36.706 01:08:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:36.706 01:08:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:36.706 01:08:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:36.706 01:08:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:36.706 01:08:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:36.706 01:08:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:36.706 01:08:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:36.706 01:08:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:36.706 01:08:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:30:36.706 01:08:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:30:36.706 01:08:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:36.706 01:08:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:36.706 01:08:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:36.706 01:08:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:36.706 01:08:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:36.706 01:08:09 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@15 -- # shopt -s extglob 00:30:36.706 01:08:09 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:36.706 01:08:09 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:36.706 01:08:09 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:36.706 01:08:09 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:36.706 01:08:09 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:36.706 01:08:09 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:36.706 01:08:09 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:30:36.706 01:08:09 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:36.706 01:08:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # : 0 00:30:36.706 01:08:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:36.706 01:08:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:36.706 01:08:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:36.706 01:08:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:36.706 01:08:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:36.706 01:08:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:30:36.706 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:30:36.706 01:08:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:36.706 01:08:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:36.706 01:08:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:36.706 01:08:09 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:30:36.706 01:08:09 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:30:36.706 01:08:09 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:30:36.706 01:08:09 
nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:30:36.706 01:08:09 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:30:36.706 01:08:09 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:30:36.706 01:08:09 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=f3f604e991f74905bd10dd2bfffc66da 00:30:36.706 01:08:09 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:30:36.706 01:08:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:36.706 01:08:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:36.706 01:08:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:36.706 01:08:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:36.706 01:08:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:36.706 01:08:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:36.706 01:08:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:36.706 01:08:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:36.706 01:08:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:36.706 01:08:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:36.706 01:08:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@309 -- # xtrace_disable 00:30:36.706 01:08:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:38.608 01:08:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:38.608 01:08:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # pci_devs=() 00:30:38.609 01:08:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:38.609 01:08:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:38.609 01:08:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:38.609 01:08:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:38.609 01:08:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:38.609 01:08:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # net_devs=() 00:30:38.609 01:08:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:38.609 01:08:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # e810=() 00:30:38.609 01:08:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # local -ga e810 00:30:38.609 01:08:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # x722=() 00:30:38.609 01:08:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # local -ga x722 00:30:38.609 01:08:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # mlx=() 00:30:38.609 01:08:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # local -ga mlx 00:30:38.609 01:08:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:38.609 01:08:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:38.609 01:08:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:38.609 01:08:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:38.609 01:08:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:38.609 01:08:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:38.609 01:08:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:38.609 01:08:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:38.609 01:08:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:38.609 01:08:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:38.609 01:08:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:38.609 01:08:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:38.609 01:08:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:38.609 01:08:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:38.609 01:08:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:38.609 01:08:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:38.609 01:08:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:38.609 01:08:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:38.609 01:08:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:38.609 01:08:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:30:38.609 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:30:38.609 01:08:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:38.609 01:08:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:38.609 01:08:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:38.609 01:08:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:38.609 01:08:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:38.609 01:08:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:38.609 01:08:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:30:38.609 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:30:38.609 01:08:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:38.609 01:08:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:38.609 01:08:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:38.609 01:08:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:38.609 01:08:11 nvmf_tcp.nvmf_host.nvmf_async_init 
-- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:38.609 01:08:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:38.609 01:08:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:38.609 01:08:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:38.609 01:08:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:38.609 01:08:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:38.609 01:08:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:38.609 01:08:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:38.609 01:08:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:38.609 01:08:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:38.609 01:08:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:38.609 01:08:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:30:38.609 Found net devices under 0000:0a:00.0: cvl_0_0 00:30:38.609 01:08:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:38.609 01:08:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:38.609 01:08:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:38.609 01:08:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:38.609 01:08:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:38.609 01:08:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:38.609 01:08:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:38.609 01:08:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:38.609 01:08:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:30:38.609 Found net devices under 0000:0a:00.1: cvl_0_1 00:30:38.609 01:08:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:38.609 01:08:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:38.609 01:08:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # is_hw=yes 00:30:38.609 01:08:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:38.609 01:08:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:38.609 01:08:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:38.609 01:08:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:38.609 01:08:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:38.609 01:08:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:38.609 01:08:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:38.609 01:08:11 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:38.609 01:08:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:38.609 01:08:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:38.609 01:08:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:38.609 01:08:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:38.609 01:08:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:38.609 01:08:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:38.609 01:08:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:38.609 01:08:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:38.609 01:08:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:38.609 01:08:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:38.609 01:08:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:38.609 01:08:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:38.609 01:08:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:38.609 01:08:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:38.609 01:08:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:38.609 01:08:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:38.609 01:08:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:38.609 01:08:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:38.609 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:38.609 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.220 ms 00:30:38.609 00:30:38.609 --- 10.0.0.2 ping statistics --- 00:30:38.609 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:38.609 rtt min/avg/max/mdev = 0.220/0.220/0.220/0.000 ms 00:30:38.609 01:08:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:38.609 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:38.609 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.071 ms 00:30:38.609 00:30:38.609 --- 10.0.0.1 ping statistics --- 00:30:38.609 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:38.609 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:30:38.609 01:08:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:38.609 01:08:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@450 -- # return 0 00:30:38.609 01:08:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:38.609 01:08:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:38.609 01:08:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:38.609 01:08:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:38.609 01:08:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:38.610 01:08:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:38.610 01:08:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:38.868 01:08:11 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:30:38.868 01:08:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:38.868 01:08:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:38.868 01:08:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:38.868 01:08:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@509 -- # nvmfpid=3070246 00:30:38.868 01:08:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:30:38.868 01:08:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@510 -- # waitforlisten 3070246 00:30:38.868 01:08:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@835 -- # '[' -z 3070246 ']' 00:30:38.868 01:08:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:38.868 01:08:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:38.868 01:08:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:38.868 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:38.868 01:08:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:38.868 01:08:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:38.868 [2024-12-06 01:08:11.419286] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 24.03.0 initialization... 
00:30:38.868 [2024-12-06 01:08:11.419438] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:38.868 [2024-12-06 01:08:11.571194] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:39.128 [2024-12-06 01:08:11.709444] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:39.128 [2024-12-06 01:08:11.709542] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:39.128 [2024-12-06 01:08:11.709569] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:39.128 [2024-12-06 01:08:11.709594] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:39.128 [2024-12-06 01:08:11.709615] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:39.128 [2024-12-06 01:08:11.711247] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:40.062 01:08:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:40.062 01:08:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@868 -- # return 0 00:30:40.062 01:08:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:40.062 01:08:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:40.062 01:08:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:40.062 01:08:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:40.062 01:08:12 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:30:40.062 01:08:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:40.062 01:08:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:40.062 [2024-12-06 01:08:12.503245] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:40.062 01:08:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:40.062 01:08:12 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:30:40.062 01:08:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:40.062 01:08:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:40.062 null0 00:30:40.062 01:08:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:40.062 01:08:12 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:30:40.062 01:08:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:40.062 01:08:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:40.062 01:08:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:40.062 01:08:12 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:30:40.062 01:08:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:30:40.062 01:08:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:40.062 01:08:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:40.062 01:08:12 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g f3f604e991f74905bd10dd2bfffc66da 00:30:40.062 01:08:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:40.062 01:08:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:40.062 01:08:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:40.062 01:08:12 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:40.062 01:08:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:40.063 01:08:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:40.063 [2024-12-06 01:08:12.543534] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:40.063 01:08:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:40.063 01:08:12 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:30:40.063 01:08:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:40.063 01:08:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:40.321 nvme0n1 00:30:40.321 01:08:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:40.321 01:08:12 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:30:40.321 01:08:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:40.321 01:08:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:40.321 [ 00:30:40.321 { 00:30:40.321 "name": "nvme0n1", 00:30:40.321 "aliases": [ 00:30:40.321 "f3f604e9-91f7-4905-bd10-dd2bfffc66da" 00:30:40.321 ], 00:30:40.321 "product_name": "NVMe disk", 00:30:40.321 "block_size": 512, 00:30:40.321 "num_blocks": 2097152, 00:30:40.321 "uuid": "f3f604e9-91f7-4905-bd10-dd2bfffc66da", 00:30:40.321 "numa_id": 0, 00:30:40.321 "assigned_rate_limits": { 00:30:40.321 "rw_ios_per_sec": 0, 00:30:40.321 "rw_mbytes_per_sec": 0, 00:30:40.321 "r_mbytes_per_sec": 0, 00:30:40.321 "w_mbytes_per_sec": 0 00:30:40.321 }, 00:30:40.321 "claimed": false, 00:30:40.321 "zoned": false, 00:30:40.321 "supported_io_types": { 00:30:40.321 "read": true, 00:30:40.321 "write": true, 00:30:40.321 "unmap": false, 00:30:40.321 "flush": true, 00:30:40.321 "reset": true, 00:30:40.321 "nvme_admin": true, 00:30:40.321 "nvme_io": true, 00:30:40.321 "nvme_io_md": false, 00:30:40.321 "write_zeroes": true, 00:30:40.321 "zcopy": false, 00:30:40.321 "get_zone_info": false, 00:30:40.321 "zone_management": false, 00:30:40.321 "zone_append": false, 00:30:40.321 "compare": true, 00:30:40.321 "compare_and_write": true, 00:30:40.321 "abort": true, 00:30:40.321 "seek_hole": false, 00:30:40.321 "seek_data": false, 00:30:40.321 "copy": true, 00:30:40.321 "nvme_iov_md": false 00:30:40.321 }, 00:30:40.321 
"memory_domains": [ 00:30:40.321 { 00:30:40.321 "dma_device_id": "system", 00:30:40.321 "dma_device_type": 1 00:30:40.321 } 00:30:40.321 ], 00:30:40.321 "driver_specific": { 00:30:40.321 "nvme": [ 00:30:40.321 { 00:30:40.321 "trid": { 00:30:40.321 "trtype": "TCP", 00:30:40.321 "adrfam": "IPv4", 00:30:40.321 "traddr": "10.0.0.2", 00:30:40.321 "trsvcid": "4420", 00:30:40.321 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:30:40.322 }, 00:30:40.322 "ctrlr_data": { 00:30:40.322 "cntlid": 1, 00:30:40.322 "vendor_id": "0x8086", 00:30:40.322 "model_number": "SPDK bdev Controller", 00:30:40.322 "serial_number": "00000000000000000000", 00:30:40.322 "firmware_revision": "25.01", 00:30:40.322 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:40.322 "oacs": { 00:30:40.322 "security": 0, 00:30:40.322 "format": 0, 00:30:40.322 "firmware": 0, 00:30:40.322 "ns_manage": 0 00:30:40.322 }, 00:30:40.322 "multi_ctrlr": true, 00:30:40.322 "ana_reporting": false 00:30:40.322 }, 00:30:40.322 "vs": { 00:30:40.322 "nvme_version": "1.3" 00:30:40.322 }, 00:30:40.322 "ns_data": { 00:30:40.322 "id": 1, 00:30:40.322 "can_share": true 00:30:40.322 } 00:30:40.322 } 00:30:40.322 ], 00:30:40.322 "mp_policy": "active_passive" 00:30:40.322 } 00:30:40.322 } 00:30:40.322 ] 00:30:40.322 01:08:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:40.322 01:08:12 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:30:40.322 01:08:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:40.322 01:08:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:40.322 [2024-12-06 01:08:12.796349] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:30:40.322 [2024-12-06 01:08:12.796466] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:30:40.322 [2024-12-06 01:08:12.939059] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 
00:30:40.322 01:08:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:40.322 01:08:12 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:30:40.322 01:08:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:40.322 01:08:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:40.322 [ 00:30:40.322 { 00:30:40.322 "name": "nvme0n1", 00:30:40.322 "aliases": [ 00:30:40.322 "f3f604e9-91f7-4905-bd10-dd2bfffc66da" 00:30:40.322 ], 00:30:40.322 "product_name": "NVMe disk", 00:30:40.322 "block_size": 512, 00:30:40.322 "num_blocks": 2097152, 00:30:40.322 "uuid": "f3f604e9-91f7-4905-bd10-dd2bfffc66da", 00:30:40.322 "numa_id": 0, 00:30:40.322 "assigned_rate_limits": { 00:30:40.322 "rw_ios_per_sec": 0, 00:30:40.322 "rw_mbytes_per_sec": 0, 00:30:40.322 "r_mbytes_per_sec": 0, 00:30:40.322 "w_mbytes_per_sec": 0 00:30:40.322 }, 00:30:40.322 "claimed": false, 00:30:40.322 "zoned": false, 00:30:40.322 "supported_io_types": { 00:30:40.322 "read": true, 00:30:40.322 "write": true, 00:30:40.322 "unmap": false, 00:30:40.322 "flush": true, 00:30:40.322 "reset": true, 00:30:40.322 "nvme_admin": true, 00:30:40.322 "nvme_io": true, 00:30:40.322 "nvme_io_md": false, 00:30:40.322 "write_zeroes": true, 00:30:40.322 "zcopy": false, 00:30:40.322 "get_zone_info": false, 00:30:40.322 "zone_management": false, 00:30:40.322 "zone_append": false, 00:30:40.322 "compare": true, 00:30:40.322 "compare_and_write": true, 00:30:40.322 "abort": true, 00:30:40.322 "seek_hole": false, 00:30:40.322 "seek_data": false, 00:30:40.322 "copy": true, 00:30:40.322 "nvme_iov_md": false 00:30:40.322 }, 00:30:40.322 "memory_domains": [ 00:30:40.322 { 00:30:40.322 "dma_device_id": "system", 00:30:40.322 "dma_device_type": 1 00:30:40.322 } 00:30:40.322 ], 00:30:40.322 "driver_specific": { 00:30:40.322 "nvme": [ 00:30:40.322 { 00:30:40.322 "trid": { 00:30:40.322 "trtype": "TCP", 00:30:40.322 "adrfam": "IPv4", 00:30:40.322 "traddr": "10.0.0.2", 00:30:40.322 "trsvcid": "4420", 00:30:40.322 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:30:40.322 }, 00:30:40.322 "ctrlr_data": { 00:30:40.322 "cntlid": 2, 00:30:40.322 "vendor_id": "0x8086", 00:30:40.322 "model_number": "SPDK bdev Controller", 00:30:40.322 "serial_number": "00000000000000000000", 00:30:40.322 "firmware_revision": "25.01", 00:30:40.322 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:40.322 "oacs": { 00:30:40.322 "security": 0, 00:30:40.322 "format": 0, 00:30:40.322 "firmware": 0, 00:30:40.322 "ns_manage": 0 00:30:40.322 }, 00:30:40.322 "multi_ctrlr": true, 00:30:40.322 "ana_reporting": false 00:30:40.322 }, 00:30:40.322 "vs": { 00:30:40.322 "nvme_version": "1.3" 00:30:40.322 }, 00:30:40.322 "ns_data": { 00:30:40.322 "id": 1, 00:30:40.322 "can_share": true 00:30:40.322 } 00:30:40.322 } 00:30:40.322 ], 00:30:40.322 "mp_policy": "active_passive" 00:30:40.322 } 00:30:40.322 } 00:30:40.322 ] 00:30:40.322 01:08:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:40.322 01:08:12 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:40.322 01:08:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:40.322 01:08:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:40.322 01:08:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
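For reference, the target-side calls that async_init.sh drives through rpc_cmd (the autotest wrapper around scripts/rpc.py) up to this point are, in condensed form, the sequence below; the addresses, bdev name and NGUID are the ones visible in this run, and the rpc.py invocation form is assumed rather than taken from the log:
  rpc.py nvmf_create_transport -t tcp -o
  rpc.py bdev_null_create null0 1024 512                      # null bdev: size in MB, 512 B blocks
  rpc.py bdev_wait_for_examine
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a  # -a: allow any host
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g f3f604e991f74905bd10dd2bfffc66da
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0
  rpc.py bdev_nvme_reset_controller nvme0
  rpc.py bdev_nvme_detach_controller nvme0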
00:30:40.322 01:08:12 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:30:40.322 01:08:12 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.gX8pw62SD1 00:30:40.322 01:08:12 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:30:40.322 01:08:12 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.gX8pw62SD1 00:30:40.322 01:08:12 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd keyring_file_add_key key0 /tmp/tmp.gX8pw62SD1 00:30:40.322 01:08:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:40.322 01:08:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:40.322 01:08:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:40.322 01:08:12 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:30:40.322 01:08:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:40.322 01:08:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:40.322 01:08:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:40.322 01:08:12 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@58 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:30:40.322 01:08:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:40.322 01:08:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:40.322 [2024-12-06 01:08:12.997126] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:30:40.322 [2024-12-06 01:08:12.997378] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:30:40.322 01:08:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:40.322 01:08:13 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@60 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0 00:30:40.322 01:08:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:40.322 01:08:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:40.322 01:08:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:40.322 01:08:13 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@66 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0 00:30:40.322 01:08:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:40.322 01:08:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:40.322 [2024-12-06 01:08:13.013145] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:30:40.581 nvme0n1 00:30:40.581 01:08:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:40.581 01:08:13 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@70 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 
00:30:40.581 01:08:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:40.581 01:08:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:40.581 [ 00:30:40.581 { 00:30:40.581 "name": "nvme0n1", 00:30:40.581 "aliases": [ 00:30:40.581 "f3f604e9-91f7-4905-bd10-dd2bfffc66da" 00:30:40.581 ], 00:30:40.581 "product_name": "NVMe disk", 00:30:40.581 "block_size": 512, 00:30:40.581 "num_blocks": 2097152, 00:30:40.581 "uuid": "f3f604e9-91f7-4905-bd10-dd2bfffc66da", 00:30:40.581 "numa_id": 0, 00:30:40.581 "assigned_rate_limits": { 00:30:40.581 "rw_ios_per_sec": 0, 00:30:40.581 "rw_mbytes_per_sec": 0, 00:30:40.581 "r_mbytes_per_sec": 0, 00:30:40.581 "w_mbytes_per_sec": 0 00:30:40.581 }, 00:30:40.581 "claimed": false, 00:30:40.581 "zoned": false, 00:30:40.581 "supported_io_types": { 00:30:40.581 "read": true, 00:30:40.581 "write": true, 00:30:40.581 "unmap": false, 00:30:40.581 "flush": true, 00:30:40.581 "reset": true, 00:30:40.581 "nvme_admin": true, 00:30:40.581 "nvme_io": true, 00:30:40.581 "nvme_io_md": false, 00:30:40.581 "write_zeroes": true, 00:30:40.581 "zcopy": false, 00:30:40.581 "get_zone_info": false, 00:30:40.581 "zone_management": false, 00:30:40.581 "zone_append": false, 00:30:40.581 "compare": true, 00:30:40.581 "compare_and_write": true, 00:30:40.581 "abort": true, 00:30:40.581 "seek_hole": false, 00:30:40.581 "seek_data": false, 00:30:40.581 "copy": true, 00:30:40.581 "nvme_iov_md": false 00:30:40.581 }, 00:30:40.581 "memory_domains": [ 00:30:40.581 { 00:30:40.581 "dma_device_id": "system", 00:30:40.581 "dma_device_type": 1 00:30:40.581 } 00:30:40.581 ], 00:30:40.581 "driver_specific": { 00:30:40.581 "nvme": [ 00:30:40.581 { 00:30:40.581 "trid": { 00:30:40.581 "trtype": "TCP", 00:30:40.581 "adrfam": "IPv4", 00:30:40.581 "traddr": "10.0.0.2", 00:30:40.581 "trsvcid": "4421", 00:30:40.581 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:30:40.581 }, 00:30:40.581 "ctrlr_data": { 00:30:40.581 "cntlid": 3, 00:30:40.581 "vendor_id": "0x8086", 00:30:40.581 "model_number": "SPDK bdev Controller", 00:30:40.581 "serial_number": "00000000000000000000", 00:30:40.581 "firmware_revision": "25.01", 00:30:40.581 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:40.581 "oacs": { 00:30:40.581 "security": 0, 00:30:40.581 "format": 0, 00:30:40.581 "firmware": 0, 00:30:40.581 "ns_manage": 0 00:30:40.581 }, 00:30:40.581 "multi_ctrlr": true, 00:30:40.581 "ana_reporting": false 00:30:40.581 }, 00:30:40.581 "vs": { 00:30:40.582 "nvme_version": "1.3" 00:30:40.582 }, 00:30:40.582 "ns_data": { 00:30:40.582 "id": 1, 00:30:40.582 "can_share": true 00:30:40.582 } 00:30:40.582 } 00:30:40.582 ], 00:30:40.582 "mp_policy": "active_passive" 00:30:40.582 } 00:30:40.582 } 00:30:40.582 ] 00:30:40.582 01:08:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:40.582 01:08:13 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@73 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:40.582 01:08:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:40.582 01:08:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:40.582 01:08:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:40.582 01:08:13 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@76 -- # rm -f /tmp/tmp.gX8pw62SD1 00:30:40.582 01:08:13 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # trap - SIGINT SIGTERM EXIT 
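The last part of the test repeats the attach over a PSK-secured listener: a TLS PSK in the NVMe interchange format is written to a key file and registered with the keyring, the subsystem is switched from allow-any-host to an explicit host entry carrying that PSK, and a second listener on port 4421 is created with --secure-channel. A condensed sketch of those RPCs, assuming the same rpc.py client (the key file path is the mktemp result from this run, and the PSK shown is the test's sample key, not a usable secret):
  echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: > /tmp/tmp.gX8pw62SD1
  chmod 0600 /tmp/tmp.gX8pw62SD1
  rpc.py keyring_file_add_key key0 /tmp/tmp.gX8pw62SD1
  rpc.py nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel
  rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0
  rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0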
00:30:40.582 01:08:13 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@79 -- # nvmftestfini 00:30:40.582 01:08:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:40.582 01:08:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # sync 00:30:40.582 01:08:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:40.582 01:08:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set +e 00:30:40.582 01:08:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:40.582 01:08:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:40.582 rmmod nvme_tcp 00:30:40.582 rmmod nvme_fabrics 00:30:40.582 rmmod nvme_keyring 00:30:40.582 01:08:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:40.582 01:08:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@128 -- # set -e 00:30:40.582 01:08:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@129 -- # return 0 00:30:40.582 01:08:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@517 -- # '[' -n 3070246 ']' 00:30:40.582 01:08:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@518 -- # killprocess 3070246 00:30:40.582 01:08:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # '[' -z 3070246 ']' 00:30:40.582 01:08:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@958 -- # kill -0 3070246 00:30:40.582 01:08:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # uname 00:30:40.582 01:08:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:40.582 01:08:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3070246 00:30:40.582 01:08:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:40.582 01:08:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:40.582 01:08:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3070246' 00:30:40.582 killing process with pid 3070246 00:30:40.582 01:08:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@973 -- # kill 3070246 00:30:40.582 01:08:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@978 -- # wait 3070246 00:30:41.955 01:08:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:41.955 01:08:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:41.955 01:08:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:41.955 01:08:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # iptr 00:30:41.955 01:08:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-save 00:30:41.955 01:08:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:41.955 01:08:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-restore 00:30:41.955 01:08:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:41.955 01:08:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:41.955 01:08:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 
00:30:41.955 01:08:14 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:41.955 01:08:14 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:43.863 01:08:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:43.863 00:30:43.863 real 0m7.341s 00:30:43.863 user 0m4.147s 00:30:43.863 sys 0m1.974s 00:30:43.863 01:08:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:43.863 01:08:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:43.863 ************************************ 00:30:43.863 END TEST nvmf_async_init 00:30:43.863 ************************************ 00:30:43.863 01:08:16 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:30:43.863 01:08:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:30:43.863 01:08:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:43.863 01:08:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:30:43.863 ************************************ 00:30:43.863 START TEST dma 00:30:43.863 ************************************ 00:30:43.863 01:08:16 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:30:43.863 * Looking for test storage... 00:30:43.863 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:43.863 01:08:16 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:30:43.863 01:08:16 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1711 -- # lcov --version 00:30:43.863 01:08:16 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:30:44.122 01:08:16 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:30:44.122 01:08:16 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:44.122 01:08:16 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:44.122 01:08:16 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:44.122 01:08:16 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # IFS=.-: 00:30:44.122 01:08:16 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # read -ra ver1 00:30:44.122 01:08:16 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # IFS=.-: 00:30:44.122 01:08:16 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # read -ra ver2 00:30:44.122 01:08:16 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@338 -- # local 'op=<' 00:30:44.122 01:08:16 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@340 -- # ver1_l=2 00:30:44.122 01:08:16 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@341 -- # ver2_l=1 00:30:44.122 01:08:16 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:44.122 01:08:16 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@344 -- # case "$op" in 00:30:44.122 01:08:16 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@345 -- # : 1 00:30:44.122 01:08:16 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:44.122 01:08:16 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:44.122 01:08:16 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # decimal 1 00:30:44.122 01:08:16 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=1 00:30:44.122 01:08:16 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:44.122 01:08:16 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 1 00:30:44.122 01:08:16 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # ver1[v]=1 00:30:44.122 01:08:16 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # decimal 2 00:30:44.122 01:08:16 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=2 00:30:44.122 01:08:16 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:44.122 01:08:16 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 2 00:30:44.122 01:08:16 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # ver2[v]=2 00:30:44.122 01:08:16 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:44.122 01:08:16 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:44.122 01:08:16 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # return 0 00:30:44.122 01:08:16 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:44.122 01:08:16 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:30:44.122 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:44.122 --rc genhtml_branch_coverage=1 00:30:44.122 --rc genhtml_function_coverage=1 00:30:44.122 --rc genhtml_legend=1 00:30:44.122 --rc geninfo_all_blocks=1 00:30:44.122 --rc geninfo_unexecuted_blocks=1 00:30:44.122 00:30:44.122 ' 00:30:44.122 01:08:16 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:30:44.122 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:44.122 --rc genhtml_branch_coverage=1 00:30:44.122 --rc genhtml_function_coverage=1 00:30:44.122 --rc genhtml_legend=1 00:30:44.122 --rc geninfo_all_blocks=1 00:30:44.122 --rc geninfo_unexecuted_blocks=1 00:30:44.122 00:30:44.122 ' 00:30:44.122 01:08:16 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:30:44.122 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:44.122 --rc genhtml_branch_coverage=1 00:30:44.122 --rc genhtml_function_coverage=1 00:30:44.122 --rc genhtml_legend=1 00:30:44.122 --rc geninfo_all_blocks=1 00:30:44.122 --rc geninfo_unexecuted_blocks=1 00:30:44.122 00:30:44.122 ' 00:30:44.122 01:08:16 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:30:44.122 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:44.122 --rc genhtml_branch_coverage=1 00:30:44.122 --rc genhtml_function_coverage=1 00:30:44.122 --rc genhtml_legend=1 00:30:44.122 --rc geninfo_all_blocks=1 00:30:44.122 --rc geninfo_unexecuted_blocks=1 00:30:44.122 00:30:44.122 ' 00:30:44.122 01:08:16 nvmf_tcp.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:44.122 01:08:16 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:30:44.122 01:08:16 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:44.122 01:08:16 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:44.122 01:08:16 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:44.122 01:08:16 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:44.122 
01:08:16 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:44.122 01:08:16 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:44.122 01:08:16 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:44.122 01:08:16 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:44.122 01:08:16 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:44.122 01:08:16 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:44.122 01:08:16 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:30:44.122 01:08:16 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:30:44.122 01:08:16 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:44.122 01:08:16 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:44.122 01:08:16 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:44.122 01:08:16 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:44.123 01:08:16 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:44.123 01:08:16 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@15 -- # shopt -s extglob 00:30:44.123 01:08:16 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:44.123 01:08:16 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:44.123 01:08:16 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:44.123 01:08:16 nvmf_tcp.nvmf_host.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:44.123 01:08:16 nvmf_tcp.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:44.123 01:08:16 nvmf_tcp.nvmf_host.dma -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:44.123 01:08:16 nvmf_tcp.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:30:44.123 01:08:16 nvmf_tcp.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:44.123 01:08:16 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@51 -- # : 0 00:30:44.123 01:08:16 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:44.123 01:08:16 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:44.123 01:08:16 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:44.123 01:08:16 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:44.123 01:08:16 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:44.123 01:08:16 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:30:44.123 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:30:44.123 01:08:16 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:44.123 01:08:16 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:44.123 01:08:16 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:44.123 01:08:16 nvmf_tcp.nvmf_host.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:30:44.123 01:08:16 nvmf_tcp.nvmf_host.dma -- host/dma.sh@13 -- # exit 0 00:30:44.123 00:30:44.123 real 0m0.154s 00:30:44.123 user 0m0.098s 00:30:44.123 sys 0m0.064s 00:30:44.123 01:08:16 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:44.123 01:08:16 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:30:44.123 ************************************ 00:30:44.123 END TEST dma 00:30:44.123 ************************************ 00:30:44.123 01:08:16 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:30:44.123 01:08:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:30:44.123 01:08:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:44.123 01:08:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:30:44.123 ************************************ 00:30:44.123 START TEST nvmf_identify 00:30:44.123 
************************************ 00:30:44.123 01:08:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:30:44.123 * Looking for test storage... 00:30:44.123 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:44.123 01:08:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:30:44.123 01:08:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # lcov --version 00:30:44.123 01:08:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:30:44.123 01:08:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:30:44.123 01:08:16 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:44.123 01:08:16 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:44.123 01:08:16 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:44.123 01:08:16 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:30:44.123 01:08:16 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:30:44.123 01:08:16 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:30:44.123 01:08:16 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:30:44.123 01:08:16 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:30:44.123 01:08:16 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:30:44.123 01:08:16 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:30:44.123 01:08:16 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:44.123 01:08:16 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:30:44.123 01:08:16 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:30:44.123 01:08:16 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:44.123 01:08:16 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:44.123 01:08:16 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:30:44.123 01:08:16 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:30:44.123 01:08:16 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:44.123 01:08:16 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:30:44.123 01:08:16 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:30:44.123 01:08:16 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:30:44.123 01:08:16 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:30:44.123 01:08:16 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:44.123 01:08:16 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:30:44.123 01:08:16 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:30:44.123 01:08:16 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:44.123 01:08:16 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:44.123 01:08:16 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:30:44.123 01:08:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:44.123 01:08:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:30:44.123 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:44.123 --rc genhtml_branch_coverage=1 00:30:44.123 --rc genhtml_function_coverage=1 00:30:44.123 --rc genhtml_legend=1 00:30:44.123 --rc geninfo_all_blocks=1 00:30:44.123 --rc geninfo_unexecuted_blocks=1 00:30:44.123 00:30:44.123 ' 00:30:44.123 01:08:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:30:44.123 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:44.123 --rc genhtml_branch_coverage=1 00:30:44.123 --rc genhtml_function_coverage=1 00:30:44.123 --rc genhtml_legend=1 00:30:44.123 --rc geninfo_all_blocks=1 00:30:44.123 --rc geninfo_unexecuted_blocks=1 00:30:44.123 00:30:44.123 ' 00:30:44.123 01:08:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:30:44.123 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:44.123 --rc genhtml_branch_coverage=1 00:30:44.123 --rc genhtml_function_coverage=1 00:30:44.123 --rc genhtml_legend=1 00:30:44.123 --rc geninfo_all_blocks=1 00:30:44.123 --rc geninfo_unexecuted_blocks=1 00:30:44.123 00:30:44.123 ' 00:30:44.123 01:08:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:30:44.123 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:44.123 --rc genhtml_branch_coverage=1 00:30:44.123 --rc genhtml_function_coverage=1 00:30:44.123 --rc genhtml_legend=1 00:30:44.123 --rc geninfo_all_blocks=1 00:30:44.123 --rc geninfo_unexecuted_blocks=1 00:30:44.123 00:30:44.123 ' 00:30:44.123 01:08:16 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:44.123 01:08:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:30:44.123 01:08:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:44.123 01:08:16 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:44.123 01:08:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:44.123 01:08:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:44.123 01:08:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:44.123 01:08:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:44.123 01:08:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:44.123 01:08:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:44.123 01:08:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:44.123 01:08:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:44.124 01:08:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:30:44.124 01:08:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:30:44.124 01:08:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:44.124 01:08:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:44.124 01:08:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:44.124 01:08:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:44.124 01:08:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:44.124 01:08:16 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:30:44.124 01:08:16 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:44.124 01:08:16 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:44.124 01:08:16 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:44.124 01:08:16 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:44.124 01:08:16 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:44.124 01:08:16 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:44.124 01:08:16 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:30:44.124 01:08:16 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:44.124 01:08:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:30:44.124 01:08:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:44.124 01:08:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:44.124 01:08:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:44.124 01:08:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:44.124 01:08:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:44.124 01:08:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:30:44.124 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:30:44.124 01:08:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:44.124 01:08:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:44.124 01:08:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:44.124 01:08:16 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:44.124 01:08:16 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:30:44.124 01:08:16 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:30:44.124 01:08:16 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:44.124 01:08:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:44.124 01:08:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:44.124 01:08:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:44.124 01:08:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:44.124 01:08:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:44.124 01:08:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:44.124 01:08:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:44.124 01:08:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:44.124 01:08:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:44.124 01:08:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@309 -- # xtrace_disable 00:30:44.124 01:08:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:46.026 01:08:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:46.026 01:08:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # pci_devs=() 00:30:46.026 01:08:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:46.026 01:08:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:46.026 01:08:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:46.026 01:08:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:46.026 01:08:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:46.026 01:08:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # net_devs=() 00:30:46.026 01:08:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:46.026 01:08:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # e810=() 00:30:46.026 01:08:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # local -ga e810 00:30:46.026 01:08:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # x722=() 00:30:46.026 01:08:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # local -ga x722 00:30:46.026 01:08:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # mlx=() 00:30:46.026 01:08:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # local -ga mlx 00:30:46.026 01:08:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:46.026 01:08:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:46.026 01:08:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:46.026 01:08:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:46.026 01:08:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:46.026 01:08:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:46.026 01:08:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:46.026 01:08:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:46.026 01:08:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:46.026 01:08:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:46.026 01:08:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:46.026 01:08:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:46.026 01:08:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:46.026 01:08:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:46.026 01:08:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:46.026 01:08:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:46.026 01:08:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:46.026 01:08:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:46.026 01:08:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:46.026 01:08:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:30:46.026 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:30:46.026 01:08:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:46.026 01:08:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:46.026 01:08:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:46.026 01:08:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:46.026 01:08:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:46.026 01:08:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:46.026 01:08:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:30:46.026 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:30:46.026 01:08:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:46.026 01:08:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:46.026 01:08:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:46.026 01:08:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:46.026 01:08:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:46.026 01:08:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:46.026 01:08:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:46.026 01:08:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:46.026 01:08:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:46.026 01:08:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:46.026 01:08:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
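[editor's note] For readability, a minimal sketch of the NIC-discovery pass that the nvmf/common.sh trace above is executing: the script walks the Intel E810 PCI functions (vendor 0x8086, device 0x159b, driver ice) and collects the kernel net devices found under each one in sysfs. The sysfs path, the device IDs, and the "Found net devices under ..." messages come from the trace; the function name, the use of lspci instead of the script's pci_bus_cache, and the simplified checks are assumptions made for illustration only.

  # Sketch only: approximates the device-discovery loop traced above.
  # Assumes Intel E810 ports at 0x8086:0x159b; the real script uses a
  # prebuilt pci_bus_cache rather than lspci.
  discover_e810_net_devs() {
      local pci net_devs=()
      for pci in $(lspci -Dmm -d 8086:159b | awk '{print $1}'); do
          echo "Found $pci (0x8086 - 0x159b)"
          # same sysfs location as the trace: one entry per net interface
          local pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
          pci_net_devs=("${pci_net_devs[@]##*/}")   # keep only interface names (e.g. cvl_0_0)
          echo "Found net devices under $pci: ${pci_net_devs[*]}"
          net_devs+=("${pci_net_devs[@]}")
      done
      printf '%s\n' "${net_devs[@]}"
  }

On this host the loop yields the two cvl_0_0 / cvl_0_1 interfaces that the rest of the test uses as target and initiator ports.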
00:30:46.026 01:08:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:46.026 01:08:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:46.026 01:08:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:46.026 01:08:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:46.026 01:08:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:30:46.026 Found net devices under 0000:0a:00.0: cvl_0_0 00:30:46.026 01:08:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:46.026 01:08:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:46.026 01:08:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:46.026 01:08:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:46.026 01:08:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:46.026 01:08:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:46.027 01:08:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:46.027 01:08:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:46.027 01:08:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:30:46.027 Found net devices under 0000:0a:00.1: cvl_0_1 00:30:46.027 01:08:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:46.027 01:08:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:46.027 01:08:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # is_hw=yes 00:30:46.027 01:08:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:46.027 01:08:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:46.027 01:08:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:46.027 01:08:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:46.027 01:08:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:46.027 01:08:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:46.027 01:08:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:46.027 01:08:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:46.027 01:08:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:46.027 01:08:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:46.027 01:08:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:46.027 01:08:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:46.027 01:08:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:46.027 01:08:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:30:46.027 01:08:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:46.027 01:08:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:46.284 01:08:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:46.285 01:08:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:46.285 01:08:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:46.285 01:08:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:46.285 01:08:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:46.285 01:08:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:46.285 01:08:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:46.285 01:08:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:46.285 01:08:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:46.285 01:08:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:46.285 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:46.285 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.293 ms 00:30:46.285 00:30:46.285 --- 10.0.0.2 ping statistics --- 00:30:46.285 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:46.285 rtt min/avg/max/mdev = 0.293/0.293/0.293/0.000 ms 00:30:46.285 01:08:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:46.285 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:46.285 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.085 ms 00:30:46.285 00:30:46.285 --- 10.0.0.1 ping statistics --- 00:30:46.285 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:46.285 rtt min/avg/max/mdev = 0.085/0.085/0.085/0.000 ms 00:30:46.285 01:08:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:46.285 01:08:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@450 -- # return 0 00:30:46.285 01:08:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:46.285 01:08:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:46.285 01:08:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:46.285 01:08:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:46.285 01:08:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:46.285 01:08:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:46.285 01:08:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:46.285 01:08:18 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:30:46.285 01:08:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:46.285 01:08:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:46.285 01:08:18 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=3072642 00:30:46.285 01:08:18 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:30:46.285 01:08:18 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:46.285 01:08:18 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 3072642 00:30:46.285 01:08:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # '[' -z 3072642 ']' 00:30:46.285 01:08:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:46.285 01:08:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:46.285 01:08:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:46.285 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:46.285 01:08:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:46.285 01:08:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:46.285 [2024-12-06 01:08:18.986316] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 24.03.0 initialization... 
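[editor's note] A condensed sketch of the nvmf_tcp_init sequence traced above: the first E810 port (cvl_0_0) is moved into a dedicated network namespace and addressed as the target (10.0.0.2), the second port (cvl_0_1) stays in the default namespace as the initiator (10.0.0.1), connectivity is verified in both directions, and the nvmf_tgt application is then started inside the namespace. Interface names, addresses, the iptables rule, and the nvmf_tgt arguments are taken from the trace; the relative binary path, the use of set -e, and the omitted rule comment are simplifications, not the script's exact code.

  # Sketch of the TCP phy test topology set up above (requires root and the
  # two E810 ports detected earlier; paths abbreviated relative to the SPDK tree).
  set -e
  NS=cvl_0_0_ns_spdk
  ip -4 addr flush cvl_0_0; ip -4 addr flush cvl_0_1
  ip netns add "$NS"
  ip link set cvl_0_0 netns "$NS"                          # target-side port
  ip addr add 10.0.0.1/24 dev cvl_0_1                      # initiator IP
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0  # target IP
  ip link set cvl_0_1 up
  ip netns exec "$NS" ip link set cvl_0_0 up
  ip netns exec "$NS" ip link set lo up
  # the real script also tags this rule with an 'SPDK_NVMF:...' comment
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                       # initiator -> target
  ip netns exec "$NS" ping -c 1 10.0.0.1                   # target -> initiator
  modprobe nvme-tcp
  ip netns exec "$NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &

Once both pings succeed and nvmf_tgt is listening on its RPC socket, the trace continues with the identify test proper: the RPC calls that create the TCP transport, the Malloc0 bdev, subsystem nqn.2016-06.io.spdk:cnode1 and its listener on 10.0.0.2:4420, followed by spdk_nvme_identify against the discovery subsystem.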
00:30:46.285 [2024-12-06 01:08:18.986487] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:46.543 [2024-12-06 01:08:19.154076] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:46.801 [2024-12-06 01:08:19.298840] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:46.801 [2024-12-06 01:08:19.298932] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:46.801 [2024-12-06 01:08:19.298961] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:46.801 [2024-12-06 01:08:19.298985] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:46.801 [2024-12-06 01:08:19.299006] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:46.801 [2024-12-06 01:08:19.301863] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:46.801 [2024-12-06 01:08:19.301916] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:46.801 [2024-12-06 01:08:19.302008] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:46.801 [2024-12-06 01:08:19.302014] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:30:47.365 01:08:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:47.366 01:08:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@868 -- # return 0 00:30:47.366 01:08:19 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:47.366 01:08:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:47.366 01:08:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:47.366 [2024-12-06 01:08:19.982843] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:47.366 01:08:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:47.366 01:08:19 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:30:47.366 01:08:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:47.366 01:08:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:47.366 01:08:20 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:30:47.366 01:08:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:47.366 01:08:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:47.623 Malloc0 00:30:47.623 01:08:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:47.623 01:08:20 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:47.623 01:08:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:47.623 01:08:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:47.623 01:08:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:47.623 01:08:20 nvmf_tcp.nvmf_host.nvmf_identify -- 
host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:30:47.623 01:08:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:47.623 01:08:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:47.623 01:08:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:47.623 01:08:20 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:47.623 01:08:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:47.623 01:08:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:47.623 [2024-12-06 01:08:20.126539] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:47.623 01:08:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:47.623 01:08:20 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:47.623 01:08:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:47.623 01:08:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:47.623 01:08:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:47.623 01:08:20 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:30:47.623 01:08:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:47.623 01:08:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:47.623 [ 00:30:47.623 { 00:30:47.623 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:30:47.623 "subtype": "Discovery", 00:30:47.623 "listen_addresses": [ 00:30:47.623 { 00:30:47.623 "trtype": "TCP", 00:30:47.623 "adrfam": "IPv4", 00:30:47.623 "traddr": "10.0.0.2", 00:30:47.623 "trsvcid": "4420" 00:30:47.623 } 00:30:47.623 ], 00:30:47.623 "allow_any_host": true, 00:30:47.623 "hosts": [] 00:30:47.623 }, 00:30:47.623 { 00:30:47.623 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:30:47.623 "subtype": "NVMe", 00:30:47.623 "listen_addresses": [ 00:30:47.623 { 00:30:47.623 "trtype": "TCP", 00:30:47.623 "adrfam": "IPv4", 00:30:47.623 "traddr": "10.0.0.2", 00:30:47.623 "trsvcid": "4420" 00:30:47.623 } 00:30:47.623 ], 00:30:47.623 "allow_any_host": true, 00:30:47.623 "hosts": [], 00:30:47.623 "serial_number": "SPDK00000000000001", 00:30:47.623 "model_number": "SPDK bdev Controller", 00:30:47.623 "max_namespaces": 32, 00:30:47.623 "min_cntlid": 1, 00:30:47.623 "max_cntlid": 65519, 00:30:47.623 "namespaces": [ 00:30:47.623 { 00:30:47.623 "nsid": 1, 00:30:47.623 "bdev_name": "Malloc0", 00:30:47.623 "name": "Malloc0", 00:30:47.623 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:30:47.623 "eui64": "ABCDEF0123456789", 00:30:47.623 "uuid": "4a5d2d16-d571-4b0b-93f0-aa27c2445432" 00:30:47.623 } 00:30:47.623 ] 00:30:47.623 } 00:30:47.623 ] 00:30:47.623 01:08:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:47.623 01:08:20 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 
subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:30:47.623 [2024-12-06 01:08:20.193406] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 24.03.0 initialization... 00:30:47.623 [2024-12-06 01:08:20.193512] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3072799 ] 00:30:47.623 [2024-12-06 01:08:20.270991] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to connect adminq (no timeout) 00:30:47.623 [2024-12-06 01:08:20.271132] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:30:47.623 [2024-12-06 01:08:20.271155] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:30:47.623 [2024-12-06 01:08:20.271184] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:30:47.623 [2024-12-06 01:08:20.271207] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:30:47.623 [2024-12-06 01:08:20.275378] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to wait for connect adminq (no timeout) 00:30:47.623 [2024-12-06 01:08:20.275485] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x615000015700 0 00:30:47.623 [2024-12-06 01:08:20.282841] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:30:47.623 [2024-12-06 01:08:20.282872] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:30:47.623 [2024-12-06 01:08:20.282888] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:30:47.623 [2024-12-06 01:08:20.282898] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:30:47.623 [2024-12-06 01:08:20.282974] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:47.623 [2024-12-06 01:08:20.282994] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:47.624 [2024-12-06 01:08:20.283007] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:30:47.624 [2024-12-06 01:08:20.283053] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:30:47.624 [2024-12-06 01:08:20.283095] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:30:47.624 [2024-12-06 01:08:20.290843] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:47.624 [2024-12-06 01:08:20.290871] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:47.624 [2024-12-06 01:08:20.290889] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:47.624 [2024-12-06 01:08:20.290904] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:30:47.624 [2024-12-06 01:08:20.290941] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:30:47.624 [2024-12-06 01:08:20.290971] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs (no timeout) 00:30:47.624 [2024-12-06 01:08:20.290989] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs wait for vs (no timeout) 00:30:47.624 [2024-12-06 
01:08:20.291017] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:47.624 [2024-12-06 01:08:20.291032] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:47.624 [2024-12-06 01:08:20.291049] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:30:47.624 [2024-12-06 01:08:20.291071] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:47.624 [2024-12-06 01:08:20.291117] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:30:47.624 [2024-12-06 01:08:20.291265] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:47.624 [2024-12-06 01:08:20.291288] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:47.624 [2024-12-06 01:08:20.291302] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:47.624 [2024-12-06 01:08:20.291314] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:30:47.624 [2024-12-06 01:08:20.291335] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap (no timeout) 00:30:47.624 [2024-12-06 01:08:20.291359] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap wait for cap (no timeout) 00:30:47.624 [2024-12-06 01:08:20.291402] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:47.624 [2024-12-06 01:08:20.291417] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:47.624 [2024-12-06 01:08:20.291429] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:30:47.624 [2024-12-06 01:08:20.291453] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:47.624 [2024-12-06 01:08:20.291488] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:30:47.624 [2024-12-06 01:08:20.291633] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:47.624 [2024-12-06 01:08:20.291654] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:47.624 [2024-12-06 01:08:20.291666] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:47.624 [2024-12-06 01:08:20.291677] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:30:47.624 [2024-12-06 01:08:20.291701] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en (no timeout) 00:30:47.624 [2024-12-06 01:08:20.291724] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en wait for cc (timeout 15000 ms) 00:30:47.624 [2024-12-06 01:08:20.291744] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:47.624 [2024-12-06 01:08:20.291757] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:47.624 [2024-12-06 01:08:20.291772] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:30:47.624 [2024-12-06 01:08:20.291796] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:47.624 [2024-12-06 01:08:20.291837] nvme_tcp.c: 
883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:30:47.624 [2024-12-06 01:08:20.291978] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:47.624 [2024-12-06 01:08:20.291999] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:47.624 [2024-12-06 01:08:20.292011] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:47.624 [2024-12-06 01:08:20.292022] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:30:47.624 [2024-12-06 01:08:20.292041] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:30:47.624 [2024-12-06 01:08:20.292068] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:47.624 [2024-12-06 01:08:20.292084] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:47.624 [2024-12-06 01:08:20.292095] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:30:47.624 [2024-12-06 01:08:20.292114] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:47.624 [2024-12-06 01:08:20.292145] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:30:47.624 [2024-12-06 01:08:20.292280] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:47.624 [2024-12-06 01:08:20.292302] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:47.624 [2024-12-06 01:08:20.292314] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:47.624 [2024-12-06 01:08:20.292324] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:30:47.624 [2024-12-06 01:08:20.292339] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 0 && CSTS.RDY = 0 00:30:47.624 [2024-12-06 01:08:20.292353] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to controller is disabled (timeout 15000 ms) 00:30:47.624 [2024-12-06 01:08:20.292385] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:30:47.624 [2024-12-06 01:08:20.292503] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Setting CC.EN = 1 00:30:47.624 [2024-12-06 01:08:20.292517] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:30:47.624 [2024-12-06 01:08:20.292538] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:47.624 [2024-12-06 01:08:20.292551] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:47.624 [2024-12-06 01:08:20.292577] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:30:47.624 [2024-12-06 01:08:20.292596] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:47.624 [2024-12-06 01:08:20.292627] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:30:47.624 [2024-12-06 01:08:20.292782] 
nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:47.624 [2024-12-06 01:08:20.292803] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:47.624 [2024-12-06 01:08:20.292821] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:47.624 [2024-12-06 01:08:20.292834] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:30:47.624 [2024-12-06 01:08:20.292849] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:30:47.624 [2024-12-06 01:08:20.292880] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:47.624 [2024-12-06 01:08:20.292901] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:47.624 [2024-12-06 01:08:20.292914] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:30:47.624 [2024-12-06 01:08:20.292933] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:47.624 [2024-12-06 01:08:20.292965] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:30:47.624 [2024-12-06 01:08:20.293130] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:47.624 [2024-12-06 01:08:20.293152] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:47.624 [2024-12-06 01:08:20.293167] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:47.624 [2024-12-06 01:08:20.293179] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:30:47.624 [2024-12-06 01:08:20.293193] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:30:47.624 [2024-12-06 01:08:20.293207] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to reset admin queue (timeout 30000 ms) 00:30:47.624 [2024-12-06 01:08:20.293228] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to identify controller (no timeout) 00:30:47.624 [2024-12-06 01:08:20.293256] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for identify controller (timeout 30000 ms) 00:30:47.624 [2024-12-06 01:08:20.293284] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:47.624 [2024-12-06 01:08:20.293298] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:30:47.624 [2024-12-06 01:08:20.293325] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:47.624 [2024-12-06 01:08:20.293372] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:30:47.624 [2024-12-06 01:08:20.293618] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:47.624 [2024-12-06 01:08:20.293640] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:47.624 [2024-12-06 01:08:20.293652] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:47.624 [2024-12-06 01:08:20.293664] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info 
on tqpair(0x615000015700): datao=0, datal=4096, cccid=0 00:30:47.624 [2024-12-06 01:08:20.293677] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b100) on tqpair(0x615000015700): expected_datao=0, payload_size=4096 00:30:47.624 [2024-12-06 01:08:20.293689] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:47.624 [2024-12-06 01:08:20.293723] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:47.624 [2024-12-06 01:08:20.293740] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:47.883 [2024-12-06 01:08:20.335841] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:47.883 [2024-12-06 01:08:20.335871] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:47.883 [2024-12-06 01:08:20.335892] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:47.883 [2024-12-06 01:08:20.335906] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:30:47.883 [2024-12-06 01:08:20.335932] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_xfer_size 4294967295 00:30:47.883 [2024-12-06 01:08:20.335949] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] MDTS max_xfer_size 131072 00:30:47.883 [2024-12-06 01:08:20.335962] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CNTLID 0x0001 00:30:47.883 [2024-12-06 01:08:20.335990] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_sges 16 00:30:47.883 [2024-12-06 01:08:20.336013] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] fuses compare and write: 1 00:30:47.883 [2024-12-06 01:08:20.336032] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to configure AER (timeout 30000 ms) 00:30:47.883 [2024-12-06 01:08:20.336059] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for configure aer (timeout 30000 ms) 00:30:47.883 [2024-12-06 01:08:20.336085] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:47.883 [2024-12-06 01:08:20.336101] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:47.883 [2024-12-06 01:08:20.336117] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:30:47.883 [2024-12-06 01:08:20.336145] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:30:47.883 [2024-12-06 01:08:20.336197] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:30:47.883 [2024-12-06 01:08:20.336386] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:47.883 [2024-12-06 01:08:20.336408] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:47.883 [2024-12-06 01:08:20.336420] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:47.883 [2024-12-06 01:08:20.336431] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:30:47.883 [2024-12-06 01:08:20.336450] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:47.883 [2024-12-06 01:08:20.336471] nvme_tcp.c: 
909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:47.883 [2024-12-06 01:08:20.336482] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:30:47.883 [2024-12-06 01:08:20.336501] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:47.883 [2024-12-06 01:08:20.336519] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:47.883 [2024-12-06 01:08:20.336531] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:47.883 [2024-12-06 01:08:20.336541] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x615000015700) 00:30:47.883 [2024-12-06 01:08:20.336556] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:47.883 [2024-12-06 01:08:20.336576] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:47.883 [2024-12-06 01:08:20.336589] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:47.883 [2024-12-06 01:08:20.336599] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x615000015700) 00:30:47.883 [2024-12-06 01:08:20.336615] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:47.883 [2024-12-06 01:08:20.336630] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:47.883 [2024-12-06 01:08:20.336642] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:47.883 [2024-12-06 01:08:20.336651] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:30:47.883 [2024-12-06 01:08:20.336683] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:47.883 [2024-12-06 01:08:20.336697] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:30:47.883 [2024-12-06 01:08:20.336741] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:30:47.883 [2024-12-06 01:08:20.336762] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:47.883 [2024-12-06 01:08:20.336774] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x615000015700) 00:30:47.883 [2024-12-06 01:08:20.336812] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:47.883 [2024-12-06 01:08:20.336872] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:30:47.883 [2024-12-06 01:08:20.336891] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b280, cid 1, qid 0 00:30:47.883 [2024-12-06 01:08:20.336903] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b400, cid 2, qid 0 00:30:47.883 [2024-12-06 01:08:20.336915] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:30:47.883 [2024-12-06 01:08:20.336927] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:30:47.883 [2024-12-06 01:08:20.337086] 
nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:47.883 [2024-12-06 01:08:20.337108] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:47.883 [2024-12-06 01:08:20.337120] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:47.883 [2024-12-06 01:08:20.337131] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x615000015700 00:30:47.883 [2024-12-06 01:08:20.337147] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Sending keep alive every 5000000 us 00:30:47.883 [2024-12-06 01:08:20.337169] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to ready (no timeout) 00:30:47.883 [2024-12-06 01:08:20.337202] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:47.883 [2024-12-06 01:08:20.337219] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x615000015700) 00:30:47.883 [2024-12-06 01:08:20.337243] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:47.883 [2024-12-06 01:08:20.337277] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:30:47.883 [2024-12-06 01:08:20.337453] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:47.884 [2024-12-06 01:08:20.337476] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:47.884 [2024-12-06 01:08:20.337489] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:47.884 [2024-12-06 01:08:20.337500] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000015700): datao=0, datal=4096, cccid=4 00:30:47.884 [2024-12-06 01:08:20.337520] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x615000015700): expected_datao=0, payload_size=4096 00:30:47.884 [2024-12-06 01:08:20.337532] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:47.884 [2024-12-06 01:08:20.337564] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:47.884 [2024-12-06 01:08:20.337582] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:47.884 [2024-12-06 01:08:20.337608] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:47.884 [2024-12-06 01:08:20.337626] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:47.884 [2024-12-06 01:08:20.337638] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:47.884 [2024-12-06 01:08:20.337649] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x615000015700 00:30:47.884 [2024-12-06 01:08:20.337684] nvme_ctrlr.c:4202:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Ctrlr already in ready state 00:30:47.884 [2024-12-06 01:08:20.337748] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:47.884 [2024-12-06 01:08:20.337765] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x615000015700) 00:30:47.884 [2024-12-06 01:08:20.337800] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:47.884 [2024-12-06 01:08:20.337834] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: 
enter 00:30:47.884 [2024-12-06 01:08:20.337864] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:47.884 [2024-12-06 01:08:20.337882] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x615000015700) 00:30:47.884 [2024-12-06 01:08:20.337901] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:30:47.884 [2024-12-06 01:08:20.337939] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:30:47.884 [2024-12-06 01:08:20.337958] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b880, cid 5, qid 0 00:30:47.884 [2024-12-06 01:08:20.338217] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:47.884 [2024-12-06 01:08:20.338244] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:47.884 [2024-12-06 01:08:20.338261] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:47.884 [2024-12-06 01:08:20.338274] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000015700): datao=0, datal=1024, cccid=4 00:30:47.884 [2024-12-06 01:08:20.338286] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x615000015700): expected_datao=0, payload_size=1024 00:30:47.884 [2024-12-06 01:08:20.338298] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:47.884 [2024-12-06 01:08:20.338320] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:47.884 [2024-12-06 01:08:20.338334] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:47.884 [2024-12-06 01:08:20.338349] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:47.884 [2024-12-06 01:08:20.338364] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:47.884 [2024-12-06 01:08:20.338379] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:47.884 [2024-12-06 01:08:20.338392] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b880) on tqpair=0x615000015700 00:30:47.884 [2024-12-06 01:08:20.382856] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:47.884 [2024-12-06 01:08:20.382888] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:47.884 [2024-12-06 01:08:20.382901] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:47.884 [2024-12-06 01:08:20.382912] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x615000015700 00:30:47.884 [2024-12-06 01:08:20.382957] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:47.884 [2024-12-06 01:08:20.382976] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x615000015700) 00:30:47.884 [2024-12-06 01:08:20.382999] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:47.884 [2024-12-06 01:08:20.383043] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:30:47.884 [2024-12-06 01:08:20.383209] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:47.884 [2024-12-06 01:08:20.383231] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:47.884 [2024-12-06 01:08:20.383242] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:47.884 
[2024-12-06 01:08:20.383253] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000015700): datao=0, datal=3072, cccid=4 00:30:47.884 [2024-12-06 01:08:20.383265] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x615000015700): expected_datao=0, payload_size=3072 00:30:47.884 [2024-12-06 01:08:20.383276] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:47.884 [2024-12-06 01:08:20.383308] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:47.884 [2024-12-06 01:08:20.383324] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:47.884 [2024-12-06 01:08:20.423910] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:47.884 [2024-12-06 01:08:20.423938] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:47.884 [2024-12-06 01:08:20.423964] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:47.884 [2024-12-06 01:08:20.423976] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x615000015700 00:30:47.884 [2024-12-06 01:08:20.424005] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:47.884 [2024-12-06 01:08:20.424022] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x615000015700) 00:30:47.884 [2024-12-06 01:08:20.424050] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:47.884 [2024-12-06 01:08:20.424094] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:30:47.884 [2024-12-06 01:08:20.424243] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:47.884 [2024-12-06 01:08:20.424264] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:47.884 [2024-12-06 01:08:20.424280] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:47.884 [2024-12-06 01:08:20.424292] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000015700): datao=0, datal=8, cccid=4 00:30:47.884 [2024-12-06 01:08:20.424304] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x615000015700): expected_datao=0, payload_size=8 00:30:47.884 [2024-12-06 01:08:20.424315] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:47.884 [2024-12-06 01:08:20.424335] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:47.884 [2024-12-06 01:08:20.424349] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:47.884 [2024-12-06 01:08:20.464931] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:47.884 [2024-12-06 01:08:20.464959] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:47.884 [2024-12-06 01:08:20.464971] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:47.884 [2024-12-06 01:08:20.464983] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x615000015700 00:30:47.884 ===================================================== 00:30:47.884 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:30:47.884 ===================================================== 00:30:47.884 Controller Capabilities/Features 00:30:47.884 ================================ 00:30:47.884 Vendor ID: 0000 00:30:47.884 Subsystem Vendor ID: 0000 
00:30:47.884 Serial Number: .................... 00:30:47.884 Model Number: ........................................ 00:30:47.884 Firmware Version: 25.01 00:30:47.884 Recommended Arb Burst: 0 00:30:47.884 IEEE OUI Identifier: 00 00 00 00:30:47.884 Multi-path I/O 00:30:47.884 May have multiple subsystem ports: No 00:30:47.884 May have multiple controllers: No 00:30:47.884 Associated with SR-IOV VF: No 00:30:47.884 Max Data Transfer Size: 131072 00:30:47.884 Max Number of Namespaces: 0 00:30:47.884 Max Number of I/O Queues: 1024 00:30:47.884 NVMe Specification Version (VS): 1.3 00:30:47.884 NVMe Specification Version (Identify): 1.3 00:30:47.884 Maximum Queue Entries: 128 00:30:47.884 Contiguous Queues Required: Yes 00:30:47.884 Arbitration Mechanisms Supported 00:30:47.884 Weighted Round Robin: Not Supported 00:30:47.884 Vendor Specific: Not Supported 00:30:47.884 Reset Timeout: 15000 ms 00:30:47.884 Doorbell Stride: 4 bytes 00:30:47.884 NVM Subsystem Reset: Not Supported 00:30:47.884 Command Sets Supported 00:30:47.884 NVM Command Set: Supported 00:30:47.884 Boot Partition: Not Supported 00:30:47.884 Memory Page Size Minimum: 4096 bytes 00:30:47.884 Memory Page Size Maximum: 4096 bytes 00:30:47.884 Persistent Memory Region: Not Supported 00:30:47.884 Optional Asynchronous Events Supported 00:30:47.884 Namespace Attribute Notices: Not Supported 00:30:47.884 Firmware Activation Notices: Not Supported 00:30:47.884 ANA Change Notices: Not Supported 00:30:47.884 PLE Aggregate Log Change Notices: Not Supported 00:30:47.884 LBA Status Info Alert Notices: Not Supported 00:30:47.884 EGE Aggregate Log Change Notices: Not Supported 00:30:47.884 Normal NVM Subsystem Shutdown event: Not Supported 00:30:47.884 Zone Descriptor Change Notices: Not Supported 00:30:47.884 Discovery Log Change Notices: Supported 00:30:47.884 Controller Attributes 00:30:47.884 128-bit Host Identifier: Not Supported 00:30:47.884 Non-Operational Permissive Mode: Not Supported 00:30:47.884 NVM Sets: Not Supported 00:30:47.884 Read Recovery Levels: Not Supported 00:30:47.884 Endurance Groups: Not Supported 00:30:47.884 Predictable Latency Mode: Not Supported 00:30:47.884 Traffic Based Keep ALive: Not Supported 00:30:47.884 Namespace Granularity: Not Supported 00:30:47.884 SQ Associations: Not Supported 00:30:47.884 UUID List: Not Supported 00:30:47.884 Multi-Domain Subsystem: Not Supported 00:30:47.884 Fixed Capacity Management: Not Supported 00:30:47.884 Variable Capacity Management: Not Supported 00:30:47.884 Delete Endurance Group: Not Supported 00:30:47.884 Delete NVM Set: Not Supported 00:30:47.884 Extended LBA Formats Supported: Not Supported 00:30:47.884 Flexible Data Placement Supported: Not Supported 00:30:47.885 00:30:47.885 Controller Memory Buffer Support 00:30:47.885 ================================ 00:30:47.885 Supported: No 00:30:47.885 00:30:47.885 Persistent Memory Region Support 00:30:47.885 ================================ 00:30:47.885 Supported: No 00:30:47.885 00:30:47.885 Admin Command Set Attributes 00:30:47.885 ============================ 00:30:47.885 Security Send/Receive: Not Supported 00:30:47.885 Format NVM: Not Supported 00:30:47.885 Firmware Activate/Download: Not Supported 00:30:47.885 Namespace Management: Not Supported 00:30:47.885 Device Self-Test: Not Supported 00:30:47.885 Directives: Not Supported 00:30:47.885 NVMe-MI: Not Supported 00:30:47.885 Virtualization Management: Not Supported 00:30:47.885 Doorbell Buffer Config: Not Supported 00:30:47.885 Get LBA Status Capability: Not Supported 
00:30:47.885 Command & Feature Lockdown Capability: Not Supported 00:30:47.885 Abort Command Limit: 1 00:30:47.885 Async Event Request Limit: 4 00:30:47.885 Number of Firmware Slots: N/A 00:30:47.885 Firmware Slot 1 Read-Only: N/A 00:30:47.885 Firmware Activation Without Reset: N/A 00:30:47.885 Multiple Update Detection Support: N/A 00:30:47.885 Firmware Update Granularity: No Information Provided 00:30:47.885 Per-Namespace SMART Log: No 00:30:47.885 Asymmetric Namespace Access Log Page: Not Supported 00:30:47.885 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:30:47.885 Command Effects Log Page: Not Supported 00:30:47.885 Get Log Page Extended Data: Supported 00:30:47.885 Telemetry Log Pages: Not Supported 00:30:47.885 Persistent Event Log Pages: Not Supported 00:30:47.885 Supported Log Pages Log Page: May Support 00:30:47.885 Commands Supported & Effects Log Page: Not Supported 00:30:47.885 Feature Identifiers & Effects Log Page:May Support 00:30:47.885 NVMe-MI Commands & Effects Log Page: May Support 00:30:47.885 Data Area 4 for Telemetry Log: Not Supported 00:30:47.885 Error Log Page Entries Supported: 128 00:30:47.885 Keep Alive: Not Supported 00:30:47.885 00:30:47.885 NVM Command Set Attributes 00:30:47.885 ========================== 00:30:47.885 Submission Queue Entry Size 00:30:47.885 Max: 1 00:30:47.885 Min: 1 00:30:47.885 Completion Queue Entry Size 00:30:47.885 Max: 1 00:30:47.885 Min: 1 00:30:47.885 Number of Namespaces: 0 00:30:47.885 Compare Command: Not Supported 00:30:47.885 Write Uncorrectable Command: Not Supported 00:30:47.885 Dataset Management Command: Not Supported 00:30:47.885 Write Zeroes Command: Not Supported 00:30:47.885 Set Features Save Field: Not Supported 00:30:47.885 Reservations: Not Supported 00:30:47.885 Timestamp: Not Supported 00:30:47.885 Copy: Not Supported 00:30:47.885 Volatile Write Cache: Not Present 00:30:47.885 Atomic Write Unit (Normal): 1 00:30:47.885 Atomic Write Unit (PFail): 1 00:30:47.885 Atomic Compare & Write Unit: 1 00:30:47.885 Fused Compare & Write: Supported 00:30:47.885 Scatter-Gather List 00:30:47.885 SGL Command Set: Supported 00:30:47.885 SGL Keyed: Supported 00:30:47.885 SGL Bit Bucket Descriptor: Not Supported 00:30:47.885 SGL Metadata Pointer: Not Supported 00:30:47.885 Oversized SGL: Not Supported 00:30:47.885 SGL Metadata Address: Not Supported 00:30:47.885 SGL Offset: Supported 00:30:47.885 Transport SGL Data Block: Not Supported 00:30:47.885 Replay Protected Memory Block: Not Supported 00:30:47.885 00:30:47.885 Firmware Slot Information 00:30:47.885 ========================= 00:30:47.885 Active slot: 0 00:30:47.885 00:30:47.885 00:30:47.885 Error Log 00:30:47.885 ========= 00:30:47.885 00:30:47.885 Active Namespaces 00:30:47.885 ================= 00:30:47.885 Discovery Log Page 00:30:47.885 ================== 00:30:47.885 Generation Counter: 2 00:30:47.885 Number of Records: 2 00:30:47.885 Record Format: 0 00:30:47.885 00:30:47.885 Discovery Log Entry 0 00:30:47.885 ---------------------- 00:30:47.885 Transport Type: 3 (TCP) 00:30:47.885 Address Family: 1 (IPv4) 00:30:47.885 Subsystem Type: 3 (Current Discovery Subsystem) 00:30:47.885 Entry Flags: 00:30:47.885 Duplicate Returned Information: 1 00:30:47.885 Explicit Persistent Connection Support for Discovery: 1 00:30:47.885 Transport Requirements: 00:30:47.885 Secure Channel: Not Required 00:30:47.885 Port ID: 0 (0x0000) 00:30:47.885 Controller ID: 65535 (0xffff) 00:30:47.885 Admin Max SQ Size: 128 00:30:47.885 Transport Service Identifier: 4420 00:30:47.885 NVM 
Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:30:47.885 Transport Address: 10.0.0.2 00:30:47.885 Discovery Log Entry 1 00:30:47.885 ---------------------- 00:30:47.885 Transport Type: 3 (TCP) 00:30:47.885 Address Family: 1 (IPv4) 00:30:47.885 Subsystem Type: 2 (NVM Subsystem) 00:30:47.885 Entry Flags: 00:30:47.885 Duplicate Returned Information: 0 00:30:47.885 Explicit Persistent Connection Support for Discovery: 0 00:30:47.885 Transport Requirements: 00:30:47.885 Secure Channel: Not Required 00:30:47.885 Port ID: 0 (0x0000) 00:30:47.885 Controller ID: 65535 (0xffff) 00:30:47.885 Admin Max SQ Size: 128 00:30:47.885 Transport Service Identifier: 4420 00:30:47.885 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:30:47.885 Transport Address: 10.0.0.2 [2024-12-06 01:08:20.465166] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Prepare to destruct SSD 00:30:47.885 [2024-12-06 01:08:20.465202] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:30:47.885 [2024-12-06 01:08:20.465241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:47.885 [2024-12-06 01:08:20.465256] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b280) on tqpair=0x615000015700 00:30:47.885 [2024-12-06 01:08:20.465269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:47.885 [2024-12-06 01:08:20.465281] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b400) on tqpair=0x615000015700 00:30:47.885 [2024-12-06 01:08:20.465294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:47.885 [2024-12-06 01:08:20.465306] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:30:47.885 [2024-12-06 01:08:20.465318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:47.885 [2024-12-06 01:08:20.465344] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:47.885 [2024-12-06 01:08:20.465359] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:47.885 [2024-12-06 01:08:20.465371] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:30:47.885 [2024-12-06 01:08:20.465391] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:47.885 [2024-12-06 01:08:20.465444] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:30:47.885 [2024-12-06 01:08:20.465565] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:47.885 [2024-12-06 01:08:20.465587] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:47.885 [2024-12-06 01:08:20.465599] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:47.885 [2024-12-06 01:08:20.465611] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:30:47.885 [2024-12-06 01:08:20.465632] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:47.885 [2024-12-06 01:08:20.465646] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: 
enter 00:30:47.885 [2024-12-06 01:08:20.465658] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:30:47.885 [2024-12-06 01:08:20.465677] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:47.885 [2024-12-06 01:08:20.465718] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:30:47.885 [2024-12-06 01:08:20.469833] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:47.886 [2024-12-06 01:08:20.469858] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:47.886 [2024-12-06 01:08:20.469871] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:47.886 [2024-12-06 01:08:20.469881] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:30:47.886 [2024-12-06 01:08:20.469896] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] RTD3E = 0 us 00:30:47.886 [2024-12-06 01:08:20.469917] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown timeout = 10000 ms 00:30:47.886 [2024-12-06 01:08:20.469960] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:47.886 [2024-12-06 01:08:20.469976] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:47.886 [2024-12-06 01:08:20.469988] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:30:47.886 [2024-12-06 01:08:20.470008] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:47.886 [2024-12-06 01:08:20.470043] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:30:47.886 [2024-12-06 01:08:20.470157] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:47.886 [2024-12-06 01:08:20.470180] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:47.886 [2024-12-06 01:08:20.470192] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:47.886 [2024-12-06 01:08:20.470202] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:30:47.886 [2024-12-06 01:08:20.470225] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown complete in 0 milliseconds 00:30:47.886 00:30:47.886 01:08:20 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:30:47.886 [2024-12-06 01:08:20.569714] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 24.03.0 initialization... 
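The command above runs build/bin/spdk_nvme_identify against nqn.2016-06.io.spdk:cnode1 over TCP; the DEBUG records that follow trace the host-side admin queue bring-up (FABRIC CONNECT, PROPERTY GET/SET for VS, CAP, CC and CSTS, then IDENTIFY). Below is a minimal sketch of that host-side flow using SPDK's public API; it is illustrative only and not part of the captured run, and the program name, no_pci setting, and bare-bones error handling are assumptions rather than anything taken from the test scripts.

/* Minimal sketch (not part of the captured run) of the host-side flow the
 * trace above corresponds to: parse the transport ID string passed to
 * spdk_nvme_identify, connect (which drives the CONNECT / VS / CAP / CC / CSTS
 * sequence shown in the DEBUG records), and read back the controller data.
 * The program name and the bare-bones error handling are illustrative.
 */
#include <stdio.h>
#include "spdk/env.h"
#include "spdk/nvme.h"

int
main(void)
{
        struct spdk_env_opts env_opts;
        struct spdk_nvme_transport_id trid = {};
        struct spdk_nvme_ctrlr *ctrlr;
        const struct spdk_nvme_ctrlr_data *cdata;

        /* Recent SPDK releases expect opts_size to be set before init. */
        env_opts.opts_size = sizeof(env_opts);
        spdk_env_opts_init(&env_opts);
        env_opts.name = "identify_sketch";   /* illustrative name */
        env_opts.no_pci = true;              /* mirrors --no-pci in the EAL parameters above */
        if (spdk_env_init(&env_opts) != 0) {
                return 1;
        }

        /* Same target string the test passes to spdk_nvme_identify -r. */
        if (spdk_nvme_transport_id_parse(&trid,
                        "trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 "
                        "subnqn:nqn.2016-06.io.spdk:cnode1") != 0) {
                return 1;
        }

        /* Runs the controller initialization state machine traced above. */
        ctrlr = spdk_nvme_connect(&trid, NULL, 0);
        if (ctrlr == NULL) {
                return 1;
        }

        cdata = spdk_nvme_ctrlr_get_data(ctrlr);
        printf("Serial Number: %.20s\n", (const char *)cdata->sn);
        printf("Model Number:  %.40s\n", (const char *)cdata->mn);

        spdk_nvme_detach(ctrlr);
        return 0;
}

Built against SPDK, a program along these lines would report the same serial and model number fields that appear in the controller data dump printed further down in this run.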
00:30:47.886 [2024-12-06 01:08:20.569835] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3072804 ] 00:30:48.146 [2024-12-06 01:08:20.649112] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to connect adminq (no timeout) 00:30:48.146 [2024-12-06 01:08:20.649249] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:30:48.146 [2024-12-06 01:08:20.649272] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:30:48.146 [2024-12-06 01:08:20.649304] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:30:48.146 [2024-12-06 01:08:20.649328] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:30:48.146 [2024-12-06 01:08:20.650184] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to wait for connect adminq (no timeout) 00:30:48.146 [2024-12-06 01:08:20.650279] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x615000015700 0 00:30:48.146 [2024-12-06 01:08:20.656126] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:30:48.146 [2024-12-06 01:08:20.656162] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:30:48.146 [2024-12-06 01:08:20.656179] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:30:48.146 [2024-12-06 01:08:20.656194] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:30:48.146 [2024-12-06 01:08:20.656276] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:48.146 [2024-12-06 01:08:20.656300] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:48.146 [2024-12-06 01:08:20.656318] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:30:48.146 [2024-12-06 01:08:20.656353] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:30:48.146 [2024-12-06 01:08:20.656413] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:30:48.146 [2024-12-06 01:08:20.663843] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:48.146 [2024-12-06 01:08:20.663870] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:48.146 [2024-12-06 01:08:20.663884] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:48.146 [2024-12-06 01:08:20.663897] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:30:48.146 [2024-12-06 01:08:20.663945] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:30:48.146 [2024-12-06 01:08:20.663981] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs (no timeout) 00:30:48.146 [2024-12-06 01:08:20.664000] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs wait for vs (no timeout) 00:30:48.146 [2024-12-06 01:08:20.664031] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:48.146 [2024-12-06 01:08:20.664048] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:48.146 [2024-12-06 
01:08:20.664066] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:30:48.146 [2024-12-06 01:08:20.664088] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.146 [2024-12-06 01:08:20.664147] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:30:48.146 [2024-12-06 01:08:20.664291] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:48.146 [2024-12-06 01:08:20.664314] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:48.146 [2024-12-06 01:08:20.664327] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:48.146 [2024-12-06 01:08:20.664339] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:30:48.146 [2024-12-06 01:08:20.664360] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap (no timeout) 00:30:48.146 [2024-12-06 01:08:20.664385] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap wait for cap (no timeout) 00:30:48.146 [2024-12-06 01:08:20.664411] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:48.146 [2024-12-06 01:08:20.664427] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:48.146 [2024-12-06 01:08:20.664439] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:30:48.146 [2024-12-06 01:08:20.664465] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.146 [2024-12-06 01:08:20.664505] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:30:48.146 [2024-12-06 01:08:20.664627] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:48.146 [2024-12-06 01:08:20.664651] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:48.146 [2024-12-06 01:08:20.664664] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:48.146 [2024-12-06 01:08:20.664676] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:30:48.146 [2024-12-06 01:08:20.664691] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en (no timeout) 00:30:48.146 [2024-12-06 01:08:20.664717] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en wait for cc (timeout 15000 ms) 00:30:48.146 [2024-12-06 01:08:20.664744] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:48.146 [2024-12-06 01:08:20.664764] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:48.146 [2024-12-06 01:08:20.664778] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:30:48.146 [2024-12-06 01:08:20.664804] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.146 [2024-12-06 01:08:20.664853] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:30:48.146 [2024-12-06 01:08:20.664954] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:48.146 [2024-12-06 01:08:20.664983] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:48.146 [2024-12-06 01:08:20.664996] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:48.146 [2024-12-06 01:08:20.665007] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:30:48.146 [2024-12-06 01:08:20.665022] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:30:48.146 [2024-12-06 01:08:20.665052] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:48.146 [2024-12-06 01:08:20.665069] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:48.146 [2024-12-06 01:08:20.665081] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:30:48.146 [2024-12-06 01:08:20.665112] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.146 [2024-12-06 01:08:20.665145] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:30:48.146 [2024-12-06 01:08:20.665241] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:48.146 [2024-12-06 01:08:20.665262] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:48.146 [2024-12-06 01:08:20.665274] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:48.146 [2024-12-06 01:08:20.665285] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:30:48.146 [2024-12-06 01:08:20.665300] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 0 && CSTS.RDY = 0 00:30:48.146 [2024-12-06 01:08:20.665314] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to controller is disabled (timeout 15000 ms) 00:30:48.146 [2024-12-06 01:08:20.665336] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:30:48.147 [2024-12-06 01:08:20.665460] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Setting CC.EN = 1 00:30:48.147 [2024-12-06 01:08:20.665475] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:30:48.147 [2024-12-06 01:08:20.665496] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:48.147 [2024-12-06 01:08:20.665515] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:48.147 [2024-12-06 01:08:20.665543] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:30:48.147 [2024-12-06 01:08:20.665563] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.147 [2024-12-06 01:08:20.665596] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:30:48.147 [2024-12-06 01:08:20.665735] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:48.147 [2024-12-06 01:08:20.665762] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:48.147 [2024-12-06 01:08:20.665776] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:48.147 
[2024-12-06 01:08:20.665787] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:30:48.147 [2024-12-06 01:08:20.665820] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:30:48.147 [2024-12-06 01:08:20.665853] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:48.147 [2024-12-06 01:08:20.665869] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:48.147 [2024-12-06 01:08:20.665888] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:30:48.147 [2024-12-06 01:08:20.665908] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.147 [2024-12-06 01:08:20.665941] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:30:48.147 [2024-12-06 01:08:20.666057] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:48.147 [2024-12-06 01:08:20.666078] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:48.147 [2024-12-06 01:08:20.666090] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:48.147 [2024-12-06 01:08:20.666101] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:30:48.147 [2024-12-06 01:08:20.666115] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:30:48.147 [2024-12-06 01:08:20.666130] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to reset admin queue (timeout 30000 ms) 00:30:48.147 [2024-12-06 01:08:20.666152] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller (no timeout) 00:30:48.147 [2024-12-06 01:08:20.666178] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify controller (timeout 30000 ms) 00:30:48.147 [2024-12-06 01:08:20.666211] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:48.147 [2024-12-06 01:08:20.666227] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:30:48.147 [2024-12-06 01:08:20.666247] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.147 [2024-12-06 01:08:20.666280] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:30:48.147 [2024-12-06 01:08:20.666454] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:48.147 [2024-12-06 01:08:20.666475] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:48.147 [2024-12-06 01:08:20.666488] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:48.147 [2024-12-06 01:08:20.666506] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000015700): datao=0, datal=4096, cccid=0 00:30:48.147 [2024-12-06 01:08:20.666520] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b100) on tqpair(0x615000015700): expected_datao=0, payload_size=4096 00:30:48.147 [2024-12-06 01:08:20.666533] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: 
enter 00:30:48.147 [2024-12-06 01:08:20.666571] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:48.147 [2024-12-06 01:08:20.666589] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:48.147 [2024-12-06 01:08:20.666614] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:48.147 [2024-12-06 01:08:20.666633] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:48.147 [2024-12-06 01:08:20.666645] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:48.147 [2024-12-06 01:08:20.666656] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:30:48.147 [2024-12-06 01:08:20.666680] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_xfer_size 4294967295 00:30:48.147 [2024-12-06 01:08:20.666697] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] MDTS max_xfer_size 131072 00:30:48.147 [2024-12-06 01:08:20.666710] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CNTLID 0x0001 00:30:48.147 [2024-12-06 01:08:20.666727] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_sges 16 00:30:48.147 [2024-12-06 01:08:20.666741] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] fuses compare and write: 1 00:30:48.147 [2024-12-06 01:08:20.666763] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to configure AER (timeout 30000 ms) 00:30:48.147 [2024-12-06 01:08:20.666806] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for configure aer (timeout 30000 ms) 00:30:48.147 [2024-12-06 01:08:20.666855] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:48.147 [2024-12-06 01:08:20.666893] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:48.147 [2024-12-06 01:08:20.666907] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:30:48.147 [2024-12-06 01:08:20.666931] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:30:48.147 [2024-12-06 01:08:20.666967] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:30:48.147 [2024-12-06 01:08:20.667091] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:48.147 [2024-12-06 01:08:20.667113] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:48.147 [2024-12-06 01:08:20.667125] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:48.147 [2024-12-06 01:08:20.667136] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:30:48.147 [2024-12-06 01:08:20.667156] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:48.147 [2024-12-06 01:08:20.667170] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:48.147 [2024-12-06 01:08:20.667182] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:30:48.147 [2024-12-06 01:08:20.667207] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:48.147 [2024-12-06 01:08:20.667231] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:48.147 [2024-12-06 01:08:20.667244] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:48.147 [2024-12-06 01:08:20.667255] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x615000015700) 00:30:48.147 [2024-12-06 01:08:20.667271] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:48.147 [2024-12-06 01:08:20.667291] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:48.147 [2024-12-06 01:08:20.667305] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:48.147 [2024-12-06 01:08:20.667316] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x615000015700) 00:30:48.147 [2024-12-06 01:08:20.667332] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:48.147 [2024-12-06 01:08:20.667362] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:48.147 [2024-12-06 01:08:20.667375] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:48.147 [2024-12-06 01:08:20.667385] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:30:48.147 [2024-12-06 01:08:20.667401] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:48.147 [2024-12-06 01:08:20.667430] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:30:48.147 [2024-12-06 01:08:20.667460] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:30:48.147 [2024-12-06 01:08:20.667495] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:48.147 [2024-12-06 01:08:20.667509] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x615000015700) 00:30:48.147 [2024-12-06 01:08:20.667536] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.147 [2024-12-06 01:08:20.667572] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:30:48.147 [2024-12-06 01:08:20.667606] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b280, cid 1, qid 0 00:30:48.147 [2024-12-06 01:08:20.667619] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b400, cid 2, qid 0 00:30:48.147 [2024-12-06 01:08:20.667631] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:30:48.147 [2024-12-06 01:08:20.667644] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:30:48.147 [2024-12-06 01:08:20.667793] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:48.147 [2024-12-06 01:08:20.671840] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:48.147 [2024-12-06 01:08:20.671861] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:48.147 [2024-12-06 01:08:20.671873] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x615000015700 00:30:48.147 [2024-12-06 01:08:20.671888] 
nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Sending keep alive every 5000000 us 00:30:48.147 [2024-12-06 01:08:20.671903] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller iocs specific (timeout 30000 ms) 00:30:48.147 [2024-12-06 01:08:20.671941] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set number of queues (timeout 30000 ms) 00:30:48.147 [2024-12-06 01:08:20.671961] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set number of queues (timeout 30000 ms) 00:30:48.147 [2024-12-06 01:08:20.671986] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:48.147 [2024-12-06 01:08:20.672005] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:48.147 [2024-12-06 01:08:20.672018] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x615000015700) 00:30:48.147 [2024-12-06 01:08:20.672038] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:30:48.147 [2024-12-06 01:08:20.672073] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:30:48.147 [2024-12-06 01:08:20.672190] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:48.148 [2024-12-06 01:08:20.672219] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:48.148 [2024-12-06 01:08:20.672233] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:48.148 [2024-12-06 01:08:20.672244] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x615000015700 00:30:48.148 [2024-12-06 01:08:20.672347] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify active ns (timeout 30000 ms) 00:30:48.148 [2024-12-06 01:08:20.672388] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify active ns (timeout 30000 ms) 00:30:48.148 [2024-12-06 01:08:20.672417] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:48.148 [2024-12-06 01:08:20.672432] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x615000015700) 00:30:48.148 [2024-12-06 01:08:20.672453] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.148 [2024-12-06 01:08:20.672507] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:30:48.148 [2024-12-06 01:08:20.672678] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:48.148 [2024-12-06 01:08:20.672700] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:48.148 [2024-12-06 01:08:20.672712] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:48.148 [2024-12-06 01:08:20.672727] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000015700): datao=0, datal=4096, cccid=4 00:30:48.148 [2024-12-06 01:08:20.672740] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x615000015700): expected_datao=0, payload_size=4096 00:30:48.148 [2024-12-06 01:08:20.672756] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:48.148 [2024-12-06 01:08:20.672790] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:48.148 [2024-12-06 01:08:20.672812] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:48.148 [2024-12-06 01:08:20.712929] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:48.148 [2024-12-06 01:08:20.712963] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:48.148 [2024-12-06 01:08:20.712977] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:48.148 [2024-12-06 01:08:20.712989] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x615000015700 00:30:48.148 [2024-12-06 01:08:20.713036] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Namespace 1 was added 00:30:48.148 [2024-12-06 01:08:20.713068] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns (timeout 30000 ms) 00:30:48.148 [2024-12-06 01:08:20.713122] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify ns (timeout 30000 ms) 00:30:48.148 [2024-12-06 01:08:20.713151] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:48.148 [2024-12-06 01:08:20.713172] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x615000015700) 00:30:48.148 [2024-12-06 01:08:20.713208] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.148 [2024-12-06 01:08:20.713249] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:30:48.148 [2024-12-06 01:08:20.713430] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:48.148 [2024-12-06 01:08:20.713452] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:48.148 [2024-12-06 01:08:20.713464] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:48.148 [2024-12-06 01:08:20.713475] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000015700): datao=0, datal=4096, cccid=4 00:30:48.148 [2024-12-06 01:08:20.713488] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x615000015700): expected_datao=0, payload_size=4096 00:30:48.148 [2024-12-06 01:08:20.713500] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:48.148 [2024-12-06 01:08:20.713533] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:48.148 [2024-12-06 01:08:20.713549] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:48.148 [2024-12-06 01:08:20.753917] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:48.148 [2024-12-06 01:08:20.753945] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:48.148 [2024-12-06 01:08:20.753958] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:48.148 [2024-12-06 01:08:20.753970] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x615000015700 00:30:48.148 [2024-12-06 01:08:20.754012] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:30:48.148 [2024-12-06 01:08:20.754044] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: 
*DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:30:48.148 [2024-12-06 01:08:20.754073] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:48.148 [2024-12-06 01:08:20.754094] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x615000015700) 00:30:48.148 [2024-12-06 01:08:20.754123] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.148 [2024-12-06 01:08:20.754164] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:30:48.148 [2024-12-06 01:08:20.754297] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:48.148 [2024-12-06 01:08:20.754320] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:48.148 [2024-12-06 01:08:20.754345] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:48.148 [2024-12-06 01:08:20.754356] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000015700): datao=0, datal=4096, cccid=4 00:30:48.148 [2024-12-06 01:08:20.754369] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x615000015700): expected_datao=0, payload_size=4096 00:30:48.148 [2024-12-06 01:08:20.754380] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:48.148 [2024-12-06 01:08:20.754410] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:48.148 [2024-12-06 01:08:20.754426] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:48.148 [2024-12-06 01:08:20.798839] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:48.148 [2024-12-06 01:08:20.798883] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:48.148 [2024-12-06 01:08:20.798897] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:48.148 [2024-12-06 01:08:20.798909] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x615000015700 00:30:48.148 [2024-12-06 01:08:20.798943] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns iocs specific (timeout 30000 ms) 00:30:48.148 [2024-12-06 01:08:20.798971] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported log pages (timeout 30000 ms) 00:30:48.148 [2024-12-06 01:08:20.798998] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported features (timeout 30000 ms) 00:30:48.148 [2024-12-06 01:08:20.799016] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host behavior support feature (timeout 30000 ms) 00:30:48.148 [2024-12-06 01:08:20.799031] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set doorbell buffer config (timeout 30000 ms) 00:30:48.148 [2024-12-06 01:08:20.799050] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host ID (timeout 30000 ms) 00:30:48.148 [2024-12-06 01:08:20.799076] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] NVMe-oF transport - not sending Set Features - Host ID 00:30:48.148 [2024-12-06 01:08:20.799090] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2016-06.io.spdk:cnode1, 1] setting state to transport ready (timeout 30000 ms) 00:30:48.148 [2024-12-06 01:08:20.799104] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to ready (no timeout) 00:30:48.148 [2024-12-06 01:08:20.799177] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:48.148 [2024-12-06 01:08:20.799207] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x615000015700) 00:30:48.148 [2024-12-06 01:08:20.799233] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.148 [2024-12-06 01:08:20.799273] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:48.148 [2024-12-06 01:08:20.799288] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:48.148 [2024-12-06 01:08:20.799299] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x615000015700) 00:30:48.148 [2024-12-06 01:08:20.799317] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:30:48.148 [2024-12-06 01:08:20.799354] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:30:48.148 [2024-12-06 01:08:20.799389] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b880, cid 5, qid 0 00:30:48.148 [2024-12-06 01:08:20.799524] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:48.148 [2024-12-06 01:08:20.799546] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:48.148 [2024-12-06 01:08:20.799560] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:48.148 [2024-12-06 01:08:20.799572] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x615000015700 00:30:48.148 [2024-12-06 01:08:20.799598] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:48.148 [2024-12-06 01:08:20.799615] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:48.148 [2024-12-06 01:08:20.799626] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:48.148 [2024-12-06 01:08:20.799637] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b880) on tqpair=0x615000015700 00:30:48.148 [2024-12-06 01:08:20.799662] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:48.148 [2024-12-06 01:08:20.799678] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x615000015700) 00:30:48.148 [2024-12-06 01:08:20.799696] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.148 [2024-12-06 01:08:20.799729] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b880, cid 5, qid 0 00:30:48.148 [2024-12-06 01:08:20.799846] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:48.148 [2024-12-06 01:08:20.799869] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:48.148 [2024-12-06 01:08:20.799882] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:48.148 [2024-12-06 01:08:20.799892] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b880) on tqpair=0x615000015700 00:30:48.148 [2024-12-06 01:08:20.799918] nvme_tcp.c: 
909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:48.148 [2024-12-06 01:08:20.799934] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x615000015700) 00:30:48.148 [2024-12-06 01:08:20.799952] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.148 [2024-12-06 01:08:20.799984] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b880, cid 5, qid 0 00:30:48.148 [2024-12-06 01:08:20.800083] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:48.149 [2024-12-06 01:08:20.800104] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:48.149 [2024-12-06 01:08:20.800116] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:48.149 [2024-12-06 01:08:20.800126] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b880) on tqpair=0x615000015700 00:30:48.149 [2024-12-06 01:08:20.800151] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:48.149 [2024-12-06 01:08:20.800166] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x615000015700) 00:30:48.149 [2024-12-06 01:08:20.800185] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.149 [2024-12-06 01:08:20.800216] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b880, cid 5, qid 0 00:30:48.149 [2024-12-06 01:08:20.800318] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:48.149 [2024-12-06 01:08:20.800338] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:48.149 [2024-12-06 01:08:20.800350] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:48.149 [2024-12-06 01:08:20.800361] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b880) on tqpair=0x615000015700 00:30:48.149 [2024-12-06 01:08:20.800403] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:48.149 [2024-12-06 01:08:20.800422] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x615000015700) 00:30:48.149 [2024-12-06 01:08:20.800442] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.149 [2024-12-06 01:08:20.800472] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:48.149 [2024-12-06 01:08:20.800489] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x615000015700) 00:30:48.149 [2024-12-06 01:08:20.800507] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.149 [2024-12-06 01:08:20.800530] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:48.149 [2024-12-06 01:08:20.800545] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x615000015700) 00:30:48.149 [2024-12-06 01:08:20.800563] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.149 [2024-12-06 01:08:20.800608] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 
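The GET FEATURES and GET LOG PAGE (02) admin commands traced here (nsid:ffffffff, log identifiers carried in cdw10: error information, SMART / health, firmware slot) are issued once the controller reaches the ready state. Below is a minimal sketch of how a host issues one such admin GET LOG PAGE with SPDK and polls the admin queue for its completion; it is illustrative only, not taken from the test, assumes a ctrlr obtained with spdk_nvme_connect() as in the earlier sketch, and the helper names and fixed four-entry buffer are invented for the example.

/* Minimal sketch (not from the captured run): issue one admin GET LOG PAGE the
 * way the "GET LOG PAGE (02) ... nsid:ffffffff" records above are generated,
 * then poll the admin queue until the completion (the C2H response PDU in the
 * trace) arrives.
 */
#include <stdbool.h>
#include <stdio.h>
#include "spdk/nvme.h"

static bool g_log_page_done;

static void
log_page_done(void *cb_arg, const struct spdk_nvme_cpl *cpl)
{
        if (spdk_nvme_cpl_is_error(cpl)) {
                fprintf(stderr, "GET LOG PAGE failed\n");
        }
        g_log_page_done = true;
}

static int
read_error_log(struct spdk_nvme_ctrlr *ctrlr)
{
        static struct spdk_nvme_error_information_entry entries[4];
        int rc;

        /* Error Information log (LID 01h), global namespace tag 0xffffffff. */
        rc = spdk_nvme_ctrlr_cmd_get_log_page(ctrlr, SPDK_NVME_LOG_ERROR,
                                              SPDK_NVME_GLOBAL_NS_TAG,
                                              entries, sizeof(entries), 0,
                                              log_page_done, NULL);
        if (rc != 0) {
                return rc;
        }

        /* The capsule goes out on the admin qpair (the capsule_cmd records
         * above); completions are reaped by polling the admin queue. */
        while (!g_log_page_done) {
                spdk_nvme_ctrlr_process_admin_completions(ctrlr);
        }
        return 0;
}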
00:30:48.149 [2024-12-06 01:08:20.800624] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x615000015700) 00:30:48.149 [2024-12-06 01:08:20.800667] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.149 [2024-12-06 01:08:20.800701] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b880, cid 5, qid 0 00:30:48.149 [2024-12-06 01:08:20.800734] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:30:48.149 [2024-12-06 01:08:20.800747] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001ba00, cid 6, qid 0 00:30:48.149 [2024-12-06 01:08:20.800759] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001bb80, cid 7, qid 0 00:30:48.149 [2024-12-06 01:08:20.801035] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:48.149 [2024-12-06 01:08:20.801060] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:48.149 [2024-12-06 01:08:20.801073] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:48.149 [2024-12-06 01:08:20.801085] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000015700): datao=0, datal=8192, cccid=5 00:30:48.149 [2024-12-06 01:08:20.801099] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b880) on tqpair(0x615000015700): expected_datao=0, payload_size=8192 00:30:48.149 [2024-12-06 01:08:20.801121] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:48.149 [2024-12-06 01:08:20.801157] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:48.149 [2024-12-06 01:08:20.801174] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:48.149 [2024-12-06 01:08:20.801207] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:48.149 [2024-12-06 01:08:20.801225] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:48.149 [2024-12-06 01:08:20.801237] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:48.149 [2024-12-06 01:08:20.801248] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000015700): datao=0, datal=512, cccid=4 00:30:48.149 [2024-12-06 01:08:20.801260] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x615000015700): expected_datao=0, payload_size=512 00:30:48.149 [2024-12-06 01:08:20.801271] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:48.149 [2024-12-06 01:08:20.801287] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:48.149 [2024-12-06 01:08:20.801300] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:48.149 [2024-12-06 01:08:20.801314] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:48.149 [2024-12-06 01:08:20.801344] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:48.149 [2024-12-06 01:08:20.801356] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:48.149 [2024-12-06 01:08:20.801365] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000015700): datao=0, datal=512, cccid=6 00:30:48.149 [2024-12-06 01:08:20.801377] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001ba00) on tqpair(0x615000015700): expected_datao=0, payload_size=512 00:30:48.149 
[2024-12-06 01:08:20.801392] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:48.149 [2024-12-06 01:08:20.801408] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:48.149 [2024-12-06 01:08:20.801420] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:48.149 [2024-12-06 01:08:20.801434] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:48.149 [2024-12-06 01:08:20.801448] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:48.149 [2024-12-06 01:08:20.801459] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:48.149 [2024-12-06 01:08:20.801469] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000015700): datao=0, datal=4096, cccid=7 00:30:48.149 [2024-12-06 01:08:20.801480] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001bb80) on tqpair(0x615000015700): expected_datao=0, payload_size=4096 00:30:48.149 [2024-12-06 01:08:20.801490] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:48.149 [2024-12-06 01:08:20.801506] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:48.149 [2024-12-06 01:08:20.801518] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:48.149 [2024-12-06 01:08:20.841917] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:48.149 [2024-12-06 01:08:20.841945] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:48.149 [2024-12-06 01:08:20.841958] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:48.149 [2024-12-06 01:08:20.841978] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b880) on tqpair=0x615000015700 00:30:48.149 [2024-12-06 01:08:20.842016] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:48.149 [2024-12-06 01:08:20.842039] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:48.149 [2024-12-06 01:08:20.842052] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:48.149 [2024-12-06 01:08:20.842063] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x615000015700 00:30:48.149 [2024-12-06 01:08:20.842087] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:48.149 [2024-12-06 01:08:20.842105] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:48.149 [2024-12-06 01:08:20.842116] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:48.149 [2024-12-06 01:08:20.842127] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001ba00) on tqpair=0x615000015700 00:30:48.149 [2024-12-06 01:08:20.842146] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:48.149 [2024-12-06 01:08:20.842163] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:48.149 [2024-12-06 01:08:20.842174] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:48.149 [2024-12-06 01:08:20.842199] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001bb80) on tqpair=0x615000015700 00:30:48.149 ===================================================== 00:30:48.149 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:48.149 ===================================================== 00:30:48.149 Controller Capabilities/Features 00:30:48.149 ================================ 00:30:48.149 Vendor ID: 8086 00:30:48.149 Subsystem Vendor ID: 8086 
00:30:48.149 Serial Number: SPDK00000000000001 00:30:48.149 Model Number: SPDK bdev Controller 00:30:48.149 Firmware Version: 25.01 00:30:48.149 Recommended Arb Burst: 6 00:30:48.149 IEEE OUI Identifier: e4 d2 5c 00:30:48.149 Multi-path I/O 00:30:48.149 May have multiple subsystem ports: Yes 00:30:48.149 May have multiple controllers: Yes 00:30:48.149 Associated with SR-IOV VF: No 00:30:48.149 Max Data Transfer Size: 131072 00:30:48.149 Max Number of Namespaces: 32 00:30:48.149 Max Number of I/O Queues: 127 00:30:48.149 NVMe Specification Version (VS): 1.3 00:30:48.149 NVMe Specification Version (Identify): 1.3 00:30:48.149 Maximum Queue Entries: 128 00:30:48.149 Contiguous Queues Required: Yes 00:30:48.149 Arbitration Mechanisms Supported 00:30:48.149 Weighted Round Robin: Not Supported 00:30:48.149 Vendor Specific: Not Supported 00:30:48.149 Reset Timeout: 15000 ms 00:30:48.149 Doorbell Stride: 4 bytes 00:30:48.149 NVM Subsystem Reset: Not Supported 00:30:48.149 Command Sets Supported 00:30:48.149 NVM Command Set: Supported 00:30:48.149 Boot Partition: Not Supported 00:30:48.149 Memory Page Size Minimum: 4096 bytes 00:30:48.149 Memory Page Size Maximum: 4096 bytes 00:30:48.149 Persistent Memory Region: Not Supported 00:30:48.149 Optional Asynchronous Events Supported 00:30:48.149 Namespace Attribute Notices: Supported 00:30:48.149 Firmware Activation Notices: Not Supported 00:30:48.149 ANA Change Notices: Not Supported 00:30:48.149 PLE Aggregate Log Change Notices: Not Supported 00:30:48.149 LBA Status Info Alert Notices: Not Supported 00:30:48.149 EGE Aggregate Log Change Notices: Not Supported 00:30:48.149 Normal NVM Subsystem Shutdown event: Not Supported 00:30:48.149 Zone Descriptor Change Notices: Not Supported 00:30:48.149 Discovery Log Change Notices: Not Supported 00:30:48.149 Controller Attributes 00:30:48.149 128-bit Host Identifier: Supported 00:30:48.149 Non-Operational Permissive Mode: Not Supported 00:30:48.149 NVM Sets: Not Supported 00:30:48.149 Read Recovery Levels: Not Supported 00:30:48.149 Endurance Groups: Not Supported 00:30:48.150 Predictable Latency Mode: Not Supported 00:30:48.150 Traffic Based Keep ALive: Not Supported 00:30:48.150 Namespace Granularity: Not Supported 00:30:48.150 SQ Associations: Not Supported 00:30:48.150 UUID List: Not Supported 00:30:48.150 Multi-Domain Subsystem: Not Supported 00:30:48.150 Fixed Capacity Management: Not Supported 00:30:48.150 Variable Capacity Management: Not Supported 00:30:48.150 Delete Endurance Group: Not Supported 00:30:48.150 Delete NVM Set: Not Supported 00:30:48.150 Extended LBA Formats Supported: Not Supported 00:30:48.150 Flexible Data Placement Supported: Not Supported 00:30:48.150 00:30:48.150 Controller Memory Buffer Support 00:30:48.150 ================================ 00:30:48.150 Supported: No 00:30:48.150 00:30:48.150 Persistent Memory Region Support 00:30:48.150 ================================ 00:30:48.150 Supported: No 00:30:48.150 00:30:48.150 Admin Command Set Attributes 00:30:48.150 ============================ 00:30:48.150 Security Send/Receive: Not Supported 00:30:48.150 Format NVM: Not Supported 00:30:48.150 Firmware Activate/Download: Not Supported 00:30:48.150 Namespace Management: Not Supported 00:30:48.150 Device Self-Test: Not Supported 00:30:48.150 Directives: Not Supported 00:30:48.150 NVMe-MI: Not Supported 00:30:48.150 Virtualization Management: Not Supported 00:30:48.150 Doorbell Buffer Config: Not Supported 00:30:48.150 Get LBA Status Capability: Not Supported 00:30:48.150 Command & 
Feature Lockdown Capability: Not Supported 00:30:48.150 Abort Command Limit: 4 00:30:48.150 Async Event Request Limit: 4 00:30:48.150 Number of Firmware Slots: N/A 00:30:48.150 Firmware Slot 1 Read-Only: N/A 00:30:48.150 Firmware Activation Without Reset: N/A 00:30:48.150 Multiple Update Detection Support: N/A 00:30:48.150 Firmware Update Granularity: No Information Provided 00:30:48.150 Per-Namespace SMART Log: No 00:30:48.150 Asymmetric Namespace Access Log Page: Not Supported 00:30:48.150 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:30:48.150 Command Effects Log Page: Supported 00:30:48.150 Get Log Page Extended Data: Supported 00:30:48.150 Telemetry Log Pages: Not Supported 00:30:48.150 Persistent Event Log Pages: Not Supported 00:30:48.150 Supported Log Pages Log Page: May Support 00:30:48.150 Commands Supported & Effects Log Page: Not Supported 00:30:48.150 Feature Identifiers & Effects Log Page:May Support 00:30:48.150 NVMe-MI Commands & Effects Log Page: May Support 00:30:48.150 Data Area 4 for Telemetry Log: Not Supported 00:30:48.150 Error Log Page Entries Supported: 128 00:30:48.150 Keep Alive: Supported 00:30:48.150 Keep Alive Granularity: 10000 ms 00:30:48.150 00:30:48.150 NVM Command Set Attributes 00:30:48.150 ========================== 00:30:48.150 Submission Queue Entry Size 00:30:48.150 Max: 64 00:30:48.150 Min: 64 00:30:48.150 Completion Queue Entry Size 00:30:48.150 Max: 16 00:30:48.150 Min: 16 00:30:48.150 Number of Namespaces: 32 00:30:48.150 Compare Command: Supported 00:30:48.150 Write Uncorrectable Command: Not Supported 00:30:48.150 Dataset Management Command: Supported 00:30:48.150 Write Zeroes Command: Supported 00:30:48.150 Set Features Save Field: Not Supported 00:30:48.150 Reservations: Supported 00:30:48.150 Timestamp: Not Supported 00:30:48.150 Copy: Supported 00:30:48.150 Volatile Write Cache: Present 00:30:48.150 Atomic Write Unit (Normal): 1 00:30:48.150 Atomic Write Unit (PFail): 1 00:30:48.150 Atomic Compare & Write Unit: 1 00:30:48.150 Fused Compare & Write: Supported 00:30:48.150 Scatter-Gather List 00:30:48.150 SGL Command Set: Supported 00:30:48.150 SGL Keyed: Supported 00:30:48.150 SGL Bit Bucket Descriptor: Not Supported 00:30:48.150 SGL Metadata Pointer: Not Supported 00:30:48.150 Oversized SGL: Not Supported 00:30:48.150 SGL Metadata Address: Not Supported 00:30:48.150 SGL Offset: Supported 00:30:48.150 Transport SGL Data Block: Not Supported 00:30:48.150 Replay Protected Memory Block: Not Supported 00:30:48.150 00:30:48.150 Firmware Slot Information 00:30:48.150 ========================= 00:30:48.150 Active slot: 1 00:30:48.150 Slot 1 Firmware Revision: 25.01 00:30:48.150 00:30:48.150 00:30:48.150 Commands Supported and Effects 00:30:48.150 ============================== 00:30:48.150 Admin Commands 00:30:48.150 -------------- 00:30:48.150 Get Log Page (02h): Supported 00:30:48.150 Identify (06h): Supported 00:30:48.150 Abort (08h): Supported 00:30:48.150 Set Features (09h): Supported 00:30:48.150 Get Features (0Ah): Supported 00:30:48.150 Asynchronous Event Request (0Ch): Supported 00:30:48.150 Keep Alive (18h): Supported 00:30:48.150 I/O Commands 00:30:48.150 ------------ 00:30:48.150 Flush (00h): Supported LBA-Change 00:30:48.150 Write (01h): Supported LBA-Change 00:30:48.150 Read (02h): Supported 00:30:48.150 Compare (05h): Supported 00:30:48.150 Write Zeroes (08h): Supported LBA-Change 00:30:48.150 Dataset Management (09h): Supported LBA-Change 00:30:48.150 Copy (19h): Supported LBA-Change 00:30:48.150 00:30:48.150 Error Log 00:30:48.150 
========= 00:30:48.150 00:30:48.150 Arbitration 00:30:48.150 =========== 00:30:48.150 Arbitration Burst: 1 00:30:48.150 00:30:48.150 Power Management 00:30:48.150 ================ 00:30:48.150 Number of Power States: 1 00:30:48.150 Current Power State: Power State #0 00:30:48.150 Power State #0: 00:30:48.150 Max Power: 0.00 W 00:30:48.150 Non-Operational State: Operational 00:30:48.150 Entry Latency: Not Reported 00:30:48.150 Exit Latency: Not Reported 00:30:48.150 Relative Read Throughput: 0 00:30:48.150 Relative Read Latency: 0 00:30:48.150 Relative Write Throughput: 0 00:30:48.150 Relative Write Latency: 0 00:30:48.150 Idle Power: Not Reported 00:30:48.150 Active Power: Not Reported 00:30:48.150 Non-Operational Permissive Mode: Not Supported 00:30:48.150 00:30:48.150 Health Information 00:30:48.150 ================== 00:30:48.150 Critical Warnings: 00:30:48.150 Available Spare Space: OK 00:30:48.150 Temperature: OK 00:30:48.150 Device Reliability: OK 00:30:48.150 Read Only: No 00:30:48.150 Volatile Memory Backup: OK 00:30:48.150 Current Temperature: 0 Kelvin (-273 Celsius) 00:30:48.150 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:30:48.150 Available Spare: 0% 00:30:48.150 Available Spare Threshold: 0% 00:30:48.150 Life Percentage Used:[2024-12-06 01:08:20.842413] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:48.150 [2024-12-06 01:08:20.842433] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x615000015700) 00:30:48.150 [2024-12-06 01:08:20.842455] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.150 [2024-12-06 01:08:20.842490] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001bb80, cid 7, qid 0 00:30:48.150 [2024-12-06 01:08:20.842612] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:48.150 [2024-12-06 01:08:20.842634] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:48.150 [2024-12-06 01:08:20.842647] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:48.150 [2024-12-06 01:08:20.842659] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001bb80) on tqpair=0x615000015700 00:30:48.150 [2024-12-06 01:08:20.842734] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Prepare to destruct SSD 00:30:48.150 [2024-12-06 01:08:20.842764] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:30:48.150 [2024-12-06 01:08:20.842795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.150 [2024-12-06 01:08:20.842811] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b280) on tqpair=0x615000015700 00:30:48.150 [2024-12-06 01:08:20.846845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.150 [2024-12-06 01:08:20.846862] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b400) on tqpair=0x615000015700 00:30:48.151 [2024-12-06 01:08:20.846876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.151 [2024-12-06 01:08:20.846889] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 
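(Editor's note, not part of the captured output: the controller attributes printed above come from the identify host test querying the TCP listener at 10.0.0.2:4420 for nqn.2016-06.io.spdk:cnode1. As a hedged, purely illustrative sketch — assuming that target were still listening and nvme-cli were installed on the initiator — the same fields could be inspected by hand roughly like this:

    # Hypothetical manual check; address, port and NQN are taken from the log above.
    nvme discover -t tcp -a 10.0.0.2 -s 4420                      # discovery log should list cnode1
    nvme connect  -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
    nvme id-ctrl  /dev/nvme0                                      # controller data: MDTS, IEEE OUI, version, keep-alive
    nvme id-ns    /dev/nvme0n1                                    # namespace data: NGUID/EUI64, LBA formats
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1

The device node names /dev/nvme0 and /dev/nvme0n1 are placeholders; the actual names depend on whatever other NVMe devices are attached to the initiator at connect time.)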
00:30:48.151 [2024-12-06 01:08:20.846902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.151 [2024-12-06 01:08:20.846923] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:48.151 [2024-12-06 01:08:20.846938] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:48.151 [2024-12-06 01:08:20.846950] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:30:48.151 [2024-12-06 01:08:20.846970] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.151 [2024-12-06 01:08:20.847006] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:30:48.151 [2024-12-06 01:08:20.847139] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:48.151 [2024-12-06 01:08:20.847163] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:48.151 [2024-12-06 01:08:20.847176] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:48.151 [2024-12-06 01:08:20.847188] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:30:48.151 [2024-12-06 01:08:20.847217] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:48.151 [2024-12-06 01:08:20.847232] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:48.151 [2024-12-06 01:08:20.847244] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:30:48.151 [2024-12-06 01:08:20.847271] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.151 [2024-12-06 01:08:20.847314] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:30:48.151 [2024-12-06 01:08:20.847460] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:48.151 [2024-12-06 01:08:20.847481] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:48.151 [2024-12-06 01:08:20.847492] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:48.151 [2024-12-06 01:08:20.847503] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:30:48.151 [2024-12-06 01:08:20.847518] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] RTD3E = 0 us 00:30:48.151 [2024-12-06 01:08:20.847532] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown timeout = 10000 ms 00:30:48.151 [2024-12-06 01:08:20.847558] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:48.151 [2024-12-06 01:08:20.847575] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:48.151 [2024-12-06 01:08:20.847593] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:30:48.151 [2024-12-06 01:08:20.847613] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.151 [2024-12-06 01:08:20.847645] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:30:48.151 [2024-12-06 01:08:20.847754] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:48.151 [2024-12-06 
01:08:20.847775] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:48.151 [2024-12-06 01:08:20.847791] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:48.151 [2024-12-06 01:08:20.847812] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:30:48.151 [2024-12-06 01:08:20.847850] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:48.151 [2024-12-06 01:08:20.847866] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:48.151 [2024-12-06 01:08:20.847877] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:30:48.151 [2024-12-06 01:08:20.847895] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.151 [2024-12-06 01:08:20.847927] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:30:48.151 [2024-12-06 01:08:20.848038] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:48.151 [2024-12-06 01:08:20.848060] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:48.151 [2024-12-06 01:08:20.848072] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:48.151 [2024-12-06 01:08:20.848084] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:30:48.151 [2024-12-06 01:08:20.848119] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:48.151 [2024-12-06 01:08:20.848134] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:48.151 [2024-12-06 01:08:20.848145] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:30:48.151 [2024-12-06 01:08:20.848163] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.151 [2024-12-06 01:08:20.848194] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:30:48.151 [2024-12-06 01:08:20.848297] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:48.151 [2024-12-06 01:08:20.848318] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:48.151 [2024-12-06 01:08:20.848329] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:48.151 [2024-12-06 01:08:20.848341] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:30:48.151 [2024-12-06 01:08:20.848367] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:48.151 [2024-12-06 01:08:20.848382] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:48.151 [2024-12-06 01:08:20.848393] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:30:48.151 [2024-12-06 01:08:20.848411] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.151 [2024-12-06 01:08:20.848442] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:30:48.151 [2024-12-06 01:08:20.848548] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:48.151 [2024-12-06 01:08:20.848569] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:48.151 [2024-12-06 01:08:20.848581] 
nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:48.151 [2024-12-06 01:08:20.848592] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:30:48.151 [2024-12-06 01:08:20.848619] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:48.151 [2024-12-06 01:08:20.848634] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:48.151 [2024-12-06 01:08:20.848644] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:30:48.151 [2024-12-06 01:08:20.848662] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.151 [2024-12-06 01:08:20.848692] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:30:48.151 [2024-12-06 01:08:20.848798] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:48.151 [2024-12-06 01:08:20.848829] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:48.151 [2024-12-06 01:08:20.848847] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:48.151 [2024-12-06 01:08:20.848859] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:30:48.151 [2024-12-06 01:08:20.848886] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:48.151 [2024-12-06 01:08:20.848902] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:48.151 [2024-12-06 01:08:20.848913] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:30:48.151 [2024-12-06 01:08:20.848931] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.151 [2024-12-06 01:08:20.848963] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:30:48.151 [2024-12-06 01:08:20.849066] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:48.151 [2024-12-06 01:08:20.849087] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:48.151 [2024-12-06 01:08:20.849098] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:48.151 [2024-12-06 01:08:20.849109] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:30:48.151 [2024-12-06 01:08:20.849136] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:48.151 [2024-12-06 01:08:20.849151] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:48.151 [2024-12-06 01:08:20.849162] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:30:48.151 [2024-12-06 01:08:20.849180] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.151 [2024-12-06 01:08:20.849211] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:30:48.151 [2024-12-06 01:08:20.849313] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:48.151 [2024-12-06 01:08:20.849334] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:48.151 [2024-12-06 01:08:20.849345] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:48.151 [2024-12-06 01:08:20.849357] nvme_tcp.c:1011:nvme_tcp_req_complete: 
*DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:30:48.151 [2024-12-06 01:08:20.849383] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:48.151 [2024-12-06 01:08:20.849398] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:48.151 [2024-12-06 01:08:20.849409] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:30:48.151 [2024-12-06 01:08:20.849435] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.151 [2024-12-06 01:08:20.849467] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:30:48.151 [2024-12-06 01:08:20.849569] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:48.151 [2024-12-06 01:08:20.849595] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:48.151 [2024-12-06 01:08:20.849607] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:48.151 [2024-12-06 01:08:20.849618] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:30:48.151 [2024-12-06 01:08:20.849645] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:48.151 [2024-12-06 01:08:20.849660] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:48.151 [2024-12-06 01:08:20.849671] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:30:48.151 [2024-12-06 01:08:20.849689] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.151 [2024-12-06 01:08:20.849719] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:30:48.151 [2024-12-06 01:08:20.849829] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:48.151 [2024-12-06 01:08:20.849850] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:48.151 [2024-12-06 01:08:20.849866] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:48.151 [2024-12-06 01:08:20.849879] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:30:48.152 [2024-12-06 01:08:20.849906] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:48.152 [2024-12-06 01:08:20.849921] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:48.152 [2024-12-06 01:08:20.849932] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:30:48.152 [2024-12-06 01:08:20.849950] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.152 [2024-12-06 01:08:20.849981] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:30:48.152 [2024-12-06 01:08:20.850091] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:48.152 [2024-12-06 01:08:20.850112] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:48.152 [2024-12-06 01:08:20.850124] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:48.152 [2024-12-06 01:08:20.850135] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:30:48.152 [2024-12-06 01:08:20.850161] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:48.152 [2024-12-06 01:08:20.850177] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:48.152 [2024-12-06 01:08:20.850188] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:30:48.152 [2024-12-06 01:08:20.850206] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.152 [2024-12-06 01:08:20.850249] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:30:48.152 [2024-12-06 01:08:20.850355] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:48.152 [2024-12-06 01:08:20.850377] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:48.152 [2024-12-06 01:08:20.850389] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:48.152 [2024-12-06 01:08:20.850400] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:30:48.152 [2024-12-06 01:08:20.850427] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:48.152 [2024-12-06 01:08:20.850442] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:48.152 [2024-12-06 01:08:20.850453] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:30:48.152 [2024-12-06 01:08:20.850471] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.152 [2024-12-06 01:08:20.850502] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:30:48.152 [2024-12-06 01:08:20.850609] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:48.152 [2024-12-06 01:08:20.850630] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:48.152 [2024-12-06 01:08:20.850642] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:48.152 [2024-12-06 01:08:20.850653] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:30:48.152 [2024-12-06 01:08:20.850680] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:48.152 [2024-12-06 01:08:20.850695] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:48.152 [2024-12-06 01:08:20.850706] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:30:48.152 [2024-12-06 01:08:20.850724] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.152 [2024-12-06 01:08:20.850754] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:30:48.409 [2024-12-06 01:08:20.854854] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:48.409 [2024-12-06 01:08:20.854878] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:48.409 [2024-12-06 01:08:20.854895] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:48.409 [2024-12-06 01:08:20.854907] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:30:48.409 [2024-12-06 01:08:20.854949] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:48.409 [2024-12-06 01:08:20.854966] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: 
enter 00:30:48.409 [2024-12-06 01:08:20.854978] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:30:48.409 [2024-12-06 01:08:20.854997] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.409 [2024-12-06 01:08:20.855029] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:30:48.409 [2024-12-06 01:08:20.855144] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:48.409 [2024-12-06 01:08:20.855165] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:48.409 [2024-12-06 01:08:20.855176] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:48.409 [2024-12-06 01:08:20.855187] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:30:48.409 [2024-12-06 01:08:20.855209] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown complete in 7 milliseconds 00:30:48.409 0% 00:30:48.409 Data Units Read: 0 00:30:48.409 Data Units Written: 0 00:30:48.409 Host Read Commands: 0 00:30:48.409 Host Write Commands: 0 00:30:48.409 Controller Busy Time: 0 minutes 00:30:48.409 Power Cycles: 0 00:30:48.409 Power On Hours: 0 hours 00:30:48.409 Unsafe Shutdowns: 0 00:30:48.409 Unrecoverable Media Errors: 0 00:30:48.409 Lifetime Error Log Entries: 0 00:30:48.409 Warning Temperature Time: 0 minutes 00:30:48.409 Critical Temperature Time: 0 minutes 00:30:48.409 00:30:48.409 Number of Queues 00:30:48.409 ================ 00:30:48.409 Number of I/O Submission Queues: 127 00:30:48.409 Number of I/O Completion Queues: 127 00:30:48.409 00:30:48.409 Active Namespaces 00:30:48.409 ================= 00:30:48.409 Namespace ID:1 00:30:48.409 Error Recovery Timeout: Unlimited 00:30:48.409 Command Set Identifier: NVM (00h) 00:30:48.409 Deallocate: Supported 00:30:48.409 Deallocated/Unwritten Error: Not Supported 00:30:48.409 Deallocated Read Value: Unknown 00:30:48.409 Deallocate in Write Zeroes: Not Supported 00:30:48.409 Deallocated Guard Field: 0xFFFF 00:30:48.409 Flush: Supported 00:30:48.409 Reservation: Supported 00:30:48.409 Namespace Sharing Capabilities: Multiple Controllers 00:30:48.409 Size (in LBAs): 131072 (0GiB) 00:30:48.409 Capacity (in LBAs): 131072 (0GiB) 00:30:48.409 Utilization (in LBAs): 131072 (0GiB) 00:30:48.409 NGUID: ABCDEF0123456789ABCDEF0123456789 00:30:48.409 EUI64: ABCDEF0123456789 00:30:48.409 UUID: 4a5d2d16-d571-4b0b-93f0-aa27c2445432 00:30:48.409 Thin Provisioning: Not Supported 00:30:48.409 Per-NS Atomic Units: Yes 00:30:48.409 Atomic Boundary Size (Normal): 0 00:30:48.409 Atomic Boundary Size (PFail): 0 00:30:48.409 Atomic Boundary Offset: 0 00:30:48.410 Maximum Single Source Range Length: 65535 00:30:48.410 Maximum Copy Length: 65535 00:30:48.410 Maximum Source Range Count: 1 00:30:48.410 NGUID/EUI64 Never Reused: No 00:30:48.410 Namespace Write Protected: No 00:30:48.410 Number of LBA Formats: 1 00:30:48.410 Current LBA Format: LBA Format #00 00:30:48.410 LBA Format #00: Data Size: 512 Metadata Size: 0 00:30:48.410 00:30:48.410 01:08:20 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:30:48.410 01:08:20 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:48.410 01:08:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:48.410 
01:08:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:48.410 01:08:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:48.410 01:08:20 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:30:48.410 01:08:20 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:30:48.410 01:08:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:48.410 01:08:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync 00:30:48.410 01:08:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:48.410 01:08:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 00:30:48.410 01:08:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:48.410 01:08:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:48.410 rmmod nvme_tcp 00:30:48.410 rmmod nvme_fabrics 00:30:48.410 rmmod nvme_keyring 00:30:48.410 01:08:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:48.410 01:08:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 00:30:48.410 01:08:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:30:48.410 01:08:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@517 -- # '[' -n 3072642 ']' 00:30:48.410 01:08:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@518 -- # killprocess 3072642 00:30:48.410 01:08:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # '[' -z 3072642 ']' 00:30:48.410 01:08:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # kill -0 3072642 00:30:48.410 01:08:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # uname 00:30:48.410 01:08:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:48.410 01:08:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3072642 00:30:48.410 01:08:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:48.410 01:08:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:48.410 01:08:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3072642' 00:30:48.410 killing process with pid 3072642 00:30:48.410 01:08:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@973 -- # kill 3072642 00:30:48.410 01:08:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@978 -- # wait 3072642 00:30:49.802 01:08:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:49.802 01:08:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:49.802 01:08:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:49.802 01:08:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr 00:30:49.802 01:08:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-save 00:30:49.802 01:08:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:49.802 01:08:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-restore 00:30:49.802 01:08:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == 
\n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:49.802 01:08:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:49.802 01:08:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:49.802 01:08:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:49.802 01:08:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:51.749 01:08:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:51.749 00:30:51.749 real 0m7.690s 00:30:51.749 user 0m11.893s 00:30:51.749 sys 0m2.155s 00:30:51.749 01:08:24 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:51.749 01:08:24 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:51.749 ************************************ 00:30:51.749 END TEST nvmf_identify 00:30:51.749 ************************************ 00:30:51.749 01:08:24 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:30:51.749 01:08:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:30:51.749 01:08:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:51.749 01:08:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:30:51.749 ************************************ 00:30:51.749 START TEST nvmf_perf 00:30:51.749 ************************************ 00:30:51.749 01:08:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:30:51.749 * Looking for test storage... 
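(Editor's note, not part of the captured output: at this point the log moves from the identify test to the perf suite, which run_test launches as test/nvmf/host/perf.sh --transport=tcp. A plausible — untested here — way to rerun just this stage outside the CI pipeline, assuming an SPDK checkout that has already been built and NICs cabled for the tcp transport tests, would be:

    # Illustrative only; path and flag mirror the run_test invocation in the log above.
    cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    sudo ./test/nvmf/host/perf.sh --transport=tcp

On a machine other than this CI node the interface names and addresses set up by the nvmf common helpers would differ, so the script's network bring-up may need local adjustment.)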
00:30:51.749 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:51.749 01:08:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:30:51.749 01:08:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # lcov --version 00:30:51.749 01:08:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:30:52.008 01:08:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:30:52.008 01:08:24 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:52.008 01:08:24 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:52.008 01:08:24 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:52.008 01:08:24 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:30:52.008 01:08:24 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:30:52.008 01:08:24 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:30:52.008 01:08:24 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:30:52.008 01:08:24 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:30:52.008 01:08:24 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:30:52.008 01:08:24 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:30:52.008 01:08:24 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:52.008 01:08:24 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:30:52.008 01:08:24 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:30:52.008 01:08:24 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:52.008 01:08:24 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:52.008 01:08:24 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:30:52.008 01:08:24 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:30:52.008 01:08:24 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:52.008 01:08:24 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:30:52.009 01:08:24 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:30:52.009 01:08:24 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:30:52.009 01:08:24 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:30:52.009 01:08:24 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:52.009 01:08:24 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:30:52.009 01:08:24 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:30:52.009 01:08:24 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:52.009 01:08:24 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:52.009 01:08:24 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:30:52.009 01:08:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:52.009 01:08:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:30:52.009 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:52.009 --rc genhtml_branch_coverage=1 00:30:52.009 --rc genhtml_function_coverage=1 00:30:52.009 --rc genhtml_legend=1 00:30:52.009 --rc geninfo_all_blocks=1 00:30:52.009 --rc geninfo_unexecuted_blocks=1 00:30:52.009 00:30:52.009 ' 00:30:52.009 01:08:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:30:52.009 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:52.009 --rc genhtml_branch_coverage=1 00:30:52.009 --rc genhtml_function_coverage=1 00:30:52.009 --rc genhtml_legend=1 00:30:52.009 --rc geninfo_all_blocks=1 00:30:52.009 --rc geninfo_unexecuted_blocks=1 00:30:52.009 00:30:52.009 ' 00:30:52.009 01:08:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:30:52.009 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:52.009 --rc genhtml_branch_coverage=1 00:30:52.009 --rc genhtml_function_coverage=1 00:30:52.009 --rc genhtml_legend=1 00:30:52.009 --rc geninfo_all_blocks=1 00:30:52.009 --rc geninfo_unexecuted_blocks=1 00:30:52.009 00:30:52.009 ' 00:30:52.009 01:08:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:30:52.009 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:52.009 --rc genhtml_branch_coverage=1 00:30:52.009 --rc genhtml_function_coverage=1 00:30:52.009 --rc genhtml_legend=1 00:30:52.009 --rc geninfo_all_blocks=1 00:30:52.009 --rc geninfo_unexecuted_blocks=1 00:30:52.009 00:30:52.009 ' 00:30:52.009 01:08:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:52.009 01:08:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:30:52.009 01:08:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:52.009 01:08:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:52.009 01:08:24 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:52.009 01:08:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:52.009 01:08:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:52.009 01:08:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:52.009 01:08:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:52.009 01:08:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:52.009 01:08:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:52.009 01:08:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:52.009 01:08:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:30:52.009 01:08:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:30:52.009 01:08:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:52.009 01:08:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:52.009 01:08:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:52.009 01:08:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:52.009 01:08:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:52.009 01:08:24 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:30:52.009 01:08:24 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:52.009 01:08:24 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:52.009 01:08:24 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:52.009 01:08:24 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:52.009 01:08:24 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:52.009 01:08:24 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:52.009 01:08:24 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:30:52.009 01:08:24 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:52.009 01:08:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:30:52.009 01:08:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:52.009 01:08:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:52.009 01:08:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:52.009 01:08:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:52.009 01:08:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:52.009 01:08:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:30:52.009 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:30:52.009 01:08:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:52.009 01:08:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:52.009 01:08:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:52.009 01:08:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:30:52.009 01:08:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:30:52.009 01:08:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:52.010 01:08:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:30:52.010 01:08:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:52.010 01:08:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:52.010 01:08:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:52.010 01:08:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:52.010 01:08:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:52.010 01:08:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:52.010 01:08:24 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:52.010 01:08:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:52.010 01:08:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:52.010 01:08:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:52.010 01:08:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@309 -- # xtrace_disable 00:30:52.010 01:08:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:30:53.910 01:08:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:53.910 01:08:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # pci_devs=() 00:30:53.910 01:08:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:53.910 01:08:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:53.910 01:08:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:53.910 01:08:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:53.910 01:08:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:53.910 01:08:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # net_devs=() 00:30:53.910 01:08:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:53.910 01:08:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # e810=() 00:30:53.910 01:08:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # local -ga e810 00:30:53.910 01:08:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # x722=() 00:30:53.910 01:08:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # local -ga x722 00:30:53.910 01:08:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # mlx=() 00:30:53.910 01:08:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # local -ga mlx 00:30:53.910 01:08:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:53.910 01:08:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:53.910 01:08:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:53.910 01:08:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:53.910 01:08:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:53.910 01:08:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:53.910 01:08:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:53.910 01:08:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:53.910 01:08:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:53.910 01:08:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:53.910 01:08:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:53.910 01:08:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:53.910 01:08:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # 
pci_devs+=("${e810[@]}") 00:30:53.910 01:08:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:53.910 01:08:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:53.910 01:08:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:53.910 01:08:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:53.910 01:08:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:53.910 01:08:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:53.910 01:08:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:30:53.910 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:30:53.910 01:08:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:53.910 01:08:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:53.910 01:08:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:53.910 01:08:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:53.910 01:08:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:53.910 01:08:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:53.910 01:08:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:30:53.910 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:30:53.910 01:08:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:53.910 01:08:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:53.910 01:08:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:53.910 01:08:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:53.910 01:08:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:53.910 01:08:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:53.910 01:08:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:53.910 01:08:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:53.910 01:08:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:53.910 01:08:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:53.910 01:08:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:53.910 01:08:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:53.910 01:08:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:53.910 01:08:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:53.910 01:08:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:53.910 01:08:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:30:53.910 Found net devices under 0000:0a:00.0: cvl_0_0 00:30:53.910 01:08:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:53.910 01:08:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:53.910 01:08:26 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:53.910 01:08:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:53.910 01:08:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:53.910 01:08:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:53.910 01:08:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:53.910 01:08:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:53.910 01:08:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:30:53.910 Found net devices under 0000:0a:00.1: cvl_0_1 00:30:53.910 01:08:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:53.910 01:08:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:53.910 01:08:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # is_hw=yes 00:30:53.910 01:08:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:53.910 01:08:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:53.910 01:08:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:53.910 01:08:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:53.910 01:08:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:53.910 01:08:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:53.910 01:08:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:53.910 01:08:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:53.910 01:08:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:53.910 01:08:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:53.910 01:08:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:53.910 01:08:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:53.910 01:08:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:53.910 01:08:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:53.910 01:08:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:53.910 01:08:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:53.910 01:08:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:53.910 01:08:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:53.910 01:08:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:53.911 01:08:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:53.911 01:08:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:53.911 01:08:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:53.911 01:08:26 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:53.911 01:08:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:53.911 01:08:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:53.911 01:08:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:53.911 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:53.911 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.173 ms 00:30:53.911 00:30:53.911 --- 10.0.0.2 ping statistics --- 00:30:53.911 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:53.911 rtt min/avg/max/mdev = 0.173/0.173/0.173/0.000 ms 00:30:53.911 01:08:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:53.911 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:53.911 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.117 ms 00:30:53.911 00:30:53.911 --- 10.0.0.1 ping statistics --- 00:30:53.911 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:53.911 rtt min/avg/max/mdev = 0.117/0.117/0.117/0.000 ms 00:30:53.911 01:08:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:53.911 01:08:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@450 -- # return 0 00:30:53.911 01:08:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:53.911 01:08:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:53.911 01:08:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:53.911 01:08:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:53.911 01:08:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:53.911 01:08:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:53.911 01:08:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:53.911 01:08:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:30:53.911 01:08:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:53.911 01:08:26 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:53.911 01:08:26 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:30:53.911 01:08:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # nvmfpid=3074995 00:30:53.911 01:08:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:30:53.911 01:08:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # waitforlisten 3074995 00:30:53.911 01:08:26 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # '[' -z 3074995 ']' 00:30:53.911 01:08:26 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:53.911 01:08:26 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:53.911 01:08:26 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:30:53.911 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:53.911 01:08:26 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:53.911 01:08:26 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:30:54.179 [2024-12-06 01:08:26.692482] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 24.03.0 initialization... 00:30:54.179 [2024-12-06 01:08:26.692624] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:54.179 [2024-12-06 01:08:26.838584] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:54.437 [2024-12-06 01:08:26.974064] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:54.437 [2024-12-06 01:08:26.974159] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:54.437 [2024-12-06 01:08:26.974185] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:54.437 [2024-12-06 01:08:26.974210] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:54.437 [2024-12-06 01:08:26.974230] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:54.437 [2024-12-06 01:08:26.977004] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:54.437 [2024-12-06 01:08:26.977078] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:54.437 [2024-12-06 01:08:26.977172] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:54.437 [2024-12-06 01:08:26.977178] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:30:55.003 01:08:27 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:55.003 01:08:27 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@868 -- # return 0 00:30:55.003 01:08:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:55.003 01:08:27 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:55.003 01:08:27 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:30:55.003 01:08:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:55.003 01:08:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:30:55.003 01:08:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:30:58.284 01:08:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:30:58.284 01:08:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:30:58.541 01:08:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:88:00.0 00:30:58.541 01:08:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:30:58.799 01:08:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 
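Up to this point the trace amounts to a short recipe for standing up the SPDK NVMe/TCP target under test: the target-side E810 port is moved into its own network namespace, both ends are addressed on 10.0.0.0/24, TCP port 4420 is opened, and nvmf_tgt is started inside the namespace before the local NVMe and a malloc bdev are attached over RPC. Below is a condensed sketch of those steps using the interface names, addresses and RPC calls seen in this run; it is a paraphrase of the traced commands, not the scripts' exact wording.

#!/usr/bin/env bash
# Condensed from the commands traced above; interface names (cvl_0_0 / cvl_0_1),
# addresses, core masks and the workspace path are specific to this run.
set -e
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
NS=cvl_0_0_ns_spdk                                   # namespace that owns the target port

ip -4 addr flush cvl_0_0; ip -4 addr flush cvl_0_1
ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"                      # target-side port into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side stays in the root ns
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up

iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic in
ping -c 1 10.0.0.2                                   # initiator -> target reachability
ip netns exec "$NS" ping -c 1 10.0.0.1               # target -> initiator reachability

# Start the target inside the namespace, then attach the local NVMe and a malloc bdev.
ip netns exec "$NS" "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &
# gen_nvme.sh and load_subsystem_config are both traced at perf.sh line 28,
# so they are presumably piped together as shown here.
"$SPDK/scripts/gen_nvme.sh" | "$SPDK/scripts/rpc.py" load_subsystem_config
"$SPDK/scripts/rpc.py" bdev_malloc_create 64 512     # 64 MiB bdev, 512 B blocks -> Malloc0

The subsystem, its two namespaces and the 10.0.0.2:4420 listener are then built with the nvmf_create_transport, nvmf_create_subsystem, nvmf_subsystem_add_ns and nvmf_subsystem_add_listener calls traced next.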
00:30:58.799 01:08:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:88:00.0 ']' 00:30:58.799 01:08:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:30:58.799 01:08:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:30:58.799 01:08:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:30:59.057 [2024-12-06 01:08:31.727159] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:59.057 01:08:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:59.315 01:08:32 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:30:59.315 01:08:32 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:59.880 01:08:32 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:30:59.880 01:08:32 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:31:00.138 01:08:32 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:00.397 [2024-12-06 01:08:32.861966] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:00.397 01:08:32 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:31:00.655 01:08:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:88:00.0 ']' 00:31:00.655 01:08:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:88:00.0' 00:31:00.655 01:08:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:31:00.655 01:08:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:88:00.0' 00:31:02.028 Initializing NVMe Controllers 00:31:02.028 Attached to NVMe Controller at 0000:88:00.0 [8086:0a54] 00:31:02.028 Associating PCIE (0000:88:00.0) NSID 1 with lcore 0 00:31:02.028 Initialization complete. Launching workers. 
00:31:02.028 ======================================================== 00:31:02.028 Latency(us) 00:31:02.028 Device Information : IOPS MiB/s Average min max 00:31:02.028 PCIE (0000:88:00.0) NSID 1 from core 0: 74555.72 291.23 428.47 42.71 4406.48 00:31:02.028 ======================================================== 00:31:02.028 Total : 74555.72 291.23 428.47 42.71 4406.48 00:31:02.028 00:31:02.028 01:08:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:03.407 Initializing NVMe Controllers 00:31:03.407 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:03.407 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:03.407 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:31:03.407 Initialization complete. Launching workers. 00:31:03.407 ======================================================== 00:31:03.407 Latency(us) 00:31:03.407 Device Information : IOPS MiB/s Average min max 00:31:03.407 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 84.00 0.33 12268.10 207.83 45516.59 00:31:03.407 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 56.00 0.22 17936.68 7930.59 48730.91 00:31:03.407 ======================================================== 00:31:03.407 Total : 140.00 0.55 14535.53 207.83 48730.91 00:31:03.407 00:31:03.407 01:08:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:04.779 Initializing NVMe Controllers 00:31:04.779 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:04.779 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:04.779 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:31:04.780 Initialization complete. Launching workers. 00:31:04.780 ======================================================== 00:31:04.780 Latency(us) 00:31:04.780 Device Information : IOPS MiB/s Average min max 00:31:04.780 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 5849.00 22.85 5475.14 1036.03 12139.62 00:31:04.780 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3832.00 14.97 8392.51 6453.62 16349.51 00:31:04.780 ======================================================== 00:31:04.780 Total : 9681.00 37.82 6629.92 1036.03 16349.51 00:31:04.780 00:31:04.780 01:08:37 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:31:04.780 01:08:37 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:31:04.780 01:08:37 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:08.073 Initializing NVMe Controllers 00:31:08.073 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:08.073 Controller IO queue size 128, less than required. 00:31:08.073 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:31:08.073 Controller IO queue size 128, less than required. 00:31:08.073 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:31:08.073 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:08.073 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:31:08.073 Initialization complete. Launching workers. 00:31:08.073 ======================================================== 00:31:08.073 Latency(us) 00:31:08.073 Device Information : IOPS MiB/s Average min max 00:31:08.073 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1293.76 323.44 103756.57 62282.42 318587.55 00:31:08.073 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 541.06 135.27 254380.69 136829.83 479177.10 00:31:08.073 ======================================================== 00:31:08.073 Total : 1834.82 458.70 148173.48 62282.42 479177.10 00:31:08.073 00:31:08.073 01:08:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:31:08.073 No valid NVMe controllers or AIO or URING devices found 00:31:08.073 Initializing NVMe Controllers 00:31:08.073 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:08.073 Controller IO queue size 128, less than required. 00:31:08.073 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:31:08.073 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:31:08.073 Controller IO queue size 128, less than required. 00:31:08.073 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:31:08.073 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test 00:31:08.073 WARNING: Some requested NVMe devices were skipped 00:31:08.073 01:08:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:31:11.361 Initializing NVMe Controllers 00:31:11.361 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:11.361 Controller IO queue size 128, less than required. 00:31:11.361 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:31:11.361 Controller IO queue size 128, less than required. 00:31:11.361 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:31:11.361 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:11.361 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:31:11.361 Initialization complete. Launching workers. 
00:31:11.361 00:31:11.361 ==================== 00:31:11.361 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:31:11.361 TCP transport: 00:31:11.361 polls: 5893 00:31:11.361 idle_polls: 3298 00:31:11.361 sock_completions: 2595 00:31:11.361 nvme_completions: 4797 00:31:11.361 submitted_requests: 7138 00:31:11.361 queued_requests: 1 00:31:11.361 00:31:11.361 ==================== 00:31:11.361 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:31:11.361 TCP transport: 00:31:11.361 polls: 8659 00:31:11.361 idle_polls: 6142 00:31:11.361 sock_completions: 2517 00:31:11.361 nvme_completions: 4825 00:31:11.361 submitted_requests: 7224 00:31:11.361 queued_requests: 1 00:31:11.361 ======================================================== 00:31:11.361 Latency(us) 00:31:11.361 Device Information : IOPS MiB/s Average min max 00:31:11.361 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1198.53 299.63 113594.29 70619.51 397764.37 00:31:11.361 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1205.53 301.38 107439.58 47366.05 334057.24 00:31:11.361 ======================================================== 00:31:11.361 Total : 2404.06 601.01 110507.98 47366.05 397764.37 00:31:11.361 00:31:11.361 01:08:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:31:11.361 01:08:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:11.361 01:08:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 1 -eq 1 ']' 00:31:11.361 01:08:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@71 -- # '[' -n 0000:88:00.0 ']' 00:31:11.361 01:08:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:31:14.638 01:08:47 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@72 -- # ls_guid=6d40de6a-860f-4e90-b1e4-699b7d3597bf 00:31:14.638 01:08:47 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@73 -- # get_lvs_free_mb 6d40de6a-860f-4e90-b1e4-699b7d3597bf 00:31:14.638 01:08:47 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # local lvs_uuid=6d40de6a-860f-4e90-b1e4-699b7d3597bf 00:31:14.638 01:08:47 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # local lvs_info 00:31:14.638 01:08:47 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # local fc 00:31:14.638 01:08:47 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1371 -- # local cs 00:31:14.638 01:08:47 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1372 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:31:15.202 01:08:47 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1372 -- # lvs_info='[ 00:31:15.202 { 00:31:15.202 "uuid": "6d40de6a-860f-4e90-b1e4-699b7d3597bf", 00:31:15.202 "name": "lvs_0", 00:31:15.202 "base_bdev": "Nvme0n1", 00:31:15.202 "total_data_clusters": 238234, 00:31:15.202 "free_clusters": 238234, 00:31:15.202 "block_size": 512, 00:31:15.202 "cluster_size": 4194304 00:31:15.202 } 00:31:15.202 ]' 00:31:15.202 01:08:47 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # jq '.[] | select(.uuid=="6d40de6a-860f-4e90-b1e4-699b7d3597bf") .free_clusters' 00:31:15.202 01:08:47 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # fc=238234 00:31:15.202 01:08:47 
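The pair of jq lookups around this point is how get_lvs_free_mb sizes a volume store: it reads .free_clusters and .cluster_size for the store's UUID from the bdev_lvol_get_lvstores output above and multiplies them. For lvs_0 the arithmetic works out as follows; this is a minimal restatement, and the helper's exact shell in autotest_common.sh may be worded differently.

# Values taken from the lvs_0 entry printed by bdev_lvol_get_lvstores above.
fc=238234                            # .free_clusters
cs=4194304                           # .cluster_size in bytes (4 MiB)
free_mb=$(( fc * cs / 1024 / 1024 ))
echo "$free_mb"                      # 952936 MiB, the value the trace echoes shortly after;
                                     # perf.sh then caps it at 20480 MiB before creating lbd_0

The same helper is applied again a few lines later to lvs_n_0, the nested store built on top of lbd_0, where 5114 free clusters of 4 MiB give the 20456 MiB used for lbd_nest_0.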
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # jq '.[] | select(.uuid=="6d40de6a-860f-4e90-b1e4-699b7d3597bf") .cluster_size' 00:31:15.202 01:08:47 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # cs=4194304 00:31:15.202 01:08:47 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1377 -- # free_mb=952936 00:31:15.202 01:08:47 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1378 -- # echo 952936 00:31:15.202 952936 00:31:15.202 01:08:47 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@77 -- # '[' 952936 -gt 20480 ']' 00:31:15.202 01:08:47 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@78 -- # free_mb=20480 00:31:15.202 01:08:47 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 6d40de6a-860f-4e90-b1e4-699b7d3597bf lbd_0 20480 00:31:15.460 01:08:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@80 -- # lb_guid=a9b6e4e7-9be5-41de-99d4-e9841ba7090c 00:31:15.460 01:08:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore a9b6e4e7-9be5-41de-99d4-e9841ba7090c lvs_n_0 00:31:16.389 01:08:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@83 -- # ls_nested_guid=d60dc220-8c1a-4668-9f13-c1160a063ac6 00:31:16.389 01:08:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@84 -- # get_lvs_free_mb d60dc220-8c1a-4668-9f13-c1160a063ac6 00:31:16.390 01:08:48 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # local lvs_uuid=d60dc220-8c1a-4668-9f13-c1160a063ac6 00:31:16.390 01:08:48 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # local lvs_info 00:31:16.390 01:08:48 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # local fc 00:31:16.390 01:08:48 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1371 -- # local cs 00:31:16.390 01:08:48 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1372 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:31:16.647 01:08:49 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1372 -- # lvs_info='[ 00:31:16.647 { 00:31:16.647 "uuid": "6d40de6a-860f-4e90-b1e4-699b7d3597bf", 00:31:16.647 "name": "lvs_0", 00:31:16.647 "base_bdev": "Nvme0n1", 00:31:16.647 "total_data_clusters": 238234, 00:31:16.647 "free_clusters": 233114, 00:31:16.647 "block_size": 512, 00:31:16.647 "cluster_size": 4194304 00:31:16.647 }, 00:31:16.647 { 00:31:16.647 "uuid": "d60dc220-8c1a-4668-9f13-c1160a063ac6", 00:31:16.647 "name": "lvs_n_0", 00:31:16.647 "base_bdev": "a9b6e4e7-9be5-41de-99d4-e9841ba7090c", 00:31:16.647 "total_data_clusters": 5114, 00:31:16.647 "free_clusters": 5114, 00:31:16.647 "block_size": 512, 00:31:16.647 "cluster_size": 4194304 00:31:16.647 } 00:31:16.647 ]' 00:31:16.647 01:08:49 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # jq '.[] | select(.uuid=="d60dc220-8c1a-4668-9f13-c1160a063ac6") .free_clusters' 00:31:16.647 01:08:49 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # fc=5114 00:31:16.647 01:08:49 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # jq '.[] | select(.uuid=="d60dc220-8c1a-4668-9f13-c1160a063ac6") .cluster_size' 00:31:16.648 01:08:49 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # cs=4194304 00:31:16.648 01:08:49 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1377 -- # free_mb=20456 00:31:16.648 01:08:49 nvmf_tcp.nvmf_host.nvmf_perf -- 
common/autotest_common.sh@1378 -- # echo 20456 00:31:16.648 20456 00:31:16.648 01:08:49 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@85 -- # '[' 20456 -gt 20480 ']' 00:31:16.648 01:08:49 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u d60dc220-8c1a-4668-9f13-c1160a063ac6 lbd_nest_0 20456 00:31:17.211 01:08:49 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@88 -- # lb_nested_guid=f06780d6-face-419d-bda7-ec480c7b4a9f 00:31:17.211 01:08:49 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:31:17.211 01:08:49 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:31:17.211 01:08:49 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 f06780d6-face-419d-bda7-ec480c7b4a9f 00:31:17.775 01:08:50 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:17.775 01:08:50 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@95 -- # qd_depth=("1" "32" "128") 00:31:17.775 01:08:50 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@96 -- # io_size=("512" "131072") 00:31:17.775 01:08:50 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:31:17.775 01:08:50 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:31:17.775 01:08:50 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:29.970 Initializing NVMe Controllers 00:31:29.970 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:29.970 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:29.970 Initialization complete. Launching workers. 00:31:29.970 ======================================================== 00:31:29.970 Latency(us) 00:31:29.970 Device Information : IOPS MiB/s Average min max 00:31:29.970 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 47.70 0.02 20985.75 233.57 45763.00 00:31:29.970 ======================================================== 00:31:29.970 Total : 47.70 0.02 20985.75 233.57 45763.00 00:31:29.970 00:31:29.970 01:09:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:31:29.970 01:09:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:39.955 Initializing NVMe Controllers 00:31:39.955 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:39.955 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:39.955 Initialization complete. Launching workers. 
00:31:39.955 ======================================================== 00:31:39.955 Latency(us) 00:31:39.955 Device Information : IOPS MiB/s Average min max 00:31:39.955 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 77.48 9.69 12905.37 4779.51 47900.97 00:31:39.955 ======================================================== 00:31:39.955 Total : 77.48 9.69 12905.37 4779.51 47900.97 00:31:39.955 00:31:39.955 01:09:11 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:31:39.955 01:09:11 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:31:39.955 01:09:11 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:49.985 Initializing NVMe Controllers 00:31:49.985 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:49.985 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:49.985 Initialization complete. Launching workers. 00:31:49.985 ======================================================== 00:31:49.985 Latency(us) 00:31:49.985 Device Information : IOPS MiB/s Average min max 00:31:49.985 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 4748.55 2.32 6742.88 624.20 14147.76 00:31:49.985 ======================================================== 00:31:49.985 Total : 4748.55 2.32 6742.88 624.20 14147.76 00:31:49.985 00:31:49.985 01:09:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:31:49.985 01:09:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:59.953 Initializing NVMe Controllers 00:31:59.953 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:59.953 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:59.953 Initialization complete. Launching workers. 00:31:59.953 ======================================================== 00:31:59.953 Latency(us) 00:31:59.953 Device Information : IOPS MiB/s Average min max 00:31:59.953 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3586.07 448.26 8925.21 1051.66 21146.63 00:31:59.953 ======================================================== 00:31:59.953 Total : 3586.07 448.26 8925.21 1051.66 21146.63 00:31:59.953 00:31:59.953 01:09:32 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:31:59.953 01:09:32 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:31:59.953 01:09:32 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:32:09.922 Initializing NVMe Controllers 00:32:09.922 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:32:09.922 Controller IO queue size 128, less than required. 00:32:09.922 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:32:09.922 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:32:09.922 Initialization complete. Launching workers. 00:32:09.922 ======================================================== 00:32:09.922 Latency(us) 00:32:09.922 Device Information : IOPS MiB/s Average min max 00:32:09.922 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8363.42 4.08 15317.05 1814.77 37486.00 00:32:09.922 ======================================================== 00:32:09.922 Total : 8363.42 4.08 15317.05 1814.77 37486.00 00:32:09.922 00:32:10.180 01:09:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:32:10.180 01:09:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:32:22.372 Initializing NVMe Controllers 00:32:22.372 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:32:22.372 Controller IO queue size 128, less than required. 00:32:22.373 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:32:22.373 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:32:22.373 Initialization complete. Launching workers. 00:32:22.373 ======================================================== 00:32:22.373 Latency(us) 00:32:22.373 Device Information : IOPS MiB/s Average min max 00:32:22.373 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1148.42 143.55 111477.60 15655.93 221065.74 00:32:22.373 ======================================================== 00:32:22.373 Total : 1148.42 143.55 111477.60 15655.93 221065.74 00:32:22.373 00:32:22.373 01:09:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:22.373 01:09:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete f06780d6-face-419d-bda7-ec480c7b4a9f 00:32:22.373 01:09:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:32:22.373 01:09:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@107 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete a9b6e4e7-9be5-41de-99d4-e9841ba7090c 00:32:22.373 01:09:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@108 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:32:22.630 01:09:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:32:22.630 01:09:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:32:22.630 01:09:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:22.630 01:09:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # sync 00:32:22.630 01:09:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:22.630 01:09:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e 00:32:22.630 01:09:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:22.630 01:09:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:22.630 rmmod nvme_tcp 
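Teardown mirrors the setup: the subsystem is deleted first, then the nested lvol and store, then the base lvol and store, and finally nvmftestfini unloads the NVMe modules, kills the target and unwinds the host-side plumbing. A sketch of that order, using the names and UUIDs created earlier in this run; the iptables pipeline and the namespace removal are inferred, since _remove_spdk_ns runs with xtrace disabled here.

RPC="$SPDK/scripts/rpc.py"
"$RPC" nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
"$RPC" bdev_lvol_delete f06780d6-face-419d-bda7-ec480c7b4a9f   # lbd_nest_0
"$RPC" bdev_lvol_delete_lvstore -l lvs_n_0
"$RPC" bdev_lvol_delete a9b6e4e7-9be5-41de-99d4-e9841ba7090c   # lbd_0
"$RPC" bdev_lvol_delete_lvstore -l lvs_0
modprobe -r nvme-tcp                 # also drops nvme_fabrics / nvme_keyring, as traced
modprobe -r nvme-fabrics
kill 3074995 && wait 3074995         # the nvmf_tgt pid started by nvmfappstart
# Inferred pipe from the three commands traced on common.sh:791 (iptables-save,
# grep -v SPDK_NVMF, iptables-restore): removes the 4420 ACCEPT rule added at setup.
iptables-save | grep -v SPDK_NVMF | iptables-restore
ip netns delete cvl_0_0_ns_spdk      # assumed body of _remove_spdk_ns (hidden by xtrace)
ip -4 addr flush cvl_0_1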
00:32:22.630 rmmod nvme_fabrics 00:32:22.630 rmmod nvme_keyring 00:32:22.630 01:09:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:22.630 01:09:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e 00:32:22.630 01:09:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0 00:32:22.630 01:09:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@517 -- # '[' -n 3074995 ']' 00:32:22.630 01:09:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # killprocess 3074995 00:32:22.630 01:09:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # '[' -z 3074995 ']' 00:32:22.630 01:09:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # kill -0 3074995 00:32:22.630 01:09:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # uname 00:32:22.630 01:09:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:22.630 01:09:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3074995 00:32:22.630 01:09:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:22.630 01:09:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:22.630 01:09:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3074995' 00:32:22.630 killing process with pid 3074995 00:32:22.630 01:09:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@973 -- # kill 3074995 00:32:22.630 01:09:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@978 -- # wait 3074995 00:32:25.153 01:09:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:25.153 01:09:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:25.153 01:09:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:25.153 01:09:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr 00:32:25.153 01:09:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-save 00:32:25.153 01:09:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:25.153 01:09:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-restore 00:32:25.153 01:09:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:25.153 01:09:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:25.153 01:09:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:25.153 01:09:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:25.153 01:09:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:27.684 01:09:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:27.684 00:32:27.684 real 1m35.388s 00:32:27.684 user 5m54.323s 00:32:27.684 sys 0m14.974s 00:32:27.684 01:09:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:27.684 01:09:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:32:27.684 ************************************ 00:32:27.684 END TEST nvmf_perf 00:32:27.684 ************************************ 00:32:27.684 01:09:59 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:32:27.684 01:09:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:32:27.684 01:09:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:27.684 01:09:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:32:27.684 ************************************ 00:32:27.684 START TEST nvmf_fio_host 00:32:27.684 ************************************ 00:32:27.684 01:09:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:32:27.684 * Looking for test storage... 00:32:27.684 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:32:27.684 01:09:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:32:27.684 01:09:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # lcov --version 00:32:27.684 01:09:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:32:27.684 01:09:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:32:27.684 01:09:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:27.684 01:09:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:27.684 01:09:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:27.684 01:09:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:32:27.684 01:09:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:32:27.684 01:09:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:32:27.684 01:09:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:32:27.684 01:09:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:32:27.684 01:09:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:32:27.684 01:09:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:32:27.684 01:09:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:27.684 01:09:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:32:27.684 01:09:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:32:27.684 01:09:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:27.684 01:09:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:27.684 01:09:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:32:27.684 01:09:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:32:27.684 01:09:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:27.684 01:09:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:32:27.684 01:09:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:32:27.684 01:09:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:32:27.684 01:09:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:32:27.684 01:09:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:27.684 01:09:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:32:27.684 01:09:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:32:27.684 01:09:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:27.684 01:09:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:27.684 01:09:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:32:27.684 01:09:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:27.684 01:09:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:32:27.684 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:27.684 --rc genhtml_branch_coverage=1 00:32:27.684 --rc genhtml_function_coverage=1 00:32:27.684 --rc genhtml_legend=1 00:32:27.684 --rc geninfo_all_blocks=1 00:32:27.684 --rc geninfo_unexecuted_blocks=1 00:32:27.684 00:32:27.684 ' 00:32:27.684 01:09:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:32:27.684 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:27.684 --rc genhtml_branch_coverage=1 00:32:27.684 --rc genhtml_function_coverage=1 00:32:27.684 --rc genhtml_legend=1 00:32:27.684 --rc geninfo_all_blocks=1 00:32:27.684 --rc geninfo_unexecuted_blocks=1 00:32:27.684 00:32:27.684 ' 00:32:27.684 01:09:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:32:27.684 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:27.684 --rc genhtml_branch_coverage=1 00:32:27.684 --rc genhtml_function_coverage=1 00:32:27.684 --rc genhtml_legend=1 00:32:27.684 --rc geninfo_all_blocks=1 00:32:27.684 --rc geninfo_unexecuted_blocks=1 00:32:27.684 00:32:27.684 ' 00:32:27.684 01:09:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:32:27.684 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:27.684 --rc genhtml_branch_coverage=1 00:32:27.684 --rc genhtml_function_coverage=1 00:32:27.684 --rc genhtml_legend=1 00:32:27.684 --rc geninfo_all_blocks=1 00:32:27.684 --rc geninfo_unexecuted_blocks=1 00:32:27.684 00:32:27.684 ' 00:32:27.684 01:09:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:27.684 01:09:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:32:27.684 01:09:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:27.684 01:09:59 
nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:27.684 01:09:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:27.684 01:09:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:27.685 01:09:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:27.685 01:09:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:27.685 01:09:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:32:27.685 01:09:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:27.685 01:09:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:27.685 01:09:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:32:27.685 01:09:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:27.685 01:09:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:27.685 01:09:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:32:27.685 01:09:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:27.685 01:09:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:27.685 01:09:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:27.685 01:09:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:27.685 01:09:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:27.685 01:09:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:27.685 01:09:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:27.685 01:09:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:32:27.685 01:09:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:32:27.685 01:09:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:27.685 01:09:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:27.685 01:09:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:27.685 01:09:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:27.685 01:09:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:27.685 01:09:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:32:27.685 01:09:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:27.685 01:09:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:27.685 01:09:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:27.685 01:09:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:27.685 01:09:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:27.685 01:09:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:27.685 01:09:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:32:27.685 01:09:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:27.685 01:09:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:32:27.685 01:09:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:27.685 01:09:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:27.685 01:09:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:27.685 01:09:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:27.685 01:09:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:27.685 01:09:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:32:27.685 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:32:27.685 01:09:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:27.685 01:09:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:27.685 01:09:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:27.685 01:09:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:27.685 
01:09:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:32:27.685 01:09:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:27.685 01:09:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:27.685 01:09:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:27.685 01:09:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:27.685 01:09:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:27.685 01:09:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:27.685 01:09:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:27.685 01:09:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:27.685 01:09:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:27.685 01:09:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:27.685 01:09:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@309 -- # xtrace_disable 00:32:27.685 01:09:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:32:29.589 01:10:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:29.589 01:10:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # pci_devs=() 00:32:29.589 01:10:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:29.589 01:10:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:29.589 01:10:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:29.589 01:10:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:29.589 01:10:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:29.589 01:10:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # net_devs=() 00:32:29.589 01:10:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:29.589 01:10:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # e810=() 00:32:29.589 01:10:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # local -ga e810 00:32:29.589 01:10:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # x722=() 00:32:29.589 01:10:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # local -ga x722 00:32:29.589 01:10:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # mlx=() 00:32:29.589 01:10:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # local -ga mlx 00:32:29.589 01:10:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:29.589 01:10:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:29.589 01:10:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:29.589 01:10:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:29.589 01:10:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:29.589 01:10:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:29.589 01:10:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:29.589 01:10:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:29.589 01:10:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:29.589 01:10:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:29.589 01:10:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:29.589 01:10:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:29.589 01:10:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:29.589 01:10:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:29.589 01:10:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:29.589 01:10:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:29.589 01:10:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:29.589 01:10:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:29.589 01:10:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:29.589 01:10:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:32:29.589 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:32:29.589 01:10:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:29.589 01:10:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:29.589 01:10:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:29.589 01:10:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:29.589 01:10:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:29.589 01:10:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:29.589 01:10:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:32:29.589 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:32:29.589 01:10:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:29.589 01:10:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:29.589 01:10:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:29.589 01:10:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:29.589 01:10:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:29.589 01:10:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:29.589 01:10:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:29.589 01:10:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:29.589 01:10:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:29.589 01:10:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:29.589 01:10:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:29.589 01:10:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:29.589 01:10:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:29.589 01:10:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:29.589 01:10:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:29.589 01:10:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:32:29.589 Found net devices under 0000:0a:00.0: cvl_0_0 00:32:29.589 01:10:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:29.589 01:10:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:29.589 01:10:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:29.589 01:10:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:29.589 01:10:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:29.589 01:10:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:29.589 01:10:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:29.589 01:10:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:29.589 01:10:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:32:29.589 Found net devices under 0000:0a:00.1: cvl_0_1 00:32:29.589 01:10:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:29.589 01:10:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:29.589 01:10:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # is_hw=yes 00:32:29.589 01:10:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:29.589 01:10:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:29.589 01:10:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:29.589 01:10:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:29.589 01:10:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:29.589 01:10:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:29.589 01:10:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:29.589 01:10:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:29.589 01:10:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:29.589 01:10:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:29.589 01:10:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:29.589 01:10:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:29.589 01:10:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@265 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:29.589 01:10:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:29.589 01:10:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:29.589 01:10:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:29.589 01:10:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:29.589 01:10:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:29.589 01:10:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:29.589 01:10:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:29.589 01:10:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:29.589 01:10:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:29.589 01:10:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:29.589 01:10:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:29.589 01:10:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:29.589 01:10:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:29.589 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:29.589 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.255 ms 00:32:29.589 00:32:29.589 --- 10.0.0.2 ping statistics --- 00:32:29.589 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:29.589 rtt min/avg/max/mdev = 0.255/0.255/0.255/0.000 ms 00:32:29.589 01:10:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:29.589 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:29.589 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.065 ms 00:32:29.590 00:32:29.590 --- 10.0.0.1 ping statistics --- 00:32:29.590 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:29.590 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:32:29.590 01:10:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:29.590 01:10:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@450 -- # return 0 00:32:29.590 01:10:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:29.590 01:10:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:29.590 01:10:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:29.590 01:10:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:29.590 01:10:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:29.590 01:10:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:29.590 01:10:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:29.590 01:10:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:32:29.590 01:10:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:32:29.590 01:10:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:29.590 01:10:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:32:29.590 01:10:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=3088112 00:32:29.590 01:10:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:32:29.590 01:10:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:32:29.590 01:10:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 3088112 00:32:29.590 01:10:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@835 -- # '[' -z 3088112 ']' 00:32:29.590 01:10:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:29.590 01:10:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:29.590 01:10:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:29.590 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:29.590 01:10:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:29.590 01:10:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:32:29.590 [2024-12-06 01:10:02.182926] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 24.03.0 initialization... 
00:32:29.590 [2024-12-06 01:10:02.183070] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:29.847 [2024-12-06 01:10:02.339772] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:32:29.847 [2024-12-06 01:10:02.483600] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:29.847 [2024-12-06 01:10:02.483680] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:29.847 [2024-12-06 01:10:02.483712] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:29.847 [2024-12-06 01:10:02.483737] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:29.847 [2024-12-06 01:10:02.483758] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:29.848 [2024-12-06 01:10:02.486541] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:29.848 [2024-12-06 01:10:02.486612] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:32:29.848 [2024-12-06 01:10:02.486702] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:29.848 [2024-12-06 01:10:02.486708] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:32:30.781 01:10:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:30.781 01:10:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@868 -- # return 0 00:32:30.781 01:10:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:32:30.781 [2024-12-06 01:10:03.424473] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:30.781 01:10:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:32:30.781 01:10:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:30.781 01:10:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:32:30.781 01:10:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:32:31.347 Malloc1 00:32:31.347 01:10:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:32:31.606 01:10:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:32:31.864 01:10:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:32.123 [2024-12-06 01:10:04.607422] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:32.123 01:10:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:32:32.380 01:10:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # 
PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:32:32.380 01:10:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:32:32.380 01:10:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:32:32.380 01:10:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:32:32.380 01:10:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:32:32.380 01:10:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:32:32.380 01:10:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:32:32.380 01:10:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:32:32.380 01:10:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:32:32.380 01:10:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:32:32.380 01:10:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:32:32.380 01:10:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:32:32.380 01:10:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:32:32.380 01:10:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:32:32.380 01:10:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:32:32.380 01:10:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1351 -- # break 00:32:32.380 01:10:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:32:32.380 01:10:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:32:32.637 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:32:32.637 fio-3.35 00:32:32.637 Starting 1 thread 00:32:35.161 00:32:35.161 test: (groupid=0, jobs=1): err= 0: pid=3088593: Fri Dec 6 01:10:07 2024 00:32:35.161 read: IOPS=6443, BW=25.2MiB/s (26.4MB/s)(50.6MiB/2009msec) 00:32:35.161 slat (usec): min=2, max=131, avg= 3.48, stdev= 1.83 00:32:35.161 clat (usec): min=3472, max=18986, avg=10792.00, stdev=930.31 00:32:35.161 lat (usec): min=3502, max=18989, avg=10795.48, stdev=930.24 00:32:35.161 clat percentiles (usec): 00:32:35.161 | 1.00th=[ 8717], 5.00th=[ 9372], 10.00th=[ 9634], 20.00th=[10028], 00:32:35.161 | 30.00th=[10290], 40.00th=[10552], 50.00th=[10814], 60.00th=[11076], 00:32:35.161 | 70.00th=[11207], 
80.00th=[11600], 90.00th=[11863], 95.00th=[12125], 00:32:35.161 | 99.00th=[12780], 99.50th=[12911], 99.90th=[16712], 99.95th=[17171], 00:32:35.161 | 99.99th=[18482] 00:32:35.161 bw ( KiB/s): min=24504, max=26408, per=99.93%, avg=25756.00, stdev=865.32, samples=4 00:32:35.161 iops : min= 6126, max= 6602, avg=6439.00, stdev=216.33, samples=4 00:32:35.161 write: IOPS=6450, BW=25.2MiB/s (26.4MB/s)(50.6MiB/2009msec); 0 zone resets 00:32:35.161 slat (usec): min=2, max=107, avg= 3.51, stdev= 1.57 00:32:35.161 clat (usec): min=1270, max=17065, avg=8947.36, stdev=785.75 00:32:35.161 lat (usec): min=1286, max=17069, avg=8950.88, stdev=785.73 00:32:35.161 clat percentiles (usec): 00:32:35.161 | 1.00th=[ 7242], 5.00th=[ 7832], 10.00th=[ 8094], 20.00th=[ 8356], 00:32:35.161 | 30.00th=[ 8586], 40.00th=[ 8848], 50.00th=[ 8979], 60.00th=[ 9110], 00:32:35.161 | 70.00th=[ 9241], 80.00th=[ 9503], 90.00th=[ 9765], 95.00th=[10028], 00:32:35.161 | 99.00th=[10552], 99.50th=[10945], 99.90th=[15926], 99.95th=[16581], 00:32:35.161 | 99.99th=[16909] 00:32:35.161 bw ( KiB/s): min=25536, max=26136, per=99.98%, avg=25798.00, stdev=255.70, samples=4 00:32:35.161 iops : min= 6384, max= 6534, avg=6449.50, stdev=63.92, samples=4 00:32:35.161 lat (msec) : 2=0.01%, 4=0.08%, 10=55.55%, 20=44.35% 00:32:35.161 cpu : usr=69.02%, sys=29.43%, ctx=61, majf=0, minf=1544 00:32:35.161 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:32:35.161 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:35.161 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:32:35.161 issued rwts: total=12945,12960,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:35.161 latency : target=0, window=0, percentile=100.00%, depth=128 00:32:35.161 00:32:35.161 Run status group 0 (all jobs): 00:32:35.161 READ: bw=25.2MiB/s (26.4MB/s), 25.2MiB/s-25.2MiB/s (26.4MB/s-26.4MB/s), io=50.6MiB (53.0MB), run=2009-2009msec 00:32:35.161 WRITE: bw=25.2MiB/s (26.4MB/s), 25.2MiB/s-25.2MiB/s (26.4MB/s-26.4MB/s), io=50.6MiB (53.1MB), run=2009-2009msec 00:32:35.420 ----------------------------------------------------- 00:32:35.420 Suppressions used: 00:32:35.420 count bytes template 00:32:35.420 1 57 /usr/src/fio/parse.c 00:32:35.420 1 8 libtcmalloc_minimal.so 00:32:35.420 ----------------------------------------------------- 00:32:35.420 00:32:35.420 01:10:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:32:35.420 01:10:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:32:35.420 01:10:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:32:35.420 01:10:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:32:35.420 01:10:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:32:35.420 01:10:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:32:35.420 01:10:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@1345 -- # shift 00:32:35.420 01:10:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:32:35.420 01:10:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:32:35.420 01:10:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:32:35.420 01:10:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:32:35.420 01:10:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:32:35.420 01:10:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:32:35.420 01:10:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:32:35.420 01:10:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1351 -- # break 00:32:35.420 01:10:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:32:35.420 01:10:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:32:35.678 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:32:35.678 fio-3.35 00:32:35.678 Starting 1 thread 00:32:38.210 00:32:38.210 test: (groupid=0, jobs=1): err= 0: pid=3088928: Fri Dec 6 01:10:10 2024 00:32:38.210 read: IOPS=6087, BW=95.1MiB/s (99.7MB/s)(191MiB/2008msec) 00:32:38.210 slat (usec): min=3, max=104, avg= 4.87, stdev= 2.05 00:32:38.210 clat (usec): min=4281, max=57435, avg=12200.11, stdev=4653.33 00:32:38.210 lat (usec): min=4285, max=57440, avg=12204.98, stdev=4653.35 00:32:38.210 clat percentiles (usec): 00:32:38.210 | 1.00th=[ 6456], 5.00th=[ 8160], 10.00th=[ 8979], 20.00th=[ 9896], 00:32:38.210 | 30.00th=[10552], 40.00th=[11207], 50.00th=[11731], 60.00th=[12256], 00:32:38.210 | 70.00th=[12780], 80.00th=[13566], 90.00th=[15008], 95.00th=[16450], 00:32:38.210 | 99.00th=[46400], 99.50th=[51643], 99.90th=[55313], 99.95th=[56886], 00:32:38.210 | 99.99th=[57410] 00:32:38.210 bw ( KiB/s): min=41152, max=54400, per=49.59%, avg=48304.00, stdev=6768.56, samples=4 00:32:38.210 iops : min= 2572, max= 3400, avg=3019.00, stdev=423.04, samples=4 00:32:38.210 write: IOPS=3597, BW=56.2MiB/s (58.9MB/s)(98.2MiB/1746msec); 0 zone resets 00:32:38.210 slat (usec): min=32, max=211, avg=36.80, stdev= 7.21 00:32:38.210 clat (usec): min=7496, max=24794, avg=16206.26, stdev=2496.85 00:32:38.210 lat (usec): min=7529, max=24828, avg=16243.06, stdev=2496.87 00:32:38.210 clat percentiles (usec): 00:32:38.210 | 1.00th=[11076], 5.00th=[12125], 10.00th=[12911], 20.00th=[13960], 00:32:38.210 | 30.00th=[14877], 40.00th=[15533], 50.00th=[16319], 60.00th=[16909], 00:32:38.210 | 70.00th=[17433], 80.00th=[18220], 90.00th=[19268], 95.00th=[20579], 00:32:38.210 | 99.00th=[22152], 99.50th=[22938], 99.90th=[24249], 99.95th=[24511], 00:32:38.210 | 99.99th=[24773] 00:32:38.210 bw ( KiB/s): min=42976, max=56352, per=87.30%, avg=50256.00, stdev=6439.85, samples=4 00:32:38.210 iops : min= 2686, max= 3522, avg=3141.00, stdev=402.49, samples=4 00:32:38.210 lat (msec) : 10=13.80%, 
20=83.33%, 50=2.42%, 100=0.46% 00:32:38.210 cpu : usr=76.48%, sys=22.12%, ctx=49, majf=0, minf=2062 00:32:38.210 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.6% 00:32:38.210 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:38.210 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:32:38.210 issued rwts: total=12224,6282,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:38.210 latency : target=0, window=0, percentile=100.00%, depth=128 00:32:38.210 00:32:38.210 Run status group 0 (all jobs): 00:32:38.210 READ: bw=95.1MiB/s (99.7MB/s), 95.1MiB/s-95.1MiB/s (99.7MB/s-99.7MB/s), io=191MiB (200MB), run=2008-2008msec 00:32:38.210 WRITE: bw=56.2MiB/s (58.9MB/s), 56.2MiB/s-56.2MiB/s (58.9MB/s-58.9MB/s), io=98.2MiB (103MB), run=1746-1746msec 00:32:38.468 ----------------------------------------------------- 00:32:38.468 Suppressions used: 00:32:38.468 count bytes template 00:32:38.468 1 57 /usr/src/fio/parse.c 00:32:38.468 96 9216 /usr/src/fio/iolog.c 00:32:38.468 1 8 libtcmalloc_minimal.so 00:32:38.468 ----------------------------------------------------- 00:32:38.468 00:32:38.468 01:10:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:38.727 01:10:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 1 -eq 1 ']' 00:32:38.727 01:10:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@51 -- # bdfs=($(get_nvme_bdfs)) 00:32:38.727 01:10:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@51 -- # get_nvme_bdfs 00:32:38.727 01:10:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1498 -- # bdfs=() 00:32:38.727 01:10:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1498 -- # local bdfs 00:32:38.727 01:10:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:32:38.727 01:10:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:32:38.727 01:10:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:32:38.727 01:10:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:32:38.727 01:10:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:88:00.0 00:32:38.727 01:10:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:88:00.0 -i 10.0.0.2 00:32:42.011 Nvme0n1 00:32:42.011 01:10:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:32:45.280 01:10:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@53 -- # ls_guid=88433c86-9597-43c0-85c4-815a26ab633a 00:32:45.280 01:10:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@54 -- # get_lvs_free_mb 88433c86-9597-43c0-85c4-815a26ab633a 00:32:45.280 01:10:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # local lvs_uuid=88433c86-9597-43c0-85c4-815a26ab633a 00:32:45.280 01:10:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # local lvs_info 00:32:45.280 01:10:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # 
local fc 00:32:45.280 01:10:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1371 -- # local cs 00:32:45.280 01:10:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1372 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:32:45.280 01:10:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1372 -- # lvs_info='[ 00:32:45.280 { 00:32:45.280 "uuid": "88433c86-9597-43c0-85c4-815a26ab633a", 00:32:45.280 "name": "lvs_0", 00:32:45.280 "base_bdev": "Nvme0n1", 00:32:45.280 "total_data_clusters": 930, 00:32:45.280 "free_clusters": 930, 00:32:45.281 "block_size": 512, 00:32:45.281 "cluster_size": 1073741824 00:32:45.281 } 00:32:45.281 ]' 00:32:45.281 01:10:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # jq '.[] | select(.uuid=="88433c86-9597-43c0-85c4-815a26ab633a") .free_clusters' 00:32:45.281 01:10:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # fc=930 00:32:45.281 01:10:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # jq '.[] | select(.uuid=="88433c86-9597-43c0-85c4-815a26ab633a") .cluster_size' 00:32:45.281 01:10:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # cs=1073741824 00:32:45.281 01:10:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1377 -- # free_mb=952320 00:32:45.281 01:10:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1378 -- # echo 952320 00:32:45.281 952320 00:32:45.281 01:10:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_0 lbd_0 952320 00:32:45.537 a9d44a6b-ec58-4fb1-b53c-0e2851190bcb 00:32:45.537 01:10:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:32:45.795 01:10:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:32:46.095 01:10:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:32:46.382 01:10:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@59 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:32:46.382 01:10:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:32:46.382 01:10:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:32:46.382 01:10:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:32:46.382 01:10:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:32:46.382 01:10:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:32:46.382 01:10:19 
nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:32:46.382 01:10:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:32:46.382 01:10:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:32:46.382 01:10:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:32:46.382 01:10:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:32:46.382 01:10:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:32:46.382 01:10:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:32:46.382 01:10:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:32:46.382 01:10:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1351 -- # break 00:32:46.382 01:10:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:32:46.382 01:10:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:32:46.661 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:32:46.661 fio-3.35 00:32:46.661 Starting 1 thread 00:32:49.197 00:32:49.197 test: (groupid=0, jobs=1): err= 0: pid=3090328: Fri Dec 6 01:10:21 2024 00:32:49.197 read: IOPS=4334, BW=16.9MiB/s (17.8MB/s)(34.0MiB/2010msec) 00:32:49.197 slat (usec): min=2, max=181, avg= 3.68, stdev= 2.94 00:32:49.197 clat (usec): min=1514, max=172850, avg=15978.12, stdev=13293.98 00:32:49.197 lat (usec): min=1518, max=172907, avg=15981.80, stdev=13294.40 00:32:49.197 clat percentiles (msec): 00:32:49.197 | 1.00th=[ 12], 5.00th=[ 13], 10.00th=[ 14], 20.00th=[ 14], 00:32:49.197 | 30.00th=[ 15], 40.00th=[ 15], 50.00th=[ 15], 60.00th=[ 16], 00:32:49.197 | 70.00th=[ 16], 80.00th=[ 17], 90.00th=[ 17], 95.00th=[ 18], 00:32:49.197 | 99.00th=[ 20], 99.50th=[ 157], 99.90th=[ 174], 99.95th=[ 174], 00:32:49.197 | 99.99th=[ 174] 00:32:49.197 bw ( KiB/s): min=11984, max=19240, per=99.69%, avg=17286.00, stdev=3540.05, samples=4 00:32:49.197 iops : min= 2996, max= 4810, avg=4321.50, stdev=885.01, samples=4 00:32:49.197 write: IOPS=4333, BW=16.9MiB/s (17.7MB/s)(34.0MiB/2010msec); 0 zone resets 00:32:49.197 slat (usec): min=2, max=130, avg= 3.85, stdev= 2.18 00:32:49.197 clat (usec): min=519, max=170321, avg=13262.87, stdev=12502.02 00:32:49.197 lat (usec): min=522, max=170329, avg=13266.72, stdev=12502.37 00:32:49.197 clat percentiles (msec): 00:32:49.197 | 1.00th=[ 9], 5.00th=[ 11], 10.00th=[ 11], 20.00th=[ 12], 00:32:49.197 | 30.00th=[ 12], 40.00th=[ 12], 50.00th=[ 13], 60.00th=[ 13], 00:32:49.197 | 70.00th=[ 13], 80.00th=[ 14], 90.00th=[ 14], 95.00th=[ 15], 00:32:49.197 | 99.00th=[ 16], 99.50th=[ 159], 99.90th=[ 171], 99.95th=[ 171], 00:32:49.197 | 99.99th=[ 171] 00:32:49.197 bw ( KiB/s): min=12648, max=19072, per=99.84%, avg=17306.00, stdev=3109.00, samples=4 00:32:49.197 iops : min= 3162, max= 4768, avg=4326.50, stdev=777.25, samples=4 00:32:49.197 lat (usec) : 750=0.01% 00:32:49.197 lat (msec) : 
2=0.05%, 4=0.08%, 10=1.30%, 20=97.72%, 50=0.10% 00:32:49.197 lat (msec) : 250=0.73% 00:32:49.197 cpu : usr=64.91%, sys=33.75%, ctx=83, majf=0, minf=1542 00:32:49.197 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:32:49.197 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:49.197 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:32:49.197 issued rwts: total=8713,8710,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:49.197 latency : target=0, window=0, percentile=100.00%, depth=128 00:32:49.197 00:32:49.197 Run status group 0 (all jobs): 00:32:49.197 READ: bw=16.9MiB/s (17.8MB/s), 16.9MiB/s-16.9MiB/s (17.8MB/s-17.8MB/s), io=34.0MiB (35.7MB), run=2010-2010msec 00:32:49.197 WRITE: bw=16.9MiB/s (17.7MB/s), 16.9MiB/s-16.9MiB/s (17.7MB/s-17.7MB/s), io=34.0MiB (35.7MB), run=2010-2010msec 00:32:49.455 ----------------------------------------------------- 00:32:49.455 Suppressions used: 00:32:49.455 count bytes template 00:32:49.455 1 58 /usr/src/fio/parse.c 00:32:49.455 1 8 libtcmalloc_minimal.so 00:32:49.455 ----------------------------------------------------- 00:32:49.455 00:32:49.455 01:10:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:32:49.713 01:10:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:32:51.086 01:10:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@64 -- # ls_nested_guid=90033669-100e-4e61-b762-b5b2227a6232 00:32:51.086 01:10:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@65 -- # get_lvs_free_mb 90033669-100e-4e61-b762-b5b2227a6232 00:32:51.086 01:10:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # local lvs_uuid=90033669-100e-4e61-b762-b5b2227a6232 00:32:51.086 01:10:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # local lvs_info 00:32:51.086 01:10:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # local fc 00:32:51.086 01:10:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1371 -- # local cs 00:32:51.086 01:10:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1372 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:32:51.344 01:10:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1372 -- # lvs_info='[ 00:32:51.344 { 00:32:51.344 "uuid": "88433c86-9597-43c0-85c4-815a26ab633a", 00:32:51.344 "name": "lvs_0", 00:32:51.344 "base_bdev": "Nvme0n1", 00:32:51.344 "total_data_clusters": 930, 00:32:51.344 "free_clusters": 0, 00:32:51.344 "block_size": 512, 00:32:51.344 "cluster_size": 1073741824 00:32:51.344 }, 00:32:51.344 { 00:32:51.344 "uuid": "90033669-100e-4e61-b762-b5b2227a6232", 00:32:51.344 "name": "lvs_n_0", 00:32:51.344 "base_bdev": "a9d44a6b-ec58-4fb1-b53c-0e2851190bcb", 00:32:51.344 "total_data_clusters": 237847, 00:32:51.344 "free_clusters": 237847, 00:32:51.344 "block_size": 512, 00:32:51.344 "cluster_size": 4194304 00:32:51.344 } 00:32:51.344 ]' 00:32:51.344 01:10:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # jq '.[] | select(.uuid=="90033669-100e-4e61-b762-b5b2227a6232") .free_clusters' 00:32:51.344 01:10:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # fc=237847 00:32:51.344 01:10:23 
nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # jq '.[] | select(.uuid=="90033669-100e-4e61-b762-b5b2227a6232") .cluster_size' 00:32:51.344 01:10:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # cs=4194304 00:32:51.344 01:10:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1377 -- # free_mb=951388 00:32:51.344 01:10:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1378 -- # echo 951388 00:32:51.344 951388 00:32:51.344 01:10:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_n_0 lbd_nest_0 951388 00:32:52.718 e05c2ca2-568f-403c-9780-c5da70b0dff4 00:32:52.718 01:10:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:32:52.718 01:10:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:32:52.975 01:10:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:32:53.242 01:10:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@70 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:32:53.242 01:10:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:32:53.242 01:10:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:32:53.242 01:10:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:32:53.242 01:10:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:32:53.242 01:10:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:32:53.242 01:10:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:32:53.242 01:10:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:32:53.242 01:10:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:32:53.242 01:10:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:32:53.242 01:10:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:32:53.242 01:10:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:32:53.242 01:10:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:32:53.242 01:10:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:32:53.242 01:10:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@1351 -- # break 00:32:53.242 01:10:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:32:53.242 01:10:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:32:53.499 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:32:53.499 fio-3.35 00:32:53.499 Starting 1 thread 00:32:56.028 00:32:56.028 test: (groupid=0, jobs=1): err= 0: pid=3091198: Fri Dec 6 01:10:28 2024 00:32:56.028 read: IOPS=4315, BW=16.9MiB/s (17.7MB/s)(33.9MiB/2012msec) 00:32:56.028 slat (usec): min=2, max=179, avg= 3.59, stdev= 2.94 00:32:56.028 clat (usec): min=6103, max=26998, avg=16038.33, stdev=1564.86 00:32:56.028 lat (usec): min=6111, max=27002, avg=16041.92, stdev=1564.74 00:32:56.028 clat percentiles (usec): 00:32:56.028 | 1.00th=[12518], 5.00th=[13698], 10.00th=[14091], 20.00th=[14746], 00:32:56.028 | 30.00th=[15270], 40.00th=[15664], 50.00th=[16057], 60.00th=[16450], 00:32:56.028 | 70.00th=[16909], 80.00th=[17171], 90.00th=[17957], 95.00th=[18482], 00:32:56.028 | 99.00th=[19268], 99.50th=[19530], 99.90th=[23725], 99.95th=[25822], 00:32:56.028 | 99.99th=[26870] 00:32:56.028 bw ( KiB/s): min=16032, max=17736, per=99.77%, avg=17220.00, stdev=797.07, samples=4 00:32:56.028 iops : min= 4008, max= 4434, avg=4305.00, stdev=199.27, samples=4 00:32:56.028 write: IOPS=4311, BW=16.8MiB/s (17.7MB/s)(33.9MiB/2012msec); 0 zone resets 00:32:56.028 slat (usec): min=2, max=110, avg= 3.64, stdev= 1.64 00:32:56.028 clat (usec): min=2866, max=23524, avg=13354.14, stdev=1290.74 00:32:56.028 lat (usec): min=2873, max=23527, avg=13357.78, stdev=1290.64 00:32:56.028 clat percentiles (usec): 00:32:56.028 | 1.00th=[10421], 5.00th=[11469], 10.00th=[11863], 20.00th=[12387], 00:32:56.028 | 30.00th=[12780], 40.00th=[13042], 50.00th=[13304], 60.00th=[13698], 00:32:56.028 | 70.00th=[13960], 80.00th=[14353], 90.00th=[14877], 95.00th=[15139], 00:32:56.028 | 99.00th=[16057], 99.50th=[16712], 99.90th=[21890], 99.95th=[23200], 00:32:56.028 | 99.99th=[23462] 00:32:56.028 bw ( KiB/s): min=17048, max=17440, per=99.95%, avg=17238.00, stdev=215.20, samples=4 00:32:56.028 iops : min= 4262, max= 4360, avg=4309.50, stdev=53.80, samples=4 00:32:56.028 lat (msec) : 4=0.02%, 10=0.39%, 20=99.36%, 50=0.23% 00:32:56.028 cpu : usr=66.73%, sys=31.87%, ctx=69, majf=0, minf=1541 00:32:56.028 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:32:56.028 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:56.028 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:32:56.028 issued rwts: total=8682,8675,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:56.028 latency : target=0, window=0, percentile=100.00%, depth=128 00:32:56.028 00:32:56.028 Run status group 0 (all jobs): 00:32:56.028 READ: bw=16.9MiB/s (17.7MB/s), 16.9MiB/s-16.9MiB/s (17.7MB/s-17.7MB/s), io=33.9MiB (35.6MB), run=2012-2012msec 00:32:56.028 WRITE: bw=16.8MiB/s (17.7MB/s), 16.8MiB/s-16.8MiB/s (17.7MB/s-17.7MB/s), io=33.9MiB (35.5MB), run=2012-2012msec 00:32:56.285 ----------------------------------------------------- 00:32:56.285 Suppressions used: 00:32:56.285 count bytes template 00:32:56.285 1 58 /usr/src/fio/parse.c 
00:32:56.285 1 8 libtcmalloc_minimal.so 00:32:56.285 ----------------------------------------------------- 00:32:56.285 00:32:56.285 01:10:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:32:56.542 01:10:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@74 -- # sync 00:32:56.542 01:10:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -t 120 bdev_lvol_delete lvs_n_0/lbd_nest_0 00:33:00.726 01:10:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:33:00.984 01:10:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete lvs_0/lbd_0 00:33:04.262 01:10:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:33:04.262 01:10:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:33:06.160 01:10:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:33:06.160 01:10:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:33:06.160 01:10:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:33:06.160 01:10:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:06.160 01:10:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:33:06.160 01:10:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:06.160 01:10:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:33:06.160 01:10:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:06.160 01:10:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:06.160 rmmod nvme_tcp 00:33:06.160 rmmod nvme_fabrics 00:33:06.160 rmmod nvme_keyring 00:33:06.160 01:10:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:06.160 01:10:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:33:06.160 01:10:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:33:06.160 01:10:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@517 -- # '[' -n 3088112 ']' 00:33:06.160 01:10:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@518 -- # killprocess 3088112 00:33:06.160 01:10:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # '[' -z 3088112 ']' 00:33:06.160 01:10:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # kill -0 3088112 00:33:06.160 01:10:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # uname 00:33:06.418 01:10:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:06.418 01:10:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3088112 00:33:06.418 01:10:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:06.418 01:10:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 
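[Editor's aside, not part of the captured log] The teardown that runs around this point (killprocess followed by nvmftestfini) stops the nvmf_tgt process, strips only the iptables rules tagged SPDK_NVMF, deletes the cvl_0_0_ns_spdk namespace, flushes the initiator-side address, and unloads the host nvme-tcp/nvme-fabrics modules. The bash sketch below approximates that sequence for manual cleanup; it is illustrative only, and the NVMF_PID argument is a placeholder rather than a value taken from this run.

  #!/usr/bin/env bash
  # Approximate manual equivalent of the killprocess + nvmftestfini teardown seen in this log (sketch only).
  set -e
  NVMF_PID="$1"                   # pid of nvmf_tgt (placeholder argument; this run happened to use 3088112)
  NETNS="cvl_0_0_ns_spdk"         # target-side namespace created earlier in this log

  sudo kill "$NVMF_PID" || true
  while sudo kill -0 "$NVMF_PID" 2>/dev/null; do sleep 0.5; done     # wait for the target to exit
  sudo sh -c 'iptables-save | grep -v SPDK_NVMF | iptables-restore'  # drop only the SPDK-tagged rules
  sudo ip netns delete "$NETNS" 2>/dev/null || true                  # also returns cvl_0_0 to the root namespace
  sudo ip -4 addr flush cvl_0_1 2>/dev/null || true                  # clear the initiator-side address
  sudo modprobe -r nvme-tcp nvme-fabrics 2>/dev/null || true         # unload host NVMe/TCP modules if unused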
00:33:06.418 01:10:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3088112' 00:33:06.418 killing process with pid 3088112 00:33:06.418 01:10:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@973 -- # kill 3088112 00:33:06.418 01:10:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@978 -- # wait 3088112 00:33:07.825 01:10:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:07.825 01:10:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:07.825 01:10:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:07.825 01:10:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr 00:33:07.825 01:10:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-save 00:33:07.825 01:10:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-restore 00:33:07.825 01:10:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:07.825 01:10:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:07.825 01:10:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:07.825 01:10:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:07.825 01:10:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:07.825 01:10:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:09.724 01:10:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:09.724 00:33:09.724 real 0m42.495s 00:33:09.724 user 2m41.862s 00:33:09.724 sys 0m8.223s 00:33:09.724 01:10:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:09.724 01:10:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:33:09.724 ************************************ 00:33:09.724 END TEST nvmf_fio_host 00:33:09.724 ************************************ 00:33:09.724 01:10:42 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:33:09.724 01:10:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:33:09.724 01:10:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:09.724 01:10:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:33:09.724 ************************************ 00:33:09.724 START TEST nvmf_failover 00:33:09.724 ************************************ 00:33:09.724 01:10:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:33:09.724 * Looking for test storage... 
00:33:09.724 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:33:09.724 01:10:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:33:09.724 01:10:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # lcov --version 00:33:09.724 01:10:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:33:09.983 01:10:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:33:09.983 01:10:42 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:09.983 01:10:42 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:09.983 01:10:42 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:09.983 01:10:42 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:33:09.983 01:10:42 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:33:09.983 01:10:42 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:33:09.983 01:10:42 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:33:09.983 01:10:42 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:33:09.983 01:10:42 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:33:09.983 01:10:42 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:33:09.983 01:10:42 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:09.983 01:10:42 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:33:09.983 01:10:42 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:33:09.983 01:10:42 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:09.983 01:10:42 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:09.983 01:10:42 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:33:09.983 01:10:42 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:33:09.983 01:10:42 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:09.983 01:10:42 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:33:09.983 01:10:42 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:33:09.983 01:10:42 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:33:09.983 01:10:42 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:33:09.983 01:10:42 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:09.983 01:10:42 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:33:09.983 01:10:42 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:33:09.983 01:10:42 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:09.983 01:10:42 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:09.983 01:10:42 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:33:09.983 01:10:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:09.983 01:10:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:33:09.983 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:09.983 --rc genhtml_branch_coverage=1 00:33:09.983 --rc genhtml_function_coverage=1 00:33:09.983 --rc genhtml_legend=1 00:33:09.983 --rc geninfo_all_blocks=1 00:33:09.983 --rc geninfo_unexecuted_blocks=1 00:33:09.983 00:33:09.983 ' 00:33:09.983 01:10:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:33:09.983 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:09.983 --rc genhtml_branch_coverage=1 00:33:09.983 --rc genhtml_function_coverage=1 00:33:09.983 --rc genhtml_legend=1 00:33:09.983 --rc geninfo_all_blocks=1 00:33:09.983 --rc geninfo_unexecuted_blocks=1 00:33:09.983 00:33:09.983 ' 00:33:09.983 01:10:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:33:09.983 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:09.983 --rc genhtml_branch_coverage=1 00:33:09.983 --rc genhtml_function_coverage=1 00:33:09.983 --rc genhtml_legend=1 00:33:09.983 --rc geninfo_all_blocks=1 00:33:09.983 --rc geninfo_unexecuted_blocks=1 00:33:09.983 00:33:09.983 ' 00:33:09.983 01:10:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:33:09.983 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:09.983 --rc genhtml_branch_coverage=1 00:33:09.983 --rc genhtml_function_coverage=1 00:33:09.983 --rc genhtml_legend=1 00:33:09.983 --rc geninfo_all_blocks=1 00:33:09.983 --rc geninfo_unexecuted_blocks=1 00:33:09.983 00:33:09.983 ' 00:33:09.983 01:10:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:09.983 01:10:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:33:09.983 01:10:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:09.983 01:10:42 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:09.983 01:10:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:09.983 01:10:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:09.983 01:10:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:09.983 01:10:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:09.983 01:10:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:09.983 01:10:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:09.983 01:10:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:09.983 01:10:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:09.984 01:10:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:33:09.984 01:10:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:33:09.984 01:10:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:09.984 01:10:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:09.984 01:10:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:09.984 01:10:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:09.984 01:10:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:09.984 01:10:42 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:33:09.984 01:10:42 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:09.984 01:10:42 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:09.984 01:10:42 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:09.984 01:10:42 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:09.984 01:10:42 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:09.984 01:10:42 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:09.984 01:10:42 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:33:09.984 01:10:42 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:09.984 01:10:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:33:09.984 01:10:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:09.984 01:10:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:09.984 01:10:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:09.984 01:10:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:09.984 01:10:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:09.984 01:10:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:33:09.984 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:33:09.984 01:10:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:09.984 01:10:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:09.984 01:10:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:09.984 01:10:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:33:09.984 01:10:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:33:09.984 01:10:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
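The scripts/common.sh trace above (cmp_versions 1.15 '<' 2, used to pick lcov options) is a standard dotted-version compare: split both versions on '.', '-' and ':', treat missing fields as zero, and decide on the first field that differs. A simplified re-implementation of the same idea, not the script's exact code, assuming purely numeric fields:

version_lt() {                                   # true (0) when $1 sorts before $2
    local -a a b
    IFS='.-:' read -ra a <<< "$1"
    IFS='.-:' read -ra b <<< "$2"
    local i max=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for (( i = 0; i < max; i++ )); do
        local x=${a[i]:-0} y=${b[i]:-0}          # missing fields count as 0
        (( x < y )) && return 0
        (( x > y )) && return 1
    done
    return 1                                     # equal versions are not "less than"
}
version_lt 1.15 2 && echo "lcov 1.15 predates 2, enable the old-lcov options"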
00:33:09.984 01:10:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:33:09.984 01:10:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:33:09.984 01:10:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:09.984 01:10:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:09.984 01:10:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:09.984 01:10:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:09.984 01:10:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:09.984 01:10:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:09.984 01:10:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:09.984 01:10:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:09.984 01:10:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:09.984 01:10:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:09.984 01:10:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@309 -- # xtrace_disable 00:33:09.984 01:10:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:33:11.882 01:10:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:11.882 01:10:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # pci_devs=() 00:33:11.882 01:10:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:11.882 01:10:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:11.882 01:10:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:11.882 01:10:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:11.882 01:10:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:11.882 01:10:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # net_devs=() 00:33:11.883 01:10:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:11.883 01:10:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # e810=() 00:33:11.883 01:10:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # local -ga e810 00:33:11.883 01:10:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # x722=() 00:33:11.883 01:10:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # local -ga x722 00:33:11.883 01:10:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # mlx=() 00:33:11.883 01:10:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # local -ga mlx 00:33:11.883 01:10:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:11.883 01:10:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:11.883 01:10:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:11.883 01:10:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:11.883 01:10:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@332 
-- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:11.883 01:10:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:11.883 01:10:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:11.883 01:10:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:11.883 01:10:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:11.883 01:10:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:11.883 01:10:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:11.883 01:10:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:11.883 01:10:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:11.883 01:10:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:11.883 01:10:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:11.883 01:10:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:11.883 01:10:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:11.883 01:10:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:11.883 01:10:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:11.883 01:10:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:33:11.883 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:33:11.883 01:10:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:11.883 01:10:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:11.883 01:10:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:11.883 01:10:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:11.883 01:10:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:11.883 01:10:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:11.883 01:10:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:33:11.883 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:33:11.883 01:10:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:11.883 01:10:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:11.883 01:10:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:11.883 01:10:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:11.883 01:10:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:11.883 01:10:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:11.883 01:10:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:11.883 01:10:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:11.883 01:10:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci 
in "${pci_devs[@]}" 00:33:11.883 01:10:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:11.883 01:10:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:11.883 01:10:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:11.883 01:10:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:11.883 01:10:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:11.883 01:10:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:11.883 01:10:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:33:11.883 Found net devices under 0000:0a:00.0: cvl_0_0 00:33:11.883 01:10:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:11.883 01:10:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:11.883 01:10:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:11.883 01:10:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:11.883 01:10:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:11.883 01:10:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:11.883 01:10:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:11.883 01:10:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:11.883 01:10:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:33:11.883 Found net devices under 0000:0a:00.1: cvl_0_1 00:33:11.883 01:10:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:11.883 01:10:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:11.883 01:10:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # is_hw=yes 00:33:11.883 01:10:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:11.883 01:10:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:33:11.883 01:10:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:11.883 01:10:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:11.883 01:10:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:11.883 01:10:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:11.883 01:10:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:11.883 01:10:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:11.883 01:10:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:11.883 01:10:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:11.883 01:10:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:11.883 01:10:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 
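With the two e810 ports identified as cvl_0_0 and cvl_0_1, the nvmf_tcp_init steps traced next split them between target and initiator: cvl_0_0 moves into the cvl_0_0_ns_spdk namespace and gets 10.0.0.2, cvl_0_1 stays in the root namespace with 10.0.0.1, and an iptables rule admits TCP port 4420 from the initiator side. Condensed into plain commands, using the interface names and addresses from this run (other hosts will differ):

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                     # target-side port into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                           # initiator stays in the root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # let the initiator reach the listener
ping -c 1 10.0.0.2                                            # root ns -> target namespace
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1              # target namespace -> root ns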
00:33:11.883 01:10:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:11.883 01:10:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:11.883 01:10:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:11.883 01:10:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:11.883 01:10:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:11.883 01:10:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:12.141 01:10:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:12.141 01:10:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:12.141 01:10:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:12.141 01:10:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:12.141 01:10:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:12.141 01:10:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:12.141 01:10:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:12.141 01:10:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:12.141 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:12.141 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.304 ms 00:33:12.141 00:33:12.141 --- 10.0.0.2 ping statistics --- 00:33:12.141 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:12.141 rtt min/avg/max/mdev = 0.304/0.304/0.304/0.000 ms 00:33:12.141 01:10:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:12.141 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:12.141 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.149 ms 00:33:12.141 00:33:12.141 --- 10.0.0.1 ping statistics --- 00:33:12.141 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:12.141 rtt min/avg/max/mdev = 0.149/0.149/0.149/0.000 ms 00:33:12.141 01:10:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:12.141 01:10:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@450 -- # return 0 00:33:12.141 01:10:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:33:12.141 01:10:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:12.141 01:10:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:12.141 01:10:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:12.141 01:10:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:12.141 01:10:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:12.141 01:10:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:12.400 01:10:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:33:12.400 01:10:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:33:12.400 01:10:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:12.400 01:10:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:33:12.400 01:10:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # nvmfpid=3094719 00:33:12.400 01:10:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:33:12.400 01:10:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@510 -- # waitforlisten 3094719 00:33:12.400 01:10:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 3094719 ']' 00:33:12.400 01:10:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:12.400 01:10:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:12.400 01:10:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:12.400 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:12.400 01:10:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:12.400 01:10:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:33:12.400 [2024-12-06 01:10:44.957891] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 24.03.0 initialization... 00:33:12.400 [2024-12-06 01:10:44.958022] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:12.400 [2024-12-06 01:10:45.100225] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:33:12.657 [2024-12-06 01:10:45.222506] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
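From here failover.sh provisions the target inside that namespace and drives path failover from a bdevperf initiator: create the TCP transport, back subsystem cnode1 with a 64 MiB Malloc bdev, listen on ports 4420/4421/4422, attach bdevperf paths with -x failover, and add/remove listeners while 15 seconds of verify I/O run. The sequence below condenses the RPC calls traced in the remainder of this test; $rpc again stands in for the full scripts/rpc.py path and binaries are shown relative to the SPDK checkout:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
nqn=nqn.2016-06.io.spdk:cnode1

# target side (nvmf_tgt already running in cvl_0_0_ns_spdk)
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512 -b Malloc0
$rpc nvmf_create_subsystem $nqn -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns $nqn Malloc0
for port in 4420 4421 4422; do
    $rpc nvmf_subsystem_add_listener $nqn -t tcp -a 10.0.0.2 -s $port
done

# initiator side: bdevperf waits for RPC (-z), two failover-capable paths are attached
./build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f &
$rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n $nqn -x failover
$rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 \
    -f ipv4 -n $nqn -x failover
./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &

# force path switches while the verify workload runs
$rpc nvmf_subsystem_remove_listener $nqn -t tcp -a 10.0.0.2 -s 4420; sleep 3
$rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 \
    -f ipv4 -n $nqn -x failover
$rpc nvmf_subsystem_remove_listener $nqn -t tcp -a 10.0.0.2 -s 4421; sleep 3
$rpc nvmf_subsystem_add_listener $nqn -t tcp -a 10.0.0.2 -s 4420; sleep 1
$rpc nvmf_subsystem_remove_listener $nqn -t tcp -a 10.0.0.2 -s 4422
wait   # bdevperf then reports the per-core results JSON seen near the end of this log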
00:33:12.657 [2024-12-06 01:10:45.222606] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:12.657 [2024-12-06 01:10:45.222629] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:12.658 [2024-12-06 01:10:45.222650] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:12.658 [2024-12-06 01:10:45.222667] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:12.658 [2024-12-06 01:10:45.225109] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:33:12.658 [2024-12-06 01:10:45.225151] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:12.658 [2024-12-06 01:10:45.225171] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:33:13.222 01:10:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:13.222 01:10:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:33:13.222 01:10:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:33:13.222 01:10:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:13.222 01:10:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:33:13.480 01:10:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:13.480 01:10:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:33:13.737 [2024-12-06 01:10:46.200932] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:13.737 01:10:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:33:13.994 Malloc0 00:33:13.994 01:10:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:33:14.252 01:10:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:33:14.509 01:10:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:14.767 [2024-12-06 01:10:47.457983] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:15.025 01:10:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:33:15.282 [2024-12-06 01:10:47.783051] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:33:15.283 01:10:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:33:15.541 [2024-12-06 01:10:48.079900] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
Listening on 10.0.0.2 port 4422 *** 00:33:15.541 01:10:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=3095127 00:33:15.541 01:10:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:33:15.541 01:10:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:33:15.541 01:10:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 3095127 /var/tmp/bdevperf.sock 00:33:15.541 01:10:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 3095127 ']' 00:33:15.541 01:10:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:33:15.541 01:10:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:15.541 01:10:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:33:15.541 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:33:15.541 01:10:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:15.541 01:10:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:33:16.476 01:10:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:16.476 01:10:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:33:16.476 01:10:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:33:17.041 NVMe0n1 00:33:17.041 01:10:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:33:17.300 00:33:17.300 01:10:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=3095386 00:33:17.300 01:10:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:33:17.300 01:10:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:33:18.236 01:10:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:18.494 01:10:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:33:21.778 01:10:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:33:22.037 00:33:22.037 01:10:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:33:22.295 [2024-12-06 01:10:55.002534] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(6) to be set 00:33:22.295 [2024-12-06 01:10:55.002612] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(6) to be set 00:33:22.295 [2024-12-06 01:10:55.002635] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(6) to be set 00:33:22.295 [2024-12-06 01:10:55.002667] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(6) to be set 00:33:22.295 [2024-12-06 01:10:55.002686] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(6) to be set 00:33:22.295 [2024-12-06 01:10:55.002720] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(6) to be set 00:33:22.295 [2024-12-06 01:10:55.002737] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(6) to be set 00:33:22.295 [2024-12-06 01:10:55.002755] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(6) to be set 00:33:22.295 [2024-12-06 01:10:55.002772] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(6) to be set 00:33:22.296 [2024-12-06 01:10:55.002789] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(6) to be set 00:33:22.296 [2024-12-06 01:10:55.002835] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(6) to be set 00:33:22.296 [2024-12-06 01:10:55.002855] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(6) to be set 00:33:22.296 [2024-12-06 01:10:55.002874] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(6) to be set 00:33:22.296 [2024-12-06 01:10:55.002891] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(6) to be set 00:33:22.296 [2024-12-06 01:10:55.002908] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(6) to be set 00:33:22.296 [2024-12-06 01:10:55.002927] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(6) to be set 00:33:22.296 [2024-12-06 01:10:55.002945] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(6) to be set 00:33:22.296 [2024-12-06 01:10:55.002963] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(6) to be set 00:33:22.296 [2024-12-06 01:10:55.002980] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(6) to be set 00:33:22.296 [2024-12-06 01:10:55.002997] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(6) to be set 00:33:22.296 [2024-12-06 
01:10:55.003015] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(6) to be set 00:33:22.296 [2024-12-06 01:10:55.003043] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(6) to be set 00:33:22.296 [2024-12-06 01:10:55.003061] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(6) to be set 00:33:22.296 [2024-12-06 01:10:55.003078] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(6) to be set 00:33:22.296 [2024-12-06 01:10:55.003106] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(6) to be set 00:33:22.296 [2024-12-06 01:10:55.003124] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(6) to be set 00:33:22.296 [2024-12-06 01:10:55.003141] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(6) to be set 00:33:22.296 [2024-12-06 01:10:55.003159] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(6) to be set 00:33:22.296 [2024-12-06 01:10:55.003177] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(6) to be set 00:33:22.296 [2024-12-06 01:10:55.003198] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(6) to be set 00:33:22.296 [2024-12-06 01:10:55.003217] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(6) to be set 00:33:22.296 [2024-12-06 01:10:55.003235] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(6) to be set 00:33:22.296 [2024-12-06 01:10:55.003252] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(6) to be set 00:33:22.296 [2024-12-06 01:10:55.003269] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(6) to be set 00:33:22.296 [2024-12-06 01:10:55.003288] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(6) to be set 00:33:22.296 [2024-12-06 01:10:55.003306] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(6) to be set 00:33:22.296 [2024-12-06 01:10:55.003324] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(6) to be set 00:33:22.296 [2024-12-06 01:10:55.003342] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(6) to be set 00:33:22.296 [2024-12-06 01:10:55.003366] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(6) to be set 00:33:22.296 [2024-12-06 01:10:55.003384] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(6) to be set 00:33:22.296 [2024-12-06 01:10:55.003401] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(6) to be set 00:33:22.296 [2024-12-06 
01:10:55.003421] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(6) to be set 00:33:22.296 [2024-12-06 01:10:55.003440] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(6) to be set 00:33:22.296 [2024-12-06 01:10:55.003457] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(6) to be set 00:33:22.296 [2024-12-06 01:10:55.003474] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(6) to be set 00:33:22.296 [2024-12-06 01:10:55.003492] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(6) to be set 00:33:22.296 [2024-12-06 01:10:55.003509] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(6) to be set 00:33:22.296 [2024-12-06 01:10:55.003527] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(6) to be set 00:33:22.296 [2024-12-06 01:10:55.003544] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(6) to be set 00:33:22.296 [2024-12-06 01:10:55.003561] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(6) to be set 00:33:22.296 [2024-12-06 01:10:55.003580] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(6) to be set 00:33:22.296 [2024-12-06 01:10:55.003597] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(6) to be set 00:33:22.296 [2024-12-06 01:10:55.003615] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(6) to be set 00:33:22.296 [2024-12-06 01:10:55.003634] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(6) to be set 00:33:22.296 [2024-12-06 01:10:55.003651] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(6) to be set 00:33:22.296 [2024-12-06 01:10:55.003669] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(6) to be set 00:33:22.554 [2024-12-06 01:10:55.003691] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(6) to be set 00:33:22.554 [2024-12-06 01:10:55.003711] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(6) to be set 00:33:22.554 [2024-12-06 01:10:55.003729] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(6) to be set 00:33:22.554 [2024-12-06 01:10:55.003747] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(6) to be set 00:33:22.554 [2024-12-06 01:10:55.003765] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(6) to be set 00:33:22.554 01:10:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:33:25.879 01:10:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:25.879 [2024-12-06 01:10:58.286894] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:25.879 01:10:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:33:26.814 01:10:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:33:27.072 [2024-12-06 01:10:59.580310] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:33:27.072 [2024-12-06 01:10:59.580405] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:33:27.072 [2024-12-06 01:10:59.580428] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:33:27.072 [2024-12-06 01:10:59.580448] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:33:27.072 [2024-12-06 01:10:59.580466] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:33:27.072 01:10:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 3095386 00:33:33.640 { 00:33:33.640 "results": [ 00:33:33.640 { 00:33:33.640 "job": "NVMe0n1", 00:33:33.640 "core_mask": "0x1", 00:33:33.640 "workload": "verify", 00:33:33.640 "status": "finished", 00:33:33.640 "verify_range": { 00:33:33.640 "start": 0, 00:33:33.640 "length": 16384 00:33:33.640 }, 00:33:33.640 "queue_depth": 128, 00:33:33.640 "io_size": 4096, 00:33:33.640 "runtime": 15.013245, 00:33:33.640 "iops": 6082.029567891552, 00:33:33.640 "mibps": 23.757927999576374, 00:33:33.640 "io_failed": 12292, 00:33:33.640 "io_timeout": 0, 00:33:33.640 "avg_latency_us": 18512.406550446667, 00:33:33.640 "min_latency_us": 1122.6074074074074, 00:33:33.640 "max_latency_us": 47962.642962962964 00:33:33.640 } 00:33:33.640 ], 00:33:33.640 "core_count": 1 00:33:33.640 } 00:33:33.640 01:11:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 3095127 00:33:33.640 01:11:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 3095127 ']' 00:33:33.640 01:11:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 3095127 00:33:33.640 01:11:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:33:33.640 01:11:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:33.640 01:11:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3095127 00:33:33.640 01:11:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:33.640 01:11:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:33.640 01:11:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3095127' 00:33:33.640 killing process with pid 3095127 00:33:33.640 01:11:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 3095127 00:33:33.640 01:11:05 nvmf_tcp.nvmf_host.nvmf_failover -- 
common/autotest_common.sh@978 -- # wait 3095127 00:33:33.640 01:11:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:33:33.640 [2024-12-06 01:10:48.181400] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 24.03.0 initialization... 00:33:33.640 [2024-12-06 01:10:48.181554] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3095127 ] 00:33:33.640 [2024-12-06 01:10:48.325152] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:33.640 [2024-12-06 01:10:48.450590] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:33.640 Running I/O for 15 seconds... 00:33:33.640 6202.00 IOPS, 24.23 MiB/s [2024-12-06T00:11:06.349Z] [2024-12-06 01:10:51.176054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:57616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.640 [2024-12-06 01:10:51.176164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.640 [2024-12-06 01:10:51.176222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:57624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.640 [2024-12-06 01:10:51.176256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.640 [2024-12-06 01:10:51.176298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:57632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.640 [2024-12-06 01:10:51.176332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.640 [2024-12-06 01:10:51.176373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:57640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.640 [2024-12-06 01:10:51.176406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.640 [2024-12-06 01:10:51.176445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:57648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.640 [2024-12-06 01:10:51.176478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.640 [2024-12-06 01:10:51.176517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:57656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.640 [2024-12-06 01:10:51.176552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.640 [2024-12-06 01:10:51.176589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:57664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.640 [2024-12-06 01:10:51.176623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.640 [2024-12-06 01:10:51.176661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:57672 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:33:33.640 [2024-12-06 01:10:51.176695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.640 [2024-12-06 01:10:51.176731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:57680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.640 [2024-12-06 01:10:51.176764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.640 [2024-12-06 01:10:51.176838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:57688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.640 [2024-12-06 01:10:51.176876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.640 [2024-12-06 01:10:51.176919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:57696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.640 [2024-12-06 01:10:51.176955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.640 [2024-12-06 01:10:51.177026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:57704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.640 [2024-12-06 01:10:51.177063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.640 [2024-12-06 01:10:51.177126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:57712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.640 [2024-12-06 01:10:51.177168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.640 [2024-12-06 01:10:51.177209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:57720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.640 [2024-12-06 01:10:51.177245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.640 [2024-12-06 01:10:51.177284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:57728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.640 [2024-12-06 01:10:51.177318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.640 [2024-12-06 01:10:51.177358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:57736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.640 [2024-12-06 01:10:51.177391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.640 [2024-12-06 01:10:51.177431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:57744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.640 [2024-12-06 01:10:51.177465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.640 [2024-12-06 01:10:51.177504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:57752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.640 [2024-12-06 01:10:51.177539] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.640 [2024-12-06 01:10:51.177578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:57760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.640 [2024-12-06 01:10:51.177612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.640 [2024-12-06 01:10:51.177651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:57768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.640 [2024-12-06 01:10:51.177686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.640 [2024-12-06 01:10:51.177724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:57776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.640 [2024-12-06 01:10:51.177758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.640 [2024-12-06 01:10:51.177797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:57784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.640 [2024-12-06 01:10:51.177855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.640 [2024-12-06 01:10:51.177896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:57792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.640 [2024-12-06 01:10:51.177931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.640 [2024-12-06 01:10:51.177971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:57800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.640 [2024-12-06 01:10:51.178012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.640 [2024-12-06 01:10:51.178054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:57808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.640 [2024-12-06 01:10:51.178089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.640 [2024-12-06 01:10:51.178129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:57816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.640 [2024-12-06 01:10:51.178177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.640 [2024-12-06 01:10:51.178214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:57824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.640 [2024-12-06 01:10:51.178248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.640 [2024-12-06 01:10:51.178288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:57832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.640 [2024-12-06 01:10:51.178322] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.640 [2024-12-06 01:10:51.178361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:57840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.640 [2024-12-06 01:10:51.178396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.640 [2024-12-06 01:10:51.178433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:57848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.640 [2024-12-06 01:10:51.178466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.640 [2024-12-06 01:10:51.178522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:57856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.640 [2024-12-06 01:10:51.178556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.640 [2024-12-06 01:10:51.178595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:57864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.640 [2024-12-06 01:10:51.178630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.640 [2024-12-06 01:10:51.178670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:57872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.640 [2024-12-06 01:10:51.178705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.640 [2024-12-06 01:10:51.178746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:57880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.640 [2024-12-06 01:10:51.178781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.640 [2024-12-06 01:10:51.178846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:57888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.640 [2024-12-06 01:10:51.178884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.640 [2024-12-06 01:10:51.178926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:57896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.640 [2024-12-06 01:10:51.178963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.640 [2024-12-06 01:10:51.179010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:57904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.640 [2024-12-06 01:10:51.179048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.640 [2024-12-06 01:10:51.179088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:57912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.640 [2024-12-06 01:10:51.179140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.640 [2024-12-06 01:10:51.179181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:57920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.640 [2024-12-06 01:10:51.179217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.640 [2024-12-06 01:10:51.179259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:57928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.640 [2024-12-06 01:10:51.179295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.640 [2024-12-06 01:10:51.179336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:57488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.641 [2024-12-06 01:10:51.179370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.641 [2024-12-06 01:10:51.179412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:57496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.641 [2024-12-06 01:10:51.179446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.641 [2024-12-06 01:10:51.179487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:57504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.641 [2024-12-06 01:10:51.179523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.641 [2024-12-06 01:10:51.179564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:57512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.641 [2024-12-06 01:10:51.179599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.641 [2024-12-06 01:10:51.179640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:57520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.641 [2024-12-06 01:10:51.179674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.641 [2024-12-06 01:10:51.179714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:57528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.641 [2024-12-06 01:10:51.179750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.641 [2024-12-06 01:10:51.179790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:57536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.641 [2024-12-06 01:10:51.179850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.641 [2024-12-06 01:10:51.179895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:57544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.641 [2024-12-06 01:10:51.179931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.641 
[2024-12-06 01:10:51.179972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:57552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.641 [2024-12-06 01:10:51.180010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.641 [2024-12-06 01:10:51.180058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:57936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.641 [2024-12-06 01:10:51.180094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.641 [2024-12-06 01:10:51.180153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:57944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.641 [2024-12-06 01:10:51.180189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.641 [2024-12-06 01:10:51.180229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:57952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.641 [2024-12-06 01:10:51.180264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.641 [2024-12-06 01:10:51.180305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:57960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.641 [2024-12-06 01:10:51.180340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.641 [2024-12-06 01:10:51.180381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:57968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.641 [2024-12-06 01:10:51.180416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.641 [2024-12-06 01:10:51.180456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:57976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.641 [2024-12-06 01:10:51.180493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.641 [2024-12-06 01:10:51.180534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:57984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.641 [2024-12-06 01:10:51.180570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.641 [2024-12-06 01:10:51.180611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:57992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.641 [2024-12-06 01:10:51.180646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.641 [2024-12-06 01:10:51.180687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:58000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.641 [2024-12-06 01:10:51.180722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.641 [2024-12-06 01:10:51.180763] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:58008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.641 [2024-12-06 01:10:51.180799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.641 [2024-12-06 01:10:51.180863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:58016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.641 [2024-12-06 01:10:51.180900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.641 [2024-12-06 01:10:51.180944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:58024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.641 [2024-12-06 01:10:51.180980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.641 [2024-12-06 01:10:51.181023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:58032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.641 [2024-12-06 01:10:51.181066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.641 [2024-12-06 01:10:51.181108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:58040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.641 [2024-12-06 01:10:51.181158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.641 [2024-12-06 01:10:51.181200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:58048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.641 [2024-12-06 01:10:51.181234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.641 [2024-12-06 01:10:51.181274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:58056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.641 [2024-12-06 01:10:51.181310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.641 [2024-12-06 01:10:51.181351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:58064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.641 [2024-12-06 01:10:51.181387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.641 [2024-12-06 01:10:51.181428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:58072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.641 [2024-12-06 01:10:51.181464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.641 [2024-12-06 01:10:51.181505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:58080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.641 [2024-12-06 01:10:51.181542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.641 [2024-12-06 01:10:51.181583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:86 nsid:1 lba:58088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.641 [2024-12-06 01:10:51.181618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.641 [2024-12-06 01:10:51.181659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:58096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.641 [2024-12-06 01:10:51.181695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.641 [2024-12-06 01:10:51.181736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:58104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.641 [2024-12-06 01:10:51.181771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.641 [2024-12-06 01:10:51.181812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:58112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.641 [2024-12-06 01:10:51.181870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.641 [2024-12-06 01:10:51.181912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:58120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.641 [2024-12-06 01:10:51.181950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.641 [2024-12-06 01:10:51.181991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:58128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.641 [2024-12-06 01:10:51.182027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.641 [2024-12-06 01:10:51.182075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:58136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.641 [2024-12-06 01:10:51.182112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.641 [2024-12-06 01:10:51.182181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:58144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.641 [2024-12-06 01:10:51.182218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.641 [2024-12-06 01:10:51.182261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:58152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.641 [2024-12-06 01:10:51.182296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.641 [2024-12-06 01:10:51.182337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:58160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.641 [2024-12-06 01:10:51.182373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.641 [2024-12-06 01:10:51.182414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:58168 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:33:33.641 [2024-12-06 01:10:51.182449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.641 [2024-12-06 01:10:51.182488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:58176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.641 [2024-12-06 01:10:51.182524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.641 [2024-12-06 01:10:51.182564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:58184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.641 [2024-12-06 01:10:51.182601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.641 [2024-12-06 01:10:51.182642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:58192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.641 [2024-12-06 01:10:51.182678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.641 [2024-12-06 01:10:51.182719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:58200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.641 [2024-12-06 01:10:51.182755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.641 [2024-12-06 01:10:51.182810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:58208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.641 [2024-12-06 01:10:51.182855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.641 [2024-12-06 01:10:51.182898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:58216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.641 [2024-12-06 01:10:51.182934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.641 [2024-12-06 01:10:51.182975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:58224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.641 [2024-12-06 01:10:51.183013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.641 [2024-12-06 01:10:51.183055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:58232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.641 [2024-12-06 01:10:51.183097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.641 [2024-12-06 01:10:51.183153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:58240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.641 [2024-12-06 01:10:51.183190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.641 [2024-12-06 01:10:51.183230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:58248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.641 [2024-12-06 
01:10:51.183265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.641 [2024-12-06 01:10:51.183306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:58256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.641 [2024-12-06 01:10:51.183341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.641 [2024-12-06 01:10:51.183381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:58264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.641 [2024-12-06 01:10:51.183418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.641 [2024-12-06 01:10:51.183459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:58272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.641 [2024-12-06 01:10:51.183494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.641 [2024-12-06 01:10:51.183537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:58280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.641 [2024-12-06 01:10:51.183573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.641 [2024-12-06 01:10:51.183612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:58288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.641 [2024-12-06 01:10:51.183648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.641 [2024-12-06 01:10:51.183689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:58296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.641 [2024-12-06 01:10:51.183723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.641 [2024-12-06 01:10:51.183764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:58304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.641 [2024-12-06 01:10:51.183800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.641 [2024-12-06 01:10:51.183867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:58312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.642 [2024-12-06 01:10:51.183905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.642 [2024-12-06 01:10:51.183948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:58320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.642 [2024-12-06 01:10:51.183985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.642 [2024-12-06 01:10:51.184029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:58328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.642 [2024-12-06 01:10:51.184065] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.642 [2024-12-06 01:10:51.184107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:58336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.642 [2024-12-06 01:10:51.184162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.642 [2024-12-06 01:10:51.184204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:58344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.642 [2024-12-06 01:10:51.184239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.642 [2024-12-06 01:10:51.184279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:58352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.642 [2024-12-06 01:10:51.184315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.642 [2024-12-06 01:10:51.184355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:58360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.642 [2024-12-06 01:10:51.184389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.642 [2024-12-06 01:10:51.184429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:58368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.642 [2024-12-06 01:10:51.184465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.642 [2024-12-06 01:10:51.184506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:58376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.642 [2024-12-06 01:10:51.184541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.642 [2024-12-06 01:10:51.184581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:58384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.642 [2024-12-06 01:10:51.184615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.642 [2024-12-06 01:10:51.184655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:58392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.642 [2024-12-06 01:10:51.184690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.642 [2024-12-06 01:10:51.184730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:58400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.642 [2024-12-06 01:10:51.184764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.642 [2024-12-06 01:10:51.184804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:58408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.642 [2024-12-06 01:10:51.184863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.642 [2024-12-06 01:10:51.184906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:58416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.642 [2024-12-06 01:10:51.184943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.642 [2024-12-06 01:10:51.184984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:58424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.642 [2024-12-06 01:10:51.185019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.642 [2024-12-06 01:10:51.185061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:58432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.642 [2024-12-06 01:10:51.185096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.642 [2024-12-06 01:10:51.185158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:58440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.642 [2024-12-06 01:10:51.185194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.642 [2024-12-06 01:10:51.185235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:58448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.642 [2024-12-06 01:10:51.185270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.642 [2024-12-06 01:10:51.185341] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:33.642 [2024-12-06 01:10:51.185379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:58456 len:8 PRP1 0x0 PRP2 0x0 00:33:33.642 [2024-12-06 01:10:51.185414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.642 [2024-12-06 01:10:51.185455] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:33.642 [2024-12-06 01:10:51.185487] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:33.642 [2024-12-06 01:10:51.185520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:58464 len:8 PRP1 0x0 PRP2 0x0 00:33:33.642 [2024-12-06 01:10:51.185563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.642 [2024-12-06 01:10:51.185597] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:33.642 [2024-12-06 01:10:51.185627] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:33.642 [2024-12-06 01:10:51.185656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:58472 len:8 PRP1 0x0 PRP2 0x0 00:33:33.642 [2024-12-06 01:10:51.185690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.642 [2024-12-06 01:10:51.185723] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting 
queued i/o 00:33:33.642 [2024-12-06 01:10:51.185752] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:33.642 [2024-12-06 01:10:51.185782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:58480 len:8 PRP1 0x0 PRP2 0x0 00:33:33.642 [2024-12-06 01:10:51.185839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.642 [2024-12-06 01:10:51.185876] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:33.642 [2024-12-06 01:10:51.185905] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:33.642 [2024-12-06 01:10:51.185937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:58488 len:8 PRP1 0x0 PRP2 0x0 00:33:33.642 [2024-12-06 01:10:51.185971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.642 [2024-12-06 01:10:51.186006] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:33.642 [2024-12-06 01:10:51.186036] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:33.642 [2024-12-06 01:10:51.186066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:58496 len:8 PRP1 0x0 PRP2 0x0 00:33:33.642 [2024-12-06 01:10:51.186101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.642 [2024-12-06 01:10:51.186134] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:33.642 [2024-12-06 01:10:51.186177] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:33.642 [2024-12-06 01:10:51.186206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:57560 len:8 PRP1 0x0 PRP2 0x0 00:33:33.642 [2024-12-06 01:10:51.186245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.642 [2024-12-06 01:10:51.186280] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:33.642 [2024-12-06 01:10:51.186308] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:33.642 [2024-12-06 01:10:51.186338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:57568 len:8 PRP1 0x0 PRP2 0x0 00:33:33.642 [2024-12-06 01:10:51.186369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.642 [2024-12-06 01:10:51.186403] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:33.642 [2024-12-06 01:10:51.186433] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:33.642 [2024-12-06 01:10:51.186462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:57576 len:8 PRP1 0x0 PRP2 0x0 00:33:33.642 [2024-12-06 01:10:51.186497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.642 [2024-12-06 01:10:51.186530] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:33.642 [2024-12-06 01:10:51.186559] 
nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:33.642 [2024-12-06 01:10:51.186592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:57584 len:8 PRP1 0x0 PRP2 0x0 00:33:33.642 [2024-12-06 01:10:51.186625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.642 [2024-12-06 01:10:51.186659] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:33.642 [2024-12-06 01:10:51.186689] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:33.642 [2024-12-06 01:10:51.186720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:57592 len:8 PRP1 0x0 PRP2 0x0 00:33:33.642 [2024-12-06 01:10:51.186751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.642 [2024-12-06 01:10:51.186786] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:33.642 [2024-12-06 01:10:51.186839] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:33.642 [2024-12-06 01:10:51.186871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:57600 len:8 PRP1 0x0 PRP2 0x0 00:33:33.642 [2024-12-06 01:10:51.186908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.642 [2024-12-06 01:10:51.186943] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:33.642 [2024-12-06 01:10:51.186973] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:33.642 [2024-12-06 01:10:51.187004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:57608 len:8 PRP1 0x0 PRP2 0x0 00:33:33.642 [2024-12-06 01:10:51.187055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.642 [2024-12-06 01:10:51.187094] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:33.642 [2024-12-06 01:10:51.187139] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:33.642 [2024-12-06 01:10:51.187170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:58504 len:8 PRP1 0x0 PRP2 0x0 00:33:33.642 [2024-12-06 01:10:51.187201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.642 [2024-12-06 01:10:51.187588] bdev_nvme.c:2056:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:33:33.642 [2024-12-06 01:10:51.187687] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:33:33.642 [2024-12-06 01:10:51.187727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.642 [2024-12-06 01:10:51.187769] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:33:33.642 [2024-12-06 01:10:51.187804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.642 [2024-12-06 01:10:51.187852] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:33:33.642 [2024-12-06 01:10:51.187888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.642 [2024-12-06 01:10:51.187925] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:33:33.642 [2024-12-06 01:10:51.187962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.642 [2024-12-06 01:10:51.187996] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:33:33.642 [2024-12-06 01:10:51.188133] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2000 (9): Bad file descriptor 00:33:33.642 [2024-12-06 01:10:51.193663] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:33:33.642 [2024-12-06 01:10:51.275959] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful. 00:33:33.642 5942.50 IOPS, 23.21 MiB/s [2024-12-06T00:11:06.351Z] 6062.00 IOPS, 23.68 MiB/s [2024-12-06T00:11:06.351Z] 6121.50 IOPS, 23.91 MiB/s [2024-12-06T00:11:06.351Z] [2024-12-06 01:10:55.005692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:4120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.642 [2024-12-06 01:10:55.005760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.642 [2024-12-06 01:10:55.005845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:4128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.642 [2024-12-06 01:10:55.005886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.642 [2024-12-06 01:10:55.005934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.642 [2024-12-06 01:10:55.005972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.642 [2024-12-06 01:10:55.006016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:4512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.642 [2024-12-06 01:10:55.006053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.642 [2024-12-06 01:10:55.006096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.642 [2024-12-06 01:10:55.006141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.642 [2024-12-06 01:10:55.006184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:4528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.642 [2024-12-06 01:10:55.006220] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.642 [2024-12-06 01:10:55.006264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:4536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.642 [2024-12-06 01:10:55.006317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.642 [2024-12-06 01:10:55.006365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:4544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.642 [2024-12-06 01:10:55.006414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.642 [2024-12-06 01:10:55.006454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:4552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.642 [2024-12-06 01:10:55.006491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.643 [2024-12-06 01:10:55.006530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:4560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.643 [2024-12-06 01:10:55.006565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.643 [2024-12-06 01:10:55.006619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:4568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.643 [2024-12-06 01:10:55.006669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.643 [2024-12-06 01:10:55.006712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:4576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.643 [2024-12-06 01:10:55.006748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.643 [2024-12-06 01:10:55.006790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:4584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.643 [2024-12-06 01:10:55.006847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.643 [2024-12-06 01:10:55.006888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:4592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.643 [2024-12-06 01:10:55.006926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.643 [2024-12-06 01:10:55.006969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:4600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.643 [2024-12-06 01:10:55.007007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.643 [2024-12-06 01:10:55.007050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:4608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.643 [2024-12-06 01:10:55.007089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:33:33.643 [2024-12-06 01:10:55.007140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:4616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.643 [2024-12-06 01:10:55.007178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.643 [2024-12-06 01:10:55.007223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:4624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.643 [2024-12-06 01:10:55.007260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.643 [2024-12-06 01:10:55.007303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:4632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.643 [2024-12-06 01:10:55.007341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.643 [2024-12-06 01:10:55.007384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:4640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.643 [2024-12-06 01:10:55.007429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.643 [2024-12-06 01:10:55.007473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:4648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.643 [2024-12-06 01:10:55.007510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.643 [2024-12-06 01:10:55.007552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:4656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.643 [2024-12-06 01:10:55.007603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.643 [2024-12-06 01:10:55.007643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:4664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.643 [2024-12-06 01:10:55.007680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.643 [2024-12-06 01:10:55.007720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:4672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.643 [2024-12-06 01:10:55.007756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.643 [2024-12-06 01:10:55.007806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:4680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.643 [2024-12-06 01:10:55.007869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.643 [2024-12-06 01:10:55.007927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:4688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.643 [2024-12-06 01:10:55.007962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.643 [2024-12-06 
01:10:55.008016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:4696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.643 [2024-12-06 01:10:55.008051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.643 [2024-12-06 01:10:55.008093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:4704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.643 [2024-12-06 01:10:55.008152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.643 [2024-12-06 01:10:55.008192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:4712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.643 [2024-12-06 01:10:55.008226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.643 [2024-12-06 01:10:55.008266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:4720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.643 [2024-12-06 01:10:55.008307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.643 [2024-12-06 01:10:55.008345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:4728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.643 [2024-12-06 01:10:55.008381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.643 [2024-12-06 01:10:55.008421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:4736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.643 [2024-12-06 01:10:55.008455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.643 [2024-12-06 01:10:55.008509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:4744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.643 [2024-12-06 01:10:55.008544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.643 [2024-12-06 01:10:55.008585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:4752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.643 [2024-12-06 01:10:55.008619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.643 [2024-12-06 01:10:55.008670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:4760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.643 [2024-12-06 01:10:55.008707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.643 [2024-12-06 01:10:55.008747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:4768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.643 [2024-12-06 01:10:55.008782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.643 [2024-12-06 01:10:55.008850] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:4776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.643 [2024-12-06 01:10:55.008888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.643 [2024-12-06 01:10:55.008930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:4784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.643 [2024-12-06 01:10:55.008967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.643 [2024-12-06 01:10:55.009009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:4792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.643 [2024-12-06 01:10:55.009045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.643 [2024-12-06 01:10:55.009087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:4800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.643 [2024-12-06 01:10:55.009132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.643 [2024-12-06 01:10:55.009197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:4808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.643 [2024-12-06 01:10:55.009232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.643 [2024-12-06 01:10:55.009274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:4816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.643 [2024-12-06 01:10:55.009310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.643 [2024-12-06 01:10:55.009350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:4824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.643 [2024-12-06 01:10:55.009387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.643 [2024-12-06 01:10:55.009429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:4832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.643 [2024-12-06 01:10:55.009465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.643 [2024-12-06 01:10:55.009506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:4840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.643 [2024-12-06 01:10:55.009543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.643 [2024-12-06 01:10:55.009595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:4848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.643 [2024-12-06 01:10:55.009631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.643 [2024-12-06 01:10:55.009672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:4856 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:33:33.643 [2024-12-06 01:10:55.009708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.643 [2024-12-06 01:10:55.009748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:4864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.643 [2024-12-06 01:10:55.009784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.643 [2024-12-06 01:10:55.009856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:4872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.643 [2024-12-06 01:10:55.009894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.643 [2024-12-06 01:10:55.009936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:4880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.643 [2024-12-06 01:10:55.009973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.643 [2024-12-06 01:10:55.010014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:4888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.643 [2024-12-06 01:10:55.010050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.643 [2024-12-06 01:10:55.010093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:4896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.643 [2024-12-06 01:10:55.010178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.643 [2024-12-06 01:10:55.010221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:4904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.643 [2024-12-06 01:10:55.010255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.643 [2024-12-06 01:10:55.010297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:4912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.643 [2024-12-06 01:10:55.010333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.643 [2024-12-06 01:10:55.010373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:4920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.643 [2024-12-06 01:10:55.010410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.643 [2024-12-06 01:10:55.010450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:4928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.643 [2024-12-06 01:10:55.010485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.643 [2024-12-06 01:10:55.010526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:4936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.643 [2024-12-06 
01:10:55.010561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.643 [2024-12-06 01:10:55.010601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:4944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.643 [2024-12-06 01:10:55.010643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.643 [2024-12-06 01:10:55.010685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:4952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.643 [2024-12-06 01:10:55.010719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.644 [2024-12-06 01:10:55.010760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:4960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.644 [2024-12-06 01:10:55.010809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.644 [2024-12-06 01:10:55.010860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:4968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.644 [2024-12-06 01:10:55.010898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.644 [2024-12-06 01:10:55.010940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:4976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.644 [2024-12-06 01:10:55.010977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.644 [2024-12-06 01:10:55.011020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:4984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.644 [2024-12-06 01:10:55.011056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.644 [2024-12-06 01:10:55.011097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:4992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.644 [2024-12-06 01:10:55.011149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.644 [2024-12-06 01:10:55.011189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:5000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.644 [2024-12-06 01:10:55.011225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.644 [2024-12-06 01:10:55.011274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:5008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.644 [2024-12-06 01:10:55.011309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.644 [2024-12-06 01:10:55.011358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:5016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.644 [2024-12-06 01:10:55.011393] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.644 [2024-12-06 01:10:55.011434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:5024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.644 [2024-12-06 01:10:55.011470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.644 [2024-12-06 01:10:55.011510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:5032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.644 [2024-12-06 01:10:55.011546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.644 [2024-12-06 01:10:55.011587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:5040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.644 [2024-12-06 01:10:55.011630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.644 [2024-12-06 01:10:55.011677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:5048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.644 [2024-12-06 01:10:55.011713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.644 [2024-12-06 01:10:55.011754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:5056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.644 [2024-12-06 01:10:55.011788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.644 [2024-12-06 01:10:55.011863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:5064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.644 [2024-12-06 01:10:55.011900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.644 [2024-12-06 01:10:55.011943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:5072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.644 [2024-12-06 01:10:55.011980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.644 [2024-12-06 01:10:55.012021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:5080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.644 [2024-12-06 01:10:55.012057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.644 [2024-12-06 01:10:55.012099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:5088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.644 [2024-12-06 01:10:55.012151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.644 [2024-12-06 01:10:55.012203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:4136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.644 [2024-12-06 01:10:55.012239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.644 [2024-12-06 01:10:55.012280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:4144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.644 [2024-12-06 01:10:55.012326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.644 [2024-12-06 01:10:55.012367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:4152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.644 [2024-12-06 01:10:55.012403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.644 [2024-12-06 01:10:55.012443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.644 [2024-12-06 01:10:55.012479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.644 [2024-12-06 01:10:55.012519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:4168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.644 [2024-12-06 01:10:55.012554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.644 [2024-12-06 01:10:55.012595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.644 [2024-12-06 01:10:55.012642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.644 [2024-12-06 01:10:55.012684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:4184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.644 [2024-12-06 01:10:55.012720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.644 [2024-12-06 01:10:55.012767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:5096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.644 [2024-12-06 01:10:55.012812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.644 [2024-12-06 01:10:55.012881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:5104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.644 [2024-12-06 01:10:55.012919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.644 [2024-12-06 01:10:55.012961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:5112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.644 [2024-12-06 01:10:55.012998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.644 [2024-12-06 01:10:55.013039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:5120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.644 [2024-12-06 01:10:55.013076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:33:33.644 [2024-12-06 01:10:55.013127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:5128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.644 [2024-12-06 01:10:55.013176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.644 [2024-12-06 01:10:55.013229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:5136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.644 [2024-12-06 01:10:55.013266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.644 [2024-12-06 01:10:55.013307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:4192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.644 [2024-12-06 01:10:55.013343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.644 [2024-12-06 01:10:55.013384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:4200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.644 [2024-12-06 01:10:55.013418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.644 [2024-12-06 01:10:55.013459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:4208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.644 [2024-12-06 01:10:55.013495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.644 [2024-12-06 01:10:55.013537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:4216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.644 [2024-12-06 01:10:55.013573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.644 [2024-12-06 01:10:55.013614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:4224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.644 [2024-12-06 01:10:55.013651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.644 [2024-12-06 01:10:55.013692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:4232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.644 [2024-12-06 01:10:55.013727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.644 [2024-12-06 01:10:55.013776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:4240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.644 [2024-12-06 01:10:55.013851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.644 [2024-12-06 01:10:55.013896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:4248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.644 [2024-12-06 01:10:55.013932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.644 [2024-12-06 01:10:55.013974] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:4256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.644 [2024-12-06 01:10:55.014010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.644 [2024-12-06 01:10:55.014052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:4264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.644 [2024-12-06 01:10:55.014089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.644 [2024-12-06 01:10:55.014153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:4272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.644 [2024-12-06 01:10:55.014188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.644 [2024-12-06 01:10:55.014237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:4280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.644 [2024-12-06 01:10:55.014273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.644 [2024-12-06 01:10:55.014319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:4288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.644 [2024-12-06 01:10:55.014354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.644 [2024-12-06 01:10:55.014393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:4296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.644 [2024-12-06 01:10:55.014427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.644 [2024-12-06 01:10:55.014469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:4304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.644 [2024-12-06 01:10:55.014504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.644 [2024-12-06 01:10:55.014544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:4312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.644 [2024-12-06 01:10:55.014580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.644 [2024-12-06 01:10:55.014629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:4320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.644 [2024-12-06 01:10:55.014665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.644 [2024-12-06 01:10:55.014705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:4328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.644 [2024-12-06 01:10:55.014739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.644 [2024-12-06 01:10:55.014778] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:41 nsid:1 lba:4336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.644 [2024-12-06 01:10:55.014837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.644 [2024-12-06 01:10:55.014886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:4344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.644 [2024-12-06 01:10:55.014924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.644 [2024-12-06 01:10:55.014966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:4352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.644 [2024-12-06 01:10:55.015003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.644 [2024-12-06 01:10:55.015043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:4360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.644 [2024-12-06 01:10:55.015080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.644 [2024-12-06 01:10:55.015144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:4368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.644 [2024-12-06 01:10:55.015185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.644 [2024-12-06 01:10:55.015256] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:33.644 [2024-12-06 01:10:55.015295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4376 len:8 PRP1 0x0 PRP2 0x0 00:33:33.644 [2024-12-06 01:10:55.015331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.644 [2024-12-06 01:10:55.015472] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:33:33.644 [2024-12-06 01:10:55.015524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.645 [2024-12-06 01:10:55.015565] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:33:33.645 [2024-12-06 01:10:55.015604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.645 [2024-12-06 01:10:55.015657] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:33:33.645 [2024-12-06 01:10:55.015693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.645 [2024-12-06 01:10:55.015732] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:33:33.645 [2024-12-06 01:10:55.015768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.645 [2024-12-06 01:10:55.015822] 
nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2000 is same with the state(6) to be set 00:33:33.645 [2024-12-06 01:10:55.016267] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:33.645 [2024-12-06 01:10:55.016302] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:33.645 [2024-12-06 01:10:55.016336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4384 len:8 PRP1 0x0 PRP2 0x0 00:33:33.645 [2024-12-06 01:10:55.016369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.645 [2024-12-06 01:10:55.016410] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:33.645 [2024-12-06 01:10:55.016440] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:33.645 [2024-12-06 01:10:55.016471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4392 len:8 PRP1 0x0 PRP2 0x0 00:33:33.645 [2024-12-06 01:10:55.016520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.645 [2024-12-06 01:10:55.016562] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:33.645 [2024-12-06 01:10:55.016591] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:33.645 [2024-12-06 01:10:55.016623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4400 len:8 PRP1 0x0 PRP2 0x0 00:33:33.645 [2024-12-06 01:10:55.016655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.645 [2024-12-06 01:10:55.016690] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:33.645 [2024-12-06 01:10:55.016719] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:33.645 [2024-12-06 01:10:55.016750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4408 len:8 PRP1 0x0 PRP2 0x0 00:33:33.645 [2024-12-06 01:10:55.016783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.645 [2024-12-06 01:10:55.016845] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:33.645 [2024-12-06 01:10:55.016876] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:33.645 [2024-12-06 01:10:55.016907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4416 len:8 PRP1 0x0 PRP2 0x0 00:33:33.645 [2024-12-06 01:10:55.016941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.645 [2024-12-06 01:10:55.016977] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:33.645 [2024-12-06 01:10:55.017006] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:33.645 [2024-12-06 01:10:55.017038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4424 len:8 PRP1 0x0 PRP2 0x0 00:33:33.645 [2024-12-06 01:10:55.017071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.645 [2024-12-06 01:10:55.017107] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:33.645 [2024-12-06 01:10:55.017154] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:33.645 [2024-12-06 01:10:55.017191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4432 len:8 PRP1 0x0 PRP2 0x0 00:33:33.645 [2024-12-06 01:10:55.017224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.645 [2024-12-06 01:10:55.017264] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:33.645 [2024-12-06 01:10:55.017294] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:33.645 [2024-12-06 01:10:55.017323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4440 len:8 PRP1 0x0 PRP2 0x0 00:33:33.645 [2024-12-06 01:10:55.017357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.645 [2024-12-06 01:10:55.017391] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:33.645 [2024-12-06 01:10:55.017419] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:33.645 [2024-12-06 01:10:55.017449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4448 len:8 PRP1 0x0 PRP2 0x0 00:33:33.645 [2024-12-06 01:10:55.017482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.645 [2024-12-06 01:10:55.017516] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:33.645 [2024-12-06 01:10:55.017554] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:33.645 [2024-12-06 01:10:55.017592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4456 len:8 PRP1 0x0 PRP2 0x0 00:33:33.645 [2024-12-06 01:10:55.017626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.645 [2024-12-06 01:10:55.017659] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:33.645 [2024-12-06 01:10:55.017688] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:33.645 [2024-12-06 01:10:55.017717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4464 len:8 PRP1 0x0 PRP2 0x0 00:33:33.645 [2024-12-06 01:10:55.017750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.645 [2024-12-06 01:10:55.017782] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:33.645 [2024-12-06 01:10:55.017848] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:33.645 [2024-12-06 01:10:55.017880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4472 len:8 PRP1 0x0 PRP2 0x0 00:33:33.645 [2024-12-06 01:10:55.017914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:33:33.645 [2024-12-06 01:10:55.017949] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:33.645 [2024-12-06 01:10:55.017978] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:33.645 [2024-12-06 01:10:55.018010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4480 len:8 PRP1 0x0 PRP2 0x0 00:33:33.645 [2024-12-06 01:10:55.018042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.645 [2024-12-06 01:10:55.018079] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:33.645 [2024-12-06 01:10:55.018133] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:33.645 [2024-12-06 01:10:55.018163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4488 len:8 PRP1 0x0 PRP2 0x0 00:33:33.645 [2024-12-06 01:10:55.018197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.645 [2024-12-06 01:10:55.018232] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:33.645 [2024-12-06 01:10:55.018263] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:33.645 [2024-12-06 01:10:55.018292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4496 len:8 PRP1 0x0 PRP2 0x0 00:33:33.645 [2024-12-06 01:10:55.018326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.645 [2024-12-06 01:10:55.018360] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:33.645 [2024-12-06 01:10:55.018389] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:33.645 [2024-12-06 01:10:55.018421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4120 len:8 PRP1 0x0 PRP2 0x0 00:33:33.645 [2024-12-06 01:10:55.018455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.645 [2024-12-06 01:10:55.018489] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:33.645 [2024-12-06 01:10:55.018519] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:33.645 [2024-12-06 01:10:55.018549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4128 len:8 PRP1 0x0 PRP2 0x0 00:33:33.645 [2024-12-06 01:10:55.018585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.645 [2024-12-06 01:10:55.018619] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:33.645 [2024-12-06 01:10:55.018655] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:33.645 [2024-12-06 01:10:55.018689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4504 len:8 PRP1 0x0 PRP2 0x0 00:33:33.645 [2024-12-06 01:10:55.018723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.645 [2024-12-06 01:10:55.018764] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:33.645 [2024-12-06 01:10:55.018827] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:33.645 [2024-12-06 01:10:55.018862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4512 len:8 PRP1 0x0 PRP2 0x0 00:33:33.645 [2024-12-06 01:10:55.018898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.645 [2024-12-06 01:10:55.018932] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:33.645 [2024-12-06 01:10:55.018963] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:33.645 [2024-12-06 01:10:55.018995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4520 len:8 PRP1 0x0 PRP2 0x0 00:33:33.645 [2024-12-06 01:10:55.019028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.645 [2024-12-06 01:10:55.019063] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:33.645 [2024-12-06 01:10:55.019092] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:33.645 [2024-12-06 01:10:55.019136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4528 len:8 PRP1 0x0 PRP2 0x0 00:33:33.645 [2024-12-06 01:10:55.019181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.645 [2024-12-06 01:10:55.019216] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:33.645 [2024-12-06 01:10:55.019247] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:33.645 [2024-12-06 01:10:55.019291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4536 len:8 PRP1 0x0 PRP2 0x0 00:33:33.645 [2024-12-06 01:10:55.019325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.645 [2024-12-06 01:10:55.019364] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:33.645 [2024-12-06 01:10:55.019394] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:33.645 [2024-12-06 01:10:55.019424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4544 len:8 PRP1 0x0 PRP2 0x0 00:33:33.645 [2024-12-06 01:10:55.019457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.645 [2024-12-06 01:10:55.019490] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:33.645 [2024-12-06 01:10:55.019519] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:33.645 [2024-12-06 01:10:55.019550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4552 len:8 PRP1 0x0 PRP2 0x0 00:33:33.645 [2024-12-06 01:10:55.019583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.645 [2024-12-06 01:10:55.019618] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 
00:33:33.645 [2024-12-06 01:10:55.019649] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:33.645 [2024-12-06 01:10:55.019679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4560 len:8 PRP1 0x0 PRP2 0x0 00:33:33.645 [2024-12-06 01:10:55.019714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.645 [2024-12-06 01:10:55.019754] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:33.645 [2024-12-06 01:10:55.019784] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:33.645 [2024-12-06 01:10:55.019861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4568 len:8 PRP1 0x0 PRP2 0x0 00:33:33.645 [2024-12-06 01:10:55.019899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.645 [2024-12-06 01:10:55.019935] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:33.645 [2024-12-06 01:10:55.019966] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:33.645 [2024-12-06 01:10:55.019996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4576 len:8 PRP1 0x0 PRP2 0x0 00:33:33.645 [2024-12-06 01:10:55.020031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.645 [2024-12-06 01:10:55.020066] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:33.645 [2024-12-06 01:10:55.020097] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:33.645 [2024-12-06 01:10:55.020149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4584 len:8 PRP1 0x0 PRP2 0x0 00:33:33.645 [2024-12-06 01:10:55.020181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.645 [2024-12-06 01:10:55.020218] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:33.645 [2024-12-06 01:10:55.020248] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:33.645 [2024-12-06 01:10:55.020279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4592 len:8 PRP1 0x0 PRP2 0x0 00:33:33.645 [2024-12-06 01:10:55.020313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.645 [2024-12-06 01:10:55.020345] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:33.645 [2024-12-06 01:10:55.020376] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:33.645 [2024-12-06 01:10:55.020414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4600 len:8 PRP1 0x0 PRP2 0x0 00:33:33.645 [2024-12-06 01:10:55.020448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.645 [2024-12-06 01:10:55.020483] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:33.645 [2024-12-06 01:10:55.020513] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:33.646 [2024-12-06 01:10:55.020545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4608 len:8 PRP1 0x0 PRP2 0x0 00:33:33.646 [2024-12-06 01:10:55.020593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.646 [2024-12-06 01:10:55.020628] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:33.646 [2024-12-06 01:10:55.020658] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:33.646 [2024-12-06 01:10:55.020691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4616 len:8 PRP1 0x0 PRP2 0x0 00:33:33.646 [2024-12-06 01:10:55.020724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.646 [2024-12-06 01:10:55.020759] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:33.646 [2024-12-06 01:10:55.020788] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:33.646 [2024-12-06 01:10:55.020849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4624 len:8 PRP1 0x0 PRP2 0x0 00:33:33.646 [2024-12-06 01:10:55.020892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.646 [2024-12-06 01:10:55.020929] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:33.646 [2024-12-06 01:10:55.020959] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:33.646 [2024-12-06 01:10:55.020990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4632 len:8 PRP1 0x0 PRP2 0x0 00:33:33.646 [2024-12-06 01:10:55.021025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.646 [2024-12-06 01:10:55.021060] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:33.646 [2024-12-06 01:10:55.021089] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:33.646 [2024-12-06 01:10:55.021141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4640 len:8 PRP1 0x0 PRP2 0x0 00:33:33.646 [2024-12-06 01:10:55.021173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.646 [2024-12-06 01:10:55.021208] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:33.646 [2024-12-06 01:10:55.021237] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:33.646 [2024-12-06 01:10:55.021267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4648 len:8 PRP1 0x0 PRP2 0x0 00:33:33.646 [2024-12-06 01:10:55.021301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.646 [2024-12-06 01:10:55.021333] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:33.646 [2024-12-06 01:10:55.021369] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command 
completed manually: 00:33:33.646 [2024-12-06 01:10:55.021399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4656 len:8 PRP1 0x0 PRP2 0x0 00:33:33.646 [2024-12-06 01:10:55.021440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.646 [2024-12-06 01:10:55.021474] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:33.646 [2024-12-06 01:10:55.021503] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:33.646 [2024-12-06 01:10:55.021539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4664 len:8 PRP1 0x0 PRP2 0x0 00:33:33.646 [2024-12-06 01:10:55.021574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.646 [2024-12-06 01:10:55.021617] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:33.646 [2024-12-06 01:10:55.021647] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:33.646 [2024-12-06 01:10:55.021677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4672 len:8 PRP1 0x0 PRP2 0x0 00:33:33.646 [2024-12-06 01:10:55.021711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.646 [2024-12-06 01:10:55.021744] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:33.646 [2024-12-06 01:10:55.021772] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:33.646 [2024-12-06 01:10:55.021813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4680 len:8 PRP1 0x0 PRP2 0x0 00:33:33.646 [2024-12-06 01:10:55.021872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.646 [2024-12-06 01:10:55.021908] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:33.646 [2024-12-06 01:10:55.021937] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:33.646 [2024-12-06 01:10:55.021979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4688 len:8 PRP1 0x0 PRP2 0x0 00:33:33.646 [2024-12-06 01:10:55.022015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.646 [2024-12-06 01:10:55.022051] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:33.646 [2024-12-06 01:10:55.022081] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:33.646 [2024-12-06 01:10:55.022114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4696 len:8 PRP1 0x0 PRP2 0x0 00:33:33.646 [2024-12-06 01:10:55.022163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.646 [2024-12-06 01:10:55.022196] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:33.646 [2024-12-06 01:10:55.022225] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:33.646 [2024-12-06 01:10:55.022256] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4704 len:8 PRP1 0x0 PRP2 0x0 00:33:33.646 [2024-12-06 01:10:55.022287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.646 [2024-12-06 01:10:55.022329] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:33.646 [2024-12-06 01:10:55.022359] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:33.646 [2024-12-06 01:10:55.022388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4712 len:8 PRP1 0x0 PRP2 0x0 00:33:33.646 [2024-12-06 01:10:55.022422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.646 [2024-12-06 01:10:55.022454] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:33.646 [2024-12-06 01:10:55.022485] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:33.646 [2024-12-06 01:10:55.022515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4720 len:8 PRP1 0x0 PRP2 0x0 00:33:33.646 [2024-12-06 01:10:55.022549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.646 [2024-12-06 01:10:55.022583] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:33.646 [2024-12-06 01:10:55.022612] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:33.646 [2024-12-06 01:10:55.022649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4728 len:8 PRP1 0x0 PRP2 0x0 00:33:33.646 [2024-12-06 01:10:55.022683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.646 [2024-12-06 01:10:55.022716] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:33.646 [2024-12-06 01:10:55.022745] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:33.646 [2024-12-06 01:10:55.022775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4736 len:8 PRP1 0x0 PRP2 0x0 00:33:33.646 [2024-12-06 01:10:55.022824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.646 [2024-12-06 01:10:55.022878] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:33.646 [2024-12-06 01:10:55.022908] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:33.646 [2024-12-06 01:10:55.022940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4744 len:8 PRP1 0x0 PRP2 0x0 00:33:33.646 [2024-12-06 01:10:55.022973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.646 [2024-12-06 01:10:55.023010] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:33.646 [2024-12-06 01:10:55.023046] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:33.646 [2024-12-06 01:10:55.023080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:0 nsid:1 lba:4752 len:8 PRP1 0x0 PRP2 0x0 00:33:33.646 [2024-12-06 01:10:55.023122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.646 [2024-12-06 01:10:55.023170] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:33.646 [2024-12-06 01:10:55.023209] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:33.646 [2024-12-06 01:10:55.023239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4760 len:8 PRP1 0x0 PRP2 0x0 00:33:33.646 [2024-12-06 01:10:55.023273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.646 [2024-12-06 01:10:55.023307] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:33.646 [2024-12-06 01:10:55.023336] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:33.646 [2024-12-06 01:10:55.023368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4768 len:8 PRP1 0x0 PRP2 0x0 00:33:33.646 [2024-12-06 01:10:55.023401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.646 [2024-12-06 01:10:55.023447] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:33.646 [2024-12-06 01:10:55.023476] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:33.646 [2024-12-06 01:10:55.023505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4776 len:8 PRP1 0x0 PRP2 0x0 00:33:33.646 [2024-12-06 01:10:55.023539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.646 [2024-12-06 01:10:55.023572] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:33.646 [2024-12-06 01:10:55.023602] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:33.646 [2024-12-06 01:10:55.023646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4784 len:8 PRP1 0x0 PRP2 0x0 00:33:33.646 [2024-12-06 01:10:55.023679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.646 [2024-12-06 01:10:55.023715] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:33.646 [2024-12-06 01:10:55.023743] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:33.646 [2024-12-06 01:10:55.023777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4792 len:8 PRP1 0x0 PRP2 0x0 00:33:33.646 [2024-12-06 01:10:55.023812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.646 [2024-12-06 01:10:55.023857] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:33.646 [2024-12-06 01:10:55.023887] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:33.646 [2024-12-06 01:10:55.023917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4800 len:8 PRP1 0x0 PRP2 0x0 
00:33:33.646 [2024-12-06 01:10:55.023953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.646 [2024-12-06 01:10:55.023988] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:33.646 [2024-12-06 01:10:55.024017] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:33.646 [2024-12-06 01:10:55.024049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4808 len:8 PRP1 0x0 PRP2 0x0 00:33:33.646 [2024-12-06 01:10:55.024081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.646 [2024-12-06 01:10:55.024143] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:33.646 [2024-12-06 01:10:55.024183] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:33.646 [2024-12-06 01:10:55.024213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4816 len:8 PRP1 0x0 PRP2 0x0 00:33:33.646 [2024-12-06 01:10:55.024246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.646 [2024-12-06 01:10:55.024279] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:33.646 [2024-12-06 01:10:55.024319] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:33.646 [2024-12-06 01:10:55.024350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4824 len:8 PRP1 0x0 PRP2 0x0 00:33:33.646 [2024-12-06 01:10:55.024382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.646 [2024-12-06 01:10:55.024425] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:33.646 [2024-12-06 01:10:55.024453] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:33.646 [2024-12-06 01:10:55.024484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4832 len:8 PRP1 0x0 PRP2 0x0 00:33:33.646 [2024-12-06 01:10:55.024520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.646 [2024-12-06 01:10:55.024552] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:33.646 [2024-12-06 01:10:55.024582] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:33.646 [2024-12-06 01:10:55.024611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4840 len:8 PRP1 0x0 PRP2 0x0 00:33:33.646 [2024-12-06 01:10:55.024645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.646 [2024-12-06 01:10:55.024679] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:33.646 [2024-12-06 01:10:55.024708] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:33.647 [2024-12-06 01:10:55.024738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4848 len:8 PRP1 0x0 PRP2 0x0 00:33:33.647 [2024-12-06 01:10:55.024770] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.647 [2024-12-06 01:10:55.024837] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:33.647 [2024-12-06 01:10:55.024870] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:33.647 [2024-12-06 01:10:55.024904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4856 len:8 PRP1 0x0 PRP2 0x0 00:33:33.647 [2024-12-06 01:10:55.024938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.647 [2024-12-06 01:10:55.024973] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:33.647 [2024-12-06 01:10:55.025002] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:33.647 [2024-12-06 01:10:55.025034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4864 len:8 PRP1 0x0 PRP2 0x0 00:33:33.647 [2024-12-06 01:10:55.025082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.647 [2024-12-06 01:10:55.025117] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:33.647 [2024-12-06 01:10:55.025165] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:33.647 [2024-12-06 01:10:55.025198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4872 len:8 PRP1 0x0 PRP2 0x0 00:33:33.647 [2024-12-06 01:10:55.025237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.647 [2024-12-06 01:10:55.025272] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:33.647 [2024-12-06 01:10:55.025302] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:33.647 [2024-12-06 01:10:55.025333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4880 len:8 PRP1 0x0 PRP2 0x0 00:33:33.647 [2024-12-06 01:10:55.025365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.647 [2024-12-06 01:10:55.025399] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:33.647 [2024-12-06 01:10:55.025428] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:33.647 [2024-12-06 01:10:55.025460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4888 len:8 PRP1 0x0 PRP2 0x0 00:33:33.647 [2024-12-06 01:10:55.025494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.647 [2024-12-06 01:10:55.025528] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:33.647 [2024-12-06 01:10:55.025556] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:33.647 [2024-12-06 01:10:55.025588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4896 len:8 PRP1 0x0 PRP2 0x0 00:33:33.647 [2024-12-06 01:10:55.025621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.647 [2024-12-06 01:10:55.025654] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:33.647 [2024-12-06 01:10:55.025685] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:33.647 [2024-12-06 01:10:55.025715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4904 len:8 PRP1 0x0 PRP2 0x0 00:33:33.647 [2024-12-06 01:10:55.025747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.647 [2024-12-06 01:10:55.025781] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:33.647 [2024-12-06 01:10:55.025843] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:33.647 [2024-12-06 01:10:55.025877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4912 len:8 PRP1 0x0 PRP2 0x0 00:33:33.647 [2024-12-06 01:10:55.025911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.647 [2024-12-06 01:10:55.025947] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:33.647 [2024-12-06 01:10:55.025977] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:33.647 [2024-12-06 01:10:55.026007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4920 len:8 PRP1 0x0 PRP2 0x0 00:33:33.647 [2024-12-06 01:10:55.026043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.647 [2024-12-06 01:10:55.026078] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:33.647 [2024-12-06 01:10:55.026117] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:33.647 [2024-12-06 01:10:55.026161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4928 len:8 PRP1 0x0 PRP2 0x0 00:33:33.647 [2024-12-06 01:10:55.026205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.647 [2024-12-06 01:10:55.026250] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:33.647 [2024-12-06 01:10:55.026279] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:33.647 [2024-12-06 01:10:55.026315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4936 len:8 PRP1 0x0 PRP2 0x0 00:33:33.647 [2024-12-06 01:10:55.026350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.647 [2024-12-06 01:10:55.026383] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:33.647 [2024-12-06 01:10:55.026412] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:33.647 [2024-12-06 01:10:55.026442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4944 len:8 PRP1 0x0 PRP2 0x0 00:33:33.647 [2024-12-06 01:10:55.026489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:33:33.647 [2024-12-06 01:10:55.026521] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:33.647 [2024-12-06 01:10:55.026550] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:33.647 [2024-12-06 01:10:55.026580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4952 len:8 PRP1 0x0 PRP2 0x0 00:33:33.647 [2024-12-06 01:10:55.026611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.647 [2024-12-06 01:10:55.026645] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:33.647 [2024-12-06 01:10:55.026675] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:33.647 [2024-12-06 01:10:55.026704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4960 len:8 PRP1 0x0 PRP2 0x0 00:33:33.647 [2024-12-06 01:10:55.026736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.647 [2024-12-06 01:10:55.026767] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:33.647 [2024-12-06 01:10:55.026826] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:33.647 [2024-12-06 01:10:55.026858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4968 len:8 PRP1 0x0 PRP2 0x0 00:33:33.647 [2024-12-06 01:10:55.026906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.647 [2024-12-06 01:10:55.026941] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:33.647 [2024-12-06 01:10:55.026971] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:33.647 [2024-12-06 01:10:55.027003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4976 len:8 PRP1 0x0 PRP2 0x0 00:33:33.647 [2024-12-06 01:10:55.027038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.647 [2024-12-06 01:10:55.027072] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:33.647 [2024-12-06 01:10:55.027102] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:33.647 [2024-12-06 01:10:55.027147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4984 len:8 PRP1 0x0 PRP2 0x0 00:33:33.647 [2024-12-06 01:10:55.027195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.647 [2024-12-06 01:10:55.027226] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:33.647 [2024-12-06 01:10:55.027255] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:33.647 [2024-12-06 01:10:55.027284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4992 len:8 PRP1 0x0 PRP2 0x0 00:33:33.647 [2024-12-06 01:10:55.027317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.647 [2024-12-06 01:10:55.027355] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:33.647 [2024-12-06 01:10:55.027384] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:33.647 [2024-12-06 01:10:55.027415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5000 len:8 PRP1 0x0 PRP2 0x0 00:33:33.647 [2024-12-06 01:10:55.027448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.647 [2024-12-06 01:10:55.027485] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:33.647 [2024-12-06 01:10:55.027514] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:33.647 [2024-12-06 01:10:55.027543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5008 len:8 PRP1 0x0 PRP2 0x0 00:33:33.647 [2024-12-06 01:10:55.027577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.647 [2024-12-06 01:10:55.027611] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:33.647 [2024-12-06 01:10:55.027640] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:33.647 [2024-12-06 01:10:55.027670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5016 len:8 PRP1 0x0 PRP2 0x0 00:33:33.647 [2024-12-06 01:10:55.027704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.647 [2024-12-06 01:10:55.027739] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:33.647 [2024-12-06 01:10:55.027769] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:33.647 [2024-12-06 01:10:55.027822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5024 len:8 PRP1 0x0 PRP2 0x0 00:33:33.647 [2024-12-06 01:10:55.027861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.647 [2024-12-06 01:10:55.027896] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:33.647 [2024-12-06 01:10:55.027929] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:33.647 [2024-12-06 01:10:55.027963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5032 len:8 PRP1 0x0 PRP2 0x0 00:33:33.647 [2024-12-06 01:10:55.027997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.647 [2024-12-06 01:10:55.028034] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:33.647 [2024-12-06 01:10:55.028066] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:33.647 [2024-12-06 01:10:55.028097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5040 len:8 PRP1 0x0 PRP2 0x0 00:33:33.647 [2024-12-06 01:10:55.028152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.647 [2024-12-06 01:10:55.028186] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 
00:33:33.647 [2024-12-06 01:10:55.028217] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:33.647 [2024-12-06 01:10:55.028264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5048 len:8 PRP1 0x0 PRP2 0x0 00:33:33.647 [2024-12-06 01:10:55.028297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.647 [2024-12-06 01:10:55.028332] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:33.647 [2024-12-06 01:10:55.028360] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:33.647 [2024-12-06 01:10:55.028400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5056 len:8 PRP1 0x0 PRP2 0x0 00:33:33.647 [2024-12-06 01:10:55.028434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.647 [2024-12-06 01:10:55.028473] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:33.647 [2024-12-06 01:10:55.028501] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:33.647 [2024-12-06 01:10:55.028537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5064 len:8 PRP1 0x0 PRP2 0x0 00:33:33.647 [2024-12-06 01:10:55.028569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.647 [2024-12-06 01:10:55.028602] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:33.647 [2024-12-06 01:10:55.028629] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:33.647 [2024-12-06 01:10:55.028658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5072 len:8 PRP1 0x0 PRP2 0x0 00:33:33.647 [2024-12-06 01:10:55.028691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.647 [2024-12-06 01:10:55.028723] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:33.647 [2024-12-06 01:10:55.028754] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:33.647 [2024-12-06 01:10:55.028808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5080 len:8 PRP1 0x0 PRP2 0x0 00:33:33.647 [2024-12-06 01:10:55.028870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.647 [2024-12-06 01:10:55.028906] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:33.647 [2024-12-06 01:10:55.028935] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:33.647 [2024-12-06 01:10:55.028966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5088 len:8 PRP1 0x0 PRP2 0x0 00:33:33.647 [2024-12-06 01:10:55.029001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.647 [2024-12-06 01:10:55.029038] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:33.647 [2024-12-06 01:10:55.029066] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:33.647 [2024-12-06 01:10:55.029098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4136 len:8 PRP1 0x0 PRP2 0x0 00:33:33.647 [2024-12-06 01:10:55.029132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.647 [2024-12-06 01:10:55.029172] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:33.647 [2024-12-06 01:10:55.029202] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:33.647 [2024-12-06 01:10:55.029233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4144 len:8 PRP1 0x0 PRP2 0x0 00:33:33.647 [2024-12-06 01:10:55.029282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.647 [2024-12-06 01:10:55.029316] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:33.647 [2024-12-06 01:10:55.029360] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:33.647 [2024-12-06 01:10:55.029400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4152 len:8 PRP1 0x0 PRP2 0x0 00:33:33.648 [2024-12-06 01:10:55.029435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.648 [2024-12-06 01:10:55.029471] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:33.648 [2024-12-06 01:10:55.029501] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:33.648 [2024-12-06 01:10:55.029546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4160 len:8 PRP1 0x0 PRP2 0x0 00:33:33.648 [2024-12-06 01:10:55.029649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.648 [2024-12-06 01:10:55.029686] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:33.648 [2024-12-06 01:10:55.029717] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:33.648 [2024-12-06 01:10:55.029768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4168 len:8 PRP1 0x0 PRP2 0x0 00:33:33.648 [2024-12-06 01:10:55.029827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.648 [2024-12-06 01:10:55.029866] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:33.648 [2024-12-06 01:10:55.029898] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:33.648 [2024-12-06 01:10:55.029929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4176 len:8 PRP1 0x0 PRP2 0x0 00:33:33.648 [2024-12-06 01:10:55.029963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.648 [2024-12-06 01:10:55.029997] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:33.648 [2024-12-06 01:10:55.030027] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed 
manually: 00:33:33.648 [2024-12-06 01:10:55.030058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4184 len:8 PRP1 0x0 PRP2 0x0 00:33:33.648 [2024-12-06 01:10:55.030092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.648 [2024-12-06 01:10:55.030138] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:33.648 [2024-12-06 01:10:55.030167] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:33.648 [2024-12-06 01:10:55.030213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5096 len:8 PRP1 0x0 PRP2 0x0 00:33:33.648 [2024-12-06 01:10:55.030252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.648 [2024-12-06 01:10:55.030290] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:33.648 [2024-12-06 01:10:55.030323] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:33.648 [2024-12-06 01:10:55.030354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5104 len:8 PRP1 0x0 PRP2 0x0 00:33:33.648 [2024-12-06 01:10:55.030387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.648 [2024-12-06 01:10:55.030440] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:33.648 [2024-12-06 01:10:55.030472] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:33.648 [2024-12-06 01:10:55.030504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5112 len:8 PRP1 0x0 PRP2 0x0 00:33:33.648 [2024-12-06 01:10:55.030538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.648 [2024-12-06 01:10:55.030571] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:33.648 [2024-12-06 01:10:55.030600] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:33.648 [2024-12-06 01:10:55.030629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5120 len:8 PRP1 0x0 PRP2 0x0 00:33:33.648 [2024-12-06 01:10:55.030663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.648 [2024-12-06 01:10:55.030696] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:33.648 [2024-12-06 01:10:55.030729] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:33.648 [2024-12-06 01:10:55.030762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5128 len:8 PRP1 0x0 PRP2 0x0 00:33:33.648 [2024-12-06 01:10:55.030793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.648 [2024-12-06 01:10:55.030855] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:33.648 [2024-12-06 01:10:55.030902] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:33.648 [2024-12-06 01:10:55.030939] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5136 len:8 PRP1 0x0 PRP2 0x0 00:33:33.648 [2024-12-06 01:10:55.030975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.648 [2024-12-06 01:10:55.031011] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:33.648 [2024-12-06 01:10:55.031041] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:33.648 [2024-12-06 01:10:55.031072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4192 len:8 PRP1 0x0 PRP2 0x0 00:33:33.648 [2024-12-06 01:10:55.031128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.648 [2024-12-06 01:10:55.031179] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:33.648 [2024-12-06 01:10:55.031206] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:33.648 [2024-12-06 01:10:55.031250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4200 len:8 PRP1 0x0 PRP2 0x0 00:33:33.648 [2024-12-06 01:10:55.031282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.648 [2024-12-06 01:10:55.031315] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:33.648 [2024-12-06 01:10:55.031344] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:33.648 [2024-12-06 01:10:55.031374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4208 len:8 PRP1 0x0 PRP2 0x0 00:33:33.648 [2024-12-06 01:10:55.031407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.648 [2024-12-06 01:10:55.031454] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:33.648 [2024-12-06 01:10:55.031484] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:33.648 [2024-12-06 01:10:55.031514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4216 len:8 PRP1 0x0 PRP2 0x0 00:33:33.648 [2024-12-06 01:10:55.031549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.648 [2024-12-06 01:10:55.031584] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:33.648 [2024-12-06 01:10:55.031612] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:33.648 [2024-12-06 01:10:55.031644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4224 len:8 PRP1 0x0 PRP2 0x0 00:33:33.648 [2024-12-06 01:10:55.031677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.648 [2024-12-06 01:10:55.031712] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:33.648 [2024-12-06 01:10:55.031752] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:33.648 [2024-12-06 01:10:55.031784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:0 nsid:1 lba:4232 len:8 PRP1 0x0 PRP2 0x0 00:33:33.648 [2024-12-06 01:10:55.031832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.648 [2024-12-06 01:10:55.031876] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:33.648 [2024-12-06 01:10:55.031906] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:33.648 [2024-12-06 01:10:55.031938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4240 len:8 PRP1 0x0 PRP2 0x0 00:33:33.648 [2024-12-06 01:10:55.031971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.648 [2024-12-06 01:10:55.032006] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:33.648 [2024-12-06 01:10:55.032035] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:33.648 [2024-12-06 01:10:55.032067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4248 len:8 PRP1 0x0 PRP2 0x0 00:33:33.648 [2024-12-06 01:10:55.032110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.648 [2024-12-06 01:10:55.032160] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:33.648 [2024-12-06 01:10:55.032198] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:33.648 [2024-12-06 01:10:55.032227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4256 len:8 PRP1 0x0 PRP2 0x0 00:33:33.648 [2024-12-06 01:10:55.032260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.648 [2024-12-06 01:10:55.032292] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:33.648 [2024-12-06 01:10:55.032320] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:33.648 [2024-12-06 01:10:55.032350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4264 len:8 PRP1 0x0 PRP2 0x0 00:33:33.648 [2024-12-06 01:10:55.032380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.648 [2024-12-06 01:10:55.032413] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:33.648 [2024-12-06 01:10:55.032440] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:33.648 [2024-12-06 01:10:55.032470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4272 len:8 PRP1 0x0 PRP2 0x0 00:33:33.648 [2024-12-06 01:10:55.032503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.648 [2024-12-06 01:10:55.032535] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:33.648 [2024-12-06 01:10:55.032569] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:33.648 [2024-12-06 01:10:55.032597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4280 len:8 PRP1 0x0 PRP2 0x0 00:33:33.648 
[2024-12-06 01:10:55.032638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.648 [2024-12-06 01:10:55.032671] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:33.648 [2024-12-06 01:10:55.032697] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:33.648 [2024-12-06 01:10:55.032727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4288 len:8 PRP1 0x0 PRP2 0x0 00:33:33.648 [2024-12-06 01:10:55.032758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.648 [2024-12-06 01:10:55.032791] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:33.648 [2024-12-06 01:10:55.032851] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:33.648 [2024-12-06 01:10:55.032899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4296 len:8 PRP1 0x0 PRP2 0x0 00:33:33.648 [2024-12-06 01:10:55.032940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.648 [2024-12-06 01:10:55.032975] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:33.648 [2024-12-06 01:10:55.033006] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:33.648 [2024-12-06 01:10:55.033038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4304 len:8 PRP1 0x0 PRP2 0x0 00:33:33.648 [2024-12-06 01:10:55.033071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.648 [2024-12-06 01:10:55.033120] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:33.648 [2024-12-06 01:10:55.039271] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:33.648 [2024-12-06 01:10:55.039308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4312 len:8 PRP1 0x0 PRP2 0x0 00:33:33.648 [2024-12-06 01:10:55.039347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.648 [2024-12-06 01:10:55.039382] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:33.648 [2024-12-06 01:10:55.039410] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:33.648 [2024-12-06 01:10:55.039441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4320 len:8 PRP1 0x0 PRP2 0x0 00:33:33.648 [2024-12-06 01:10:55.039472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.648 [2024-12-06 01:10:55.039506] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:33.648 [2024-12-06 01:10:55.039534] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:33.648 [2024-12-06 01:10:55.039563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4328 len:8 PRP1 0x0 PRP2 0x0 00:33:33.648 [2024-12-06 01:10:55.039594] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.648 [2024-12-06 01:10:55.039626] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:33.648 [2024-12-06 01:10:55.039655] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:33.648 [2024-12-06 01:10:55.039683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4336 len:8 PRP1 0x0 PRP2 0x0 00:33:33.648 [2024-12-06 01:10:55.039714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.648 [2024-12-06 01:10:55.039746] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:33.648 [2024-12-06 01:10:55.039773] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:33.648 [2024-12-06 01:10:55.039834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4344 len:8 PRP1 0x0 PRP2 0x0 00:33:33.648 [2024-12-06 01:10:55.039883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.648 [2024-12-06 01:10:55.039918] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:33.648 [2024-12-06 01:10:55.039948] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:33.648 [2024-12-06 01:10:55.039978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4352 len:8 PRP1 0x0 PRP2 0x0 00:33:33.648 [2024-12-06 01:10:55.040012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.648 [2024-12-06 01:10:55.040047] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:33.648 [2024-12-06 01:10:55.040077] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:33.648 [2024-12-06 01:10:55.040120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4360 len:8 PRP1 0x0 PRP2 0x0 00:33:33.648 [2024-12-06 01:10:55.040167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.648 [2024-12-06 01:10:55.040216] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:33.648 [2024-12-06 01:10:55.040243] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:33.648 [2024-12-06 01:10:55.040274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4368 len:8 PRP1 0x0 PRP2 0x0 00:33:33.649 [2024-12-06 01:10:55.040319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.649 [2024-12-06 01:10:55.040351] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:33.649 [2024-12-06 01:10:55.040380] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:33.649 [2024-12-06 01:10:55.040409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4376 len:8 PRP1 0x0 PRP2 0x0 00:33:33.649 [2024-12-06 01:10:55.040441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.649 [2024-12-06 01:10:55.040854] bdev_nvme.c:2056:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:33:33.649 [2024-12-06 01:10:55.040911] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:33:33.649 [2024-12-06 01:10:55.041028] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2000 (9): Bad file descriptor 00:33:33.649 [2024-12-06 01:10:55.046480] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:33:33.649 6085.60 IOPS, 23.77 MiB/s [2024-12-06T00:11:06.358Z] [2024-12-06 01:10:55.162170] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller successful. 00:33:33.649 5981.33 IOPS, 23.36 MiB/s [2024-12-06T00:11:06.358Z] 6036.14 IOPS, 23.58 MiB/s [2024-12-06T00:11:06.358Z] 6061.88 IOPS, 23.68 MiB/s [2024-12-06T00:11:06.358Z] 6065.33 IOPS, 23.69 MiB/s [2024-12-06T00:11:06.358Z] [2024-12-06 01:10:59.581608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:124104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.649 [2024-12-06 01:10:59.581664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.649 [2024-12-06 01:10:59.581742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:124112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.649 [2024-12-06 01:10:59.581784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.649 [2024-12-06 01:10:59.581849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:124120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.649 [2024-12-06 01:10:59.581889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.649 [2024-12-06 01:10:59.581936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:124128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.649 [2024-12-06 01:10:59.581974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.649 [2024-12-06 01:10:59.582016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:124136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.649 [2024-12-06 01:10:59.582054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.649 [2024-12-06 01:10:59.582098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:124144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.649 [2024-12-06 01:10:59.582139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.649 [2024-12-06 01:10:59.582195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:124152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.649 [2024-12-06 01:10:59.582233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:33:33.649 [2024-12-06 01:10:59.582275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:124160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.649 [2024-12-06 01:10:59.582312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.649 [2024-12-06 01:10:59.582354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:124168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.649 [2024-12-06 01:10:59.582392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.649 [2024-12-06 01:10:59.582432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:124176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.649 [2024-12-06 01:10:59.582468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.649 [2024-12-06 01:10:59.582511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:124184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.649 [2024-12-06 01:10:59.582547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.649 [2024-12-06 01:10:59.582588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:124192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.649 [2024-12-06 01:10:59.582625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.649 [2024-12-06 01:10:59.582666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:124200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.649 [2024-12-06 01:10:59.582711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.649 [2024-12-06 01:10:59.582754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:124208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.649 [2024-12-06 01:10:59.582791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.649 [2024-12-06 01:10:59.582850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:124216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.649 [2024-12-06 01:10:59.582887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.649 [2024-12-06 01:10:59.582930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:124224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.649 [2024-12-06 01:10:59.582967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.649 [2024-12-06 01:10:59.583009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:124232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.649 [2024-12-06 01:10:59.583046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:33:33.649 [2024-12-06 01:10:59.583087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:124240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.649 [2024-12-06 01:10:59.583131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.649 [2024-12-06 01:10:59.583184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:124248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.649 [2024-12-06 01:10:59.583227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.649 [2024-12-06 01:10:59.583269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:124256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.649 [2024-12-06 01:10:59.583316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.649 [2024-12-06 01:10:59.583372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:124264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.649 [2024-12-06 01:10:59.583407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.649 [2024-12-06 01:10:59.583448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:124272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.649 [2024-12-06 01:10:59.583484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.649 [2024-12-06 01:10:59.583525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:124280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.649 [2024-12-06 01:10:59.583569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.649 [2024-12-06 01:10:59.583610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:124288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.649 [2024-12-06 01:10:59.583645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.649 [2024-12-06 01:10:59.583685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:124296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.649 [2024-12-06 01:10:59.583720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.649 [2024-12-06 01:10:59.583759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:124304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.649 [2024-12-06 01:10:59.583809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.649 [2024-12-06 01:10:59.583865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:124312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.649 [2024-12-06 01:10:59.583902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.649 [2024-12-06 
01:10:59.583943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:124320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.649 [2024-12-06 01:10:59.583980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.649 [2024-12-06 01:10:59.584023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:124328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.649 [2024-12-06 01:10:59.584058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.649 [2024-12-06 01:10:59.584109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:124336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.649 [2024-12-06 01:10:59.584146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.649 [2024-12-06 01:10:59.584207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:124344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.649 [2024-12-06 01:10:59.584243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.649 [2024-12-06 01:10:59.584290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:124352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.649 [2024-12-06 01:10:59.584325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.649 [2024-12-06 01:10:59.584366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:124360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.649 [2024-12-06 01:10:59.584401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.649 [2024-12-06 01:10:59.584441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:124368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.649 [2024-12-06 01:10:59.584475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.649 [2024-12-06 01:10:59.584527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:124376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.649 [2024-12-06 01:10:59.584562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.649 [2024-12-06 01:10:59.584607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:124384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.649 [2024-12-06 01:10:59.584643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.649 [2024-12-06 01:10:59.584684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:124392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.649 [2024-12-06 01:10:59.584719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.649 [2024-12-06 01:10:59.584760] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:124400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.649 [2024-12-06 01:10:59.584826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.649 [2024-12-06 01:10:59.584872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:124408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.649 [2024-12-06 01:10:59.584908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.649 [2024-12-06 01:10:59.584950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:124416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.649 [2024-12-06 01:10:59.584986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.649 [2024-12-06 01:10:59.585027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:124424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.649 [2024-12-06 01:10:59.585064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.649 [2024-12-06 01:10:59.585129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:124432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.649 [2024-12-06 01:10:59.585165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.649 [2024-12-06 01:10:59.585206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:124440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.649 [2024-12-06 01:10:59.585241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.649 [2024-12-06 01:10:59.585281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:124448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.649 [2024-12-06 01:10:59.585327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.649 [2024-12-06 01:10:59.585369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:124456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.649 [2024-12-06 01:10:59.585412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.649 [2024-12-06 01:10:59.585454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:124464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.649 [2024-12-06 01:10:59.585490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.649 [2024-12-06 01:10:59.585532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:124472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.649 [2024-12-06 01:10:59.585567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.650 [2024-12-06 01:10:59.585608] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:108 nsid:1 lba:124480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.650 [2024-12-06 01:10:59.585644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.650 [2024-12-06 01:10:59.585684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:124488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.650 [2024-12-06 01:10:59.585719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.650 [2024-12-06 01:10:59.585761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:124496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.650 [2024-12-06 01:10:59.585826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.650 [2024-12-06 01:10:59.585892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:124504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.650 [2024-12-06 01:10:59.585930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.650 [2024-12-06 01:10:59.585972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:124512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.650 [2024-12-06 01:10:59.586010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.650 [2024-12-06 01:10:59.586054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:124520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.650 [2024-12-06 01:10:59.586091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.650 [2024-12-06 01:10:59.586151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:124528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.650 [2024-12-06 01:10:59.586191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.650 [2024-12-06 01:10:59.586231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:124536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.650 [2024-12-06 01:10:59.586266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.650 [2024-12-06 01:10:59.586306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:124544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.650 [2024-12-06 01:10:59.586349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.650 [2024-12-06 01:10:59.586406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:124552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.650 [2024-12-06 01:10:59.586444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.650 [2024-12-06 01:10:59.586484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 
nsid:1 lba:124560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.650 [2024-12-06 01:10:59.586520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.650 [2024-12-06 01:10:59.586561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:124568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.650 [2024-12-06 01:10:59.586603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.650 [2024-12-06 01:10:59.586661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:124576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.650 [2024-12-06 01:10:59.586695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.650 [2024-12-06 01:10:59.586736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:124584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.650 [2024-12-06 01:10:59.586771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.650 [2024-12-06 01:10:59.586843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:124592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.650 [2024-12-06 01:10:59.586886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.650 [2024-12-06 01:10:59.586928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:124600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.650 [2024-12-06 01:10:59.586964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.650 [2024-12-06 01:10:59.587007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:124608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.650 [2024-12-06 01:10:59.587042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.650 [2024-12-06 01:10:59.587084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:124624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.650 [2024-12-06 01:10:59.587136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.650 [2024-12-06 01:10:59.587184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:124632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.650 [2024-12-06 01:10:59.587219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.650 [2024-12-06 01:10:59.587271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:124640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.650 [2024-12-06 01:10:59.587308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.650 [2024-12-06 01:10:59.587349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:124648 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:33:33.650 [2024-12-06 01:10:59.587393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.650 [2024-12-06 01:10:59.587433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:124656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.650 [2024-12-06 01:10:59.587474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.650 [2024-12-06 01:10:59.587523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:124664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.650 [2024-12-06 01:10:59.587560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.650 [2024-12-06 01:10:59.587601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:124672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.650 [2024-12-06 01:10:59.587636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.650 [2024-12-06 01:10:59.587674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:124680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.650 [2024-12-06 01:10:59.587711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.650 [2024-12-06 01:10:59.587752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:124688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.650 [2024-12-06 01:10:59.587798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.650 [2024-12-06 01:10:59.587873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:124696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.650 [2024-12-06 01:10:59.587910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.650 [2024-12-06 01:10:59.587952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:124704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.650 [2024-12-06 01:10:59.587989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.650 [2024-12-06 01:10:59.588030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:124712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.650 [2024-12-06 01:10:59.588066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.650 [2024-12-06 01:10:59.588108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:124720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.650 [2024-12-06 01:10:59.588158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.650 [2024-12-06 01:10:59.588202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:124728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.650 
[2024-12-06 01:10:59.588246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.650 [2024-12-06 01:10:59.588286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:124736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.650 [2024-12-06 01:10:59.588320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.650 [2024-12-06 01:10:59.588361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:124744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.650 [2024-12-06 01:10:59.588395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.650 [2024-12-06 01:10:59.588435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:124752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.650 [2024-12-06 01:10:59.588471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.650 [2024-12-06 01:10:59.588509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:124760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.650 [2024-12-06 01:10:59.588549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.650 [2024-12-06 01:10:59.588602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:124768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.650 [2024-12-06 01:10:59.588639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.650 [2024-12-06 01:10:59.588688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:124776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.650 [2024-12-06 01:10:59.588722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.650 [2024-12-06 01:10:59.588762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:124784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.650 [2024-12-06 01:10:59.588797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.650 [2024-12-06 01:10:59.588863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:124792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.650 [2024-12-06 01:10:59.588901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.650 [2024-12-06 01:10:59.588943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:124800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.650 [2024-12-06 01:10:59.588979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.650 [2024-12-06 01:10:59.589021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:124808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.650 [2024-12-06 01:10:59.589056] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.650 [2024-12-06 01:10:59.589107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:124816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.650 [2024-12-06 01:10:59.589157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.650 [2024-12-06 01:10:59.589199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:124824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.650 [2024-12-06 01:10:59.589234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.650 [2024-12-06 01:10:59.589274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:124832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.650 [2024-12-06 01:10:59.589310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.650 [2024-12-06 01:10:59.589358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:124840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.650 [2024-12-06 01:10:59.589402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.650 [2024-12-06 01:10:59.589444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:124848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.650 [2024-12-06 01:10:59.589479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.650 [2024-12-06 01:10:59.589520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:124856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.650 [2024-12-06 01:10:59.589555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.650 [2024-12-06 01:10:59.589601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:124864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.650 [2024-12-06 01:10:59.589636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.650 [2024-12-06 01:10:59.589678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:124872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.650 [2024-12-06 01:10:59.589714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.650 [2024-12-06 01:10:59.589754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:124880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.650 [2024-12-06 01:10:59.589788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.650 [2024-12-06 01:10:59.589856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:124888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.650 [2024-12-06 01:10:59.589893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.650 [2024-12-06 01:10:59.589935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:124896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.650 [2024-12-06 01:10:59.589973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.650 [2024-12-06 01:10:59.590014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:124904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.650 [2024-12-06 01:10:59.590050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.650 [2024-12-06 01:10:59.590091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:124912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.650 [2024-12-06 01:10:59.590135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.650 [2024-12-06 01:10:59.590190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:124920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.650 [2024-12-06 01:10:59.590236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.650 [2024-12-06 01:10:59.590276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:124928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.650 [2024-12-06 01:10:59.590311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.650 [2024-12-06 01:10:59.590363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:124936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.651 [2024-12-06 01:10:59.590398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.651 [2024-12-06 01:10:59.590438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:124944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.651 [2024-12-06 01:10:59.590477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.651 [2024-12-06 01:10:59.590520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:124952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.651 [2024-12-06 01:10:59.590556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.651 [2024-12-06 01:10:59.590597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:124960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.651 [2024-12-06 01:10:59.590640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.651 [2024-12-06 01:10:59.590682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:124968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.651 [2024-12-06 01:10:59.590717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.651 [2024-12-06 01:10:59.590759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:124976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.651 [2024-12-06 01:10:59.590831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.651 [2024-12-06 01:10:59.590878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:124984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.651 [2024-12-06 01:10:59.590915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.651 [2024-12-06 01:10:59.590958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:124992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.651 [2024-12-06 01:10:59.590995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.651 [2024-12-06 01:10:59.591036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:125000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.651 [2024-12-06 01:10:59.591073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.651 [2024-12-06 01:10:59.591123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:125008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.651 [2024-12-06 01:10:59.591185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.651 [2024-12-06 01:10:59.591226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:125016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.651 [2024-12-06 01:10:59.591263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.651 [2024-12-06 01:10:59.591305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:125024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.651 [2024-12-06 01:10:59.591340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.651 [2024-12-06 01:10:59.591382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:125032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.651 [2024-12-06 01:10:59.591419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.651 [2024-12-06 01:10:59.591459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:125040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.651 [2024-12-06 01:10:59.591494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.651 [2024-12-06 01:10:59.591536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:125048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.651 [2024-12-06 01:10:59.591571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:33:33.651 [2024-12-06 01:10:59.591611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:125056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.651 [2024-12-06 01:10:59.591653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.651 [2024-12-06 01:10:59.591702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:125064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.651 [2024-12-06 01:10:59.591739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.651 [2024-12-06 01:10:59.591781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:125072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.651 [2024-12-06 01:10:59.591843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.651 [2024-12-06 01:10:59.591887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:125080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.651 [2024-12-06 01:10:59.591924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.651 [2024-12-06 01:10:59.591967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:125088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.651 [2024-12-06 01:10:59.592003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.651 [2024-12-06 01:10:59.592060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:125096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.651 [2024-12-06 01:10:59.592097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.651 [2024-12-06 01:10:59.592162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:125104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.651 [2024-12-06 01:10:59.592199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.651 [2024-12-06 01:10:59.592239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:125112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.651 [2024-12-06 01:10:59.592276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.651 [2024-12-06 01:10:59.592317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:125120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.651 [2024-12-06 01:10:59.592353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.651 [2024-12-06 01:10:59.592443] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:33.651 [2024-12-06 01:10:59.592489] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:33.651 [2024-12-06 01:10:59.592521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:124616 len:8 
PRP1 0x0 PRP2 0x0 00:33:33.651 [2024-12-06 01:10:59.592556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.651 [2024-12-06 01:10:59.592991] bdev_nvme.c:2056:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:33:33.651 [2024-12-06 01:10:59.593074] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:33:33.651 [2024-12-06 01:10:59.593115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.651 [2024-12-06 01:10:59.593165] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:33:33.651 [2024-12-06 01:10:59.593212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.651 [2024-12-06 01:10:59.593261] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:33:33.651 [2024-12-06 01:10:59.593303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.651 [2024-12-06 01:10:59.593343] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:33:33.651 [2024-12-06 01:10:59.593378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.651 [2024-12-06 01:10:59.593427] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:33:33.651 [2024-12-06 01:10:59.593539] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2000 (9): Bad file descriptor 00:33:33.651 [2024-12-06 01:10:59.599047] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:33:33.651 [2024-12-06 01:10:59.762455] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 6] Resetting controller successful. 
00:33:33.651 5966.10 IOPS, 23.31 MiB/s [2024-12-06T00:11:06.360Z] 5993.45 IOPS, 23.41 MiB/s [2024-12-06T00:11:06.360Z] 6031.00 IOPS, 23.56 MiB/s [2024-12-06T00:11:06.360Z] 6052.23 IOPS, 23.64 MiB/s [2024-12-06T00:11:06.360Z] 6066.14 IOPS, 23.70 MiB/s [2024-12-06T00:11:06.360Z] 6086.73 IOPS, 23.78 MiB/s 00:33:33.651 Latency(us) 00:33:33.651 [2024-12-06T00:11:06.360Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:33.651 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:33:33.651 Verification LBA range: start 0x0 length 0x4000 00:33:33.651 NVMe0n1 : 15.01 6082.03 23.76 818.74 0.00 18512.41 1122.61 47962.64 00:33:33.651 [2024-12-06T00:11:06.360Z] =================================================================================================================== 00:33:33.651 [2024-12-06T00:11:06.360Z] Total : 6082.03 23.76 818.74 0.00 18512.41 1122.61 47962.64 00:33:33.651 Received shutdown signal, test time was about 15.000000 seconds 00:33:33.651 00:33:33.651 Latency(us) 00:33:33.651 [2024-12-06T00:11:06.360Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:33.651 [2024-12-06T00:11:06.360Z] =================================================================================================================== 00:33:33.651 [2024-12-06T00:11:06.360Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:33.651 01:11:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:33:33.651 01:11:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3 00:33:33.651 01:11:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:33:33.651 01:11:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=3097237 00:33:33.651 01:11:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:33:33.651 01:11:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 3097237 /var/tmp/bdevperf.sock 00:33:33.651 01:11:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 3097237 ']' 00:33:33.651 01:11:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:33:33.651 01:11:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:33.651 01:11:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:33:33.651 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:33:33.651 01:11:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:33.651 01:11:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:33:34.584 01:11:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:34.584 01:11:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:33:34.584 01:11:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:33:34.584 [2024-12-06 01:11:07.248519] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:33:34.584 01:11:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:33:34.841 [2024-12-06 01:11:07.525443] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:33:34.841 01:11:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:33:35.407 NVMe0n1 00:33:35.407 01:11:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:33:35.977 00:33:35.977 01:11:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:33:36.235 00:33:36.235 01:11:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:33:36.235 01:11:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:33:36.492 01:11:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:33:36.750 01:11:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:33:40.027 01:11:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:33:40.027 01:11:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:33:40.027 01:11:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=3098028 00:33:40.027 01:11:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:33:40.027 01:11:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 3098028 00:33:41.398 { 00:33:41.398 "results": [ 00:33:41.398 { 00:33:41.398 "job": "NVMe0n1", 00:33:41.398 "core_mask": "0x1", 
00:33:41.398 "workload": "verify", 00:33:41.398 "status": "finished", 00:33:41.398 "verify_range": { 00:33:41.398 "start": 0, 00:33:41.398 "length": 16384 00:33:41.398 }, 00:33:41.398 "queue_depth": 128, 00:33:41.398 "io_size": 4096, 00:33:41.398 "runtime": 1.005416, 00:33:41.398 "iops": 6124.827931920718, 00:33:41.398 "mibps": 23.925109109065303, 00:33:41.398 "io_failed": 0, 00:33:41.398 "io_timeout": 0, 00:33:41.398 "avg_latency_us": 20804.722787821924, 00:33:41.398 "min_latency_us": 2730.6666666666665, 00:33:41.398 "max_latency_us": 18350.08 00:33:41.398 } 00:33:41.398 ], 00:33:41.398 "core_count": 1 00:33:41.398 } 00:33:41.398 01:11:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:33:41.398 [2024-12-06 01:11:06.052667] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 24.03.0 initialization... 00:33:41.398 [2024-12-06 01:11:06.052806] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3097237 ] 00:33:41.398 [2024-12-06 01:11:06.190519] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:41.398 [2024-12-06 01:11:06.317126] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:41.398 [2024-12-06 01:11:09.372226] bdev_nvme.c:2056:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:33:41.398 [2024-12-06 01:11:09.372366] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:33:41.398 [2024-12-06 01:11:09.372410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:41.398 [2024-12-06 01:11:09.372454] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:33:41.398 [2024-12-06 01:11:09.372489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:41.398 [2024-12-06 01:11:09.372528] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:33:41.398 [2024-12-06 01:11:09.372566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:41.398 [2024-12-06 01:11:09.372603] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:33:41.398 [2024-12-06 01:11:09.372639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:41.398 [2024-12-06 01:11:09.372675] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 7] in failed state. 
00:33:41.398 [2024-12-06 01:11:09.372810] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] resetting controller 00:33:41.398 [2024-12-06 01:11:09.372891] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2000 (9): Bad file descriptor 00:33:41.398 [2024-12-06 01:11:09.422580] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 10] Resetting controller successful. 00:33:41.398 Running I/O for 1 seconds... 00:33:41.398 6030.00 IOPS, 23.55 MiB/s 00:33:41.398 Latency(us) 00:33:41.398 [2024-12-06T00:11:14.107Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:41.398 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:33:41.398 Verification LBA range: start 0x0 length 0x4000 00:33:41.398 NVMe0n1 : 1.01 6124.83 23.93 0.00 0.00 20804.72 2730.67 18350.08 00:33:41.398 [2024-12-06T00:11:14.107Z] =================================================================================================================== 00:33:41.398 [2024-12-06T00:11:14.107Z] Total : 6124.83 23.93 0.00 0.00 20804.72 2730.67 18350.08 00:33:41.398 01:11:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:33:41.398 01:11:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:33:41.398 01:11:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:33:41.964 01:11:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:33:41.964 01:11:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:33:41.964 01:11:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:33:42.222 01:11:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:33:45.500 01:11:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:33:45.500 01:11:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:33:45.500 01:11:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 3097237 00:33:45.500 01:11:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 3097237 ']' 00:33:45.500 01:11:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 3097237 00:33:45.500 01:11:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:33:45.759 01:11:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:45.759 01:11:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3097237 00:33:45.759 01:11:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:45.759 01:11:18 nvmf_tcp.nvmf_host.nvmf_failover -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:45.759 01:11:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3097237' 00:33:45.759 killing process with pid 3097237 00:33:45.759 01:11:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 3097237 00:33:45.759 01:11:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 3097237 00:33:46.695 01:11:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:33:46.695 01:11:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:46.695 01:11:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:33:46.695 01:11:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:33:46.695 01:11:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:33:46.695 01:11:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:46.695 01:11:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync 00:33:46.695 01:11:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:46.695 01:11:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e 00:33:46.695 01:11:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:46.695 01:11:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:46.695 rmmod nvme_tcp 00:33:46.695 rmmod nvme_fabrics 00:33:46.695 rmmod nvme_keyring 00:33:46.695 01:11:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:46.695 01:11:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e 00:33:46.695 01:11:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0 00:33:46.695 01:11:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@517 -- # '[' -n 3094719 ']' 00:33:46.695 01:11:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@518 -- # killprocess 3094719 00:33:46.695 01:11:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 3094719 ']' 00:33:46.695 01:11:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 3094719 00:33:46.695 01:11:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:33:46.695 01:11:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:46.695 01:11:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3094719 00:33:46.954 01:11:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:33:46.954 01:11:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:33:46.954 01:11:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3094719' 00:33:46.954 killing process with pid 3094719 00:33:46.954 01:11:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 3094719 00:33:46.954 01:11:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 3094719 00:33:48.342 01:11:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@520 -- # '[' '' == iso 
']' 00:33:48.342 01:11:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:48.342 01:11:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:48.342 01:11:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr 00:33:48.342 01:11:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-save 00:33:48.342 01:11:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:48.342 01:11:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-restore 00:33:48.342 01:11:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:48.342 01:11:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:48.342 01:11:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:48.342 01:11:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:48.342 01:11:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:50.247 01:11:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:50.247 00:33:50.247 real 0m40.353s 00:33:50.247 user 2m22.149s 00:33:50.247 sys 0m6.036s 00:33:50.247 01:11:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:50.247 01:11:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:33:50.247 ************************************ 00:33:50.247 END TEST nvmf_failover 00:33:50.247 ************************************ 00:33:50.247 01:11:22 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:33:50.247 01:11:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:33:50.247 01:11:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:50.247 01:11:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:33:50.247 ************************************ 00:33:50.247 START TEST nvmf_host_discovery 00:33:50.247 ************************************ 00:33:50.247 01:11:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:33:50.247 * Looking for test storage... 
00:33:50.247 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:33:50.247 01:11:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:33:50.247 01:11:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # lcov --version 00:33:50.247 01:11:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:33:50.247 01:11:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:33:50.247 01:11:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:50.247 01:11:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:50.247 01:11:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:50.247 01:11:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:33:50.247 01:11:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:33:50.247 01:11:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:33:50.247 01:11:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:33:50.247 01:11:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:33:50.247 01:11:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:33:50.247 01:11:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:33:50.247 01:11:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:50.247 01:11:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:33:50.247 01:11:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:33:50.248 01:11:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:50.248 01:11:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:50.248 01:11:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:33:50.248 01:11:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:33:50.248 01:11:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:50.248 01:11:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:33:50.248 01:11:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:33:50.248 01:11:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:33:50.248 01:11:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:33:50.248 01:11:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:50.248 01:11:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:33:50.248 01:11:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:33:50.248 01:11:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:50.248 01:11:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:50.248 01:11:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:33:50.248 01:11:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:50.248 01:11:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:33:50.248 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:50.248 --rc genhtml_branch_coverage=1 00:33:50.248 --rc genhtml_function_coverage=1 00:33:50.248 --rc genhtml_legend=1 00:33:50.248 --rc geninfo_all_blocks=1 00:33:50.248 --rc geninfo_unexecuted_blocks=1 00:33:50.248 00:33:50.248 ' 00:33:50.248 01:11:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:33:50.248 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:50.248 --rc genhtml_branch_coverage=1 00:33:50.248 --rc genhtml_function_coverage=1 00:33:50.248 --rc genhtml_legend=1 00:33:50.248 --rc geninfo_all_blocks=1 00:33:50.248 --rc geninfo_unexecuted_blocks=1 00:33:50.248 00:33:50.248 ' 00:33:50.248 01:11:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:33:50.248 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:50.248 --rc genhtml_branch_coverage=1 00:33:50.248 --rc genhtml_function_coverage=1 00:33:50.248 --rc genhtml_legend=1 00:33:50.248 --rc geninfo_all_blocks=1 00:33:50.248 --rc geninfo_unexecuted_blocks=1 00:33:50.248 00:33:50.248 ' 00:33:50.248 01:11:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:33:50.248 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:50.248 --rc genhtml_branch_coverage=1 00:33:50.248 --rc genhtml_function_coverage=1 00:33:50.248 --rc genhtml_legend=1 00:33:50.248 --rc geninfo_all_blocks=1 00:33:50.248 --rc geninfo_unexecuted_blocks=1 00:33:50.248 00:33:50.248 ' 00:33:50.248 01:11:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:50.248 01:11:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:33:50.248 01:11:22 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:50.248 01:11:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:50.248 01:11:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:50.248 01:11:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:50.248 01:11:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:50.248 01:11:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:50.248 01:11:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:50.248 01:11:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:50.248 01:11:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:50.248 01:11:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:50.248 01:11:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:33:50.248 01:11:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:33:50.248 01:11:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:50.248 01:11:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:50.248 01:11:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:50.248 01:11:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:50.248 01:11:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:50.248 01:11:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:33:50.248 01:11:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:50.248 01:11:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:50.248 01:11:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:50.248 01:11:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:50.248 01:11:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:50.248 01:11:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:50.248 01:11:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:33:50.248 01:11:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:50.248 01:11:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:33:50.248 01:11:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:50.248 01:11:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:50.248 01:11:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:50.248 01:11:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:50.248 01:11:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:50.248 01:11:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:33:50.248 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:33:50.248 01:11:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:50.248 01:11:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:50.249 01:11:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:50.249 01:11:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:33:50.249 01:11:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:33:50.249 01:11:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:33:50.249 01:11:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:33:50.249 01:11:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:33:50.249 01:11:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:33:50.249 01:11:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:33:50.249 01:11:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:50.249 01:11:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:50.249 01:11:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:50.249 01:11:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:50.249 01:11:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:50.249 01:11:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:50.249 01:11:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:50.249 01:11:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:50.249 01:11:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:50.249 01:11:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:50.249 01:11:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:33:50.249 01:11:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:52.152 01:11:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:52.152 01:11:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:33:52.152 01:11:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:52.152 01:11:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:52.152 01:11:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:52.152 01:11:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:52.152 01:11:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:52.152 01:11:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:33:52.152 01:11:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:52.152 01:11:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # e810=() 00:33:52.152 01:11:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:33:52.152 01:11:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # x722=() 00:33:52.152 01:11:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:33:52.152 01:11:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # mlx=() 00:33:52.152 01:11:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:33:52.152 01:11:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@325 -- # 
e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:52.152 01:11:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:52.152 01:11:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:52.152 01:11:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:52.152 01:11:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:52.152 01:11:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:52.152 01:11:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:52.152 01:11:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:52.152 01:11:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:52.152 01:11:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:52.152 01:11:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:52.152 01:11:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:52.152 01:11:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:52.152 01:11:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:52.152 01:11:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:52.152 01:11:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:52.152 01:11:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:52.152 01:11:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:52.152 01:11:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:52.152 01:11:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:33:52.152 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:33:52.152 01:11:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:52.152 01:11:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:52.152 01:11:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:52.152 01:11:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:52.152 01:11:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:52.152 01:11:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:52.152 01:11:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:33:52.152 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:33:52.152 01:11:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:52.152 01:11:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:52.152 01:11:24 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:52.152 01:11:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:52.152 01:11:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:52.152 01:11:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:52.153 01:11:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:52.153 01:11:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:52.153 01:11:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:52.153 01:11:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:52.153 01:11:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:52.153 01:11:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:52.153 01:11:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:52.153 01:11:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:52.153 01:11:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:52.411 01:11:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:33:52.411 Found net devices under 0000:0a:00.0: cvl_0_0 00:33:52.411 01:11:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:52.411 01:11:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:52.411 01:11:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:52.411 01:11:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:52.411 01:11:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:52.411 01:11:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:52.411 01:11:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:52.411 01:11:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:52.411 01:11:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:33:52.411 Found net devices under 0000:0a:00.1: cvl_0_1 00:33:52.411 01:11:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:52.411 01:11:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:52.411 01:11:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:33:52.411 01:11:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:52.411 01:11:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:33:52.411 01:11:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:52.411 01:11:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:52.411 
01:11:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:52.411 01:11:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:52.411 01:11:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:52.411 01:11:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:52.411 01:11:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:52.411 01:11:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:52.411 01:11:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:52.411 01:11:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:52.411 01:11:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:52.411 01:11:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:52.411 01:11:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:52.411 01:11:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:52.411 01:11:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:52.411 01:11:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:52.411 01:11:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:52.411 01:11:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:52.411 01:11:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:52.411 01:11:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:52.411 01:11:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:52.411 01:11:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:52.411 01:11:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:52.411 01:11:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:52.411 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:52.411 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.287 ms 00:33:52.411 00:33:52.411 --- 10.0.0.2 ping statistics --- 00:33:52.411 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:52.411 rtt min/avg/max/mdev = 0.287/0.287/0.287/0.000 ms 00:33:52.411 01:11:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:52.411 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:52.411 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.139 ms 00:33:52.411 00:33:52.411 --- 10.0.0.1 ping statistics --- 00:33:52.411 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:52.411 rtt min/avg/max/mdev = 0.139/0.139/0.139/0.000 ms 00:33:52.411 01:11:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:52.411 01:11:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@450 -- # return 0 00:33:52.411 01:11:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:33:52.411 01:11:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:52.411 01:11:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:52.411 01:11:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:52.411 01:11:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:52.411 01:11:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:52.411 01:11:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:52.411 01:11:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:33:52.411 01:11:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:33:52.411 01:11:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:52.412 01:11:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:52.412 01:11:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@509 -- # nvmfpid=3100889 00:33:52.412 01:11:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:33:52.412 01:11:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@510 -- # waitforlisten 3100889 00:33:52.412 01:11:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 3100889 ']' 00:33:52.412 01:11:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:52.412 01:11:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:52.412 01:11:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:52.412 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:52.412 01:11:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:52.412 01:11:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:52.412 [2024-12-06 01:11:25.101489] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 24.03.0 initialization... 
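The nvmftestinit trace above reduces to a short network bring-up: the two E810 ports are found as cvl_0_0 and cvl_0_1, the target port is moved into its own network namespace, addresses are assigned, TCP port 4420 is opened in iptables, connectivity is checked with ping in both directions, and nvme-tcp is loaded. A minimal standalone sketch of the same steps follows, using the interface names and 10.0.0.x addresses from this particular run (they will differ on other hosts), with error handling omitted:

    TARGET_IF=cvl_0_0              # 0000:0a:00.0, moved into the target namespace
    INITIATOR_IF=cvl_0_1           # 0000:0a:00.1, stays in the root namespace
    NS=cvl_0_0_ns_spdk

    ip netns add "$NS"
    ip link set "$TARGET_IF" netns "$NS"
    ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
    ip link set "$INITIATOR_IF" up
    ip netns exec "$NS" ip link set "$TARGET_IF" up
    ip netns exec "$NS" ip link set lo up
    iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                        # initiator -> target namespace
    ip netns exec "$NS" ping -c 1 10.0.0.1    # target namespace -> initiator
    modprobe nvme-tcp

With connectivity verified, nvmf_tgt is started inside the namespace (ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt -i 0 -e 0xFFFF -m 0x2), which produces the startup banner above and the DPDK EAL parameter line that follows.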
00:33:52.412 [2024-12-06 01:11:25.101626] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:52.670 [2024-12-06 01:11:25.243658] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:52.670 [2024-12-06 01:11:25.360946] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:52.670 [2024-12-06 01:11:25.361043] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:52.670 [2024-12-06 01:11:25.361065] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:52.670 [2024-12-06 01:11:25.361086] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:52.670 [2024-12-06 01:11:25.361103] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:52.670 [2024-12-06 01:11:25.362531] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:53.605 01:11:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:53.605 01:11:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:33:53.605 01:11:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:33:53.605 01:11:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:53.605 01:11:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:53.605 01:11:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:53.605 01:11:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:33:53.605 01:11:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:53.605 01:11:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:53.605 [2024-12-06 01:11:26.124630] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:53.605 01:11:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:53.605 01:11:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:33:53.605 01:11:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:53.605 01:11:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:53.605 [2024-12-06 01:11:26.132831] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:33:53.605 01:11:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:53.605 01:11:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:33:53.605 01:11:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:53.605 01:11:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:53.605 null0 00:33:53.605 01:11:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:53.605 01:11:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:33:53.605 01:11:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:53.605 01:11:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:53.605 null1 00:33:53.605 01:11:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:53.605 01:11:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:33:53.605 01:11:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:53.605 01:11:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:53.605 01:11:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:53.605 01:11:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=3101039 00:33:53.605 01:11:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:33:53.605 01:11:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 3101039 /tmp/host.sock 00:33:53.605 01:11:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 3101039 ']' 00:33:53.605 01:11:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:33:53.605 01:11:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:53.605 01:11:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:33:53.605 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:33:53.605 01:11:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:53.605 01:11:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:53.605 [2024-12-06 01:11:26.251193] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 24.03.0 initialization... 
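With the target up, the trace switches to RPC-driven configuration. The rpc_cmd calls logged above amount to the following, written here as plain scripts/rpc.py invocations for readability; the test drives the same RPC methods through its rpc_cmd helper against the default /var/tmp/spdk.sock:

    rpc.py nvmf_create_transport -t tcp -o -u 8192    # TCP transport, options exactly as passed by the test
    rpc.py nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009
    rpc.py bdev_null_create null0 1000 512            # two null bdevs, exported as namespaces later
    rpc.py bdev_null_create null1 1000 512
    rpc.py bdev_wait_for_examine

A second nvmf_tgt is then launched with core mask 0x1 and RPC socket /tmp/host.sock; it plays the host role for the discovery test, and the startup banner above plus the DPDK EAL parameter line that follows belong to that second process.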
00:33:53.605 [2024-12-06 01:11:26.251352] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3101039 ] 00:33:53.864 [2024-12-06 01:11:26.396336] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:53.864 [2024-12-06 01:11:26.532301] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:54.799 01:11:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:54.799 01:11:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:33:54.799 01:11:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:33:54.799 01:11:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:33:54.799 01:11:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:54.799 01:11:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:54.799 01:11:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:54.799 01:11:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:33:54.799 01:11:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:54.799 01:11:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:54.799 01:11:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:54.799 01:11:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:33:54.799 01:11:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:33:54.799 01:11:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:33:54.799 01:11:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:54.799 01:11:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:54.799 01:11:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:33:54.799 01:11:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:33:54.799 01:11:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:33:54.799 01:11:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:54.799 01:11:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:33:54.799 01:11:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:33:54.799 01:11:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:54.799 01:11:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:54.799 01:11:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:33:54.799 01:11:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:33:54.799 01:11:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:33:54.799 01:11:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:33:54.799 01:11:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:54.799 01:11:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:33:54.799 01:11:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:33:54.799 01:11:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:54.799 01:11:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:54.799 01:11:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:54.799 01:11:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:33:54.799 01:11:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:33:54.799 01:11:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:54.799 01:11:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:54.800 01:11:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:33:54.800 01:11:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:33:54.800 01:11:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:33:54.800 01:11:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:54.800 01:11:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:33:54.800 01:11:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:33:54.800 01:11:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:54.800 01:11:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:54.800 01:11:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:33:54.800 01:11:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:54.800 01:11:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:33:54.800 01:11:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:33:54.800 01:11:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:54.800 01:11:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:33:54.800 01:11:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:33:54.800 01:11:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:54.800 01:11:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:54.800 01:11:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:54.800 01:11:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:33:54.800 01:11:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:33:54.800 01:11:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:54.800 01:11:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:33:54.800 01:11:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:54.800 01:11:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:33:54.800 01:11:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:33:54.800 01:11:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:54.800 01:11:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:33:54.800 01:11:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:33:54.800 01:11:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:54.800 01:11:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:54.800 01:11:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:33:54.800 01:11:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:54.800 01:11:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:33:54.800 01:11:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:33:54.800 01:11:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:55.058 01:11:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:33:55.059 01:11:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:33:55.059 01:11:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:55.059 01:11:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:55.059 [2024-12-06 01:11:27.529009] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:55.059 01:11:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:55.059 01:11:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:33:55.059 01:11:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:33:55.059 01:11:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:33:55.059 01:11:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:55.059 01:11:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:55.059 01:11:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:33:55.059 01:11:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:33:55.059 01:11:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:55.059 01:11:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:33:55.059 01:11:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:33:55.059 01:11:27 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:55.059 01:11:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:55.059 01:11:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:33:55.059 01:11:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:55.059 01:11:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:33:55.059 01:11:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:33:55.059 01:11:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:55.059 01:11:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:33:55.059 01:11:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:33:55.059 01:11:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:33:55.059 01:11:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:33:55.059 01:11:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:33:55.059 01:11:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:33:55.059 01:11:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:33:55.059 01:11:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:33:55.059 01:11:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:33:55.059 01:11:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:33:55.059 01:11:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:55.059 01:11:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:33:55.059 01:11:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:55.059 01:11:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:55.059 01:11:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:33:55.059 01:11:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:33:55.059 01:11:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:33:55.059 01:11:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:33:55.059 01:11:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:33:55.059 01:11:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:55.059 01:11:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:55.059 01:11:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:55.059 01:11:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:33:55.059 01:11:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:33:55.059 01:11:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:33:55.059 01:11:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:33:55.059 01:11:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:33:55.059 01:11:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:33:55.059 01:11:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:33:55.059 01:11:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:55.059 01:11:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:33:55.059 01:11:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:55.059 01:11:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:33:55.059 01:11:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:33:55.059 01:11:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:55.059 01:11:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == \n\v\m\e\0 ]] 00:33:55.059 01:11:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:33:55.625 [2024-12-06 01:11:28.275947] bdev_nvme.c:7511:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:33:55.625 [2024-12-06 01:11:28.275985] bdev_nvme.c:7597:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:33:55.625 [2024-12-06 01:11:28.276022] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:33:55.883 [2024-12-06 01:11:28.402505] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM 
nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:33:55.883 [2024-12-06 01:11:28.503774] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:33:55.883 [2024-12-06 01:11:28.505475] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x6150001f2a00:1 started. 00:33:55.883 [2024-12-06 01:11:28.508037] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:33:55.883 [2024-12-06 01:11:28.508068] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:33:55.883 [2024-12-06 01:11:28.514778] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x6150001f2a00 was disconnected and freed. delete nvme_qpair. 00:33:56.142 01:11:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:33:56.142 01:11:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:33:56.142 01:11:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:33:56.142 01:11:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:33:56.142 01:11:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:33:56.142 01:11:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:56.142 01:11:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:56.142 01:11:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:33:56.142 01:11:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:33:56.142 01:11:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:56.142 01:11:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:56.142 01:11:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:33:56.142 01:11:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:33:56.142 01:11:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:33:56.142 01:11:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:33:56.142 01:11:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:33:56.142 01:11:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:33:56.142 01:11:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:33:56.142 01:11:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:56.142 01:11:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:56.142 01:11:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:33:56.142 01:11:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:56.142 
01:11:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:33:56.142 01:11:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:33:56.142 01:11:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:56.142 01:11:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:33:56.142 01:11:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:33:56.142 01:11:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:33:56.142 01:11:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:33:56.142 01:11:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:33:56.142 01:11:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:33:56.142 01:11:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:33:56.142 01:11:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:33:56.142 01:11:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:33:56.142 01:11:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:56.142 01:11:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:33:56.142 01:11:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:56.142 01:11:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:33:56.142 01:11:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:33:56.142 01:11:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:56.142 01:11:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0 ]] 00:33:56.142 01:11:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:33:56.142 01:11:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:33:56.142 01:11:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:33:56.142 01:11:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:33:56.142 01:11:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:33:56.142 01:11:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:33:56.142 01:11:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:33:56.142 01:11:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:33:56.142 01:11:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:33:56.142 01:11:28 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:33:56.142 01:11:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:33:56.142 01:11:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:56.142 01:11:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:56.142 01:11:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:56.401 01:11:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:33:56.401 01:11:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:33:56.401 01:11:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:33:56.401 01:11:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:33:56.401 01:11:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:33:56.401 01:11:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:56.401 01:11:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:56.401 01:11:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:56.401 01:11:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:33:56.401 01:11:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:33:56.401 01:11:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:33:56.401 01:11:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:33:56.401 01:11:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:33:56.401 01:11:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:33:56.401 01:11:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:56.401 01:11:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:33:56.401 01:11:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:56.401 01:11:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:56.401 01:11:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:33:56.401 01:11:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:33:56.402 [2024-12-06 01:11:28.969208] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x6150001f2c80:1 started. 00:33:56.402 [2024-12-06 01:11:28.975893] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x6150001f2c80 was disconnected and freed. delete nvme_qpair. 
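This stretch is the core of the discovery test: the host app opens a discovery connection to 10.0.0.2:8009 and then only watches, while each change made on the target (subsystem created, namespace attached, listener added, host NQN allowed, second namespace attached) propagates through the discovery log page and AER and shows up on the host as a controller and its bdevs. Condensed into the underlying RPCs, again as illustrative rpc.py calls against the two sockets used in this run:

    # host app (/tmp/host.sock): follow the target's discovery service
    rpc.py -s /tmp/host.sock log_set_flag bdev_nvme
    rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test

    # target (/var/tmp/spdk.sock): nothing is reported until the subsystem is reachable by that host
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test
    # -> discovery attaches controller nvme0; bdev nvme0n1 appears on the host
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1
    # -> the new namespace appears as nvme0n2 on the already-attached controller

    # what the get_subsystem_names / get_bdev_list helpers poll on the host side
    rpc.py -s /tmp/host.sock bdev_nvme_get_controllers
    rpc.py -s /tmp/host.sock bdev_get_bdevs

The trace then continues by adding a second listener on port 4421 and waiting for the extra path (4420 4421) to show up on the attached controller.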
00:33:56.402 01:11:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:56.402 01:11:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:33:56.402 01:11:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:33:56.402 01:11:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:33:56.402 01:11:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:33:56.402 01:11:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:33:56.402 01:11:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:33:56.402 01:11:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:33:56.402 01:11:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:33:56.402 01:11:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:33:56.402 01:11:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:33:56.402 01:11:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:33:56.402 01:11:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:33:56.402 01:11:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:56.402 01:11:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:56.402 01:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:56.402 01:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:33:56.402 01:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:33:56.402 01:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:33:56.402 01:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:33:56.402 01:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:33:56.402 01:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:56.402 01:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:56.402 [2024-12-06 01:11:29.034995] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:33:56.402 [2024-12-06 01:11:29.035379] bdev_nvme.c:7493:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:33:56.402 [2024-12-06 01:11:29.035469] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:33:56.402 01:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:56.402 01:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ 
"$(get_subsystem_names)" == "nvme0" ]]' 00:33:56.402 01:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:33:56.402 01:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:33:56.402 01:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:33:56.402 01:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:33:56.402 01:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:33:56.402 01:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:33:56.402 01:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:56.402 01:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:33:56.402 01:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:56.402 01:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:33:56.402 01:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:33:56.402 01:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:56.402 01:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:56.402 01:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:33:56.402 01:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:33:56.402 01:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:33:56.402 01:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:33:56.402 01:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:33:56.402 01:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:33:56.402 01:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:33:56.402 01:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:56.402 01:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:33:56.402 01:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:56.402 01:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:33:56.402 01:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:56.402 01:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:33:56.402 01:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:56.661 01:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:33:56.661 01:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:33:56.661 01:11:29 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:33:56.661 01:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:33:56.661 01:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:33:56.661 01:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:33:56.661 01:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:33:56.661 01:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:33:56.661 01:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:33:56.661 01:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:33:56.661 01:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:56.661 01:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:56.661 01:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:33:56.661 01:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:33:56.661 01:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:56.661 01:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:33:56.661 01:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:33:56.661 [2024-12-06 01:11:29.161424] bdev_nvme.c:7435:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:33:56.919 [2024-12-06 01:11:29.423472] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4421 00:33:56.919 [2024-12-06 01:11:29.423623] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:33:56.919 [2024-12-06 01:11:29.423655] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:33:56.919 [2024-12-06 01:11:29.423671] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:33:57.486 01:11:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:33:57.486 01:11:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:33:57.487 01:11:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:33:57.487 01:11:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:33:57.487 01:11:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:33:57.487 01:11:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:33:57.487 01:11:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:57.487 01:11:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:33:57.487 01:11:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:33:57.487 01:11:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:57.746 01:11:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:33:57.746 01:11:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:33:57.746 01:11:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:33:57.746 01:11:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:33:57.746 01:11:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:33:57.746 01:11:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:33:57.746 01:11:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:33:57.746 01:11:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:33:57.746 01:11:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:33:57.746 01:11:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:33:57.746 01:11:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:33:57.746 01:11:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:33:57.746 01:11:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:57.746 01:11:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:57.747 01:11:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:57.747 01:11:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:33:57.747 01:11:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:33:57.747 01:11:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:33:57.747 01:11:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:33:57.747 01:11:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:33:57.747 01:11:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:57.747 01:11:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:57.747 [2024-12-06 01:11:30.248796] bdev_nvme.c:7493:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:33:57.747 [2024-12-06 01:11:30.248895] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:33:57.747 01:11:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:57.747 01:11:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:33:57.747 01:11:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:33:57.747 01:11:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:33:57.747 01:11:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:33:57.747 01:11:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:33:57.747 01:11:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:33:57.747 01:11:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:33:57.747 01:11:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:57.747 01:11:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:57.747 01:11:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:33:57.747 01:11:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:33:57.747 [2024-12-06 01:11:30.255477] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:33:57.747 [2024-12-06 01:11:30.255555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:57.747 01:11:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:33:57.747 [2024-12-06 01:11:30.255625] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 
cdw11:00000000 00:33:57.747 [2024-12-06 01:11:30.255666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:57.747 [2024-12-06 01:11:30.255717] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:33:57.747 [2024-12-06 01:11:30.255764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:57.747 [2024-12-06 01:11:30.255826] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:33:57.747 [2024-12-06 01:11:30.255881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:57.747 [2024-12-06 01:11:30.255916] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:33:57.747 [2024-12-06 01:11:30.265471] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:33:57.747 01:11:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:57.747 [2024-12-06 01:11:30.275518] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:33:57.747 [2024-12-06 01:11:30.275565] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:33:57.747 [2024-12-06 01:11:30.275587] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:33:57.747 [2024-12-06 01:11:30.275609] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:33:57.747 [2024-12-06 01:11:30.275683] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:33:57.747 [2024-12-06 01:11:30.275970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.747 [2024-12-06 01:11:30.276016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:33:57.747 [2024-12-06 01:11:30.276056] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:33:57.747 [2024-12-06 01:11:30.276125] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:33:57.747 [2024-12-06 01:11:30.276225] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:33:57.747 [2024-12-06 01:11:30.276275] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:33:57.747 [2024-12-06 01:11:30.276316] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:33:57.747 [2024-12-06 01:11:30.276352] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:33:57.747 [2024-12-06 01:11:30.276382] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 
00:33:57.747 [2024-12-06 01:11:30.276410] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:33:57.747 [2024-12-06 01:11:30.285723] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:33:57.747 [2024-12-06 01:11:30.285757] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:33:57.747 [2024-12-06 01:11:30.285781] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:33:57.747 [2024-12-06 01:11:30.285793] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:33:57.747 [2024-12-06 01:11:30.285854] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:33:57.747 [2024-12-06 01:11:30.286004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.747 [2024-12-06 01:11:30.286045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:33:57.747 [2024-12-06 01:11:30.286081] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:33:57.747 [2024-12-06 01:11:30.286163] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:33:57.747 [2024-12-06 01:11:30.286240] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:33:57.747 [2024-12-06 01:11:30.286275] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:33:57.747 [2024-12-06 01:11:30.286306] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:33:57.747 [2024-12-06 01:11:30.286336] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:33:57.747 [2024-12-06 01:11:30.286360] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:33:57.747 [2024-12-06 01:11:30.286382] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
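The errno = 111 (connection refused) failures in this stretch are the expected fallout of the nvmf_subsystem_remove_listener call issued just before them: the host still holds a path to 10.0.0.2:4420, so each reconnect attempt is refused until the discovery poller drops the stale path and only 4421 remains. A minimal sketch of that step outside the test harness, assuming scripts/rpc.py is the backend behind the rpc_cmd wrapper (an assumption; the log only shows the wrapper), would be:

    # Target side: drop the 4420 listener; host reconnects to it now fail with ECONNREFUSED (111).
    scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    # Host side: poll until only the 4421 path is left on the discovered controller.
    scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 | jq -r '.[].ctrlrs[].trid.trsvcid'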
00:33:57.747 01:11:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:57.747 01:11:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:33:57.747 01:11:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:33:57.747 01:11:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:33:57.747 01:11:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:33:57.747 01:11:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:33:57.747 01:11:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:33:57.748 [2024-12-06 01:11:30.295899] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:33:57.748 [2024-12-06 01:11:30.295937] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:33:57.748 [2024-12-06 01:11:30.295955] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:33:57.748 [2024-12-06 01:11:30.295968] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:33:57.748 [2024-12-06 01:11:30.296014] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnect 01:11:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:33:57.748 ing ctrlr. 00:33:57.748 [2024-12-06 01:11:30.296255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.748 [2024-12-06 01:11:30.296307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:33:57.748 [2024-12-06 01:11:30.296359] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:33:57.748 [2024-12-06 01:11:30.296416] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:33:57.748 [2024-12-06 01:11:30.296516] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:33:57.748 [2024-12-06 01:11:30.296559] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:33:57.748 [2024-12-06 01:11:30.296596] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:33:57.748 [2024-12-06 01:11:30.296631] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:33:57.748 [2024-12-06 01:11:30.296659] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:33:57.748 [2024-12-06 01:11:30.296684] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
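The get_subsystem_names and get_bdev_list checks traced here all run through the harness's waitforcondition helper, which simply re-evaluates a condition string up to ten times with a one-second pause between attempts, as the local max=10 / (( max-- )) / eval / sleep 1 lines above show. A stripped-down sketch of that pattern (a simplified stand-in, not the exact autotest_common.sh implementation) is:

    waitforcondition() {
        # Re-check the condition up to 10 times, one second apart.
        local cond=$1 max=10
        while (( max-- )); do
            eval "$cond" && return 0
            sleep 1
        done
        return 1
    }
    # e.g. wait until the host reports both namespaces as bdevs:
    waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'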
00:33:57.748 01:11:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:57.748 01:11:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:33:57.748 01:11:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:57.748 01:11:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:33:57.748 01:11:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:57.748 01:11:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:33:57.748 [2024-12-06 01:11:30.306055] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:33:57.748 [2024-12-06 01:11:30.306093] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:33:57.748 [2024-12-06 01:11:30.306126] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:33:57.748 [2024-12-06 01:11:30.306140] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:33:57.748 [2024-12-06 01:11:30.306191] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:33:57.748 [2024-12-06 01:11:30.306380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.748 [2024-12-06 01:11:30.306426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:33:57.748 [2024-12-06 01:11:30.306469] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:33:57.748 [2024-12-06 01:11:30.306526] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:33:57.748 [2024-12-06 01:11:30.306603] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:33:57.748 [2024-12-06 01:11:30.306643] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:33:57.748 [2024-12-06 01:11:30.306681] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:33:57.748 [2024-12-06 01:11:30.306717] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:33:57.748 [2024-12-06 01:11:30.306745] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:33:57.748 [2024-12-06 01:11:30.306770] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:33:57.748 [2024-12-06 01:11:30.316232] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:33:57.748 [2024-12-06 01:11:30.316271] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:33:57.748 [2024-12-06 01:11:30.316290] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 
00:33:57.748 [2024-12-06 01:11:30.316303] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:33:57.748 [2024-12-06 01:11:30.316353] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:33:57.748 [2024-12-06 01:11:30.316553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.748 [2024-12-06 01:11:30.316599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:33:57.748 [2024-12-06 01:11:30.316641] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:33:57.748 [2024-12-06 01:11:30.316698] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:33:57.748 [2024-12-06 01:11:30.316795] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:33:57.748 [2024-12-06 01:11:30.316879] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:33:57.748 [2024-12-06 01:11:30.316916] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:33:57.748 [2024-12-06 01:11:30.316948] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:33:57.748 [2024-12-06 01:11:30.316974] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:33:57.748 [2024-12-06 01:11:30.316997] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:33:57.748 01:11:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:57.748 [2024-12-06 01:11:30.326395] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:33:57.748 [2024-12-06 01:11:30.326434] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:33:57.748 [2024-12-06 01:11:30.326452] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:33:57.748 [2024-12-06 01:11:30.326465] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:33:57.748 [2024-12-06 01:11:30.326530] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:33:57.748 [2024-12-06 01:11:30.326726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.748 [2024-12-06 01:11:30.326771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:33:57.748 [2024-12-06 01:11:30.326831] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:33:57.748 [2024-12-06 01:11:30.326901] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:33:57.748 [2024-12-06 01:11:30.326970] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:33:57.748 [2024-12-06 01:11:30.327007] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:33:57.748 [2024-12-06 01:11:30.327041] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:33:57.748 [2024-12-06 01:11:30.327075] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:33:57.748 [2024-12-06 01:11:30.327116] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:33:57.748 [2024-12-06 01:11:30.327138] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:33:57.748 01:11:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:33:57.748 01:11:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:33:57.749 [2024-12-06 01:11:30.335300] bdev_nvme.c:7298:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:33:57.749 [2024-12-06 01:11:30.335351] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:33:57.749 01:11:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:33:57.749 01:11:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:33:57.749 01:11:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:33:57.749 01:11:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:33:57.749 01:11:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:33:57.749 01:11:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:33:57.749 01:11:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:33:57.749 01:11:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:57.749 01:11:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:33:57.749 01:11:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:57.749 
01:11:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:33:57.749 01:11:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:33:57.749 01:11:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:57.749 01:11:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4421 == \4\4\2\1 ]] 00:33:57.749 01:11:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:33:57.749 01:11:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:33:57.749 01:11:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:33:57.749 01:11:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:33:57.749 01:11:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:33:57.749 01:11:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:33:57.749 01:11:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:33:57.749 01:11:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:33:57.749 01:11:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:33:57.749 01:11:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:33:57.749 01:11:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:57.749 01:11:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:33:57.749 01:11:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:57.749 01:11:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:57.749 01:11:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:33:57.749 01:11:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:33:57.749 01:11:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:33:57.749 01:11:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:33:57.749 01:11:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:33:57.749 01:11:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:57.749 01:11:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:57.749 01:11:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:57.749 01:11:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:33:57.749 01:11:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:33:57.749 01:11:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:33:57.749 01:11:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:33:57.749 01:11:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:33:57.749 01:11:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:33:57.749 01:11:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:33:57.749 01:11:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:33:57.749 01:11:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:57.749 01:11:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:57.749 01:11:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:33:57.749 01:11:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:33:57.749 01:11:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:58.008 01:11:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:33:58.008 01:11:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:33:58.008 01:11:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:33:58.008 01:11:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:33:58.008 01:11:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:33:58.008 01:11:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:33:58.008 01:11:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- 
# eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:33:58.008 01:11:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:33:58.009 01:11:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:58.009 01:11:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:33:58.009 01:11:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:58.009 01:11:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:58.009 01:11:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:33:58.009 01:11:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:33:58.009 01:11:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:58.009 01:11:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:33:58.009 01:11:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:33:58.009 01:11:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:33:58.009 01:11:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:33:58.009 01:11:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:33:58.009 01:11:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:33:58.009 01:11:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:33:58.009 01:11:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:33:58.009 01:11:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:33:58.009 01:11:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:33:58.009 01:11:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:33:58.009 01:11:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:33:58.009 01:11:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:58.009 01:11:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:58.009 01:11:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:58.009 01:11:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:33:58.009 01:11:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:33:58.009 01:11:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:33:58.009 01:11:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:33:58.009 01:11:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:33:58.009 01:11:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:58.009 01:11:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:58.944 [2024-12-06 01:11:31.617023] bdev_nvme.c:7511:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:33:58.944 [2024-12-06 01:11:31.617098] bdev_nvme.c:7597:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:33:58.944 [2024-12-06 01:11:31.617166] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:33:59.203 [2024-12-06 01:11:31.703418] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:33:59.203 [2024-12-06 01:11:31.807728] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] ctrlr was created to 10.0.0.2:4421 00:33:59.203 [2024-12-06 01:11:31.809259] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] Connecting qpair 0x6150001f3e00:1 started. 00:33:59.203 [2024-12-06 01:11:31.811902] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:33:59.203 [2024-12-06 01:11:31.811964] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:33:59.203 01:11:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:59.203 01:11:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:33:59.203 01:11:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:33:59.203 01:11:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:33:59.203 01:11:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:33:59.203 [2024-12-06 01:11:31.814722] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] qpair 0x6150001f3e00 was disconnected and freed. delete nvme_qpair. 
00:33:59.203 01:11:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:59.203 01:11:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:33:59.203 01:11:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:59.203 01:11:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:33:59.203 01:11:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:59.203 01:11:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:59.203 request: 00:33:59.203 { 00:33:59.203 "name": "nvme", 00:33:59.203 "trtype": "tcp", 00:33:59.203 "traddr": "10.0.0.2", 00:33:59.203 "adrfam": "ipv4", 00:33:59.203 "trsvcid": "8009", 00:33:59.203 "hostnqn": "nqn.2021-12.io.spdk:test", 00:33:59.203 "wait_for_attach": true, 00:33:59.203 "method": "bdev_nvme_start_discovery", 00:33:59.203 "req_id": 1 00:33:59.203 } 00:33:59.203 Got JSON-RPC error response 00:33:59.203 response: 00:33:59.203 { 00:33:59.203 "code": -17, 00:33:59.203 "message": "File exists" 00:33:59.203 } 00:33:59.203 01:11:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:33:59.203 01:11:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:33:59.203 01:11:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:33:59.203 01:11:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:33:59.203 01:11:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:33:59.204 01:11:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:33:59.204 01:11:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:33:59.204 01:11:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:33:59.204 01:11:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:59.204 01:11:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:33:59.204 01:11:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:59.204 01:11:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:33:59.204 01:11:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:59.204 01:11:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:33:59.204 01:11:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:33:59.204 01:11:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:59.204 01:11:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:33:59.204 01:11:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:59.204 01:11:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:59.204 01:11:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # 
sort 00:33:59.204 01:11:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:33:59.204 01:11:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:59.462 01:11:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:33:59.462 01:11:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:33:59.462 01:11:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:33:59.462 01:11:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:33:59.462 01:11:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:33:59.462 01:11:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:59.462 01:11:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:33:59.462 01:11:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:59.462 01:11:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:33:59.462 01:11:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:59.462 01:11:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:59.462 request: 00:33:59.462 { 00:33:59.462 "name": "nvme_second", 00:33:59.462 "trtype": "tcp", 00:33:59.462 "traddr": "10.0.0.2", 00:33:59.463 "adrfam": "ipv4", 00:33:59.463 "trsvcid": "8009", 00:33:59.463 "hostnqn": "nqn.2021-12.io.spdk:test", 00:33:59.463 "wait_for_attach": true, 00:33:59.463 "method": "bdev_nvme_start_discovery", 00:33:59.463 "req_id": 1 00:33:59.463 } 00:33:59.463 Got JSON-RPC error response 00:33:59.463 response: 00:33:59.463 { 00:33:59.463 "code": -17, 00:33:59.463 "message": "File exists" 00:33:59.463 } 00:33:59.463 01:11:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:33:59.463 01:11:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:33:59.463 01:11:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:33:59.463 01:11:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:33:59.463 01:11:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:33:59.463 01:11:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:33:59.463 01:11:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:33:59.463 01:11:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:33:59.463 01:11:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:59.463 01:11:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 
00:33:59.463 01:11:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:59.463 01:11:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:33:59.463 01:11:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:59.463 01:11:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:33:59.463 01:11:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:33:59.463 01:11:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:59.463 01:11:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:33:59.463 01:11:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:59.463 01:11:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:33:59.463 01:11:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:59.463 01:11:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:33:59.463 01:11:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:59.463 01:11:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:33:59.463 01:11:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:33:59.463 01:11:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:33:59.463 01:11:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:33:59.463 01:11:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:33:59.463 01:11:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:59.463 01:11:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:33:59.463 01:11:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:59.463 01:11:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:33:59.463 01:11:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:59.463 01:11:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:00.397 [2024-12-06 01:11:33.027650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.397 [2024-12-06 01:11:33.027745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f4080 with addr=10.0.0.2, port=8010 00:34:00.397 [2024-12-06 01:11:33.027882] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:34:00.397 [2024-12-06 01:11:33.027921] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:34:00.397 [2024-12-06 01:11:33.027977] 
bdev_nvme.c:7579:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:34:01.332 [2024-12-06 01:11:34.030183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.332 [2024-12-06 01:11:34.030284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f4300 with addr=10.0.0.2, port=8010 00:34:01.332 [2024-12-06 01:11:34.030397] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:34:01.332 [2024-12-06 01:11:34.030432] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:34:01.332 [2024-12-06 01:11:34.030475] bdev_nvme.c:7579:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:34:02.775 [2024-12-06 01:11:35.032157] bdev_nvme.c:7554:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:34:02.775 request: 00:34:02.775 { 00:34:02.775 "name": "nvme_second", 00:34:02.775 "trtype": "tcp", 00:34:02.775 "traddr": "10.0.0.2", 00:34:02.775 "adrfam": "ipv4", 00:34:02.775 "trsvcid": "8010", 00:34:02.775 "hostnqn": "nqn.2021-12.io.spdk:test", 00:34:02.775 "wait_for_attach": false, 00:34:02.775 "attach_timeout_ms": 3000, 00:34:02.775 "method": "bdev_nvme_start_discovery", 00:34:02.775 "req_id": 1 00:34:02.775 } 00:34:02.775 Got JSON-RPC error response 00:34:02.775 response: 00:34:02.775 { 00:34:02.775 "code": -110, 00:34:02.775 "message": "Connection timed out" 00:34:02.775 } 00:34:02.775 01:11:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:34:02.775 01:11:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:34:02.775 01:11:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:34:02.775 01:11:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:34:02.775 01:11:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:34:02.775 01:11:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:34:02.775 01:11:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:34:02.775 01:11:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:34:02.775 01:11:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:02.775 01:11:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:02.775 01:11:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:34:02.775 01:11:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:34:02.775 01:11:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:02.775 01:11:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:34:02.775 01:11:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:34:02.775 01:11:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 3101039 00:34:02.775 01:11:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:34:02.775 01:11:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:02.775 01:11:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@121 -- # sync 00:34:02.775 01:11:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:02.775 01:11:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e 00:34:02.775 01:11:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:02.775 01:11:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:02.775 rmmod nvme_tcp 00:34:02.775 rmmod nvme_fabrics 00:34:02.775 rmmod nvme_keyring 00:34:02.775 01:11:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:02.775 01:11:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e 00:34:02.775 01:11:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0 00:34:02.775 01:11:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@517 -- # '[' -n 3100889 ']' 00:34:02.775 01:11:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@518 -- # killprocess 3100889 00:34:02.775 01:11:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # '[' -z 3100889 ']' 00:34:02.775 01:11:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # kill -0 3100889 00:34:02.775 01:11:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # uname 00:34:02.775 01:11:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:02.775 01:11:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3100889 00:34:02.775 01:11:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:34:02.775 01:11:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:34:02.775 01:11:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3100889' 00:34:02.775 killing process with pid 3100889 00:34:02.775 01:11:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@973 -- # kill 3100889 00:34:02.775 01:11:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@978 -- # wait 3100889 00:34:03.707 01:11:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:34:03.707 01:11:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:03.707 01:11:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:03.707 01:11:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # iptr 00:34:03.707 01:11:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-save 00:34:03.707 01:11:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:03.707 01:11:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:34:03.707 01:11:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:03.707 01:11:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:03.707 01:11:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:03.707 01:11:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:03.707 
01:11:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:06.244 01:11:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:06.244 00:34:06.244 real 0m15.635s 00:34:06.244 user 0m23.234s 00:34:06.244 sys 0m3.084s 00:34:06.244 01:11:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:06.244 01:11:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:06.244 ************************************ 00:34:06.244 END TEST nvmf_host_discovery 00:34:06.244 ************************************ 00:34:06.244 01:11:38 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:34:06.244 01:11:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:34:06.244 01:11:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:06.244 01:11:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:34:06.244 ************************************ 00:34:06.244 START TEST nvmf_host_multipath_status 00:34:06.244 ************************************ 00:34:06.244 01:11:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:34:06.244 * Looking for test storage... 00:34:06.244 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:34:06.244 01:11:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:34:06.244 01:11:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # lcov --version 00:34:06.244 01:11:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:34:06.244 01:11:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:34:06.244 01:11:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:06.244 01:11:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:06.244 01:11:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:06.244 01:11:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:34:06.244 01:11:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:34:06.244 01:11:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:34:06.244 01:11:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:34:06.244 01:11:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:34:06.244 01:11:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:34:06.244 01:11:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:34:06.244 01:11:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:06.244 01:11:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:34:06.244 01:11:38 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:34:06.244 01:11:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:06.244 01:11:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:34:06.244 01:11:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:34:06.244 01:11:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:34:06.244 01:11:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:06.245 01:11:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:34:06.245 01:11:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:34:06.245 01:11:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:34:06.245 01:11:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:34:06.245 01:11:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:06.245 01:11:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:34:06.245 01:11:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:34:06.245 01:11:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:06.245 01:11:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:06.245 01:11:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:34:06.245 01:11:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:06.245 01:11:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:34:06.245 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:06.245 --rc genhtml_branch_coverage=1 00:34:06.245 --rc genhtml_function_coverage=1 00:34:06.245 --rc genhtml_legend=1 00:34:06.245 --rc geninfo_all_blocks=1 00:34:06.245 --rc geninfo_unexecuted_blocks=1 00:34:06.245 00:34:06.245 ' 00:34:06.245 01:11:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:34:06.245 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:06.245 --rc genhtml_branch_coverage=1 00:34:06.245 --rc genhtml_function_coverage=1 00:34:06.245 --rc genhtml_legend=1 00:34:06.245 --rc geninfo_all_blocks=1 00:34:06.245 --rc geninfo_unexecuted_blocks=1 00:34:06.245 00:34:06.245 ' 00:34:06.245 01:11:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:34:06.245 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:06.245 --rc genhtml_branch_coverage=1 00:34:06.245 --rc genhtml_function_coverage=1 00:34:06.245 --rc genhtml_legend=1 00:34:06.245 --rc geninfo_all_blocks=1 00:34:06.245 --rc geninfo_unexecuted_blocks=1 00:34:06.245 00:34:06.245 ' 00:34:06.245 01:11:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:34:06.245 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:06.245 --rc genhtml_branch_coverage=1 00:34:06.245 --rc genhtml_function_coverage=1 00:34:06.245 --rc 
genhtml_legend=1 00:34:06.245 --rc geninfo_all_blocks=1 00:34:06.245 --rc geninfo_unexecuted_blocks=1 00:34:06.245 00:34:06.245 ' 00:34:06.245 01:11:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:06.245 01:11:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:34:06.245 01:11:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:06.245 01:11:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:06.245 01:11:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:06.245 01:11:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:06.245 01:11:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:06.245 01:11:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:06.245 01:11:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:06.245 01:11:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:06.245 01:11:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:06.245 01:11:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:06.245 01:11:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:34:06.245 01:11:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:34:06.245 01:11:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:06.245 01:11:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:06.245 01:11:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:06.245 01:11:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:06.245 01:11:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:06.245 01:11:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:34:06.245 01:11:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:06.245 01:11:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:06.245 01:11:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:06.245 01:11:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:06.245 01:11:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:06.245 01:11:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:06.245 01:11:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:34:06.245 01:11:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:06.245 01:11:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:34:06.245 01:11:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:06.245 01:11:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:06.245 01:11:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:06.245 01:11:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:06.245 01:11:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:06.245 01:11:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' 
-eq 1 ']' 00:34:06.245 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:34:06.245 01:11:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:06.245 01:11:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:06.245 01:11:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:06.245 01:11:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:34:06.245 01:11:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:34:06.245 01:11:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:34:06.245 01:11:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:34:06.245 01:11:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:34:06.245 01:11:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:34:06.245 01:11:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:34:06.245 01:11:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:34:06.245 01:11:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:06.245 01:11:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:06.245 01:11:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:06.245 01:11:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:06.245 01:11:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:06.245 01:11:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:06.245 01:11:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:06.245 01:11:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:34:06.245 01:11:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:34:06.245 01:11:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@309 -- # xtrace_disable 00:34:06.245 01:11:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:34:08.173 01:11:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:08.173 01:11:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # pci_devs=() 00:34:08.173 01:11:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:08.173 01:11:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:08.173 01:11:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:08.173 01:11:40 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:08.173 01:11:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:08.173 01:11:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # net_devs=() 00:34:08.173 01:11:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:08.173 01:11:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # e810=() 00:34:08.173 01:11:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # local -ga e810 00:34:08.173 01:11:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # x722=() 00:34:08.173 01:11:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # local -ga x722 00:34:08.173 01:11:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # mlx=() 00:34:08.174 01:11:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # local -ga mlx 00:34:08.174 01:11:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:08.174 01:11:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:08.174 01:11:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:08.174 01:11:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:08.174 01:11:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:08.174 01:11:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:08.174 01:11:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:08.174 01:11:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:08.174 01:11:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:08.174 01:11:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:08.174 01:11:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:08.174 01:11:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:08.174 01:11:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:08.174 01:11:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:08.174 01:11:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:08.174 01:11:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:08.174 01:11:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:08.174 01:11:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:08.174 01:11:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:08.174 
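The device-discovery loop traced above walks each candidate PCI function and resolves it to its kernel net device through sysfs. A minimal stand-alone sketch of the same idea follows; hard-coding the two BDFs that the log later reports (0000:0a:00.0 and 0000:0a:00.1) is an assumption made for illustration, the sysfs path and the name-stripping expansion are the ones the trace uses.

# Sketch: resolve the E810 PCI functions discovered above to their netdev names.
shopt -s nullglob
pci_devs=(0000:0a:00.0 0000:0a:00.1)   # taken from the "Found 0000:0a:00.x" lines below
net_devs=()
for pci in "${pci_devs[@]}"; do
    # Each PCI function exposes its net device(s) under .../net/ in sysfs.
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
    (( ${#pci_net_devs[@]} )) || continue
    pci_net_devs=("${pci_net_devs[@]##*/}")   # strip the sysfs path, keep e.g. cvl_0_0
    echo "Found net devices under $pci: ${pci_net_devs[*]}"
    net_devs+=("${pci_net_devs[@]}")
done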
01:11:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:34:08.174 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:34:08.174 01:11:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:08.174 01:11:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:08.174 01:11:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:08.174 01:11:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:08.174 01:11:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:08.174 01:11:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:08.174 01:11:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:34:08.174 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:34:08.174 01:11:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:08.174 01:11:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:08.174 01:11:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:08.174 01:11:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:08.174 01:11:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:08.174 01:11:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:08.174 01:11:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:08.174 01:11:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:08.174 01:11:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:08.174 01:11:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:08.174 01:11:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:08.174 01:11:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:08.174 01:11:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:08.174 01:11:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:08.174 01:11:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:08.174 01:11:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:34:08.174 Found net devices under 0000:0a:00.0: cvl_0_0 00:34:08.174 01:11:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:08.174 01:11:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:08.174 01:11:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:08.174 01:11:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:08.174 01:11:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:08.174 01:11:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:08.174 01:11:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:08.174 01:11:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:08.174 01:11:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:34:08.174 Found net devices under 0000:0a:00.1: cvl_0_1 00:34:08.174 01:11:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:08.174 01:11:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:34:08.174 01:11:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # is_hw=yes 00:34:08.174 01:11:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:34:08.174 01:11:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:34:08.174 01:11:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:34:08.174 01:11:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:08.174 01:11:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:08.174 01:11:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:08.174 01:11:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:08.174 01:11:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:08.174 01:11:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:08.174 01:11:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:08.174 01:11:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:08.174 01:11:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:08.174 01:11:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:08.174 01:11:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:08.174 01:11:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:08.174 01:11:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:08.174 01:11:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:08.174 01:11:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:08.175 01:11:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:08.175 01:11:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # ip netns exec 
cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:08.175 01:11:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:08.175 01:11:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:08.175 01:11:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:08.175 01:11:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:08.175 01:11:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:08.175 01:11:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:08.175 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:08.175 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.234 ms 00:34:08.175 00:34:08.175 --- 10.0.0.2 ping statistics --- 00:34:08.175 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:08.175 rtt min/avg/max/mdev = 0.234/0.234/0.234/0.000 ms 00:34:08.175 01:11:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:08.175 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:34:08.175 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.080 ms 00:34:08.175 00:34:08.175 --- 10.0.0.1 ping statistics --- 00:34:08.175 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:08.175 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 00:34:08.175 01:11:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:08.175 01:11:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # return 0 00:34:08.175 01:11:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:34:08.175 01:11:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:08.175 01:11:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:34:08.175 01:11:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:34:08.175 01:11:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:08.175 01:11:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:34:08.175 01:11:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:34:08.175 01:11:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:34:08.175 01:11:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:34:08.175 01:11:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:08.175 01:11:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:34:08.175 01:11:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@509 -- # nvmfpid=3104340 00:34:08.175 01:11:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:34:08.175 01:11:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@510 -- # waitforlisten 3104340 00:34:08.175 01:11:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 3104340 ']' 00:34:08.175 01:11:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:08.175 01:11:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:08.175 01:11:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:08.175 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:08.175 01:11:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:08.175 01:11:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:34:08.434 [2024-12-06 01:11:40.945326] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 24.03.0 initialization... 00:34:08.434 [2024-12-06 01:11:40.945481] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:08.434 [2024-12-06 01:11:41.106474] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:34:08.692 [2024-12-06 01:11:41.244876] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:08.692 [2024-12-06 01:11:41.244968] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:08.692 [2024-12-06 01:11:41.245002] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:08.692 [2024-12-06 01:11:41.245032] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:08.692 [2024-12-06 01:11:41.245057] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
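The network plumbing and target launch walked through above can be summarized as the sketch below. Every command is lifted from the trace: the interface names (cvl_0_0, cvl_0_1), the 10.0.0.1/10.0.0.2 addresses, the tagged iptables rule, the ping sanity checks and the nvmf_tgt flags are exactly those shown; grouping them into one script and shortening the repository path to $SPDK_DIR are the only assumptions.

# Sketch of the tcp test-bed setup: one E810 port moves into a private namespace
# and serves as the target, its sibling stays in the root namespace as initiator.
NS=cvl_0_0_ns_spdk
ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"
ip addr add 10.0.0.1/24 dev cvl_0_1                      # initiator side
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
# Open the NVMe/TCP port and tag the rule so teardown can strip it again.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
         -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
# Sanity-check both directions before starting the target.
ping -c 1 10.0.0.2
ip netns exec "$NS" ping -c 1 10.0.0.1
# Launch the SPDK target on cores 0-1 inside the namespace (same flags as the log);
# $SPDK_DIR stands in for the checkout path shown in the trace.
ip netns exec "$NS" "$SPDK_DIR"/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &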
00:34:08.692 [2024-12-06 01:11:41.250861] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:08.692 [2024-12-06 01:11:41.250874] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:09.256 01:11:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:09.256 01:11:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:34:09.256 01:11:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:34:09.256 01:11:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:09.257 01:11:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:34:09.257 01:11:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:09.257 01:11:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=3104340 00:34:09.257 01:11:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:34:09.514 [2024-12-06 01:11:42.218018] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:09.772 01:11:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:34:10.028 Malloc0 00:34:10.028 01:11:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:34:10.287 01:11:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:34:10.545 01:11:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:10.802 [2024-12-06 01:11:43.442595] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:10.802 01:11:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:34:11.061 [2024-12-06 01:11:43.707276] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:34:11.061 01:11:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=3104634 00:34:11.061 01:11:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:34:11.061 01:11:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:34:11.061 01:11:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 3104634 
/var/tmp/bdevperf.sock 00:34:11.061 01:11:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 3104634 ']' 00:34:11.061 01:11:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:34:11.061 01:11:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:11.061 01:11:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:34:11.061 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:34:11.061 01:11:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:11.061 01:11:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:34:12.436 01:11:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:12.436 01:11:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:34:12.436 01:11:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:34:12.436 01:11:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:34:13.001 Nvme0n1 00:34:13.001 01:11:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:34:13.259 Nvme0n1 00:34:13.259 01:11:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:34:13.259 01:11:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:34:15.789 01:11:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:34:15.789 01:11:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:34:15.789 01:11:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:34:16.047 01:11:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:34:16.982 01:11:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:34:16.982 01:11:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:34:16.982 01:11:49 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:16.982 01:11:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:34:17.240 01:11:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:17.240 01:11:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:34:17.240 01:11:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:17.240 01:11:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:34:17.498 01:11:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:17.498 01:11:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:34:17.498 01:11:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:17.498 01:11:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:34:17.756 01:11:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:17.756 01:11:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:34:17.756 01:11:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:17.756 01:11:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:34:18.014 01:11:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:18.014 01:11:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:34:18.014 01:11:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:18.014 01:11:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:34:18.272 01:11:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:18.272 01:11:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:34:18.272 01:11:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:18.272 01:11:50 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:34:18.530 01:11:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:18.530 01:11:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:34:18.530 01:11:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:34:19.097 01:11:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:34:19.097 01:11:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:34:20.469 01:11:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:34:20.469 01:11:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:34:20.469 01:11:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:20.470 01:11:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:34:20.470 01:11:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:20.470 01:11:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:34:20.470 01:11:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:20.470 01:11:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:34:20.727 01:11:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:20.727 01:11:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:34:20.727 01:11:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:20.727 01:11:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:34:20.985 01:11:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:20.985 01:11:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:34:20.985 01:11:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:20.985 01:11:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:34:21.242 01:11:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:21.242 01:11:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:34:21.242 01:11:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:21.242 01:11:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:34:21.500 01:11:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:21.500 01:11:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:34:21.500 01:11:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:21.500 01:11:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:34:22.065 01:11:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:22.065 01:11:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:34:22.065 01:11:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:34:22.065 01:11:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:34:22.324 01:11:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:34:23.695 01:11:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:34:23.695 01:11:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:34:23.695 01:11:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:23.695 01:11:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:34:23.695 01:11:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:23.695 01:11:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:34:23.695 01:11:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:23.695 01:11:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:34:23.952 01:11:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:23.952 01:11:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:34:23.952 01:11:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:23.952 01:11:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:34:24.209 01:11:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:24.209 01:11:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:34:24.209 01:11:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:24.209 01:11:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:34:24.467 01:11:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:24.467 01:11:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:34:24.467 01:11:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:24.467 01:11:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:34:24.725 01:11:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:24.725 01:11:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:34:24.725 01:11:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:24.725 01:11:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:34:24.984 01:11:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:24.984 01:11:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:34:24.984 01:11:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n 
non_optimized 00:34:25.550 01:11:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:34:25.550 01:11:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:34:26.925 01:11:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:34:26.925 01:11:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:34:26.925 01:11:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:26.925 01:11:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:34:26.925 01:11:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:26.925 01:11:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:34:26.925 01:11:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:26.925 01:11:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:34:27.184 01:11:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:27.184 01:11:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:34:27.184 01:11:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:27.184 01:11:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:34:27.442 01:12:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:27.442 01:12:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:34:27.442 01:12:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:27.442 01:12:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:34:27.700 01:12:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:27.700 01:12:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:34:27.700 01:12:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock 
bdev_nvme_get_io_paths 00:34:27.700 01:12:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:34:28.266 01:12:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:28.266 01:12:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:34:28.266 01:12:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:28.266 01:12:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:34:28.523 01:12:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:28.523 01:12:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:34:28.523 01:12:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:34:28.781 01:12:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:34:29.039 01:12:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:34:29.972 01:12:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:34:29.972 01:12:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:34:29.972 01:12:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:29.972 01:12:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:34:30.231 01:12:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:30.231 01:12:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:34:30.231 01:12:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:30.231 01:12:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:34:30.489 01:12:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:30.489 01:12:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:34:30.489 01:12:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:30.489 01:12:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:34:30.747 01:12:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:30.747 01:12:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:34:30.747 01:12:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:30.747 01:12:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:34:31.005 01:12:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:31.005 01:12:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:34:31.005 01:12:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:31.005 01:12:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:34:31.263 01:12:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:31.263 01:12:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:34:31.263 01:12:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:31.263 01:12:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:34:31.521 01:12:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:31.521 01:12:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:34:31.521 01:12:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:34:32.100 01:12:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:34:32.100 01:12:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:34:33.472 01:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:34:33.472 01:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:34:33.472 01:12:05 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:33.472 01:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:34:33.472 01:12:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:33.472 01:12:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:34:33.472 01:12:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:33.472 01:12:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:34:33.730 01:12:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:33.730 01:12:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:34:33.730 01:12:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:33.730 01:12:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:34:33.988 01:12:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:33.988 01:12:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:34:33.988 01:12:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:33.988 01:12:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:34:34.246 01:12:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:34.246 01:12:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:34:34.246 01:12:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:34.246 01:12:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:34:34.504 01:12:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:34.504 01:12:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:34:34.504 01:12:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:34.504 
01:12:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:34:35.071 01:12:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:35.071 01:12:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:34:35.364 01:12:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:34:35.364 01:12:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:34:35.668 01:12:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:34:35.947 01:12:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:34:36.881 01:12:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:34:36.881 01:12:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:34:36.881 01:12:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:36.881 01:12:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:34:37.139 01:12:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:37.139 01:12:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:34:37.139 01:12:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:37.139 01:12:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:34:37.397 01:12:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:37.397 01:12:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:34:37.397 01:12:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:37.397 01:12:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:34:37.655 01:12:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:37.655 01:12:10 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:34:37.655 01:12:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:37.655 01:12:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:34:37.913 01:12:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:37.913 01:12:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:34:37.913 01:12:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:37.913 01:12:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:34:38.171 01:12:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:38.171 01:12:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:34:38.171 01:12:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:38.171 01:12:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:34:38.429 01:12:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:38.429 01:12:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:34:38.429 01:12:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:34:38.686 01:12:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:34:38.943 01:12:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:34:40.314 01:12:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:34:40.314 01:12:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:34:40.314 01:12:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:40.314 01:12:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:34:40.314 01:12:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:40.314 01:12:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:34:40.314 01:12:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:40.314 01:12:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:34:40.571 01:12:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:40.571 01:12:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:34:40.571 01:12:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:40.571 01:12:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:34:40.828 01:12:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:40.828 01:12:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:34:40.829 01:12:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:40.829 01:12:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:34:41.086 01:12:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:41.086 01:12:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:34:41.086 01:12:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:41.086 01:12:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:34:41.342 01:12:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:41.342 01:12:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:34:41.342 01:12:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:41.342 01:12:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:34:41.599 01:12:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:41.599 01:12:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:34:41.600 
01:12:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:34:41.857 01:12:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:34:42.115 01:12:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:34:43.487 01:12:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:34:43.487 01:12:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:34:43.487 01:12:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:43.487 01:12:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:34:43.487 01:12:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:43.487 01:12:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:34:43.487 01:12:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:43.487 01:12:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:34:43.745 01:12:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:43.745 01:12:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:34:43.745 01:12:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:43.745 01:12:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:34:44.003 01:12:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:44.003 01:12:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:34:44.003 01:12:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:44.003 01:12:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:34:44.261 01:12:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:44.261 01:12:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:34:44.261 01:12:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:44.261 01:12:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:34:44.520 01:12:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:44.520 01:12:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:34:44.520 01:12:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:44.520 01:12:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:34:45.086 01:12:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:45.086 01:12:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:34:45.086 01:12:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:34:45.086 01:12:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:34:45.343 01:12:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:34:46.719 01:12:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:34:46.720 01:12:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:34:46.720 01:12:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:46.720 01:12:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:34:46.720 01:12:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:46.720 01:12:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:34:46.720 01:12:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:46.720 01:12:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:34:46.979 01:12:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == 
\f\a\l\s\e ]] 00:34:46.979 01:12:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:34:46.979 01:12:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:46.979 01:12:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:34:47.237 01:12:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:47.237 01:12:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:34:47.237 01:12:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:47.237 01:12:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:34:47.495 01:12:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:47.495 01:12:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:34:47.495 01:12:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:47.495 01:12:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:34:48.061 01:12:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:48.061 01:12:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:34:48.061 01:12:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:48.061 01:12:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:34:48.319 01:12:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:48.319 01:12:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 3104634 00:34:48.319 01:12:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 3104634 ']' 00:34:48.319 01:12:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 3104634 00:34:48.319 01:12:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:34:48.319 01:12:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:48.319 01:12:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3104634 00:34:48.319 01:12:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # 
process_name=reactor_2
00:34:48.319 01:12:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']'
00:34:48.319 01:12:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3104634'
killing process with pid 3104634
00:34:48.319 01:12:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 3104634
00:34:48.319 01:12:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 3104634
00:34:48.319 {
00:34:48.319   "results": [
00:34:48.319     {
00:34:48.319       "job": "Nvme0n1",
00:34:48.319       "core_mask": "0x4",
00:34:48.319       "workload": "verify",
00:34:48.319       "status": "terminated",
00:34:48.319       "verify_range": {
00:34:48.319         "start": 0,
00:34:48.319         "length": 16384
00:34:48.319       },
00:34:48.319       "queue_depth": 128,
00:34:48.319       "io_size": 4096,
00:34:48.319       "runtime": 34.68673,
00:34:48.319       "iops": 5874.811491310942,
00:34:48.319       "mibps": 22.948482387933367,
00:34:48.319       "io_failed": 0,
00:34:48.319       "io_timeout": 0,
00:34:48.319       "avg_latency_us": 21748.943212929975,
00:34:48.319       "min_latency_us": 1298.5837037037038,
00:34:48.319       "max_latency_us": 4076242.1096296296
00:34:48.319     }
00:34:48.319   ],
00:34:48.319   "core_count": 1
00:34:48.319 }
00:34:49.268 01:12:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 3104634
00:34:49.268 01:12:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:34:49.268 [2024-12-06 01:11:43.809161] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 24.03.0 initialization...
00:34:49.268 [2024-12-06 01:11:43.809299] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3104634 ]
00:34:49.268 [2024-12-06 01:11:43.947439] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:34:49.268 [2024-12-06 01:11:44.072113] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:34:49.268 Running I/O for 90 seconds...
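The multipath_status.sh@64 trace lines above all run the same probe: dump the I/O paths bdevperf currently sees over its RPC socket and compare one attribute of the listener on a given port against an expected value. A minimal sketch of that helper, reconstructed from the trace (the port_status name and the rpc.py/jq pipeline appear verbatim in the trace; the local variable names and the $rootdir shorthand for the spdk checkout path are only illustrative):

    port_status() {
        # $1 = TCP service id (4420/4421), $2 = io_path field, $3 = expected value
        local port=$1 attr=$2 expected=$3
        local actual
        actual=$("$rootdir"/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths |
            jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$port\").$attr")
        [[ "$actual" == "$expected" ]]
    }

check_status calls it six times per transition (current, connected and accessible for ports 4420 and 4421), after set_ANA_state has pushed the new state to both listeners with nvmf_subsystem_listener_set_ana_state and a one-second sleep has given the host time to react.

The terminated bdevperf job summary is also internally consistent: 5874.81 IOPS of 4096-byte I/Os is about 22.95 MiB/s, matching the reported "mibps" value, and 5874.81 IOPS over the 34.69 s runtime is roughly 204k completed I/Os with io_failed and io_timeout both 0. The same check can be run against the JSON block, assuming it has been saved to a file (results.json is only a placeholder name):

    jq '.results[0].iops * .results[0].io_size / (1024*1024)' results.json

In the replayed try.txt output that follows, the ASYMMETRIC ACCESS INACCESSIBLE (03/02) completions are the expected side effect of the inaccessible ANA states set during the test: status code type 0x3 (path-related) with status code 0x02 (ANA Inaccessible) on the affected path.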
00:34:49.268 6194.00 IOPS, 24.20 MiB/s [2024-12-06T00:12:21.977Z] 6318.00 IOPS, 24.68 MiB/s [2024-12-06T00:12:21.977Z] 6315.67 IOPS, 24.67 MiB/s [2024-12-06T00:12:21.977Z] 6315.75 IOPS, 24.67 MiB/s [2024-12-06T00:12:21.977Z] 6289.60 IOPS, 24.57 MiB/s [2024-12-06T00:12:21.977Z] 6264.50 IOPS, 24.47 MiB/s [2024-12-06T00:12:21.977Z] 6241.86 IOPS, 24.38 MiB/s [2024-12-06T00:12:21.977Z] 6251.38 IOPS, 24.42 MiB/s [2024-12-06T00:12:21.977Z] 6256.00 IOPS, 24.44 MiB/s [2024-12-06T00:12:21.977Z] 6264.80 IOPS, 24.47 MiB/s [2024-12-06T00:12:21.977Z] 6267.73 IOPS, 24.48 MiB/s [2024-12-06T00:12:21.977Z] 6267.75 IOPS, 24.48 MiB/s [2024-12-06T00:12:21.977Z] 6251.38 IOPS, 24.42 MiB/s [2024-12-06T00:12:21.977Z] 6246.36 IOPS, 24.40 MiB/s [2024-12-06T00:12:21.977Z] 6242.13 IOPS, 24.38 MiB/s [2024-12-06T00:12:21.977Z] [2024-12-06 01:12:01.284542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:102000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.268 [2024-12-06 01:12:01.284625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:34:49.268 [2024-12-06 01:12:01.284680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:102008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.268 [2024-12-06 01:12:01.284706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:49.268 [2024-12-06 01:12:01.284744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:102016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.268 [2024-12-06 01:12:01.284769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:49.268 [2024-12-06 01:12:01.284839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:102024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.268 [2024-12-06 01:12:01.284866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:34:49.268 [2024-12-06 01:12:01.284919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:102032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.268 [2024-12-06 01:12:01.284945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:34:49.268 [2024-12-06 01:12:01.284982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:102040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.268 [2024-12-06 01:12:01.285009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:34:49.268 [2024-12-06 01:12:01.285046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:102048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.268 [2024-12-06 01:12:01.285072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:34:49.268 [2024-12-06 01:12:01.285134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:102056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.268 [2024-12-06 01:12:01.285159] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:34:49.268 [2024-12-06 01:12:01.285210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:102064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.268 [2024-12-06 01:12:01.285235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:34:49.268 [2024-12-06 01:12:01.285285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:102072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.268 [2024-12-06 01:12:01.285310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:34:49.268 [2024-12-06 01:12:01.285344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:102080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.268 [2024-12-06 01:12:01.285369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:34:49.269 [2024-12-06 01:12:01.285404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:102088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.269 [2024-12-06 01:12:01.285427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:34:49.269 [2024-12-06 01:12:01.285462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:102096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.269 [2024-12-06 01:12:01.285486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:34:49.269 [2024-12-06 01:12:01.285522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:102104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.269 [2024-12-06 01:12:01.285546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:34:49.269 [2024-12-06 01:12:01.285581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:102112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.269 [2024-12-06 01:12:01.285606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:34:49.269 [2024-12-06 01:12:01.285641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:102120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.269 [2024-12-06 01:12:01.285665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:34:49.269 [2024-12-06 01:12:01.285699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:102128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.269 [2024-12-06 01:12:01.285724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:34:49.269 [2024-12-06 01:12:01.285758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:102136 len:8 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:34:49.269 [2024-12-06 01:12:01.285782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:34:49.269 [2024-12-06 01:12:01.285852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:102144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.269 [2024-12-06 01:12:01.285880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:34:49.269 [2024-12-06 01:12:01.285917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:102152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.269 [2024-12-06 01:12:01.285943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:34:49.269 [2024-12-06 01:12:01.285979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:102160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.269 [2024-12-06 01:12:01.286005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:34:49.269 [2024-12-06 01:12:01.286351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:102168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.269 [2024-12-06 01:12:01.286385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:34:49.269 [2024-12-06 01:12:01.286423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:102696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.269 [2024-12-06 01:12:01.286448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:34:49.269 [2024-12-06 01:12:01.286482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:102704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.269 [2024-12-06 01:12:01.286506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:34:49.269 [2024-12-06 01:12:01.286541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:102712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.269 [2024-12-06 01:12:01.286564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:34:49.269 [2024-12-06 01:12:01.286599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:102720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.269 [2024-12-06 01:12:01.286623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:34:49.269 [2024-12-06 01:12:01.286657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:102728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.269 [2024-12-06 01:12:01.286681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:34:49.269 [2024-12-06 01:12:01.286715] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:102736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.269 [2024-12-06 01:12:01.286738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:34:49.269 [2024-12-06 01:12:01.286775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:102744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.269 [2024-12-06 01:12:01.286832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:34:49.269 [2024-12-06 01:12:01.286889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:102752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.269 [2024-12-06 01:12:01.286916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:34:49.269 [2024-12-06 01:12:01.286953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:102760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.269 [2024-12-06 01:12:01.286984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:34:49.269 [2024-12-06 01:12:01.287044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:102768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.269 [2024-12-06 01:12:01.287075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:34:49.269 [2024-12-06 01:12:01.287126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:102776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.269 [2024-12-06 01:12:01.287152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:34:49.269 [2024-12-06 01:12:01.287189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:102784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.269 [2024-12-06 01:12:01.287220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:49.269 [2024-12-06 01:12:01.287258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:102792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.269 [2024-12-06 01:12:01.287294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:49.269 [2024-12-06 01:12:01.287349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:102800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.269 [2024-12-06 01:12:01.287375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:34:49.269 [2024-12-06 01:12:01.287411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:102808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.269 [2024-12-06 01:12:01.287435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:34:49.269 [2024-12-06 
01:12:01.287472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:102816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.269 [2024-12-06 01:12:01.287497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:34:49.269 [2024-12-06 01:12:01.288321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:102824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.269 [2024-12-06 01:12:01.288359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:34:49.269 [2024-12-06 01:12:01.288418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:102832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.269 [2024-12-06 01:12:01.288447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:34:49.269 [2024-12-06 01:12:01.288486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:102176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.269 [2024-12-06 01:12:01.288512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:34:49.269 [2024-12-06 01:12:01.288549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:102184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.269 [2024-12-06 01:12:01.288595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:34:49.269 [2024-12-06 01:12:01.288632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:102192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.269 [2024-12-06 01:12:01.288666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:34:49.269 [2024-12-06 01:12:01.288705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:102200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.269 [2024-12-06 01:12:01.288731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:34:49.269 [2024-12-06 01:12:01.288767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:102208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.269 [2024-12-06 01:12:01.288792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:34:49.269 [2024-12-06 01:12:01.288856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:102216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.269 [2024-12-06 01:12:01.288888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:34:49.269 [2024-12-06 01:12:01.288928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:102224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.269 [2024-12-06 01:12:01.288956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:103 cdw0:0 sqhd:006e p:0 m:0 dnr:0
00:34:49.269 [2024-12-06 01:12:01.288994 - 01:12:01.305443] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: repeated command/completion pairs on qid:1 - READ sqid:1 nsid:1 lba:102000-102688 len:8 (SGL TRANSPORT DATA BLOCK TRANSPORT 0x0) and WRITE sqid:1 nsid:1 lba:102696-103016 len:8 (SGL DATA BLOCK OFFSET 0x0 len:0x1000), each completing with ASYMMETRIC ACCESS INACCESSIBLE (03/02) cdw0:0 p:0 m:0 dnr:0
00:34:49.275 [2024-12-06 01:12:01.305475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:102992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.275 [2024-12-06 01:12:01.305498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*:
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:34:49.275 [2024-12-06 01:12:01.305536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:103000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.275 [2024-12-06 01:12:01.305559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:34:49.275 [2024-12-06 01:12:01.305591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:103008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.275 [2024-12-06 01:12:01.305625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:34:49.275 [2024-12-06 01:12:01.305660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:103016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.275 [2024-12-06 01:12:01.305698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:34:49.275 [2024-12-06 01:12:01.305732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:102000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.275 [2024-12-06 01:12:01.305755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:34:49.275 [2024-12-06 01:12:01.305789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:102008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.275 [2024-12-06 01:12:01.305837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:49.275 [2024-12-06 01:12:01.305874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:102016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.275 [2024-12-06 01:12:01.305900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:49.275 [2024-12-06 01:12:01.305935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:102024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.275 [2024-12-06 01:12:01.305960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:34:49.275 [2024-12-06 01:12:01.305996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:102032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.275 [2024-12-06 01:12:01.306020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:34:49.275 [2024-12-06 01:12:01.306056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:102040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.275 [2024-12-06 01:12:01.306081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:34:49.275 [2024-12-06 01:12:01.306116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:102048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.275 [2024-12-06 
01:12:01.306156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:34:49.275 [2024-12-06 01:12:01.306189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:102056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.275 [2024-12-06 01:12:01.306213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:34:49.275 [2024-12-06 01:12:01.306247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:102064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.275 [2024-12-06 01:12:01.306270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:34:49.275 [2024-12-06 01:12:01.306309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:102072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.275 [2024-12-06 01:12:01.306333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:34:49.275 [2024-12-06 01:12:01.306367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:102080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.275 [2024-12-06 01:12:01.306390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:34:49.275 [2024-12-06 01:12:01.306424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:102088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.275 [2024-12-06 01:12:01.306447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:34:49.275 [2024-12-06 01:12:01.306486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:102096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.275 [2024-12-06 01:12:01.306509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:34:49.275 [2024-12-06 01:12:01.306543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:102104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.275 [2024-12-06 01:12:01.306567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:34:49.275 [2024-12-06 01:12:01.306600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:102112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.275 [2024-12-06 01:12:01.306624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:34:49.275 [2024-12-06 01:12:01.306656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:102120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.275 [2024-12-06 01:12:01.306680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:34:49.275 [2024-12-06 01:12:01.306713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:102128 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.275 [2024-12-06 01:12:01.306737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:34:49.275 [2024-12-06 01:12:01.306770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:102136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.276 [2024-12-06 01:12:01.306809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:34:49.276 [2024-12-06 01:12:01.306859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:102144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.276 [2024-12-06 01:12:01.306885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:34:49.276 [2024-12-06 01:12:01.306920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:102152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.276 [2024-12-06 01:12:01.306945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:34:49.276 [2024-12-06 01:12:01.306981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:102160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.276 [2024-12-06 01:12:01.307006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:34:49.276 [2024-12-06 01:12:01.307042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:102168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.276 [2024-12-06 01:12:01.307072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:34:49.276 [2024-12-06 01:12:01.307113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:102696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.276 [2024-12-06 01:12:01.307154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:34:49.276 [2024-12-06 01:12:01.307188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:102704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.276 [2024-12-06 01:12:01.307212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:34:49.276 [2024-12-06 01:12:01.307245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:102712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.276 [2024-12-06 01:12:01.307268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:34:49.276 [2024-12-06 01:12:01.307301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:102720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.276 [2024-12-06 01:12:01.307324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:34:49.276 [2024-12-06 01:12:01.307357] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:102728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.276 [2024-12-06 01:12:01.307381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:34:49.276 [2024-12-06 01:12:01.307413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:102736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.276 [2024-12-06 01:12:01.307437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:34:49.276 [2024-12-06 01:12:01.307475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:102744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.276 [2024-12-06 01:12:01.307499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:34:49.276 [2024-12-06 01:12:01.307533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:102752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.276 [2024-12-06 01:12:01.307557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:34:49.276 [2024-12-06 01:12:01.307590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:102760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.276 [2024-12-06 01:12:01.307614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:34:49.276 [2024-12-06 01:12:01.307647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:102768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.276 [2024-12-06 01:12:01.307688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:34:49.276 [2024-12-06 01:12:01.307760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:102776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.276 [2024-12-06 01:12:01.307788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:34:49.276 [2024-12-06 01:12:01.307840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:102784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.276 [2024-12-06 01:12:01.307873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:49.276 [2024-12-06 01:12:01.307910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:102792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.276 [2024-12-06 01:12:01.307935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:49.276 [2024-12-06 01:12:01.307971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:102800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.276 [2024-12-06 01:12:01.307997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0063 p:0 m:0 
dnr:0 00:34:49.276 [2024-12-06 01:12:01.308894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:102808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.276 [2024-12-06 01:12:01.308928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:34:49.276 [2024-12-06 01:12:01.308970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:102816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.276 [2024-12-06 01:12:01.308997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:34:49.276 [2024-12-06 01:12:01.309033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:102824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.276 [2024-12-06 01:12:01.309058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:34:49.276 [2024-12-06 01:12:01.309100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:102832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.276 [2024-12-06 01:12:01.309125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:34:49.276 [2024-12-06 01:12:01.309176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:102176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.276 [2024-12-06 01:12:01.309200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:34:49.276 [2024-12-06 01:12:01.309233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:102184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.276 [2024-12-06 01:12:01.309256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:34:49.276 [2024-12-06 01:12:01.309289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:102192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.276 [2024-12-06 01:12:01.309312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:34:49.276 [2024-12-06 01:12:01.309346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:102200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.276 [2024-12-06 01:12:01.309369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:34:49.276 [2024-12-06 01:12:01.309403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:102208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.276 [2024-12-06 01:12:01.309426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:34:49.276 [2024-12-06 01:12:01.309460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:102216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.276 [2024-12-06 01:12:01.309488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:34:49.276 [2024-12-06 01:12:01.309523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:102224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.276 [2024-12-06 01:12:01.309546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:34:49.276 [2024-12-06 01:12:01.309598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:102232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.276 [2024-12-06 01:12:01.309628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:34:49.276 [2024-12-06 01:12:01.309664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:102240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.276 [2024-12-06 01:12:01.309689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:34:49.276 [2024-12-06 01:12:01.309723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:102248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.276 [2024-12-06 01:12:01.309747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:34:49.276 [2024-12-06 01:12:01.309781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:102256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.276 [2024-12-06 01:12:01.309830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:34:49.276 [2024-12-06 01:12:01.309869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:102264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.276 [2024-12-06 01:12:01.309894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:34:49.276 [2024-12-06 01:12:01.309929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:102272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.276 [2024-12-06 01:12:01.309954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:34:49.276 [2024-12-06 01:12:01.309990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:102280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.276 [2024-12-06 01:12:01.310015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:34:49.276 [2024-12-06 01:12:01.310050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:102288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.276 [2024-12-06 01:12:01.310075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:34:49.276 [2024-12-06 01:12:01.310133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:102296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.276 [2024-12-06 
01:12:01.310157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:34:49.277 [2024-12-06 01:12:01.310191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:102304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.277 [2024-12-06 01:12:01.310239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:34:49.277 [2024-12-06 01:12:01.310279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:102312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.277 [2024-12-06 01:12:01.310304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:34:49.277 [2024-12-06 01:12:01.310343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:102320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.277 [2024-12-06 01:12:01.310369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:49.277 [2024-12-06 01:12:01.310403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:102328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.277 [2024-12-06 01:12:01.310427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:49.277 [2024-12-06 01:12:01.310462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:102336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.277 [2024-12-06 01:12:01.310486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:34:49.277 [2024-12-06 01:12:01.310536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:102344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.277 [2024-12-06 01:12:01.310560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:49.277 [2024-12-06 01:12:01.310593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:102352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.277 [2024-12-06 01:12:01.310629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:34:49.277 [2024-12-06 01:12:01.310665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:102360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.277 [2024-12-06 01:12:01.310689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:34:49.277 [2024-12-06 01:12:01.310722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:102368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.277 [2024-12-06 01:12:01.310746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:49.277 [2024-12-06 01:12:01.310779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:102376 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.277 [2024-12-06 01:12:01.310827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:49.277 [2024-12-06 01:12:01.310867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:102384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.277 [2024-12-06 01:12:01.310892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:49.277 [2024-12-06 01:12:01.310927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:102392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.277 [2024-12-06 01:12:01.310952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:34:49.277 [2024-12-06 01:12:01.310988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:102400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.277 [2024-12-06 01:12:01.311013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:34:49.277 [2024-12-06 01:12:01.311049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:102408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.277 [2024-12-06 01:12:01.311074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:34:49.277 [2024-12-06 01:12:01.311138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:102416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.277 [2024-12-06 01:12:01.311162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:34:49.277 [2024-12-06 01:12:01.311195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:102424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.277 [2024-12-06 01:12:01.311218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:34:49.277 [2024-12-06 01:12:01.311252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:102432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.277 [2024-12-06 01:12:01.311275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:34:49.277 [2024-12-06 01:12:01.311308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:102440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.277 [2024-12-06 01:12:01.311332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:34:49.277 [2024-12-06 01:12:01.311372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:102448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.277 [2024-12-06 01:12:01.311396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:34:49.277 [2024-12-06 01:12:01.311429] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:102456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.277 [2024-12-06 01:12:01.311452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:34:49.277 [2024-12-06 01:12:01.311493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:102464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.277 [2024-12-06 01:12:01.311517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:34:49.277 [2024-12-06 01:12:01.311551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:102472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.277 [2024-12-06 01:12:01.311574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:34:49.277 [2024-12-06 01:12:01.311608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:102480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.277 [2024-12-06 01:12:01.311631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:34:49.277 [2024-12-06 01:12:01.311665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:102488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.277 [2024-12-06 01:12:01.311689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:34:49.277 [2024-12-06 01:12:01.311722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:102496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.277 [2024-12-06 01:12:01.311746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:34:49.277 [2024-12-06 01:12:01.311783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:102504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.277 [2024-12-06 01:12:01.311832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:34:49.277 [2024-12-06 01:12:01.311886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:102512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.277 [2024-12-06 01:12:01.311916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:34:49.277 [2024-12-06 01:12:01.311953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:102520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.277 [2024-12-06 01:12:01.311978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:34:49.277 [2024-12-06 01:12:01.312014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:102528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.277 [2024-12-06 01:12:01.312052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0014 
p:0 m:0 dnr:0 00:34:49.277 [2024-12-06 01:12:01.312093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:102536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.277 [2024-12-06 01:12:01.312128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:34:49.277 [2024-12-06 01:12:01.312178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:102544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.277 [2024-12-06 01:12:01.312203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:34:49.277 [2024-12-06 01:12:01.312238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:102552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.277 [2024-12-06 01:12:01.312262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:34:49.277 [2024-12-06 01:12:01.312312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:102560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.277 [2024-12-06 01:12:01.312335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:34:49.277 [2024-12-06 01:12:01.312369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:102568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.277 [2024-12-06 01:12:01.312392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:34:49.278 [2024-12-06 01:12:01.312425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:102576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.278 [2024-12-06 01:12:01.312448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:34:49.278 [2024-12-06 01:12:01.312481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:102584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.278 [2024-12-06 01:12:01.312504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:34:49.278 [2024-12-06 01:12:01.312537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:102592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.278 [2024-12-06 01:12:01.312562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:34:49.278 [2024-12-06 01:12:01.312595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:102600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.278 [2024-12-06 01:12:01.312618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:34:49.278 [2024-12-06 01:12:01.312658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:102608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.278 [2024-12-06 01:12:01.312685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:34:49.278 [2024-12-06 01:12:01.312720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:102616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.278 [2024-12-06 01:12:01.312743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:34:49.278 [2024-12-06 01:12:01.312776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:102624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.278 [2024-12-06 01:12:01.312827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:34:49.278 [2024-12-06 01:12:01.312884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:102632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.278 [2024-12-06 01:12:01.312910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:49.278 [2024-12-06 01:12:01.312946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:102640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.278 [2024-12-06 01:12:01.312984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:49.278 [2024-12-06 01:12:01.313033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:102648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.278 [2024-12-06 01:12:01.313060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:34:49.278 [2024-12-06 01:12:01.313105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:102656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.278 [2024-12-06 01:12:01.313131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:34:49.278 [2024-12-06 01:12:01.313168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:102664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.278 [2024-12-06 01:12:01.313193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:34:49.278 [2024-12-06 01:12:01.314018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:102672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.278 [2024-12-06 01:12:01.314050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:34:49.278 [2024-12-06 01:12:01.314093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:102680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.278 [2024-12-06 01:12:01.314124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:34:49.278 [2024-12-06 01:12:01.314176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:102688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.278 [2024-12-06 
01:12:01.314200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:34:49.278 [2024-12-06 01:12:01.314235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:102840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.278 [2024-12-06 01:12:01.314274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:34:49.278 [2024-12-06 01:12:01.314309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:102848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.278 [2024-12-06 01:12:01.314337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:34:49.278 [2024-12-06 01:12:01.314372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:102856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.278 [2024-12-06 01:12:01.314396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:34:49.278 [2024-12-06 01:12:01.314445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:102864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.278 [2024-12-06 01:12:01.314469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:34:49.278 [2024-12-06 01:12:01.314504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:102872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.278 [2024-12-06 01:12:01.314527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:34:49.278 [2024-12-06 01:12:01.314561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:102880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.278 [2024-12-06 01:12:01.314585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:34:49.278 [2024-12-06 01:12:01.314618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:102888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.278 [2024-12-06 01:12:01.314641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:34:49.278 [2024-12-06 01:12:01.314674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:102896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.278 [2024-12-06 01:12:01.314703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:34:49.278 [2024-12-06 01:12:01.314737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:102904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.278 [2024-12-06 01:12:01.314760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:34:49.278 [2024-12-06 01:12:01.314808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:102912 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.278 [2024-12-06 01:12:01.314845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:34:49.278 [2024-12-06 01:12:01.314898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:102920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.278 [2024-12-06 01:12:01.314924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:34:49.278 [2024-12-06 01:12:01.314971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:102928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.278 [2024-12-06 01:12:01.314996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:34:49.278 [2024-12-06 01:12:01.315031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:102936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.278 [2024-12-06 01:12:01.315056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:34:49.278 [2024-12-06 01:12:01.315116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:102944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.278 [2024-12-06 01:12:01.315141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:34:49.278 [2024-12-06 01:12:01.328060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:102952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.278 [2024-12-06 01:12:01.328126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:34:49.278 [2024-12-06 01:12:01.328183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:102960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.278 [2024-12-06 01:12:01.328208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:34:49.278 [2024-12-06 01:12:01.328242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:102968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.278 [2024-12-06 01:12:01.328265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:34:49.278 [2024-12-06 01:12:01.328298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:102976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.278 [2024-12-06 01:12:01.328321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:34:49.278 [2024-12-06 01:12:01.328354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:102984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.278 [2024-12-06 01:12:01.328377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:34:49.278 [2024-12-06 01:12:01.328410] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:102992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:49.278 [2024-12-06 01:12:01.328433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:003c p:0 m:0 dnr:0
[... repetitive nvme_qpair.c NOTICE output condensed: READ commands (SGL TRANSPORT DATA BLOCK TRANSPORT 0x0) and WRITE commands (SGL DATA BLOCK OFFSET 0x0 len:0x1000) on sqid:1 nsid:1, lba 102000-103016, len:8, each completed with ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1; timestamps 2024-12-06 01:12:01.328433 through 01:12:01.344025 (log time 00:34:49.278-00:34:49.284) ...]
00:34:49.284 [2024-12-06 01:12:01.343999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:102440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:49.284 [2024-12-06 01:12:01.344025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0
sqhd:0009 p:0 m:0 dnr:0 00:34:49.284 [2024-12-06 01:12:01.344064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:102448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.284 [2024-12-06 01:12:01.344089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:34:49.284 [2024-12-06 01:12:01.344149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:102456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.284 [2024-12-06 01:12:01.344188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:34:49.284 [2024-12-06 01:12:01.344226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:102464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.284 [2024-12-06 01:12:01.344249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:34:49.284 [2024-12-06 01:12:01.344285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:102472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.284 [2024-12-06 01:12:01.344308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:34:49.284 [2024-12-06 01:12:01.344345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:102480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.284 [2024-12-06 01:12:01.344368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:34:49.284 [2024-12-06 01:12:01.344405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:102488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.284 [2024-12-06 01:12:01.344428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:34:49.284 [2024-12-06 01:12:01.344464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:102496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.284 [2024-12-06 01:12:01.344487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:34:49.284 [2024-12-06 01:12:01.344523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:102504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.284 [2024-12-06 01:12:01.344547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:34:49.284 [2024-12-06 01:12:01.344583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:102512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.284 [2024-12-06 01:12:01.344611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:34:49.284 [2024-12-06 01:12:01.344648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:102520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.284 [2024-12-06 01:12:01.344671] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:34:49.284 [2024-12-06 01:12:01.344708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:102528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.284 [2024-12-06 01:12:01.344732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:34:49.284 [2024-12-06 01:12:01.344768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:102536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.284 [2024-12-06 01:12:01.344791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:34:49.284 [2024-12-06 01:12:01.344864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:102544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.284 [2024-12-06 01:12:01.344890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:34:49.284 [2024-12-06 01:12:01.344929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:102552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.284 [2024-12-06 01:12:01.344954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:34:49.284 [2024-12-06 01:12:01.344992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:102560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.284 [2024-12-06 01:12:01.345016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:34:49.284 [2024-12-06 01:12:01.345055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:102568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.284 [2024-12-06 01:12:01.345080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:34:49.284 [2024-12-06 01:12:01.345135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:102576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.284 [2024-12-06 01:12:01.345159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:34:49.284 [2024-12-06 01:12:01.345218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:102584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.284 [2024-12-06 01:12:01.345241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:34:49.284 [2024-12-06 01:12:01.345284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:102592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.284 [2024-12-06 01:12:01.345307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:34:49.284 [2024-12-06 01:12:01.345344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:102600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.284 
[2024-12-06 01:12:01.345367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:34:49.284 [2024-12-06 01:12:01.345404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:102608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.284 [2024-12-06 01:12:01.345431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:34:49.284 [2024-12-06 01:12:01.345469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:102616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.284 [2024-12-06 01:12:01.345492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:34:49.285 [2024-12-06 01:12:01.345529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:102624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.285 [2024-12-06 01:12:01.345553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:34:49.285 [2024-12-06 01:12:01.345590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:102632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.285 [2024-12-06 01:12:01.345613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:49.285 [2024-12-06 01:12:01.345650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:102640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.285 [2024-12-06 01:12:01.345674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:49.285 [2024-12-06 01:12:01.345711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:102648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.285 [2024-12-06 01:12:01.345735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:34:49.285 [2024-12-06 01:12:01.345983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:102656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.285 [2024-12-06 01:12:01.346012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:34:49.285 5916.88 IOPS, 23.11 MiB/s [2024-12-06T00:12:21.994Z] 5568.82 IOPS, 21.75 MiB/s [2024-12-06T00:12:21.994Z] 5259.44 IOPS, 20.54 MiB/s [2024-12-06T00:12:21.994Z] 4982.63 IOPS, 19.46 MiB/s [2024-12-06T00:12:21.994Z] 4974.00 IOPS, 19.43 MiB/s [2024-12-06T00:12:21.994Z] 5034.48 IOPS, 19.67 MiB/s [2024-12-06T00:12:21.994Z] 5082.05 IOPS, 19.85 MiB/s [2024-12-06T00:12:21.994Z] 5216.91 IOPS, 20.38 MiB/s [2024-12-06T00:12:21.994Z] 5349.00 IOPS, 20.89 MiB/s [2024-12-06T00:12:21.994Z] 5469.52 IOPS, 21.37 MiB/s [2024-12-06T00:12:21.994Z] 5520.54 IOPS, 21.56 MiB/s [2024-12-06T00:12:21.994Z] 5545.52 IOPS, 21.66 MiB/s [2024-12-06T00:12:21.994Z] 5567.96 IOPS, 21.75 MiB/s [2024-12-06T00:12:21.994Z] 5612.10 IOPS, 21.92 MiB/s [2024-12-06T00:12:21.994Z] 5702.93 IOPS, 22.28 MiB/s [2024-12-06T00:12:21.994Z] 5794.03 IOPS, 22.63 MiB/s 
[2024-12-06T00:12:21.994Z] [2024-12-06 01:12:18.027853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:53040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.285 [2024-12-06 01:12:18.027944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:34:49.285 [2024-12-06 01:12:18.028003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:53056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.285 [2024-12-06 01:12:18.028031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:34:49.285 [2024-12-06 01:12:18.028072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:53072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.285 [2024-12-06 01:12:18.028116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:34:49.285 [2024-12-06 01:12:18.028155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:53088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.285 [2024-12-06 01:12:18.028194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:34:49.285 [2024-12-06 01:12:18.028238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:53104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.285 [2024-12-06 01:12:18.028262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:34:49.285 [2024-12-06 01:12:18.028297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:53120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.285 [2024-12-06 01:12:18.028320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:34:49.285 [2024-12-06 01:12:18.028354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:53136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.285 [2024-12-06 01:12:18.028377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:34:49.285 [2024-12-06 01:12:18.028411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:53152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.285 [2024-12-06 01:12:18.028434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:34:49.285 [2024-12-06 01:12:18.028467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:53168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.285 [2024-12-06 01:12:18.028491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:34:49.285 [2024-12-06 01:12:18.028542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:53184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.285 [2024-12-06 01:12:18.028566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:34:49.285 [2024-12-06 01:12:18.028600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:53200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.285 [2024-12-06 01:12:18.028623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:34:49.285 [2024-12-06 01:12:18.028656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:53216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.285 [2024-12-06 01:12:18.028679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:34:49.285 [2024-12-06 01:12:18.028713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:53232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.285 [2024-12-06 01:12:18.028737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:34:49.285 [2024-12-06 01:12:18.028770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:53248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.285 [2024-12-06 01:12:18.028807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:34:49.285 [2024-12-06 01:12:18.028855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:53264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.285 [2024-12-06 01:12:18.028881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:34:49.285 [2024-12-06 01:12:18.028917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:53280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.285 [2024-12-06 01:12:18.028942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:34:49.285 [2024-12-06 01:12:18.028977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:52328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.285 [2024-12-06 01:12:18.029006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:34:49.285 [2024-12-06 01:12:18.029062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:52360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.285 [2024-12-06 01:12:18.029088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:34:49.285 [2024-12-06 01:12:18.029141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:52392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.285 [2024-12-06 01:12:18.029166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:49.285 [2024-12-06 01:12:18.029203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:52424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.285 [2024-12-06 01:12:18.029227] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:49.285 [2024-12-06 01:12:18.029263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:52456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.285 [2024-12-06 01:12:18.029287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:34:49.285 [2024-12-06 01:12:18.029323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:52488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.285 [2024-12-06 01:12:18.029362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:34:49.285 [2024-12-06 01:12:18.029397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:52520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.285 [2024-12-06 01:12:18.029420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:34:49.285 [2024-12-06 01:12:18.029472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:53304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.285 [2024-12-06 01:12:18.029510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:34:49.285 [2024-12-06 01:12:18.029585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:52528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.285 [2024-12-06 01:12:18.029625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:34:49.285 [2024-12-06 01:12:18.029669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:52560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.285 [2024-12-06 01:12:18.029695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:34:49.285 [2024-12-06 01:12:18.029731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:52592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.285 [2024-12-06 01:12:18.029756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:34:49.285 [2024-12-06 01:12:18.029792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:52624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.285 [2024-12-06 01:12:18.029828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:34:49.285 [2024-12-06 01:12:18.029868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:52656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.285 [2024-12-06 01:12:18.029918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:34:49.285 [2024-12-06 01:12:18.029972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:52688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:34:49.285 [2024-12-06 01:12:18.029996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:34:49.285 [2024-12-06 01:12:18.030030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:52720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.285 [2024-12-06 01:12:18.030054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:34:49.286 [2024-12-06 01:12:18.030104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:53320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.286 [2024-12-06 01:12:18.030128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:34:49.286 [2024-12-06 01:12:18.030161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:52552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.286 [2024-12-06 01:12:18.030184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:34:49.286 [2024-12-06 01:12:18.030216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:52584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.286 [2024-12-06 01:12:18.030240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:34:49.286 [2024-12-06 01:12:18.030274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:52616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.286 [2024-12-06 01:12:18.030296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:34:49.286 [2024-12-06 01:12:18.030329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:52648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.286 [2024-12-06 01:12:18.030353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:34:49.286 [2024-12-06 01:12:18.030386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:52680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.286 [2024-12-06 01:12:18.030409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:34:49.286 [2024-12-06 01:12:18.030442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:52712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.286 [2024-12-06 01:12:18.030466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:34:49.286 [2024-12-06 01:12:18.030499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:53328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.286 [2024-12-06 01:12:18.030522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:34:49.286 [2024-12-06 01:12:18.030556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 
nsid:1 lba:52760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.286 [2024-12-06 01:12:18.030578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:34:49.286 [2024-12-06 01:12:18.030613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:52792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.286 [2024-12-06 01:12:18.030636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:34:49.286 [2024-12-06 01:12:18.031289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:52824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.286 [2024-12-06 01:12:18.031328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:34:49.286 [2024-12-06 01:12:18.031384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:52856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.286 [2024-12-06 01:12:18.031411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:34:49.286 [2024-12-06 01:12:18.031449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:52888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.286 [2024-12-06 01:12:18.031476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:34:49.286 [2024-12-06 01:12:18.031513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:52920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.286 [2024-12-06 01:12:18.031539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:34:49.286 [2024-12-06 01:12:18.031577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:52952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.286 [2024-12-06 01:12:18.031602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:34:49.286 [2024-12-06 01:12:18.031640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:53336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.286 [2024-12-06 01:12:18.031680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:34:49.286 [2024-12-06 01:12:18.031717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:53352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.286 [2024-12-06 01:12:18.031757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:34:49.286 [2024-12-06 01:12:18.031792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:53368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.286 [2024-12-06 01:12:18.031843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:34:49.286 [2024-12-06 01:12:18.031881] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:53384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.286 [2024-12-06 01:12:18.031906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:34:49.286 [2024-12-06 01:12:18.031964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:53000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.286 [2024-12-06 01:12:18.032005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:49.286 [2024-12-06 01:12:18.032061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:52752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.286 [2024-12-06 01:12:18.032088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:49.286 [2024-12-06 01:12:18.032127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:52784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.286 [2024-12-06 01:12:18.032153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:34:49.286 [2024-12-06 01:12:18.032206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:52816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.286 [2024-12-06 01:12:18.032233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:34:49.286 [2024-12-06 01:12:18.032271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:52848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.286 [2024-12-06 01:12:18.032297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:34:49.286 [2024-12-06 01:12:18.032350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:52880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.286 [2024-12-06 01:12:18.032374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:34:49.286 [2024-12-06 01:12:18.032410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:52912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.286 [2024-12-06 01:12:18.032434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:34:49.286 [2024-12-06 01:12:18.032486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:52944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.286 [2024-12-06 01:12:18.032511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:34:49.286 [2024-12-06 01:12:18.032546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:52976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.286 [2024-12-06 01:12:18.032570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 
00:34:49.286 [2024-12-06 01:12:18.032608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:53400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.286 [2024-12-06 01:12:18.032632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:34:49.286 [2024-12-06 01:12:18.032669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:53416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.286 [2024-12-06 01:12:18.032694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:34:49.286 [2024-12-06 01:12:18.032731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:53432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.286 [2024-12-06 01:12:18.032756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:34:49.286 [2024-12-06 01:12:18.032807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:52992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.286 [2024-12-06 01:12:18.032855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:34:49.286 [2024-12-06 01:12:18.032896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:53024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.286 [2024-12-06 01:12:18.032922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:34:49.286 [2024-12-06 01:12:18.034209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:53056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.286 [2024-12-06 01:12:18.034242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:34:49.287 [2024-12-06 01:12:18.034287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:53088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.287 [2024-12-06 01:12:18.034319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:34:49.287 [2024-12-06 01:12:18.034358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:53120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.287 [2024-12-06 01:12:18.034384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:34:49.287 [2024-12-06 01:12:18.034437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:53152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.287 [2024-12-06 01:12:18.034463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:34:49.287 [2024-12-06 01:12:18.034514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:53184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.287 [2024-12-06 01:12:18.034538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:34:49.287 [2024-12-06 01:12:18.034574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:53216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.287 [2024-12-06 01:12:18.034624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:34:49.287 [2024-12-06 01:12:18.034672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:53248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.287 [2024-12-06 01:12:18.034725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:34:49.287 [2024-12-06 01:12:18.034768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:53280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.287 [2024-12-06 01:12:18.034795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:34:49.287 [2024-12-06 01:12:18.034842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:52360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.287 [2024-12-06 01:12:18.034870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:34:49.287 [2024-12-06 01:12:18.034929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:52424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.287 [2024-12-06 01:12:18.034964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:34:49.287 [2024-12-06 01:12:18.035010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:52488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.287 [2024-12-06 01:12:18.035039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:34:49.287 [2024-12-06 01:12:18.035076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:53304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.287 [2024-12-06 01:12:18.035102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:49.287 [2024-12-06 01:12:18.035138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:52560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.287 [2024-12-06 01:12:18.035163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:49.287 [2024-12-06 01:12:18.035200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:52624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.287 [2024-12-06 01:12:18.035246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:34:49.287 [2024-12-06 01:12:18.035284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:52688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.287 [2024-12-06 01:12:18.035309] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:49.287 [2024-12-06 01:12:18.035346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:53320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.287 [2024-12-06 01:12:18.035370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:34:49.287 [2024-12-06 01:12:18.035407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:52584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.287 [2024-12-06 01:12:18.035431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:34:49.287 [2024-12-06 01:12:18.035466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:52648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.287 [2024-12-06 01:12:18.035507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:49.287 [2024-12-06 01:12:18.035543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:52712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.287 [2024-12-06 01:12:18.035567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:49.287 [2024-12-06 01:12:18.035604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:52760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.287 [2024-12-06 01:12:18.035628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:49.287 [2024-12-06 01:12:18.036151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:53048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.287 [2024-12-06 01:12:18.036183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:34:49.287 [2024-12-06 01:12:18.036239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:53080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.287 [2024-12-06 01:12:18.036268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:34:49.287 [2024-12-06 01:12:18.036307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:53112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.287 [2024-12-06 01:12:18.036348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:34:49.287 [2024-12-06 01:12:18.036399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:53144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.287 [2024-12-06 01:12:18.036424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:34:49.287 [2024-12-06 01:12:18.036459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:53176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:34:49.287 [2024-12-06 01:12:18.036483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:34:49.287 [2024-12-06 01:12:18.036518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:53208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.287 [2024-12-06 01:12:18.036547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:34:49.287 [2024-12-06 01:12:18.036583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:53240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.287 [2024-12-06 01:12:18.036607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:34:49.287 [2024-12-06 01:12:18.036658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:53272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.287 [2024-12-06 01:12:18.036684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:34:49.287 [2024-12-06 01:12:18.036720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:53296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.287 [2024-12-06 01:12:18.036745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:34:49.287 [2024-12-06 01:12:18.036782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:52824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.287 [2024-12-06 01:12:18.036831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:34:49.287 [2024-12-06 01:12:18.036872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:52888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.287 [2024-12-06 01:12:18.036897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:34:49.287 [2024-12-06 01:12:18.036935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:52952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.287 [2024-12-06 01:12:18.036961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:34:49.287 [2024-12-06 01:12:18.036997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:53352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.287 [2024-12-06 01:12:18.037023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:34:49.287 [2024-12-06 01:12:18.037061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:53384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.287 [2024-12-06 01:12:18.037086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:34:49.287 [2024-12-06 01:12:18.037138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 
nsid:1 lba:52752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.287 [2024-12-06 01:12:18.037164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:34:49.287 [2024-12-06 01:12:18.037214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:52816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.287 [2024-12-06 01:12:18.037238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:34:49.287 [2024-12-06 01:12:18.037272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:52880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.287 [2024-12-06 01:12:18.037297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:34:49.287 [2024-12-06 01:12:18.037331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:52944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.287 [2024-12-06 01:12:18.037355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:34:49.287 [2024-12-06 01:12:18.037395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:53400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.287 [2024-12-06 01:12:18.037420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:34:49.287 [2024-12-06 01:12:18.037455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:53432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.288 [2024-12-06 01:12:18.037479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:34:49.288 [2024-12-06 01:12:18.037531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:53024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.288 [2024-12-06 01:12:18.037570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:34:49.288 [2024-12-06 01:12:18.038534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:53088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.288 [2024-12-06 01:12:18.038567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:34:49.288 [2024-12-06 01:12:18.038611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:53152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.288 [2024-12-06 01:12:18.038650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:34:49.288 [2024-12-06 01:12:18.038686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:53216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.288 [2024-12-06 01:12:18.038710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:34:49.288 [2024-12-06 01:12:18.038745] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:53280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.288 [2024-12-06 01:12:18.038768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:34:49.288 [2024-12-06 01:12:18.038846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:52424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.288 [2024-12-06 01:12:18.038883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:34:49.288 [2024-12-06 01:12:18.038924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:53304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.288 [2024-12-06 01:12:18.038951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:34:49.288 [2024-12-06 01:12:18.038988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:52624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.288 [2024-12-06 01:12:18.039013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:34:49.288 [2024-12-06 01:12:18.039050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:53320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.288 [2024-12-06 01:12:18.039075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:34:49.288 [2024-12-06 01:12:18.039112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:52648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.288 [2024-12-06 01:12:18.039138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:34:49.288 [2024-12-06 01:12:18.039196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:52760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.288 [2024-12-06 01:12:18.039221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:49.288 [2024-12-06 01:12:18.042379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:53080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.288 [2024-12-06 01:12:18.042412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:49.288 [2024-12-06 01:12:18.042456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:53144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.288 [2024-12-06 01:12:18.042481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:34:49.288 [2024-12-06 01:12:18.042518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:53208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.288 [2024-12-06 01:12:18.042543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 
00:34:49.288 [2024-12-06 01:12:18.042578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:53272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.288 [2024-12-06 01:12:18.042603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:34:49.288 [2024-12-06 01:12:18.042638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:52824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.288 [2024-12-06 01:12:18.042679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:34:49.288 [2024-12-06 01:12:18.042716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:52952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.288 [2024-12-06 01:12:18.042741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:34:49.288 [2024-12-06 01:12:18.042778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:53384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.288 [2024-12-06 01:12:18.042828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:34:49.288 [2024-12-06 01:12:18.042871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:52816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.288 [2024-12-06 01:12:18.042897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:34:49.288 [2024-12-06 01:12:18.042933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:52944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.288 [2024-12-06 01:12:18.042958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:34:49.288 [2024-12-06 01:12:18.042995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:53432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.288 [2024-12-06 01:12:18.043020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:34:49.288 [2024-12-06 01:12:18.043058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:53344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.288 [2024-12-06 01:12:18.043084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:34:49.288 [2024-12-06 01:12:18.043137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:53376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.288 [2024-12-06 01:12:18.043182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:34:49.288 [2024-12-06 01:12:18.043219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:53152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.288 [2024-12-06 01:12:18.043243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:34:49.288 [2024-12-06 01:12:18.043278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:53280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.288 [2024-12-06 01:12:18.043302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:34:49.288 [2024-12-06 01:12:18.043337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:53304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.288 [2024-12-06 01:12:18.043360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:34:49.288 [2024-12-06 01:12:18.043396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:53320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.288 [2024-12-06 01:12:18.043420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:34:49.288 [2024-12-06 01:12:18.043455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:52760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.288 [2024-12-06 01:12:18.043479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:34:49.288 [2024-12-06 01:12:18.045118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:53408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.288 [2024-12-06 01:12:18.045152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:34:49.288 [2024-12-06 01:12:18.045196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:53440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.288 [2024-12-06 01:12:18.045238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:34:49.288 [2024-12-06 01:12:18.045289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:53456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.288 [2024-12-06 01:12:18.045314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:34:49.288 [2024-12-06 01:12:18.045349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:53472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.288 [2024-12-06 01:12:18.045373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:34:49.288 [2024-12-06 01:12:18.045407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:53488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.288 [2024-12-06 01:12:18.045431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:34:49.288 [2024-12-06 01:12:18.045528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:53504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.288 [2024-12-06 01:12:18.045556] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:34:49.288 [2024-12-06 01:12:18.045593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:53520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.288 [2024-12-06 01:12:18.045624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:34:49.288 [2024-12-06 01:12:18.045662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:53536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.288 [2024-12-06 01:12:18.045688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:34:49.288 [2024-12-06 01:12:18.045725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:53552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.288 [2024-12-06 01:12:18.045751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:34:49.288 [2024-12-06 01:12:18.045787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:53568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.288 [2024-12-06 01:12:18.045812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:34:49.289 [2024-12-06 01:12:18.045875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:53584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.289 [2024-12-06 01:12:18.045901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:34:49.289 [2024-12-06 01:12:18.045937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:53600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.289 [2024-12-06 01:12:18.045978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:34:49.289 [2024-12-06 01:12:18.046015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:53616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.289 [2024-12-06 01:12:18.046040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:34:49.289 [2024-12-06 01:12:18.046077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:53632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.289 [2024-12-06 01:12:18.046102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:34:49.289 [2024-12-06 01:12:18.046139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:53648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.289 [2024-12-06 01:12:18.046179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:49.289 [2024-12-06 01:12:18.046215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:53664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:34:49.289 [2024-12-06 01:12:18.046239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:49.289 [2024-12-06 01:12:18.046277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:53680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.289 [2024-12-06 01:12:18.046302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:34:49.289 [2024-12-06 01:12:18.047200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:53696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.289 [2024-12-06 01:12:18.047231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:34:49.289 [2024-12-06 01:12:18.047273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:53712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.289 [2024-12-06 01:12:18.047298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:34:49.289 [2024-12-06 01:12:18.047357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:53728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.289 [2024-12-06 01:12:18.047382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:34:49.289 [2024-12-06 01:12:18.047418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:53744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.289 [2024-12-06 01:12:18.047442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:34:49.289 [2024-12-06 01:12:18.047478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:53072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.289 [2024-12-06 01:12:18.047503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:34:49.289 [2024-12-06 01:12:18.047537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:53136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.289 [2024-12-06 01:12:18.047562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:34:49.289 [2024-12-06 01:12:18.047598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:53200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.289 [2024-12-06 01:12:18.047638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:34:49.289 [2024-12-06 01:12:18.047674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:53264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.289 [2024-12-06 01:12:18.047698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:34:49.289 [2024-12-06 01:12:18.047733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 
lba:53144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.289 [2024-12-06 01:12:18.047757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:34:49.289 [2024-12-06 01:12:18.047791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:53272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.289 [2024-12-06 01:12:18.047841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:34:49.289 [2024-12-06 01:12:18.047878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:52952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.289 [2024-12-06 01:12:18.047903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:34:49.289 [2024-12-06 01:12:18.047939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:52816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.289 [2024-12-06 01:12:18.047963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:34:49.289 [2024-12-06 01:12:18.047999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:53432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.289 [2024-12-06 01:12:18.048024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:34:49.289 [2024-12-06 01:12:18.048060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:53376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.289 [2024-12-06 01:12:18.048084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:34:49.289 [2024-12-06 01:12:18.048129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:53280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.289 [2024-12-06 01:12:18.048155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:34:49.289 [2024-12-06 01:12:18.048190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:53320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.289 [2024-12-06 01:12:18.048215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:34:49.289 [2024-12-06 01:12:18.048266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:53328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.289 [2024-12-06 01:12:18.048292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:34:49.289 [2024-12-06 01:12:18.048329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:53760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.289 [2024-12-06 01:12:18.048354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:34:49.289 [2024-12-06 01:12:18.048391] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:53776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.289 [2024-12-06 01:12:18.048416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:34:49.289 [2024-12-06 01:12:18.048453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:53792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.289 [2024-12-06 01:12:18.048479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:34:49.289 [2024-12-06 01:12:18.048515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:53808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.289 [2024-12-06 01:12:18.048541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:34:49.289 [2024-12-06 01:12:18.050286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:53336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.289 [2024-12-06 01:12:18.050323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:34:49.289 [2024-12-06 01:12:18.050385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:53440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.289 [2024-12-06 01:12:18.050427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:34:49.289 [2024-12-06 01:12:18.050466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:53472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.289 [2024-12-06 01:12:18.050492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:34:49.289 [2024-12-06 01:12:18.050529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:53504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.289 [2024-12-06 01:12:18.050555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:34:49.289 [2024-12-06 01:12:18.050592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:53536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.289 [2024-12-06 01:12:18.050617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:34:49.289 [2024-12-06 01:12:18.050654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:53568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.289 [2024-12-06 01:12:18.050695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:34:49.289 [2024-12-06 01:12:18.050734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:53600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.289 [2024-12-06 01:12:18.050760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:005f p:0 m:0 dnr:0 
00:34:49.289 [2024-12-06 01:12:18.050797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:53632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.289 [2024-12-06 01:12:18.050841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:34:49.289 [2024-12-06 01:12:18.050889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:53664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.289 [2024-12-06 01:12:18.050916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:49.289 [2024-12-06 01:12:18.050952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:53416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.289 [2024-12-06 01:12:18.050977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:49.289 [2024-12-06 01:12:18.051013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:53832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.290 [2024-12-06 01:12:18.051038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:34:49.290 [2024-12-06 01:12:18.051075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:53056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.290 [2024-12-06 01:12:18.051100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:34:49.290 [2024-12-06 01:12:18.051136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:53184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.290 [2024-12-06 01:12:18.051161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:34:49.290 [2024-12-06 01:12:18.051213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:53712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.290 [2024-12-06 01:12:18.051238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:34:49.290 [2024-12-06 01:12:18.051288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:53744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.290 [2024-12-06 01:12:18.051311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:34:49.290 [2024-12-06 01:12:18.051346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:53136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.290 [2024-12-06 01:12:18.051384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:34:49.290 [2024-12-06 01:12:18.051418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:53264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.290 [2024-12-06 01:12:18.051441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:34:49.290 [2024-12-06 01:12:18.051474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:53272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.290 [2024-12-06 01:12:18.051501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:34:49.290 [2024-12-06 01:12:18.051536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:52816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.290 [2024-12-06 01:12:18.051559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:34:49.290 [2024-12-06 01:12:18.051593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:53376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.290 [2024-12-06 01:12:18.051617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:34:49.290 [2024-12-06 01:12:18.051653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:53320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.290 [2024-12-06 01:12:18.051677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:34:49.290 [2024-12-06 01:12:18.051710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:53760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.290 [2024-12-06 01:12:18.051733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:34:49.290 [2024-12-06 01:12:18.051796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:53792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.290 [2024-12-06 01:12:18.051866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:34:49.290 [2024-12-06 01:12:18.054876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:53848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.290 [2024-12-06 01:12:18.054914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:34:49.290 [2024-12-06 01:12:18.054960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:53864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.290 [2024-12-06 01:12:18.054987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:34:49.290 [2024-12-06 01:12:18.055024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:53880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.290 [2024-12-06 01:12:18.055050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:34:49.290 [2024-12-06 01:12:18.055087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:53896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.290 [2024-12-06 01:12:18.055131] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:34:49.290 [2024-12-06 01:12:18.055182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:53912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.290 [2024-12-06 01:12:18.055206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:34:49.290 [2024-12-06 01:12:18.055239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:53928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.290 [2024-12-06 01:12:18.055262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:34:49.290 [2024-12-06 01:12:18.055295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:53944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.290 [2024-12-06 01:12:18.055318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:34:49.290 [2024-12-06 01:12:18.055357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:53960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.290 [2024-12-06 01:12:18.055381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:34:49.290 [2024-12-06 01:12:18.055431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:53976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.290 [2024-12-06 01:12:18.055455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:34:49.290 [2024-12-06 01:12:18.055489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:53440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.290 [2024-12-06 01:12:18.055512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:34:49.290 [2024-12-06 01:12:18.055546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:53504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.290 [2024-12-06 01:12:18.055569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:49.290 [2024-12-06 01:12:18.055620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:53568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.290 [2024-12-06 01:12:18.055653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:49.290 [2024-12-06 01:12:18.055719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:53632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.290 [2024-12-06 01:12:18.055746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:34:49.290 [2024-12-06 01:12:18.055782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:53416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:34:49.290 [2024-12-06 01:12:18.055808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:49.290 [2024-12-06 01:12:18.055856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:53056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.290 [2024-12-06 01:12:18.055882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:34:49.290 [2024-12-06 01:12:18.055918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:53712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.290 [2024-12-06 01:12:18.055942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:34:49.290 [2024-12-06 01:12:18.055979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:53136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.290 [2024-12-06 01:12:18.056004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:49.290 [2024-12-06 01:12:18.056040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:53272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.290 [2024-12-06 01:12:18.056066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:49.290 [2024-12-06 01:12:18.056116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:53376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.290 [2024-12-06 01:12:18.056140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:49.290 [2024-12-06 01:12:18.056195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:53760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.290 [2024-12-06 01:12:18.056219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:34:49.290 [2024-12-06 01:12:18.056281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:53352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.290 [2024-12-06 01:12:18.056306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:34:49.290 [2024-12-06 01:12:18.056339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:53992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.290 [2024-12-06 01:12:18.056363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:34:49.290 [2024-12-06 01:12:18.056398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:54008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.290 [2024-12-06 01:12:18.056421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:34:49.290 [2024-12-06 01:12:18.056455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 
lba:54024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.290 [2024-12-06 01:12:18.056479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:34:49.290 [2024-12-06 01:12:18.056514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:53088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.290 [2024-12-06 01:12:18.056538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:34:49.290 [2024-12-06 01:12:18.056572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:54040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.290 [2024-12-06 01:12:18.056596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:34:49.290 [2024-12-06 01:12:18.056629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:54056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.291 [2024-12-06 01:12:18.056653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:34:49.291 [2024-12-06 01:12:18.056687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:54072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.291 [2024-12-06 01:12:18.056711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:34:49.291 [2024-12-06 01:12:18.056744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:54088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.291 [2024-12-06 01:12:18.056768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:34:49.291 [2024-12-06 01:12:18.056825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:54104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.291 [2024-12-06 01:12:18.056852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:34:49.291 [2024-12-06 01:12:18.056890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:54120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.291 [2024-12-06 01:12:18.056916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:34:49.291 [2024-12-06 01:12:18.056953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:54136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.291 [2024-12-06 01:12:18.056984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:34:49.291 [2024-12-06 01:12:18.060498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:53464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.291 [2024-12-06 01:12:18.060535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:34:49.291 [2024-12-06 01:12:18.060596] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:53496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.291 [2024-12-06 01:12:18.060622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:34:49.291 [2024-12-06 01:12:18.060673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:53528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.291 [2024-12-06 01:12:18.060696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:34:49.291 [2024-12-06 01:12:18.060731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:54144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.291 [2024-12-06 01:12:18.060754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:34:49.291 [2024-12-06 01:12:18.060788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:53560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.291 [2024-12-06 01:12:18.060837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:34:49.291 [2024-12-06 01:12:18.060875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:53592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.291 [2024-12-06 01:12:18.060899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:34:49.291 [2024-12-06 01:12:18.060935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:53624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.291 [2024-12-06 01:12:18.060959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:34:49.291 [2024-12-06 01:12:18.060994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:53656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.291 [2024-12-06 01:12:18.061017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:34:49.291 [2024-12-06 01:12:18.061052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:53688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.291 [2024-12-06 01:12:18.061076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:34:49.291 [2024-12-06 01:12:18.061129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:53720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.291 [2024-12-06 01:12:18.061167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:34:49.291 [2024-12-06 01:12:18.061230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:53864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.291 [2024-12-06 01:12:18.061257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:001a p:0 m:0 dnr:0 
00:34:49.291 [2024-12-06 01:12:18.061293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:53896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.291 [2024-12-06 01:12:18.061324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:34:49.291 [2024-12-06 01:12:18.061362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:53928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.291 [2024-12-06 01:12:18.061388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:34:49.291 [2024-12-06 01:12:18.061424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:53960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.291 [2024-12-06 01:12:18.061449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:34:49.291 [2024-12-06 01:12:18.061500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:53440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.291 [2024-12-06 01:12:18.061525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:34:49.291 [2024-12-06 01:12:18.061575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:53568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.291 [2024-12-06 01:12:18.061599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:34:49.291 [2024-12-06 01:12:18.061634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:53416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.291 [2024-12-06 01:12:18.061657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:34:49.291 [2024-12-06 01:12:18.061691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:53712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.291 [2024-12-06 01:12:18.061715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:49.291 [2024-12-06 01:12:18.061748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:53272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.291 [2024-12-06 01:12:18.061772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:49.291 [2024-12-06 01:12:18.061828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:53760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.291 [2024-12-06 01:12:18.061854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:34:49.291 [2024-12-06 01:12:18.061890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:53992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.291 [2024-12-06 01:12:18.061914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:34:49.291 [2024-12-06 01:12:18.061949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:54024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.291 [2024-12-06 01:12:18.061974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:34:49.291 [2024-12-06 01:12:18.062008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:54040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.291 [2024-12-06 01:12:18.062032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:34:49.291 [2024-12-06 01:12:18.062067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:54072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.291 [2024-12-06 01:12:18.062107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:34:49.291 [2024-12-06 01:12:18.062151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:54104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.291 [2024-12-06 01:12:18.062181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:34:49.291 [2024-12-06 01:12:18.062224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:54136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.291 [2024-12-06 01:12:18.062270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:34:49.291 [2024-12-06 01:12:18.062310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:53152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.291 [2024-12-06 01:12:18.062336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:34:49.291 [2024-12-06 01:12:18.062372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:53752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.291 [2024-12-06 01:12:18.062398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:34:49.291 [2024-12-06 01:12:18.062434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:53784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.291 [2024-12-06 01:12:18.062460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:34:49.292 [2024-12-06 01:12:18.062500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:53816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.292 [2024-12-06 01:12:18.062532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:34:49.292 [2024-12-06 01:12:18.062585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:53488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.292 [2024-12-06 01:12:18.062625] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:34:49.292 [2024-12-06 01:12:18.062661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:53552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.292 [2024-12-06 01:12:18.062685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:34:49.292 [2024-12-06 01:12:18.062718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:54168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.292 [2024-12-06 01:12:18.062741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:34:49.292 [2024-12-06 01:12:18.062775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:54184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.292 [2024-12-06 01:12:18.062821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:34:49.292 [2024-12-06 01:12:18.062860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:54200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.292 [2024-12-06 01:12:18.062885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:34:49.292 [2024-12-06 01:12:18.062920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:54216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.292 [2024-12-06 01:12:18.062945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:34:49.292 [2024-12-06 01:12:18.062986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:54232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.292 [2024-12-06 01:12:18.063012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:34:49.292 [2024-12-06 01:12:18.063048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:54248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.292 [2024-12-06 01:12:18.063073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:34:49.292 [2024-12-06 01:12:18.063123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:53616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.292 [2024-12-06 01:12:18.063146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:34:49.292 [2024-12-06 01:12:18.063180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:53680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.292 [2024-12-06 01:12:18.063203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:34:49.292 [2024-12-06 01:12:18.063251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:53840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:34:49.292 [2024-12-06 01:12:18.063274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:34:49.292 [2024-12-06 01:12:18.063307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:53728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.292 [2024-12-06 01:12:18.063330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:34:49.292 [2024-12-06 01:12:18.065381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:53280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.292 [2024-12-06 01:12:18.065428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:34:49.292 [2024-12-06 01:12:18.065479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:53808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.292 [2024-12-06 01:12:18.065505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:34:49.292 [2024-12-06 01:12:18.065542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:54264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.292 [2024-12-06 01:12:18.065566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:34:49.292 [2024-12-06 01:12:18.065616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:54280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.292 [2024-12-06 01:12:18.065642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:34:49.292 [2024-12-06 01:12:18.065679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:54296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.292 [2024-12-06 01:12:18.065704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:34:49.292 [2024-12-06 01:12:18.065741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:53872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.292 [2024-12-06 01:12:18.065766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:34:49.292 [2024-12-06 01:12:18.065802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:53904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.292 [2024-12-06 01:12:18.065842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:34:49.292 [2024-12-06 01:12:18.065881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:53936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.292 [2024-12-06 01:12:18.065922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:49.292 [2024-12-06 01:12:18.065960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 
lba:53968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.292 [2024-12-06 01:12:18.065985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:49.292 [2024-12-06 01:12:18.066020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:53472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.292 [2024-12-06 01:12:18.066045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:34:49.292 [2024-12-06 01:12:18.066095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:53600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.292 [2024-12-06 01:12:18.066119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:34:49.292 [2024-12-06 01:12:18.066169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:53832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.292 [2024-12-06 01:12:18.066193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:34:49.292 [2024-12-06 01:12:18.066243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:53496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.292 [2024-12-06 01:12:18.066268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:34:49.292 [2024-12-06 01:12:18.066303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:54144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.292 [2024-12-06 01:12:18.066327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:34:49.292 [2024-12-06 01:12:18.066362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:53592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.292 [2024-12-06 01:12:18.066386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:34:49.292 [2024-12-06 01:12:18.066420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:53656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.292 [2024-12-06 01:12:18.066444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:34:49.292 [2024-12-06 01:12:18.066478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:53720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.292 [2024-12-06 01:12:18.066503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:34:49.292 [2024-12-06 01:12:18.066554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:53896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.292 [2024-12-06 01:12:18.066592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:34:49.292 [2024-12-06 01:12:18.066627] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:53960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.292 [2024-12-06 01:12:18.066655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:34:49.292 [2024-12-06 01:12:18.066691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:53568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.292 [2024-12-06 01:12:18.066714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:34:49.292 [2024-12-06 01:12:18.068882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:53712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.292 [2024-12-06 01:12:18.068918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:34:49.292 [2024-12-06 01:12:18.068963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:53760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.292 [2024-12-06 01:12:18.068989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:34:49.292 [2024-12-06 01:12:18.069027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:54024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.292 [2024-12-06 01:12:18.069067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:34:49.292 [2024-12-06 01:12:18.069104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:54072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.292 [2024-12-06 01:12:18.069129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:34:49.292 [2024-12-06 01:12:18.069178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:54136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.292 [2024-12-06 01:12:18.069201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:34:49.292 [2024-12-06 01:12:18.069236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:53752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.293 [2024-12-06 01:12:18.069259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:34:49.293 [2024-12-06 01:12:18.069291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:53816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.293 [2024-12-06 01:12:18.069315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:34:49.293 [2024-12-06 01:12:18.069347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:53552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.293 [2024-12-06 01:12:18.069370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 
00:34:49.293 [2024-12-06 01:12:18.069402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:54184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.293 [2024-12-06 01:12:18.069425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:34:49.293 [2024-12-06 01:12:18.069458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:54216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.293 [2024-12-06 01:12:18.069481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:34:49.293 [2024-12-06 01:12:18.069514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:54248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.293 [2024-12-06 01:12:18.069537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:34:49.293 [2024-12-06 01:12:18.069575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:53680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.293 [2024-12-06 01:12:18.069599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:34:49.293 [2024-12-06 01:12:18.069633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:53728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.293 [2024-12-06 01:12:18.069655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:34:49.293 [2024-12-06 01:12:18.069689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:53320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.293 [2024-12-06 01:12:18.069712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:34:49.293 [2024-12-06 01:12:18.069745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:54000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.293 [2024-12-06 01:12:18.069767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:34:49.293 [2024-12-06 01:12:18.069827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:54032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.293 [2024-12-06 01:12:18.069854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:34:49.293 [2024-12-06 01:12:18.069890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:54064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.293 [2024-12-06 01:12:18.069915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:34:49.293 [2024-12-06 01:12:18.069949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:54312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.293 [2024-12-06 01:12:18.069974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:34:49.293 [2024-12-06 01:12:18.070010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:54328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.293 [2024-12-06 01:12:18.070035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:34:49.293 [2024-12-06 01:12:18.070069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:54344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.293 [2024-12-06 01:12:18.070109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:49.293 [2024-12-06 01:12:18.070143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:54096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.293 [2024-12-06 01:12:18.070182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:49.293 [2024-12-06 01:12:18.070237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:54128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.293 [2024-12-06 01:12:18.070266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:34:49.293 [2024-12-06 01:12:18.070334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:53848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.293 [2024-12-06 01:12:18.070364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:34:49.293 [2024-12-06 01:12:18.070427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:53912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.293 [2024-12-06 01:12:18.070453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:34:49.293 [2024-12-06 01:12:18.070490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:53976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.293 [2024-12-06 01:12:18.070516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:34:49.293 [2024-12-06 01:12:18.070553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:53632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.293 [2024-12-06 01:12:18.070578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:34:49.293 [2024-12-06 01:12:18.070614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:53808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.293 [2024-12-06 01:12:18.070639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:34:49.293 [2024-12-06 01:12:18.070690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:54280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.293 [2024-12-06 01:12:18.070714] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:34:49.293 [2024-12-06 01:12:18.070765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:53872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.293 [2024-12-06 01:12:18.070789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:34:49.293 [2024-12-06 01:12:18.070848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:53936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.293 [2024-12-06 01:12:18.070874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:34:49.293 [2024-12-06 01:12:18.070909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:53472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.293 [2024-12-06 01:12:18.070934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:34:49.293 [2024-12-06 01:12:18.070969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:53832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.293 [2024-12-06 01:12:18.070993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:34:49.293 [2024-12-06 01:12:18.071028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:54144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.293 [2024-12-06 01:12:18.071052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:34:49.293 [2024-12-06 01:12:18.071101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:53656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.293 [2024-12-06 01:12:18.071125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:34:49.293 [2024-12-06 01:12:18.071175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:53896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.293 [2024-12-06 01:12:18.071198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:34:49.293 [2024-12-06 01:12:18.071237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:53568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.293 [2024-12-06 01:12:18.071262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:34:49.293 [2024-12-06 01:12:18.073411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:54056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.293 [2024-12-06 01:12:18.073448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:34:49.293 [2024-12-06 01:12:18.073492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:54120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:34:49.293 [2024-12-06 01:12:18.073520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:34:49.293 [2024-12-06 01:12:18.073557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:54176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.293 [2024-12-06 01:12:18.073582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:34:49.293 [2024-12-06 01:12:18.073641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:54360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.293 [2024-12-06 01:12:18.073684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:34:49.293 [2024-12-06 01:12:18.073729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:54376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.293 [2024-12-06 01:12:18.073756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:34:49.293 [2024-12-06 01:12:18.073792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:54392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.293 [2024-12-06 01:12:18.073826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:34:49.293 [2024-12-06 01:12:18.073884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:54408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.293 [2024-12-06 01:12:18.073910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:34:49.293 [2024-12-06 01:12:18.073948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:54424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.293 [2024-12-06 01:12:18.073988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:34:49.293 [2024-12-06 01:12:18.074024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:54440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.294 [2024-12-06 01:12:18.074048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:49.294 [2024-12-06 01:12:18.074100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:54456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.294 [2024-12-06 01:12:18.074141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:49.294 [2024-12-06 01:12:18.074178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:54472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.294 [2024-12-06 01:12:18.074202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:34:49.294 [2024-12-06 01:12:18.074237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 
lba:54488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.294 [2024-12-06 01:12:18.074267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:49.294 [2024-12-06 01:12:18.074304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:54504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.294 [2024-12-06 01:12:18.074329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:34:49.294 [2024-12-06 01:12:18.074380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:54520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.294 [2024-12-06 01:12:18.074405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:34:49.294 [2024-12-06 01:12:18.074454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:54224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.294 [2024-12-06 01:12:18.074478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:49.294 [2024-12-06 01:12:18.074511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:53760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.294 [2024-12-06 01:12:18.074535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:49.294 [2024-12-06 01:12:18.074568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:54072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.294 [2024-12-06 01:12:18.074591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:49.294 [2024-12-06 01:12:18.074624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:53752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.294 [2024-12-06 01:12:18.074648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:34:49.294 [2024-12-06 01:12:18.074681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:53552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.294 [2024-12-06 01:12:18.074703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:34:49.294 [2024-12-06 01:12:18.074737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:54216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.294 [2024-12-06 01:12:18.074760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:34:49.294 [2024-12-06 01:12:18.074808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:53680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.294 [2024-12-06 01:12:18.074841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:34:49.294 [2024-12-06 01:12:18.074878] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:53320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.294 [2024-12-06 01:12:18.074903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:34:49.294 [2024-12-06 01:12:18.074938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:54032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.294 [2024-12-06 01:12:18.074963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:34:49.294 [2024-12-06 01:12:18.075000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:54312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.294 [2024-12-06 01:12:18.075029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:34:49.294 [2024-12-06 01:12:18.077524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:54344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.294 [2024-12-06 01:12:18.077558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:34:49.294 [2024-12-06 01:12:18.077601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:54128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.294 [2024-12-06 01:12:18.077626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:34:49.294 [2024-12-06 01:12:18.077660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:53912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.294 [2024-12-06 01:12:18.077685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:34:49.294 [2024-12-06 01:12:18.077718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:53632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.294 [2024-12-06 01:12:18.077742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:34:49.294 [2024-12-06 01:12:18.077777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:54280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.294 [2024-12-06 01:12:18.077827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:34:49.294 [2024-12-06 01:12:18.077869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:53936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.294 [2024-12-06 01:12:18.077895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:34:49.294 [2024-12-06 01:12:18.077930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:53832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.294 [2024-12-06 01:12:18.077954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 
00:34:49.294 [2024-12-06 01:12:18.077990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:53656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.294 [2024-12-06 01:12:18.078015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:34:49.294 [2024-12-06 01:12:18.078052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:53568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.294 [2024-12-06 01:12:18.078076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:34:49.294 [2024-12-06 01:12:18.078112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:54536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.294 [2024-12-06 01:12:18.078152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:34:49.294 [2024-12-06 01:12:18.078187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:54552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.294 [2024-12-06 01:12:18.078210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:34:49.294 [2024-12-06 01:12:18.078244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:54272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.294 [2024-12-06 01:12:18.078268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:34:49.294 [2024-12-06 01:12:18.078309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:53864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.294 [2024-12-06 01:12:18.078334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:34:49.294 [2024-12-06 01:12:18.078368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:53992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.294 [2024-12-06 01:12:18.078392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:34:49.294 [2024-12-06 01:12:18.078426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:54104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.294 [2024-12-06 01:12:18.078450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:34:49.294 [2024-12-06 01:12:18.078495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:54200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.294 [2024-12-06 01:12:18.078522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:34:49.294 [2024-12-06 01:12:18.078593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:54304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.294 [2024-12-06 01:12:18.078622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:34:49.294 [2024-12-06 01:12:18.078660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:54336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.294 [2024-12-06 01:12:18.078686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:34:49.294 [2024-12-06 01:12:18.078721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:54120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.294 [2024-12-06 01:12:18.078747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:34:49.294 [2024-12-06 01:12:18.078783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:54360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.294 [2024-12-06 01:12:18.078809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:34:49.294 [2024-12-06 01:12:18.078857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:54392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.294 [2024-12-06 01:12:18.078891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:34:49.294 [2024-12-06 01:12:18.078935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:54424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.294 [2024-12-06 01:12:18.078961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:34:49.294 [2024-12-06 01:12:18.078998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:54456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.294 [2024-12-06 01:12:18.079024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:34:49.294 [2024-12-06 01:12:18.079060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:54488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.295 [2024-12-06 01:12:18.079085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:49.295 [2024-12-06 01:12:18.079128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:54520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.295 [2024-12-06 01:12:18.079154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:49.295 [2024-12-06 01:12:18.079191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:53760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.295 [2024-12-06 01:12:18.079236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:34:49.295 [2024-12-06 01:12:18.079273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:53752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.295 [2024-12-06 01:12:18.079311] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:34:49.295 [2024-12-06 01:12:18.079346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:54216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.295 [2024-12-06 01:12:18.079368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:34:49.295 [2024-12-06 01:12:18.079402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:53320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.295 [2024-12-06 01:12:18.079425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:34:49.295 [2024-12-06 01:12:18.079460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:54312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.295 [2024-12-06 01:12:18.079482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:34:49.295 [2024-12-06 01:12:18.083499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:54568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.295 [2024-12-06 01:12:18.083538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:34:49.295 [2024-12-06 01:12:18.083614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:54584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.295 [2024-12-06 01:12:18.083643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:34:49.295 [2024-12-06 01:12:18.083697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:54600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.295 [2024-12-06 01:12:18.083723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:34:49.295 [2024-12-06 01:12:18.083776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:54616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.295 [2024-12-06 01:12:18.083800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:34:49.295 [2024-12-06 01:12:18.083842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:54632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.295 [2024-12-06 01:12:18.083868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:34:49.295 [2024-12-06 01:12:18.083903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:54648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.295 [2024-12-06 01:12:18.083927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:34:49.295 [2024-12-06 01:12:18.083962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:54664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.295 
[2024-12-06 01:12:18.083992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:34:49.295 [2024-12-06 01:12:18.084027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:54680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.295 [2024-12-06 01:12:18.084051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:34:49.295 [2024-12-06 01:12:18.084086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:54696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.295 [2024-12-06 01:12:18.084110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:34:49.295 [2024-12-06 01:12:18.084161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:54712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.295 [2024-12-06 01:12:18.084184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:34:49.295 [2024-12-06 01:12:18.084217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:54728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.295 [2024-12-06 01:12:18.084240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:34:49.295 [2024-12-06 01:12:18.084274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:54744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.295 [2024-12-06 01:12:18.084297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:34:49.295 [2024-12-06 01:12:18.084330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:54760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.295 [2024-12-06 01:12:18.084353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:34:49.295 [2024-12-06 01:12:18.084387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:54776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.295 [2024-12-06 01:12:18.084410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:34:49.295 [2024-12-06 01:12:18.084445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:54264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.295 [2024-12-06 01:12:18.084468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:34:49.295 [2024-12-06 01:12:18.084501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:54128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.295 [2024-12-06 01:12:18.084525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:34:49.295 [2024-12-06 01:12:18.084574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:53632 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.295 [2024-12-06 01:12:18.084625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:34:49.295 [2024-12-06 01:12:18.084696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:53936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.295 [2024-12-06 01:12:18.084724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:34:49.295 [2024-12-06 01:12:18.084761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:53656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.295 [2024-12-06 01:12:18.084791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:34:49.295 [2024-12-06 01:12:18.084837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:54536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.295 [2024-12-06 01:12:18.084873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:34:49.295 [2024-12-06 01:12:18.084913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:54272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.295 [2024-12-06 01:12:18.084938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:34:49.295 [2024-12-06 01:12:18.084990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:53992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.295 [2024-12-06 01:12:18.085015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:34:49.295 [2024-12-06 01:12:18.085050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:54200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.295 [2024-12-06 01:12:18.085074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:34:49.295 [2024-12-06 01:12:18.085109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:54336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.295 [2024-12-06 01:12:18.085149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:34:49.295 [2024-12-06 01:12:18.085183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:54360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.295 [2024-12-06 01:12:18.085206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:34:49.295 [2024-12-06 01:12:18.085240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:54424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.295 [2024-12-06 01:12:18.085263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:49.295 [2024-12-06 01:12:18.085297] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:54488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.295 [2024-12-06 01:12:18.085320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:49.295 [2024-12-06 01:12:18.085354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:53760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.295 [2024-12-06 01:12:18.085376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:34:49.295 [2024-12-06 01:12:18.085410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:54216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.296 [2024-12-06 01:12:18.085433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:34:49.296 [2024-12-06 01:12:18.085466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:54312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.296 [2024-12-06 01:12:18.085489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:34:49.296 [2024-12-06 01:12:18.085522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:54792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.296 [2024-12-06 01:12:18.085545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:34:49.296 [2024-12-06 01:12:18.085583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:54808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.296 [2024-12-06 01:12:18.085607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:34:49.296 [2024-12-06 01:12:18.085640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:54824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.296 [2024-12-06 01:12:18.085662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:34:49.296 [2024-12-06 01:12:18.085696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:54840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.296 [2024-12-06 01:12:18.085718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:34:49.296 [2024-12-06 01:12:18.085751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:54856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.296 [2024-12-06 01:12:18.085774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:34:49.296 [2024-12-06 01:12:18.085807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:54368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.296 [2024-12-06 01:12:18.085859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:004b p:0 m:0 dnr:0 
00:34:49.296 [2024-12-06 01:12:18.085896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:54400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.296 [2024-12-06 01:12:18.085920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:34:49.296 [2024-12-06 01:12:18.085954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:54432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.296 [2024-12-06 01:12:18.085978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:34:49.296 [2024-12-06 01:12:18.086014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:54464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.296 [2024-12-06 01:12:18.086038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:34:49.296 [2024-12-06 01:12:18.086073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:54496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.296 [2024-12-06 01:12:18.086097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:34:49.296 [2024-12-06 01:12:18.086146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:53712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.296 [2024-12-06 01:12:18.086170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:34:49.296 [2024-12-06 01:12:18.086203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:54136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.296 [2024-12-06 01:12:18.086226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:34:49.296 [2024-12-06 01:12:18.086261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:54248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.296 [2024-12-06 01:12:18.086283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0052 p:0 m:0 dnr:0
00:34:49.296  5859.69 IOPS, 22.89 MiB/s [2024-12-06T00:12:22.005Z] 5869.76 IOPS, 22.93 MiB/s [2024-12-06T00:12:22.005Z] 5878.47 IOPS, 22.96 MiB/s [2024-12-06T00:12:22.005Z] Received shutdown signal, test time was about 34.687589 seconds
00:34:49.296  
00:34:49.296                                                                                                  Latency(us)
00:34:49.296  [2024-12-06T00:12:22.005Z] Device Information : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:34:49.296  Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:34:49.296  Verification LBA range: start 0x0 length 0x4000
00:34:49.296  Nvme0n1 : 34.69    5874.81      22.95       0.00       0.00   21748.94    1298.58 4076242.11
00:34:49.296  [2024-12-06T00:12:22.005Z] ===================================================================================================================
00:34:49.296  [2024-12-06T00:12:22.005Z] Total : 5874.81      22.95       0.00       0.00   21748.94    1298.58 4076242.11
00:34:49.296 01:12:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:49.296 01:12:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:34:49.296 01:12:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:34:49.296 01:12:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:34:49.553 01:12:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:49.553 01:12:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync 00:34:49.553 01:12:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:49.553 01:12:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e 00:34:49.553 01:12:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:49.553 01:12:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:49.553 rmmod nvme_tcp 00:34:49.553 rmmod nvme_fabrics 00:34:49.553 rmmod nvme_keyring 00:34:49.553 01:12:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:49.553 01:12:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e 00:34:49.553 01:12:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0 00:34:49.553 01:12:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@517 -- # '[' -n 3104340 ']' 00:34:49.554 01:12:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@518 -- # killprocess 3104340 00:34:49.554 01:12:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 3104340 ']' 00:34:49.554 01:12:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 3104340 00:34:49.554 01:12:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:34:49.554 01:12:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:49.554 01:12:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3104340 00:34:49.554 01:12:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:49.554 01:12:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:49.554 01:12:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3104340' 00:34:49.554 killing process with pid 3104340 00:34:49.554 01:12:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 3104340 00:34:49.554 01:12:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 3104340 00:34:50.927 01:12:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:34:50.927 01:12:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:50.927 01:12:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:50.927 01:12:23 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # iptr 00:34:50.927 01:12:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-save 00:34:50.927 01:12:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:50.927 01:12:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-restore 00:34:50.927 01:12:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:50.927 01:12:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:50.927 01:12:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:50.927 01:12:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:50.927 01:12:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:52.827 01:12:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:52.827 00:34:52.827 real 0m46.997s 00:34:52.827 user 2m21.404s 00:34:52.827 sys 0m10.637s 00:34:52.827 01:12:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:52.828 01:12:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:34:52.828 ************************************ 00:34:52.828 END TEST nvmf_host_multipath_status 00:34:52.828 ************************************ 00:34:52.828 01:12:25 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:34:52.828 01:12:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:34:52.828 01:12:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:52.828 01:12:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:34:52.828 ************************************ 00:34:52.828 START TEST nvmf_discovery_remove_ifc 00:34:52.828 ************************************ 00:34:52.828 01:12:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:34:53.086 * Looking for test storage... 
00:34:53.086 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:34:53.086 01:12:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:34:53.086 01:12:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1711 -- # lcov --version 00:34:53.086 01:12:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:34:53.086 01:12:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:34:53.086 01:12:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:53.086 01:12:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:53.086 01:12:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:53.086 01:12:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-: 00:34:53.086 01:12:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1 00:34:53.086 01:12:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-: 00:34:53.086 01:12:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2 00:34:53.086 01:12:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<' 00:34:53.086 01:12:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2 00:34:53.086 01:12:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1 00:34:53.086 01:12:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:53.086 01:12:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in 00:34:53.086 01:12:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@345 -- # : 1 00:34:53.086 01:12:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:53.086 01:12:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:34:53.086 01:12:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1 00:34:53.086 01:12:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1 00:34:53.086 01:12:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:53.086 01:12:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1 00:34:53.086 01:12:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1 00:34:53.086 01:12:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2 00:34:53.086 01:12:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2 00:34:53.087 01:12:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:53.087 01:12:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2 00:34:53.087 01:12:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2 00:34:53.087 01:12:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:53.087 01:12:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:53.087 01:12:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0 00:34:53.087 01:12:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:53.087 01:12:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:34:53.087 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:53.087 --rc genhtml_branch_coverage=1 00:34:53.087 --rc genhtml_function_coverage=1 00:34:53.087 --rc genhtml_legend=1 00:34:53.087 --rc geninfo_all_blocks=1 00:34:53.087 --rc geninfo_unexecuted_blocks=1 00:34:53.087 00:34:53.087 ' 00:34:53.087 01:12:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:34:53.087 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:53.087 --rc genhtml_branch_coverage=1 00:34:53.087 --rc genhtml_function_coverage=1 00:34:53.087 --rc genhtml_legend=1 00:34:53.087 --rc geninfo_all_blocks=1 00:34:53.087 --rc geninfo_unexecuted_blocks=1 00:34:53.087 00:34:53.087 ' 00:34:53.087 01:12:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:34:53.087 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:53.087 --rc genhtml_branch_coverage=1 00:34:53.087 --rc genhtml_function_coverage=1 00:34:53.087 --rc genhtml_legend=1 00:34:53.087 --rc geninfo_all_blocks=1 00:34:53.087 --rc geninfo_unexecuted_blocks=1 00:34:53.087 00:34:53.087 ' 00:34:53.087 01:12:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:34:53.087 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:53.087 --rc genhtml_branch_coverage=1 00:34:53.087 --rc genhtml_function_coverage=1 00:34:53.087 --rc genhtml_legend=1 00:34:53.087 --rc geninfo_all_blocks=1 00:34:53.087 --rc geninfo_unexecuted_blocks=1 00:34:53.087 00:34:53.087 ' 00:34:53.087 01:12:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:53.087 
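The block above is the lcov version probe: scripts/common.sh splits the two version strings on '.', '-' and ':' and compares them field by field (the "lt 1.15 2" call) before deciding which coverage flags to export. A simplified re-implementation of that comparison, with ver_lt as a stand-in name for the lt/cmp_versions pair:

ver_lt() {
    local -a a b
    IFS='.-:' read -ra a <<< "$1"
    IFS='.-:' read -ra b <<< "$2"
    local i max=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for (( i = 0; i < max; i++ )); do
        local x=${a[i]:-0} y=${b[i]:-0}   # missing fields compare as 0
        (( x < y )) && return 0
        (( x > y )) && return 1
    done
    return 1                              # equal is not "less than"
}

ver_lt 1.15 2 && echo "old lcov: enable the extra branch/function coverage flags"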
01:12:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:34:53.087 01:12:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:53.087 01:12:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:53.087 01:12:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:53.087 01:12:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:53.087 01:12:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:53.087 01:12:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:53.087 01:12:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:53.087 01:12:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:53.087 01:12:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:53.087 01:12:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:53.087 01:12:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:34:53.087 01:12:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:34:53.087 01:12:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:53.087 01:12:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:53.087 01:12:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:53.087 01:12:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:53.087 01:12:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:53.087 01:12:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 00:34:53.087 01:12:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:53.087 01:12:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:53.087 01:12:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:53.087 01:12:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:53.087 01:12:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:53.087 01:12:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:53.087 01:12:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:34:53.087 01:12:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:53.087 01:12:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0 00:34:53.087 01:12:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:53.087 01:12:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:53.087 01:12:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:53.087 01:12:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:53.087 01:12:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:53.087 01:12:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:34:53.087 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:34:53.087 01:12:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:53.087 01:12:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:53.087 01:12:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:53.087 01:12:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:34:53.087 01:12:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:34:53.087 01:12:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:34:53.087 01:12:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:34:53.087 01:12:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:34:53.087 01:12:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:34:53.087 01:12:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:34:53.087 01:12:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:34:53.087 01:12:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:53.087 01:12:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:53.087 01:12:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:53.087 01:12:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:53.087 01:12:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:53.087 01:12:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:53.087 01:12:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:53.087 01:12:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:34:53.087 01:12:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:34:53.087 01:12:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@309 -- # xtrace_disable 00:34:53.087 01:12:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:54.991 01:12:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:54.991 01:12:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # pci_devs=() 00:34:54.991 01:12:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:54.991 01:12:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:54.991 01:12:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:54.991 01:12:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:54.991 01:12:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:54.991 01:12:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # net_devs=() 00:34:54.991 01:12:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:54.991 01:12:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # e810=() 00:34:54.991 01:12:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # local -ga e810 00:34:54.991 01:12:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # x722=() 00:34:54.991 01:12:27 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # local -ga x722 00:34:54.991 01:12:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # mlx=() 00:34:54.991 01:12:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # local -ga mlx 00:34:54.991 01:12:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:54.991 01:12:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:54.991 01:12:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:54.991 01:12:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:54.991 01:12:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:54.991 01:12:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:54.991 01:12:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:54.991 01:12:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:54.991 01:12:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:54.991 01:12:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:54.991 01:12:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:54.991 01:12:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:54.991 01:12:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:54.991 01:12:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:54.991 01:12:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:54.991 01:12:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:54.991 01:12:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:54.991 01:12:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:54.991 01:12:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:54.991 01:12:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:34:54.991 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:34:54.991 01:12:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:54.991 01:12:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:54.991 01:12:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:54.991 01:12:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:54.991 01:12:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:54.991 01:12:27 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:54.991 01:12:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:34:54.991 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:34:54.991 01:12:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:54.991 01:12:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:54.991 01:12:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:54.991 01:12:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:54.991 01:12:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:54.991 01:12:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:54.991 01:12:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:54.991 01:12:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:54.991 01:12:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:54.991 01:12:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:54.991 01:12:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:54.991 01:12:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:54.991 01:12:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:54.991 01:12:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:54.991 01:12:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:54.991 01:12:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:34:54.991 Found net devices under 0000:0a:00.0: cvl_0_0 00:34:54.991 01:12:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:54.991 01:12:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:54.991 01:12:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:54.991 01:12:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:54.991 01:12:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:54.991 01:12:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:54.991 01:12:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:54.991 01:12:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:54.991 01:12:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:34:54.991 Found net devices under 0000:0a:00.1: cvl_0_1 00:34:54.992 01:12:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # 
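gather_supported_nvmf_pci_devs arrives at the two interfaces above by matching the E810 PCI IDs seen in the log (0x8086:0x1592 / 0x8086:0x159b) and then listing the net devices sysfs exposes under each function. A stand-alone approximation of that scan, reading sysfs directly rather than the pci_bus_cache the common scripts build:

for dev in /sys/bus/pci/devices/*; do
    vendor=$(<"$dev/vendor") device=$(<"$dev/device")
    if [[ $vendor == 0x8086 && ( $device == 0x1592 || $device == 0x159b ) ]]; then
        for net in "$dev"/net/*; do
            [[ -e $net ]] && echo "Found net devices under ${dev##*/}: ${net##*/}"
        done
    fi
done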
net_devs+=("${pci_net_devs[@]}") 00:34:54.992 01:12:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:34:54.992 01:12:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # is_hw=yes 00:34:54.992 01:12:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:34:54.992 01:12:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:34:54.992 01:12:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:34:54.992 01:12:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:54.992 01:12:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:54.992 01:12:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:54.992 01:12:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:54.992 01:12:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:54.992 01:12:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:54.992 01:12:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:54.992 01:12:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:54.992 01:12:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:54.992 01:12:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:54.992 01:12:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:54.992 01:12:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:54.992 01:12:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:54.992 01:12:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:54.992 01:12:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:55.251 01:12:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:55.251 01:12:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:55.251 01:12:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:55.251 01:12:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:55.251 01:12:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:55.251 01:12:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:55.251 01:12:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:55.251 
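nvmf_tcp_init has now built the whole test topology: the first E810 port is moved into a network namespace to act as the target, the second stays in the root namespace as the initiator, and an iptables rule opens TCP/4420. The same setup, condensed (interface names, addresses and the namespace name are taken from the trace; run as root):

TGT_IF=cvl_0_0  INIT_IF=cvl_0_1  NS=cvl_0_0_ns_spdk

ip netns add "$NS"
ip link set "$TGT_IF" netns "$NS"                 # target port lives in the namespace
ip addr add 10.0.0.1/24 dev "$INIT_IF"            # initiator side stays in the root ns
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
ip link set "$INIT_IF" up
ip netns exec "$NS" ip link set "$TGT_IF" up
ip netns exec "$NS" ip link set lo up
# Open NVMe/TCP traffic; the SPDK_NVMF comment lets teardown strip the rule
# again with iptables-save | grep -v SPDK_NVMF | iptables-restore.
iptables -I INPUT 1 -i "$INIT_IF" -p tcp --dport 4420 -j ACCEPT \
         -m comment --comment SPDK_NVMF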
01:12:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:55.251 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:55.251 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.254 ms 00:34:55.251 00:34:55.251 --- 10.0.0.2 ping statistics --- 00:34:55.251 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:55.251 rtt min/avg/max/mdev = 0.254/0.254/0.254/0.000 ms 00:34:55.251 01:12:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:55.251 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:34:55.251 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.079 ms 00:34:55.251 00:34:55.251 --- 10.0.0.1 ping statistics --- 00:34:55.251 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:55.251 rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms 00:34:55.251 01:12:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:55.251 01:12:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # return 0 00:34:55.251 01:12:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:34:55.251 01:12:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:55.251 01:12:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:34:55.251 01:12:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:34:55.251 01:12:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:55.251 01:12:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:34:55.251 01:12:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:34:55.251 01:12:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:34:55.251 01:12:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:34:55.251 01:12:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:55.251 01:12:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:55.251 01:12:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@509 -- # nvmfpid=3111970 00:34:55.251 01:12:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:34:55.251 01:12:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@510 -- # waitforlisten 3111970 00:34:55.251 01:12:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 3111970 ']' 00:34:55.251 01:12:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:55.251 01:12:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:55.251 01:12:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:34:55.251 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:55.251 01:12:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:55.251 01:12:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:55.251 [2024-12-06 01:12:27.939357] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 24.03.0 initialization... 00:34:55.251 [2024-12-06 01:12:27.939505] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:55.510 [2024-12-06 01:12:28.084362] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:55.510 [2024-12-06 01:12:28.214893] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:55.510 [2024-12-06 01:12:28.214962] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:55.510 [2024-12-06 01:12:28.214985] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:55.510 [2024-12-06 01:12:28.215007] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:55.510 [2024-12-06 01:12:28.215025] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:55.510 [2024-12-06 01:12:28.216734] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:56.446 01:12:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:56.446 01:12:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:34:56.446 01:12:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:34:56.446 01:12:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:56.446 01:12:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:56.446 01:12:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:56.446 01:12:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:34:56.446 01:12:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:56.446 01:12:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:56.446 [2024-12-06 01:12:28.950852] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:56.446 [2024-12-06 01:12:28.959171] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:34:56.446 null0 00:34:56.446 [2024-12-06 01:12:28.991060] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:56.446 01:12:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:56.446 01:12:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=3112119 00:34:56.446 01:12:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 
--wait-for-rpc -L bdev_nvme 00:34:56.446 01:12:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 3112119 /tmp/host.sock 00:34:56.446 01:12:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 3112119 ']' 00:34:56.447 01:12:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:34:56.447 01:12:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:56.447 01:12:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:34:56.447 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:34:56.447 01:12:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:56.447 01:12:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:56.447 [2024-12-06 01:12:29.101459] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 24.03.0 initialization... 00:34:56.447 [2024-12-06 01:12:29.101617] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3112119 ] 00:34:56.705 [2024-12-06 01:12:29.245646] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:56.705 [2024-12-06 01:12:29.383461] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:57.642 01:12:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:57.642 01:12:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:34:57.642 01:12:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:34:57.642 01:12:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:34:57.642 01:12:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:57.642 01:12:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:57.642 01:12:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:57.642 01:12:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:34:57.642 01:12:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:57.642 01:12:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:57.900 01:12:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:57.901 01:12:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:34:57.901 01:12:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc 
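At this point the target side is listening on 10.0.0.2:8009 (discovery) and 10.0.0.2:4420, and a second nvmf_tgt has been started as the host with its own RPC socket. The host-side sequence above, rewritten against scripts/rpc.py instead of the rpc_cmd wrapper; every flag is copied from the trace (what bdev_nvme_set_options -e 1 tunes is left to the SPDK documentation), and the waitforlisten step is replaced by a simple poll on the socket path:

RPC="scripts/rpc.py -s /tmp/host.sock"

build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme &
hostpid=$!
while [[ ! -S /tmp/host.sock ]]; do sleep 0.1; done   # stand-in for waitforlisten

$RPC bdev_nvme_set_options -e 1
$RPC framework_start_init
$RPC bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 \
     -q nqn.2021-12.io.spdk:test \
     --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 \
     --fast-io-fail-timeout-sec 1 --wait-for-attach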
-- common/autotest_common.sh@563 -- # xtrace_disable 00:34:57.901 01:12:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:59.276 [2024-12-06 01:12:31.544004] bdev_nvme.c:7511:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:34:59.276 [2024-12-06 01:12:31.544043] bdev_nvme.c:7597:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:34:59.276 [2024-12-06 01:12:31.544082] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:34:59.276 [2024-12-06 01:12:31.631431] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:34:59.276 [2024-12-06 01:12:31.731590] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:34:59.276 [2024-12-06 01:12:31.733218] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x6150001f2c80:1 started. 00:34:59.276 [2024-12-06 01:12:31.735527] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:34:59.276 [2024-12-06 01:12:31.735615] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:34:59.276 [2024-12-06 01:12:31.735721] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:34:59.276 [2024-12-06 01:12:31.735763] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:34:59.276 [2024-12-06 01:12:31.735811] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:34:59.276 01:12:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:59.276 01:12:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:34:59.276 01:12:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:34:59.276 01:12:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:59.276 01:12:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:34:59.276 01:12:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:59.276 01:12:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:59.276 01:12:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:34:59.276 01:12:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:34:59.276 [2024-12-06 01:12:31.741720] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x6150001f2c80 was disconnected and freed. delete nvme_qpair. 
00:34:59.276 01:12:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:59.276 01:12:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:34:59.276 01:12:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:34:59.276 01:12:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:34:59.276 01:12:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:34:59.276 01:12:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:34:59.276 01:12:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:59.276 01:12:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:34:59.276 01:12:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:59.276 01:12:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:59.276 01:12:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:34:59.276 01:12:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:34:59.276 01:12:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:59.276 01:12:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:34:59.276 01:12:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:35:00.211 01:12:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:35:00.211 01:12:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:00.211 01:12:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:35:00.211 01:12:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:00.211 01:12:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:00.211 01:12:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:35:00.211 01:12:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:35:00.211 01:12:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:00.211 01:12:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:35:00.211 01:12:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:35:01.585 01:12:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:35:01.585 01:12:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:01.585 01:12:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:35:01.585 01:12:33 
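The repeated bdev_get_bdevs | jq | sort | xargs calls that follow are the test's wait_for_bdev loop: it polls the host's bdev list once a second until it equals the expected value, first nvme0n1 after the attach, then the empty string once the target interface's address is dropped and the link goes down. Reconstructed from the trace (the real helpers in host/discovery_remove_ifc.sh may add retry limits and logging):

get_bdev_list() {
    scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
}

wait_for_bdev() {
    local expected=$1
    while [[ "$(get_bdev_list)" != "$expected" ]]; do
        sleep 1
    done
}

wait_for_bdev nvme0n1   # present once discovery has attached the subsystem
wait_for_bdev ''        # gone again after cvl_0_0 loses its address and goes down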
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:01.585 01:12:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:01.585 01:12:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:35:01.585 01:12:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:35:01.585 01:12:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:01.585 01:12:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:35:01.585 01:12:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:35:02.520 01:12:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:35:02.520 01:12:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:02.520 01:12:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:35:02.520 01:12:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:02.520 01:12:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:35:02.520 01:12:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:02.520 01:12:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:35:02.520 01:12:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:02.520 01:12:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:35:02.520 01:12:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:35:03.452 01:12:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:35:03.452 01:12:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:03.452 01:12:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:35:03.452 01:12:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:03.452 01:12:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:03.452 01:12:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:35:03.452 01:12:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:35:03.452 01:12:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:03.452 01:12:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:35:03.452 01:12:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:35:04.386 01:12:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:35:04.386 01:12:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:04.386 01:12:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:35:04.386 01:12:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:04.386 01:12:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:04.386 01:12:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:35:04.386 01:12:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:35:04.386 01:12:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:04.386 01:12:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:35:04.386 01:12:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:35:04.648 [2024-12-06 01:12:37.177239] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:35:04.648 [2024-12-06 01:12:37.177345] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:35:04.648 [2024-12-06 01:12:37.177393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:04.648 [2024-12-06 01:12:37.177439] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:35:04.648 [2024-12-06 01:12:37.177479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:04.648 [2024-12-06 01:12:37.177520] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:35:04.648 [2024-12-06 01:12:37.177559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:04.648 [2024-12-06 01:12:37.177600] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:35:04.648 [2024-12-06 01:12:37.177639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:04.648 [2024-12-06 01:12:37.177678] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:35:04.648 [2024-12-06 01:12:37.177717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:04.649 [2024-12-06 01:12:37.177754] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2780 is same with the state(6) to be set 00:35:04.649 [2024-12-06 01:12:37.187249] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2780 (9): Bad file descriptor 00:35:04.649 [2024-12-06 01:12:37.197296] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:35:04.649 [2024-12-06 01:12:37.197347] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 
00:35:04.649 [2024-12-06 01:12:37.197369] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:35:04.649 [2024-12-06 01:12:37.197385] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:35:04.649 [2024-12-06 01:12:37.197467] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:35:05.582 01:12:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:35:05.582 01:12:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:05.582 01:12:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:05.582 01:12:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:05.582 01:12:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:35:05.582 01:12:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:35:05.582 01:12:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:35:05.582 [2024-12-06 01:12:38.212843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:35:05.582 [2024-12-06 01:12:38.212915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:05.582 [2024-12-06 01:12:38.212963] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2780 is same with the state(6) to be set 00:35:05.582 [2024-12-06 01:12:38.213033] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2780 (9): Bad file descriptor 00:35:05.582 [2024-12-06 01:12:38.213840] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] Unable to perform failover, already in progress. 00:35:05.582 [2024-12-06 01:12:38.213931] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:35:05.582 [2024-12-06 01:12:38.213976] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:35:05.582 [2024-12-06 01:12:38.214015] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:35:05.582 [2024-12-06 01:12:38.214048] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:35:05.582 [2024-12-06 01:12:38.214078] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:35:05.582 [2024-12-06 01:12:38.214120] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:35:05.582 [2024-12-06 01:12:38.214154] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 
00:35:05.582 [2024-12-06 01:12:38.214193] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:35:05.582 01:12:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:05.582 01:12:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:35:05.582 01:12:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:35:06.624 [2024-12-06 01:12:39.216745] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:35:06.624 [2024-12-06 01:12:39.216793] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:35:06.624 [2024-12-06 01:12:39.216833] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:35:06.624 [2024-12-06 01:12:39.216896] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:35:06.624 [2024-12-06 01:12:39.216945] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] already in failed state 00:35:06.624 [2024-12-06 01:12:39.216989] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:35:06.624 [2024-12-06 01:12:39.217016] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:35:06.624 [2024-12-06 01:12:39.217039] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:35:06.624 [2024-12-06 01:12:39.217133] bdev_nvme.c:7262:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:35:06.624 [2024-12-06 01:12:39.217214] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:35:06.624 [2024-12-06 01:12:39.217262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:06.624 [2024-12-06 01:12:39.217309] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:35:06.624 [2024-12-06 01:12:39.217350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:06.624 [2024-12-06 01:12:39.217389] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:35:06.624 [2024-12-06 01:12:39.217430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:06.624 [2024-12-06 01:12:39.217470] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:35:06.624 [2024-12-06 01:12:39.217509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:06.624 [2024-12-06 01:12:39.217551] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:35:06.624 [2024-12-06 01:12:39.217589] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:06.624 [2024-12-06 01:12:39.217628] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] in failed state. 00:35:06.624 [2024-12-06 01:12:39.217741] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2280 (9): Bad file descriptor 00:35:06.624 [2024-12-06 01:12:39.218713] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:35:06.624 [2024-12-06 01:12:39.218753] nvme_ctrlr.c:1217:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] Failed to read the CC register 00:35:06.624 01:12:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:35:06.624 01:12:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:06.624 01:12:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:06.624 01:12:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:35:06.624 01:12:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:06.624 01:12:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:35:06.624 01:12:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:35:06.624 01:12:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:06.624 01:12:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:35:06.624 01:12:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:06.624 01:12:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:06.624 01:12:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:35:06.624 01:12:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:35:06.624 01:12:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:06.624 01:12:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:06.624 01:12:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:35:06.624 01:12:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:06.624 01:12:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:35:06.625 01:12:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:35:06.625 01:12:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:06.881 01:12:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:35:06.881 01:12:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:35:07.811 01:12:40 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:35:07.811 01:12:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:35:07.811 01:12:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:07.811 01:12:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:07.811 01:12:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:07.811 01:12:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:35:07.811 01:12:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:35:07.811 01:12:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:07.811 01:12:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:35:07.811 01:12:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:35:08.741 [2024-12-06 01:12:41.269672] bdev_nvme.c:7511:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:35:08.741 [2024-12-06 01:12:41.269713] bdev_nvme.c:7597:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:35:08.741 [2024-12-06 01:12:41.269760] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:35:08.741 01:12:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:35:08.741 01:12:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:08.741 01:12:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:08.741 01:12:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:35:08.741 01:12:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:08.741 01:12:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:35:08.741 01:12:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:35:08.741 [2024-12-06 01:12:41.397219] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:35:08.741 01:12:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:08.741 01:12:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:35:08.741 01:12:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:35:08.998 [2024-12-06 01:12:41.498446] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4420 00:35:08.998 [2024-12-06 01:12:41.499873] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] Connecting qpair 0x6150001f3900:1 started. 
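The trace above shows the interface coming back (ip addr add / ip link set ... up inside the cvl_0_0_ns_spdk namespace), after which the test repeatedly calls get_bdev_list over the host RPC socket and sleeps one second until the rediscovered namespace shows up as nvme1n1. A minimal sketch of that polling pattern, assuming rpc_cmd is the autotest wrapper around scripts/rpc.py and /tmp/host.sock is the per-test socket as in the trace; the helper name below is illustrative, not the suite's own wait_for_bdev:

    # Illustrative poller: list bdevs on the host socket until the expected name appears.
    wait_for_bdev_sketch() {
        local want=$1 timeout=${2:-30} names i
        for ((i = 0; i < timeout; i++)); do
            names=$(rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs)
            [[ " $names " == *" $want "* ]] && return 0    # found, e.g. "nvme1n1"
            sleep 1
        done
        return 1                                           # timed out; the test would fail here
    }
    # Matching the trace above: wait_for_bdev_sketch nvme1n1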
00:35:08.998 [2024-12-06 01:12:41.502125] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:35:08.998 [2024-12-06 01:12:41.502198] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:35:08.998 [2024-12-06 01:12:41.502280] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:35:08.998 [2024-12-06 01:12:41.502319] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:35:08.998 [2024-12-06 01:12:41.502345] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:35:08.998 [2024-12-06 01:12:41.508591] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] qpair 0x6150001f3900 was disconnected and freed. delete nvme_qpair. 00:35:09.930 01:12:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:35:09.930 01:12:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:09.930 01:12:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:09.930 01:12:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:35:09.930 01:12:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:09.930 01:12:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:35:09.930 01:12:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:35:09.930 01:12:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:09.930 01:12:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:35:09.930 01:12:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:35:09.930 01:12:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 3112119 00:35:09.930 01:12:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 3112119 ']' 00:35:09.930 01:12:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 3112119 00:35:09.930 01:12:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:35:09.930 01:12:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:09.930 01:12:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3112119 00:35:09.930 01:12:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:09.930 01:12:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:09.930 01:12:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3112119' 00:35:09.930 killing process with pid 3112119 00:35:09.930 01:12:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 3112119 00:35:09.930 01:12:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 3112119 00:35:10.864 
01:12:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:35:10.864 01:12:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@516 -- # nvmfcleanup 00:35:10.864 01:12:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync 00:35:10.864 01:12:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:10.864 01:12:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e 00:35:10.864 01:12:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:10.864 01:12:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:10.864 rmmod nvme_tcp 00:35:10.864 rmmod nvme_fabrics 00:35:10.864 rmmod nvme_keyring 00:35:10.864 01:12:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:10.864 01:12:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e 00:35:10.864 01:12:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0 00:35:10.864 01:12:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@517 -- # '[' -n 3111970 ']' 00:35:10.864 01:12:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@518 -- # killprocess 3111970 00:35:10.864 01:12:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 3111970 ']' 00:35:10.864 01:12:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 3111970 00:35:10.864 01:12:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:35:10.864 01:12:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:10.864 01:12:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3111970 00:35:10.864 01:12:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:35:10.864 01:12:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:35:10.864 01:12:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3111970' 00:35:10.864 killing process with pid 3111970 00:35:10.864 01:12:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 3111970 00:35:10.864 01:12:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 3111970 00:35:12.239 01:12:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:35:12.239 01:12:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:35:12.239 01:12:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:35:12.239 01:12:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # iptr 00:35:12.240 01:12:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-save 00:35:12.240 01:12:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:35:12.240 01:12:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-restore 00:35:12.240 01:12:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:12.240 01:12:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:12.240 01:12:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:12.240 01:12:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:12.240 01:12:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:14.159 01:12:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:14.159 00:35:14.159 real 0m21.117s 00:35:14.159 user 0m31.000s 00:35:14.159 sys 0m3.288s 00:35:14.159 01:12:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:14.159 01:12:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:14.159 ************************************ 00:35:14.159 END TEST nvmf_discovery_remove_ifc 00:35:14.159 ************************************ 00:35:14.159 01:12:46 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:35:14.159 01:12:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:35:14.159 01:12:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:14.159 01:12:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:35:14.159 ************************************ 00:35:14.159 START TEST nvmf_identify_kernel_target 00:35:14.159 ************************************ 00:35:14.159 01:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:35:14.159 * Looking for test storage... 
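Between the two tests above, nvmftestfini tears the previous target down: the nvme-tcp/nvme-fabrics modules are unloaded (taking nvme_keyring with them per the rmmod lines), the target process is killed, SPDK-tagged iptables rules are stripped, and the leftover initiator address is flushed. A condensed sketch of that cleanup, assuming the rules carry the SPDK_NVMF comment they were inserted with and that _remove_spdk_ns (whose body is not shown in the trace) amounts to deleting the cvl_0_0_ns_spdk namespace:

    # Hypothetical condensation of the nvmftestfini steps traced above.
    nvmf_teardown_sketch() {
        local tgt_pid=$1                              # e.g. 3111970 in this run
        modprobe -v -r nvme-tcp nvme-fabrics          # rmmod nvme_tcp/nvme_fabrics/nvme_keyring
        kill "$tgt_pid" && wait "$tgt_pid"            # killprocess/wait pair from the trace
        # Keep only rules without the SPDK_NVMF marker; unrelated rules survive.
        iptables-save | grep -v SPDK_NVMF | iptables-restore
        ip netns del cvl_0_0_ns_spdk 2>/dev/null || true   # assumed content of _remove_spdk_ns
        ip -4 addr flush cvl_0_1                      # clear the initiator-side address
    }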
00:35:14.159 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:35:14.159 01:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:35:14.159 01:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:35:14.159 01:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # lcov --version 00:35:14.159 01:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:35:14.159 01:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:14.159 01:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:14.160 01:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:14.160 01:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:35:14.160 01:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:35:14.160 01:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:35:14.160 01:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:35:14.160 01:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:35:14.160 01:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:35:14.160 01:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:35:14.160 01:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:14.160 01:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:35:14.160 01:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:35:14.160 01:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:14.160 01:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:14.160 01:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:35:14.160 01:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:35:14.160 01:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:14.160 01:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:35:14.160 01:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:35:14.160 01:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:35:14.160 01:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:35:14.160 01:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:14.160 01:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:35:14.160 01:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:35:14.160 01:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:14.160 01:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:14.160 01:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:35:14.160 01:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:14.160 01:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:35:14.160 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:14.160 --rc genhtml_branch_coverage=1 00:35:14.160 --rc genhtml_function_coverage=1 00:35:14.160 --rc genhtml_legend=1 00:35:14.160 --rc geninfo_all_blocks=1 00:35:14.160 --rc geninfo_unexecuted_blocks=1 00:35:14.160 00:35:14.160 ' 00:35:14.160 01:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:35:14.160 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:14.160 --rc genhtml_branch_coverage=1 00:35:14.160 --rc genhtml_function_coverage=1 00:35:14.160 --rc genhtml_legend=1 00:35:14.160 --rc geninfo_all_blocks=1 00:35:14.160 --rc geninfo_unexecuted_blocks=1 00:35:14.160 00:35:14.160 ' 00:35:14.160 01:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:35:14.160 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:14.160 --rc genhtml_branch_coverage=1 00:35:14.160 --rc genhtml_function_coverage=1 00:35:14.160 --rc genhtml_legend=1 00:35:14.160 --rc geninfo_all_blocks=1 00:35:14.160 --rc geninfo_unexecuted_blocks=1 00:35:14.160 00:35:14.160 ' 00:35:14.160 01:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:35:14.160 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:14.160 --rc genhtml_branch_coverage=1 00:35:14.160 --rc genhtml_function_coverage=1 00:35:14.160 --rc genhtml_legend=1 00:35:14.160 --rc geninfo_all_blocks=1 00:35:14.160 --rc geninfo_unexecuted_blocks=1 00:35:14.160 00:35:14.160 ' 00:35:14.160 01:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:14.160 01:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:35:14.160 01:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:14.160 01:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:14.160 01:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:14.160 01:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:14.160 01:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:14.160 01:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:14.160 01:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:14.160 01:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:14.160 01:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:14.160 01:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:14.160 01:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:35:14.160 01:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:35:14.160 01:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:14.160 01:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:14.160 01:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:14.160 01:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:14.160 01:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:14.160 01:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:35:14.160 01:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:14.160 01:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:14.160 01:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:14.160 01:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:14.160 01:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:14.161 01:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:14.161 01:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:35:14.161 01:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:14.161 01:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:35:14.161 01:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:14.161 01:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:14.161 01:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:14.161 01:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:14.161 01:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:14.161 01:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:35:14.161 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:35:14.161 01:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:14.161 01:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:14.161 01:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:14.161 01:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:35:14.161 01:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:35:14.161 01:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:14.161 01:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:35:14.161 01:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:35:14.161 01:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:35:14.161 01:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:14.161 01:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:14.161 01:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:14.161 01:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:35:14.161 01:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:35:14.161 01:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@309 -- # xtrace_disable 00:35:14.161 01:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:35:16.064 01:12:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:16.064 01:12:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # pci_devs=() 00:35:16.064 01:12:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:16.064 01:12:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:16.064 01:12:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:16.064 01:12:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:16.064 01:12:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:16.064 01:12:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # net_devs=() 00:35:16.064 01:12:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:16.064 01:12:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # e810=() 00:35:16.064 01:12:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # local -ga e810 00:35:16.064 01:12:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # x722=() 00:35:16.064 01:12:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # local -ga x722 00:35:16.064 01:12:48 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # mlx=() 00:35:16.064 01:12:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # local -ga mlx 00:35:16.064 01:12:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:16.064 01:12:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:16.064 01:12:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:16.064 01:12:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:16.064 01:12:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:16.064 01:12:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:16.064 01:12:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:16.064 01:12:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:16.064 01:12:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:16.064 01:12:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:16.064 01:12:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:16.064 01:12:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:16.064 01:12:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:16.064 01:12:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:35:16.064 01:12:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:35:16.064 01:12:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:35:16.064 01:12:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:35:16.064 01:12:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:35:16.064 01:12:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:16.064 01:12:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:35:16.064 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:35:16.064 01:12:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:16.064 01:12:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:16.064 01:12:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:16.064 01:12:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:16.064 01:12:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:16.064 01:12:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:16.064 01:12:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:35:16.064 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:35:16.064 01:12:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:16.064 01:12:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:16.064 01:12:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:16.064 01:12:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:16.064 01:12:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:16.064 01:12:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:16.064 01:12:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:35:16.064 01:12:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:35:16.064 01:12:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:16.064 01:12:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:16.064 01:12:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:16.064 01:12:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:16.064 01:12:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:16.064 01:12:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:16.064 01:12:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:16.064 01:12:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:35:16.064 Found net devices under 0000:0a:00.0: cvl_0_0 00:35:16.064 01:12:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:16.064 01:12:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:16.064 01:12:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:16.064 01:12:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:16.064 01:12:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:16.064 01:12:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:16.064 01:12:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:16.064 01:12:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:16.064 01:12:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:35:16.064 Found net devices under 0000:0a:00.1: cvl_0_1 00:35:16.064 01:12:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 
-- # net_devs+=("${pci_net_devs[@]}") 00:35:16.064 01:12:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:35:16.064 01:12:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # is_hw=yes 00:35:16.064 01:12:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:35:16.064 01:12:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:35:16.064 01:12:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:35:16.064 01:12:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:16.064 01:12:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:16.064 01:12:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:16.064 01:12:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:16.064 01:12:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:16.064 01:12:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:16.064 01:12:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:16.064 01:12:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:16.064 01:12:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:16.064 01:12:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:16.064 01:12:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:16.064 01:12:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:16.064 01:12:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:16.324 01:12:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:16.324 01:12:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:16.324 01:12:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:16.324 01:12:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:16.324 01:12:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:16.324 01:12:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:16.324 01:12:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:16.324 01:12:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:16.324 01:12:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:16.324 01:12:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:16.324 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:16.324 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.199 ms 00:35:16.324 00:35:16.324 --- 10.0.0.2 ping statistics --- 00:35:16.324 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:16.324 rtt min/avg/max/mdev = 0.199/0.199/0.199/0.000 ms 00:35:16.324 01:12:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:16.324 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:35:16.324 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.131 ms 00:35:16.324 00:35:16.324 --- 10.0.0.1 ping statistics --- 00:35:16.324 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:16.324 rtt min/avg/max/mdev = 0.131/0.131/0.131/0.000 ms 00:35:16.324 01:12:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:16.324 01:12:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # return 0 00:35:16.324 01:12:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:35:16.324 01:12:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:16.324 01:12:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:35:16.324 01:12:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:35:16.324 01:12:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:16.324 01:12:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:35:16.324 01:12:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:35:16.324 01:12:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:35:16.324 01:12:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:35:16.324 01:12:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@769 -- # local ip 00:35:16.324 01:12:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:16.324 01:12:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:16.324 01:12:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:16.324 01:12:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:16.324 01:12:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:16.324 01:12:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:16.324 01:12:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:16.324 01:12:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:16.325 01:12:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:16.325 01:12:48 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:35:16.325 01:12:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:35:16.325 01:12:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:35:16.325 01:12:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:35:16.325 01:12:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:35:16.325 01:12:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:35:16.325 01:12:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:35:16.325 01:12:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # local block nvme 00:35:16.325 01:12:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:35:16.325 01:12:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@670 -- # modprobe nvmet 00:35:16.325 01:12:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:35:16.325 01:12:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:35:17.699 Waiting for block devices as requested 00:35:17.699 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:35:17.699 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:35:17.699 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:35:17.699 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:35:17.958 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:35:17.958 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:35:17.958 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:35:17.958 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:35:17.958 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:35:18.217 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:35:18.217 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:35:18.217 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:35:18.476 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:35:18.476 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:35:18.476 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:35:18.476 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:35:18.736 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:35:18.736 01:12:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:35:18.736 01:12:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:35:18.736 01:12:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:35:18.736 01:12:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:35:18.736 01:12:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:35:18.736 01:12:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 
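The loop traced above walks /sys/block/nvme* and rejects zoned namespaces (a zoned device reports something other than "none" in queue/zoned) before handing the candidate to the GPT in-use check that follows. A small sketch of that selection step, with an illustrative helper name (the suite splits this across is_block_zoned and block_in_use):

    # Illustrative sketch: pick the first non-zoned NVMe namespace under /sys/block.
    pick_kernel_ns_sketch() {
        local sysdev
        for sysdev in /sys/block/nvme*; do
            [[ -e $sysdev ]] || continue
            # "host-managed"/"host-aware" mark zoned namespaces; only "none" is exported.
            if [[ -e $sysdev/queue/zoned && $(<"$sysdev/queue/zoned") != none ]]; then
                continue
            fi
            echo "/dev/${sysdev##*/}"                 # e.g. /dev/nvme0n1, as in the trace
            return 0
        done
        return 1
    }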
00:35:18.736 01:12:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:35:18.736 01:12:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:35:18.736 01:12:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:35:18.736 No valid GPT data, bailing 00:35:18.736 01:12:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:35:18.736 01:12:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:35:18.736 01:12:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:35:18.736 01:12:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:35:18.736 01:12:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:35:18.736 01:12:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:35:18.736 01:12:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:35:18.736 01:12:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:35:18.736 01:12:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:35:18.736 01:12:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 1 00:35:18.736 01:12:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:35:18.736 01:12:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 1 00:35:18.736 01:12:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:35:18.736 01:12:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # echo tcp 00:35:18.736 01:12:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@701 -- # echo 4420 00:35:18.736 01:12:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@702 -- # echo ipv4 00:35:18.736 01:12:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:35:18.995 01:12:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:35:18.995 00:35:18.995 Discovery Log Number of Records 2, Generation counter 2 00:35:18.995 =====Discovery Log Entry 0====== 00:35:18.995 trtype: tcp 00:35:18.995 adrfam: ipv4 00:35:18.995 subtype: current discovery subsystem 00:35:18.995 treq: not specified, sq flow control disable supported 00:35:18.995 portid: 1 00:35:18.995 trsvcid: 4420 00:35:18.995 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:35:18.995 traddr: 10.0.0.1 00:35:18.995 eflags: none 00:35:18.995 sectype: none 00:35:18.995 =====Discovery Log Entry 1====== 00:35:18.995 trtype: tcp 00:35:18.995 adrfam: ipv4 00:35:18.995 subtype: nvme subsystem 00:35:18.995 treq: not specified, sq flow control disable 
supported 00:35:18.995 portid: 1 00:35:18.995 trsvcid: 4420 00:35:18.995 subnqn: nqn.2016-06.io.spdk:testnqn 00:35:18.995 traddr: 10.0.0.1 00:35:18.995 eflags: none 00:35:18.995 sectype: none 00:35:18.995 01:12:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:35:18.995 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:35:18.995 ===================================================== 00:35:18.995 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:35:18.995 ===================================================== 00:35:18.995 Controller Capabilities/Features 00:35:18.995 ================================ 00:35:18.995 Vendor ID: 0000 00:35:18.995 Subsystem Vendor ID: 0000 00:35:18.995 Serial Number: d68199caf19c903baa13 00:35:18.995 Model Number: Linux 00:35:18.996 Firmware Version: 6.8.9-20 00:35:18.996 Recommended Arb Burst: 0 00:35:18.996 IEEE OUI Identifier: 00 00 00 00:35:18.996 Multi-path I/O 00:35:18.996 May have multiple subsystem ports: No 00:35:18.996 May have multiple controllers: No 00:35:18.996 Associated with SR-IOV VF: No 00:35:18.996 Max Data Transfer Size: Unlimited 00:35:18.996 Max Number of Namespaces: 0 00:35:18.996 Max Number of I/O Queues: 1024 00:35:18.996 NVMe Specification Version (VS): 1.3 00:35:18.996 NVMe Specification Version (Identify): 1.3 00:35:18.996 Maximum Queue Entries: 1024 00:35:18.996 Contiguous Queues Required: No 00:35:18.996 Arbitration Mechanisms Supported 00:35:18.996 Weighted Round Robin: Not Supported 00:35:18.996 Vendor Specific: Not Supported 00:35:18.996 Reset Timeout: 7500 ms 00:35:18.996 Doorbell Stride: 4 bytes 00:35:18.996 NVM Subsystem Reset: Not Supported 00:35:18.996 Command Sets Supported 00:35:18.996 NVM Command Set: Supported 00:35:18.996 Boot Partition: Not Supported 00:35:18.996 Memory Page Size Minimum: 4096 bytes 00:35:18.996 Memory Page Size Maximum: 4096 bytes 00:35:18.996 Persistent Memory Region: Not Supported 00:35:18.996 Optional Asynchronous Events Supported 00:35:18.996 Namespace Attribute Notices: Not Supported 00:35:18.996 Firmware Activation Notices: Not Supported 00:35:18.996 ANA Change Notices: Not Supported 00:35:18.996 PLE Aggregate Log Change Notices: Not Supported 00:35:18.996 LBA Status Info Alert Notices: Not Supported 00:35:18.996 EGE Aggregate Log Change Notices: Not Supported 00:35:18.996 Normal NVM Subsystem Shutdown event: Not Supported 00:35:18.996 Zone Descriptor Change Notices: Not Supported 00:35:18.996 Discovery Log Change Notices: Supported 00:35:18.996 Controller Attributes 00:35:18.996 128-bit Host Identifier: Not Supported 00:35:18.996 Non-Operational Permissive Mode: Not Supported 00:35:18.996 NVM Sets: Not Supported 00:35:18.996 Read Recovery Levels: Not Supported 00:35:18.996 Endurance Groups: Not Supported 00:35:18.996 Predictable Latency Mode: Not Supported 00:35:18.996 Traffic Based Keep ALive: Not Supported 00:35:18.996 Namespace Granularity: Not Supported 00:35:18.996 SQ Associations: Not Supported 00:35:18.996 UUID List: Not Supported 00:35:18.996 Multi-Domain Subsystem: Not Supported 00:35:18.996 Fixed Capacity Management: Not Supported 00:35:18.996 Variable Capacity Management: Not Supported 00:35:18.996 Delete Endurance Group: Not Supported 00:35:18.996 Delete NVM Set: Not Supported 00:35:18.996 Extended LBA Formats Supported: Not Supported 00:35:18.996 Flexible Data Placement 
Supported: Not Supported 00:35:18.996 00:35:18.996 Controller Memory Buffer Support 00:35:18.996 ================================ 00:35:18.996 Supported: No 00:35:18.996 00:35:18.996 Persistent Memory Region Support 00:35:18.996 ================================ 00:35:18.996 Supported: No 00:35:18.996 00:35:18.996 Admin Command Set Attributes 00:35:18.996 ============================ 00:35:18.996 Security Send/Receive: Not Supported 00:35:18.996 Format NVM: Not Supported 00:35:18.996 Firmware Activate/Download: Not Supported 00:35:18.996 Namespace Management: Not Supported 00:35:18.996 Device Self-Test: Not Supported 00:35:18.996 Directives: Not Supported 00:35:18.996 NVMe-MI: Not Supported 00:35:18.996 Virtualization Management: Not Supported 00:35:18.996 Doorbell Buffer Config: Not Supported 00:35:18.996 Get LBA Status Capability: Not Supported 00:35:18.996 Command & Feature Lockdown Capability: Not Supported 00:35:18.996 Abort Command Limit: 1 00:35:18.996 Async Event Request Limit: 1 00:35:18.996 Number of Firmware Slots: N/A 00:35:18.996 Firmware Slot 1 Read-Only: N/A 00:35:19.256 Firmware Activation Without Reset: N/A 00:35:19.256 Multiple Update Detection Support: N/A 00:35:19.256 Firmware Update Granularity: No Information Provided 00:35:19.256 Per-Namespace SMART Log: No 00:35:19.256 Asymmetric Namespace Access Log Page: Not Supported 00:35:19.256 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:35:19.256 Command Effects Log Page: Not Supported 00:35:19.256 Get Log Page Extended Data: Supported 00:35:19.256 Telemetry Log Pages: Not Supported 00:35:19.256 Persistent Event Log Pages: Not Supported 00:35:19.256 Supported Log Pages Log Page: May Support 00:35:19.256 Commands Supported & Effects Log Page: Not Supported 00:35:19.256 Feature Identifiers & Effects Log Page:May Support 00:35:19.256 NVMe-MI Commands & Effects Log Page: May Support 00:35:19.256 Data Area 4 for Telemetry Log: Not Supported 00:35:19.256 Error Log Page Entries Supported: 1 00:35:19.256 Keep Alive: Not Supported 00:35:19.256 00:35:19.256 NVM Command Set Attributes 00:35:19.256 ========================== 00:35:19.256 Submission Queue Entry Size 00:35:19.256 Max: 1 00:35:19.256 Min: 1 00:35:19.256 Completion Queue Entry Size 00:35:19.256 Max: 1 00:35:19.256 Min: 1 00:35:19.256 Number of Namespaces: 0 00:35:19.256 Compare Command: Not Supported 00:35:19.256 Write Uncorrectable Command: Not Supported 00:35:19.256 Dataset Management Command: Not Supported 00:35:19.256 Write Zeroes Command: Not Supported 00:35:19.256 Set Features Save Field: Not Supported 00:35:19.256 Reservations: Not Supported 00:35:19.256 Timestamp: Not Supported 00:35:19.256 Copy: Not Supported 00:35:19.256 Volatile Write Cache: Not Present 00:35:19.256 Atomic Write Unit (Normal): 1 00:35:19.256 Atomic Write Unit (PFail): 1 00:35:19.256 Atomic Compare & Write Unit: 1 00:35:19.256 Fused Compare & Write: Not Supported 00:35:19.256 Scatter-Gather List 00:35:19.256 SGL Command Set: Supported 00:35:19.256 SGL Keyed: Not Supported 00:35:19.256 SGL Bit Bucket Descriptor: Not Supported 00:35:19.256 SGL Metadata Pointer: Not Supported 00:35:19.256 Oversized SGL: Not Supported 00:35:19.256 SGL Metadata Address: Not Supported 00:35:19.256 SGL Offset: Supported 00:35:19.256 Transport SGL Data Block: Not Supported 00:35:19.256 Replay Protected Memory Block: Not Supported 00:35:19.256 00:35:19.256 Firmware Slot Information 00:35:19.256 ========================= 00:35:19.256 Active slot: 0 00:35:19.256 00:35:19.256 00:35:19.256 Error Log 00:35:19.256 
========= 00:35:19.256 00:35:19.256 Active Namespaces 00:35:19.256 ================= 00:35:19.256 Discovery Log Page 00:35:19.256 ================== 00:35:19.256 Generation Counter: 2 00:35:19.256 Number of Records: 2 00:35:19.256 Record Format: 0 00:35:19.256 00:35:19.256 Discovery Log Entry 0 00:35:19.256 ---------------------- 00:35:19.256 Transport Type: 3 (TCP) 00:35:19.256 Address Family: 1 (IPv4) 00:35:19.256 Subsystem Type: 3 (Current Discovery Subsystem) 00:35:19.256 Entry Flags: 00:35:19.256 Duplicate Returned Information: 0 00:35:19.256 Explicit Persistent Connection Support for Discovery: 0 00:35:19.256 Transport Requirements: 00:35:19.256 Secure Channel: Not Specified 00:35:19.256 Port ID: 1 (0x0001) 00:35:19.256 Controller ID: 65535 (0xffff) 00:35:19.256 Admin Max SQ Size: 32 00:35:19.256 Transport Service Identifier: 4420 00:35:19.256 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:35:19.256 Transport Address: 10.0.0.1 00:35:19.256 Discovery Log Entry 1 00:35:19.256 ---------------------- 00:35:19.256 Transport Type: 3 (TCP) 00:35:19.256 Address Family: 1 (IPv4) 00:35:19.256 Subsystem Type: 2 (NVM Subsystem) 00:35:19.256 Entry Flags: 00:35:19.256 Duplicate Returned Information: 0 00:35:19.256 Explicit Persistent Connection Support for Discovery: 0 00:35:19.256 Transport Requirements: 00:35:19.256 Secure Channel: Not Specified 00:35:19.256 Port ID: 1 (0x0001) 00:35:19.256 Controller ID: 65535 (0xffff) 00:35:19.256 Admin Max SQ Size: 32 00:35:19.256 Transport Service Identifier: 4420 00:35:19.256 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:35:19.256 Transport Address: 10.0.0.1 00:35:19.256 01:12:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:19.256 get_feature(0x01) failed 00:35:19.256 get_feature(0x02) failed 00:35:19.256 get_feature(0x04) failed 00:35:19.256 ===================================================== 00:35:19.256 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:35:19.256 ===================================================== 00:35:19.256 Controller Capabilities/Features 00:35:19.256 ================================ 00:35:19.256 Vendor ID: 0000 00:35:19.256 Subsystem Vendor ID: 0000 00:35:19.256 Serial Number: d64cb3fc4554cc767182 00:35:19.256 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:35:19.256 Firmware Version: 6.8.9-20 00:35:19.256 Recommended Arb Burst: 6 00:35:19.256 IEEE OUI Identifier: 00 00 00 00:35:19.256 Multi-path I/O 00:35:19.256 May have multiple subsystem ports: Yes 00:35:19.256 May have multiple controllers: Yes 00:35:19.256 Associated with SR-IOV VF: No 00:35:19.256 Max Data Transfer Size: Unlimited 00:35:19.256 Max Number of Namespaces: 1024 00:35:19.256 Max Number of I/O Queues: 128 00:35:19.256 NVMe Specification Version (VS): 1.3 00:35:19.256 NVMe Specification Version (Identify): 1.3 00:35:19.256 Maximum Queue Entries: 1024 00:35:19.256 Contiguous Queues Required: No 00:35:19.256 Arbitration Mechanisms Supported 00:35:19.256 Weighted Round Robin: Not Supported 00:35:19.256 Vendor Specific: Not Supported 00:35:19.256 Reset Timeout: 7500 ms 00:35:19.256 Doorbell Stride: 4 bytes 00:35:19.256 NVM Subsystem Reset: Not Supported 00:35:19.256 Command Sets Supported 00:35:19.256 NVM Command Set: Supported 00:35:19.256 Boot Partition: Not Supported 00:35:19.256 
Memory Page Size Minimum: 4096 bytes 00:35:19.256 Memory Page Size Maximum: 4096 bytes 00:35:19.256 Persistent Memory Region: Not Supported 00:35:19.256 Optional Asynchronous Events Supported 00:35:19.256 Namespace Attribute Notices: Supported 00:35:19.256 Firmware Activation Notices: Not Supported 00:35:19.256 ANA Change Notices: Supported 00:35:19.256 PLE Aggregate Log Change Notices: Not Supported 00:35:19.256 LBA Status Info Alert Notices: Not Supported 00:35:19.256 EGE Aggregate Log Change Notices: Not Supported 00:35:19.256 Normal NVM Subsystem Shutdown event: Not Supported 00:35:19.256 Zone Descriptor Change Notices: Not Supported 00:35:19.256 Discovery Log Change Notices: Not Supported 00:35:19.256 Controller Attributes 00:35:19.256 128-bit Host Identifier: Supported 00:35:19.256 Non-Operational Permissive Mode: Not Supported 00:35:19.256 NVM Sets: Not Supported 00:35:19.256 Read Recovery Levels: Not Supported 00:35:19.256 Endurance Groups: Not Supported 00:35:19.256 Predictable Latency Mode: Not Supported 00:35:19.256 Traffic Based Keep ALive: Supported 00:35:19.256 Namespace Granularity: Not Supported 00:35:19.256 SQ Associations: Not Supported 00:35:19.256 UUID List: Not Supported 00:35:19.256 Multi-Domain Subsystem: Not Supported 00:35:19.256 Fixed Capacity Management: Not Supported 00:35:19.256 Variable Capacity Management: Not Supported 00:35:19.256 Delete Endurance Group: Not Supported 00:35:19.256 Delete NVM Set: Not Supported 00:35:19.256 Extended LBA Formats Supported: Not Supported 00:35:19.256 Flexible Data Placement Supported: Not Supported 00:35:19.256 00:35:19.256 Controller Memory Buffer Support 00:35:19.256 ================================ 00:35:19.256 Supported: No 00:35:19.256 00:35:19.256 Persistent Memory Region Support 00:35:19.256 ================================ 00:35:19.256 Supported: No 00:35:19.256 00:35:19.256 Admin Command Set Attributes 00:35:19.256 ============================ 00:35:19.256 Security Send/Receive: Not Supported 00:35:19.256 Format NVM: Not Supported 00:35:19.256 Firmware Activate/Download: Not Supported 00:35:19.256 Namespace Management: Not Supported 00:35:19.256 Device Self-Test: Not Supported 00:35:19.256 Directives: Not Supported 00:35:19.256 NVMe-MI: Not Supported 00:35:19.257 Virtualization Management: Not Supported 00:35:19.257 Doorbell Buffer Config: Not Supported 00:35:19.257 Get LBA Status Capability: Not Supported 00:35:19.257 Command & Feature Lockdown Capability: Not Supported 00:35:19.257 Abort Command Limit: 4 00:35:19.257 Async Event Request Limit: 4 00:35:19.257 Number of Firmware Slots: N/A 00:35:19.257 Firmware Slot 1 Read-Only: N/A 00:35:19.257 Firmware Activation Without Reset: N/A 00:35:19.257 Multiple Update Detection Support: N/A 00:35:19.257 Firmware Update Granularity: No Information Provided 00:35:19.257 Per-Namespace SMART Log: Yes 00:35:19.257 Asymmetric Namespace Access Log Page: Supported 00:35:19.257 ANA Transition Time : 10 sec 00:35:19.257 00:35:19.257 Asymmetric Namespace Access Capabilities 00:35:19.257 ANA Optimized State : Supported 00:35:19.257 ANA Non-Optimized State : Supported 00:35:19.257 ANA Inaccessible State : Supported 00:35:19.257 ANA Persistent Loss State : Supported 00:35:19.257 ANA Change State : Supported 00:35:19.257 ANAGRPID is not changed : No 00:35:19.257 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:35:19.257 00:35:19.257 ANA Group Identifier Maximum : 128 00:35:19.257 Number of ANA Group Identifiers : 128 00:35:19.257 Max Number of Allowed Namespaces : 1024 00:35:19.257 
Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:35:19.257 Command Effects Log Page: Supported 00:35:19.257 Get Log Page Extended Data: Supported 00:35:19.257 Telemetry Log Pages: Not Supported 00:35:19.257 Persistent Event Log Pages: Not Supported 00:35:19.257 Supported Log Pages Log Page: May Support 00:35:19.257 Commands Supported & Effects Log Page: Not Supported 00:35:19.257 Feature Identifiers & Effects Log Page:May Support 00:35:19.257 NVMe-MI Commands & Effects Log Page: May Support 00:35:19.257 Data Area 4 for Telemetry Log: Not Supported 00:35:19.257 Error Log Page Entries Supported: 128 00:35:19.257 Keep Alive: Supported 00:35:19.257 Keep Alive Granularity: 1000 ms 00:35:19.257 00:35:19.257 NVM Command Set Attributes 00:35:19.257 ========================== 00:35:19.257 Submission Queue Entry Size 00:35:19.257 Max: 64 00:35:19.257 Min: 64 00:35:19.257 Completion Queue Entry Size 00:35:19.257 Max: 16 00:35:19.257 Min: 16 00:35:19.257 Number of Namespaces: 1024 00:35:19.257 Compare Command: Not Supported 00:35:19.257 Write Uncorrectable Command: Not Supported 00:35:19.257 Dataset Management Command: Supported 00:35:19.257 Write Zeroes Command: Supported 00:35:19.257 Set Features Save Field: Not Supported 00:35:19.257 Reservations: Not Supported 00:35:19.257 Timestamp: Not Supported 00:35:19.257 Copy: Not Supported 00:35:19.257 Volatile Write Cache: Present 00:35:19.257 Atomic Write Unit (Normal): 1 00:35:19.257 Atomic Write Unit (PFail): 1 00:35:19.257 Atomic Compare & Write Unit: 1 00:35:19.257 Fused Compare & Write: Not Supported 00:35:19.257 Scatter-Gather List 00:35:19.257 SGL Command Set: Supported 00:35:19.257 SGL Keyed: Not Supported 00:35:19.257 SGL Bit Bucket Descriptor: Not Supported 00:35:19.257 SGL Metadata Pointer: Not Supported 00:35:19.257 Oversized SGL: Not Supported 00:35:19.257 SGL Metadata Address: Not Supported 00:35:19.257 SGL Offset: Supported 00:35:19.257 Transport SGL Data Block: Not Supported 00:35:19.257 Replay Protected Memory Block: Not Supported 00:35:19.257 00:35:19.257 Firmware Slot Information 00:35:19.257 ========================= 00:35:19.257 Active slot: 0 00:35:19.257 00:35:19.257 Asymmetric Namespace Access 00:35:19.257 =========================== 00:35:19.257 Change Count : 0 00:35:19.257 Number of ANA Group Descriptors : 1 00:35:19.257 ANA Group Descriptor : 0 00:35:19.257 ANA Group ID : 1 00:35:19.257 Number of NSID Values : 1 00:35:19.257 Change Count : 0 00:35:19.257 ANA State : 1 00:35:19.257 Namespace Identifier : 1 00:35:19.257 00:35:19.257 Commands Supported and Effects 00:35:19.257 ============================== 00:35:19.257 Admin Commands 00:35:19.257 -------------- 00:35:19.257 Get Log Page (02h): Supported 00:35:19.257 Identify (06h): Supported 00:35:19.257 Abort (08h): Supported 00:35:19.257 Set Features (09h): Supported 00:35:19.257 Get Features (0Ah): Supported 00:35:19.257 Asynchronous Event Request (0Ch): Supported 00:35:19.257 Keep Alive (18h): Supported 00:35:19.257 I/O Commands 00:35:19.257 ------------ 00:35:19.257 Flush (00h): Supported 00:35:19.257 Write (01h): Supported LBA-Change 00:35:19.257 Read (02h): Supported 00:35:19.257 Write Zeroes (08h): Supported LBA-Change 00:35:19.257 Dataset Management (09h): Supported 00:35:19.257 00:35:19.257 Error Log 00:35:19.257 ========= 00:35:19.257 Entry: 0 00:35:19.257 Error Count: 0x3 00:35:19.257 Submission Queue Id: 0x0 00:35:19.257 Command Id: 0x5 00:35:19.257 Phase Bit: 0 00:35:19.257 Status Code: 0x2 00:35:19.257 Status Code Type: 0x0 00:35:19.257 Do Not Retry: 1 00:35:19.257 
Error Location: 0x28 00:35:19.257 LBA: 0x0 00:35:19.257 Namespace: 0x0 00:35:19.257 Vendor Log Page: 0x0 00:35:19.257 ----------- 00:35:19.257 Entry: 1 00:35:19.257 Error Count: 0x2 00:35:19.257 Submission Queue Id: 0x0 00:35:19.257 Command Id: 0x5 00:35:19.257 Phase Bit: 0 00:35:19.257 Status Code: 0x2 00:35:19.257 Status Code Type: 0x0 00:35:19.257 Do Not Retry: 1 00:35:19.257 Error Location: 0x28 00:35:19.257 LBA: 0x0 00:35:19.257 Namespace: 0x0 00:35:19.257 Vendor Log Page: 0x0 00:35:19.257 ----------- 00:35:19.257 Entry: 2 00:35:19.257 Error Count: 0x1 00:35:19.257 Submission Queue Id: 0x0 00:35:19.257 Command Id: 0x4 00:35:19.257 Phase Bit: 0 00:35:19.257 Status Code: 0x2 00:35:19.257 Status Code Type: 0x0 00:35:19.257 Do Not Retry: 1 00:35:19.257 Error Location: 0x28 00:35:19.257 LBA: 0x0 00:35:19.257 Namespace: 0x0 00:35:19.257 Vendor Log Page: 0x0 00:35:19.257 00:35:19.257 Number of Queues 00:35:19.257 ================ 00:35:19.257 Number of I/O Submission Queues: 128 00:35:19.257 Number of I/O Completion Queues: 128 00:35:19.257 00:35:19.257 ZNS Specific Controller Data 00:35:19.257 ============================ 00:35:19.257 Zone Append Size Limit: 0 00:35:19.257 00:35:19.257 00:35:19.257 Active Namespaces 00:35:19.257 ================= 00:35:19.257 get_feature(0x05) failed 00:35:19.257 Namespace ID:1 00:35:19.257 Command Set Identifier: NVM (00h) 00:35:19.257 Deallocate: Supported 00:35:19.257 Deallocated/Unwritten Error: Not Supported 00:35:19.257 Deallocated Read Value: Unknown 00:35:19.257 Deallocate in Write Zeroes: Not Supported 00:35:19.257 Deallocated Guard Field: 0xFFFF 00:35:19.257 Flush: Supported 00:35:19.257 Reservation: Not Supported 00:35:19.257 Namespace Sharing Capabilities: Multiple Controllers 00:35:19.257 Size (in LBAs): 1953525168 (931GiB) 00:35:19.257 Capacity (in LBAs): 1953525168 (931GiB) 00:35:19.257 Utilization (in LBAs): 1953525168 (931GiB) 00:35:19.257 UUID: 9fc0a0df-da4a-4af6-b704-69f7104496e0 00:35:19.257 Thin Provisioning: Not Supported 00:35:19.257 Per-NS Atomic Units: Yes 00:35:19.257 Atomic Boundary Size (Normal): 0 00:35:19.257 Atomic Boundary Size (PFail): 0 00:35:19.257 Atomic Boundary Offset: 0 00:35:19.257 NGUID/EUI64 Never Reused: No 00:35:19.257 ANA group ID: 1 00:35:19.257 Namespace Write Protected: No 00:35:19.257 Number of LBA Formats: 1 00:35:19.257 Current LBA Format: LBA Format #00 00:35:19.257 LBA Format #00: Data Size: 512 Metadata Size: 0 00:35:19.257 00:35:19.257 01:12:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:35:19.257 01:12:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:35:19.257 01:12:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:35:19.257 01:12:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:19.257 01:12:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:35:19.257 01:12:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:19.257 01:12:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:19.257 rmmod nvme_tcp 00:35:19.257 rmmod nvme_fabrics 00:35:19.257 01:12:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:19.257 01:12:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 00:35:19.257 01:12:51 
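The two identify dumps above come from spdk_nvme_identify pointed first at the discovery subsystem and then at the kernel nvmet target it advertises (TCP, 10.0.0.1:4420, nqn.2016-06.io.spdk:testnqn). For comparison only, the same target could be inspected with stock nvme-cli from the initiator side; a minimal sketch, not part of this run, assuming nvme-cli is installed and that the example device name /dev/nvme0 matches whatever the host actually enumerates:

  # Query the discovery service, then attach to and inspect the NVM subsystem it lists.
  nvme discover -t tcp -a 10.0.0.1 -s 4420
  nvme connect -t tcp -a 10.0.0.1 -s 4420 -n nqn.2016-06.io.spdk:testnqn
  nvme list
  nvme id-ctrl /dev/nvme0        # device name is illustrative; depends on what is already attached
  nvme disconnect -n nqn.2016-06.io.spdk:testnqn
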
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:35:19.257 01:12:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:35:19.257 01:12:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:35:19.257 01:12:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:35:19.257 01:12:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:35:19.257 01:12:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # iptr 00:35:19.258 01:12:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-save 00:35:19.258 01:12:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:35:19.258 01:12:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-restore 00:35:19.516 01:12:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:19.516 01:12:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:19.516 01:12:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:19.516 01:12:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:19.516 01:12:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:21.423 01:12:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:21.423 01:12:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:35:21.423 01:12:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:35:21.423 01:12:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # echo 0 00:35:21.423 01:12:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:35:21.423 01:12:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:35:21.423 01:12:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:35:21.423 01:12:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:35:21.423 01:12:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:35:21.423 01:12:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:35:21.423 01:12:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:35:22.798 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:35:22.798 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:35:22.798 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:35:22.798 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:35:22.798 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:35:22.798 0000:00:04.2 
(8086 0e22): ioatdma -> vfio-pci 00:35:22.798 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:35:22.798 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:35:22.798 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:35:22.798 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:35:22.798 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:35:22.798 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:35:22.798 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:35:22.798 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:35:22.798 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:35:22.798 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:35:23.732 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:35:23.990 00:35:23.990 real 0m9.785s 00:35:23.990 user 0m2.161s 00:35:23.990 sys 0m3.575s 00:35:23.990 01:12:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:23.990 01:12:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:35:23.990 ************************************ 00:35:23.990 END TEST nvmf_identify_kernel_target 00:35:23.990 ************************************ 00:35:23.990 01:12:56 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:35:23.990 01:12:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:35:23.990 01:12:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:23.990 01:12:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:35:23.990 ************************************ 00:35:23.990 START TEST nvmf_auth_host 00:35:23.990 ************************************ 00:35:23.990 01:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:35:23.990 * Looking for test storage... 
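Before the timing summary above, clean_kernel_target removed the kernel nvmet configuration in roughly the reverse order of its creation, walking the configfs tree bottom-up and then unloading the modules. Restated as a standalone sketch with the paths taken from the log; xtrace does not show redirect targets, so the destination of the bare `echo 0` is assumed here to be the namespace's enable attribute:

  subsys=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
  echo 0 > "$subsys/namespaces/1/enable"      # assumed target of the bare 'echo 0' in the log
  rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn   # unlink port from subsystem
  rmdir "$subsys/namespaces/1"                # drop the namespace
  rmdir /sys/kernel/config/nvmet/ports/1      # drop the port
  rmdir "$subsys"                             # drop the subsystem itself
  modprobe -r nvmet_tcp nvmet                 # unload transport, then core
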
00:35:23.990 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:35:23.990 01:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:35:23.990 01:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # lcov --version 00:35:23.990 01:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:35:23.990 01:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:35:23.990 01:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:23.990 01:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:23.990 01:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:23.990 01:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:35:23.990 01:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:35:23.990 01:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:35:23.990 01:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:35:23.990 01:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:35:23.990 01:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:35:23.990 01:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:35:23.990 01:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:23.990 01:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:35:23.990 01:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:35:23.990 01:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:23.990 01:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:23.990 01:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:35:23.991 01:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:35:23.991 01:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:23.991 01:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:35:23.991 01:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:35:23.991 01:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:35:23.991 01:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:35:23.991 01:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:23.991 01:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:35:23.991 01:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:35:23.991 01:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:23.991 01:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:23.991 01:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:35:23.991 01:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:23.991 01:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:35:23.991 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:23.991 --rc genhtml_branch_coverage=1 00:35:23.991 --rc genhtml_function_coverage=1 00:35:23.991 --rc genhtml_legend=1 00:35:23.991 --rc geninfo_all_blocks=1 00:35:23.991 --rc geninfo_unexecuted_blocks=1 00:35:23.991 00:35:23.991 ' 00:35:23.991 01:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:35:23.991 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:23.991 --rc genhtml_branch_coverage=1 00:35:23.991 --rc genhtml_function_coverage=1 00:35:23.991 --rc genhtml_legend=1 00:35:23.991 --rc geninfo_all_blocks=1 00:35:23.991 --rc geninfo_unexecuted_blocks=1 00:35:23.991 00:35:23.991 ' 00:35:23.991 01:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:35:23.991 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:23.991 --rc genhtml_branch_coverage=1 00:35:23.991 --rc genhtml_function_coverage=1 00:35:23.991 --rc genhtml_legend=1 00:35:23.991 --rc geninfo_all_blocks=1 00:35:23.991 --rc geninfo_unexecuted_blocks=1 00:35:23.991 00:35:23.991 ' 00:35:23.991 01:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:35:23.991 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:23.991 --rc genhtml_branch_coverage=1 00:35:23.991 --rc genhtml_function_coverage=1 00:35:23.991 --rc genhtml_legend=1 00:35:23.991 --rc geninfo_all_blocks=1 00:35:23.991 --rc geninfo_unexecuted_blocks=1 00:35:23.991 00:35:23.991 ' 00:35:23.991 01:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:23.991 01:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:35:23.991 01:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:23.991 01:12:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:23.991 01:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:23.991 01:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:23.991 01:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:23.991 01:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:23.991 01:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:23.991 01:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:23.991 01:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:23.991 01:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:23.991 01:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:35:23.991 01:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:35:23.991 01:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:23.991 01:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:23.991 01:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:23.991 01:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:23.991 01:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:23.991 01:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:35:23.991 01:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:23.991 01:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:23.991 01:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:23.991 01:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:23.991 01:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:23.991 01:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:23.991 01:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:35:23.991 01:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:23.991 01:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:35:23.991 01:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:23.991 01:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:23.991 01:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:23.991 01:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:23.991 01:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:23.991 01:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:35:23.991 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:35:23.991 01:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:23.991 01:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:23.991 01:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:23.991 01:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:35:23.991 01:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:35:23.991 01:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # 
subnqn=nqn.2024-02.io.spdk:cnode0 00:35:23.991 01:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:35:23.991 01:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:35:23.991 01:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:35:23.991 01:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:35:23.991 01:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:35:23.991 01:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:35:23.991 01:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:35:23.991 01:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:23.991 01:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:35:23.991 01:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:35:23.991 01:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:35:23.991 01:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:23.991 01:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:23.991 01:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:23.991 01:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:35:23.991 01:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:35:23.991 01:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@309 -- # xtrace_disable 00:35:23.991 01:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:25.888 01:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:25.888 01:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # pci_devs=() 00:35:25.888 01:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:25.888 01:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:25.888 01:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:25.888 01:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:25.888 01:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:25.888 01:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # net_devs=() 00:35:25.888 01:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:25.888 01:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # e810=() 00:35:25.888 01:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # local -ga e810 00:35:25.888 01:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # x722=() 00:35:25.888 01:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # local -ga x722 00:35:25.888 01:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # mlx=() 00:35:25.888 01:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # local -ga mlx 00:35:25.888 01:12:58 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:25.889 01:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:25.889 01:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:25.889 01:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:25.889 01:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:25.889 01:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:25.889 01:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:25.889 01:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:25.889 01:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:25.889 01:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:25.889 01:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:25.889 01:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:25.889 01:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:25.889 01:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:35:25.889 01:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:35:25.889 01:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:35:25.889 01:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:35:25.889 01:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:35:25.889 01:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:25.889 01:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:35:25.889 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:35:25.889 01:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:25.889 01:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:25.889 01:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:25.889 01:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:25.889 01:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:25.889 01:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:25.889 01:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:35:25.889 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:35:25.889 01:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:25.889 01:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:25.889 01:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:25.889 
01:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:25.889 01:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:25.889 01:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:25.889 01:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:35:25.889 01:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:35:25.889 01:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:25.889 01:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:25.889 01:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:25.889 01:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:25.889 01:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:25.889 01:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:25.889 01:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:25.889 01:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:35:25.889 Found net devices under 0000:0a:00.0: cvl_0_0 00:35:25.889 01:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:25.889 01:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:25.889 01:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:25.889 01:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:25.889 01:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:25.889 01:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:25.889 01:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:25.889 01:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:25.889 01:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:35:25.889 Found net devices under 0000:0a:00.1: cvl_0_1 00:35:25.889 01:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:25.889 01:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:35:25.889 01:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # is_hw=yes 00:35:25.889 01:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:35:25.889 01:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:35:25.889 01:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:35:25.889 01:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:25.889 01:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:25.889 01:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:25.889 01:12:58 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:25.889 01:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:25.889 01:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:25.889 01:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:25.889 01:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:25.889 01:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:25.889 01:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:25.889 01:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:25.889 01:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:25.889 01:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:25.889 01:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:25.889 01:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:25.889 01:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:25.889 01:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:25.889 01:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:25.889 01:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:25.889 01:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:26.147 01:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:26.147 01:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:26.147 01:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:26.147 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:26.147 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.293 ms 00:35:26.147 00:35:26.147 --- 10.0.0.2 ping statistics --- 00:35:26.147 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:26.147 rtt min/avg/max/mdev = 0.293/0.293/0.293/0.000 ms 00:35:26.147 01:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:26.147 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:35:26.147 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.075 ms 00:35:26.147 00:35:26.147 --- 10.0.0.1 ping statistics --- 00:35:26.147 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:26.147 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:35:26.147 01:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:26.147 01:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@450 -- # return 0 00:35:26.147 01:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:35:26.147 01:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:26.147 01:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:35:26.147 01:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:35:26.147 01:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:26.147 01:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:35:26.147 01:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:35:26.147 01:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:35:26.147 01:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:35:26.147 01:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:26.147 01:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:26.147 01:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@509 -- # nvmfpid=3119584 00:35:26.147 01:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:35:26.147 01:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@510 -- # waitforlisten 3119584 00:35:26.147 01:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 3119584 ']' 00:35:26.147 01:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:26.147 01:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:26.147 01:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
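The connectivity that the two pings above confirm was built by nvmftestinit: the first detected NIC port (cvl_0_0) is moved into a private network namespace to act as the target side, while its sibling (cvl_0_1) stays in the default namespace as the initiator, with an iptables rule opening the NVMe/TCP port. Condensed from the commands visible in the log:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk               # target-side NIC lives in the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                     # initiator address, default namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                      # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1        # target -> initiator
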
00:35:26.147 01:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:26.147 01:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:27.081 01:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:27.081 01:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:35:27.081 01:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:35:27.081 01:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:27.081 01:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:27.081 01:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:27.081 01:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:35:27.081 01:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:35:27.081 01:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:35:27.081 01:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:35:27.081 01:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:35:27.081 01:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:35:27.081 01:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:35:27.081 01:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:35:27.081 01:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=0bb3a950c661dadce9c16cff376b4c78 00:35:27.082 01:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:35:27.082 01:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.RO7 00:35:27.082 01:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 0bb3a950c661dadce9c16cff376b4c78 0 00:35:27.082 01:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 0bb3a950c661dadce9c16cff376b4c78 0 00:35:27.082 01:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:35:27.082 01:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:35:27.082 01:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=0bb3a950c661dadce9c16cff376b4c78 00:35:27.082 01:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:35:27.082 01:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:35:27.340 01:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.RO7 00:35:27.340 01:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.RO7 00:35:27.340 01:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.RO7 00:35:27.340 01:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:35:27.340 01:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:35:27.340 01:12:59 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:35:27.340 01:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:35:27.340 01:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:35:27.340 01:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:35:27.340 01:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:35:27.340 01:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=ce220ddc3d191e5a2dc70afbab0ed5c93be404fc3a83a43d00fe5b04f8de0178 00:35:27.340 01:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:35:27.340 01:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.VYQ 00:35:27.340 01:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key ce220ddc3d191e5a2dc70afbab0ed5c93be404fc3a83a43d00fe5b04f8de0178 3 00:35:27.340 01:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 ce220ddc3d191e5a2dc70afbab0ed5c93be404fc3a83a43d00fe5b04f8de0178 3 00:35:27.340 01:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:35:27.340 01:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:35:27.340 01:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=ce220ddc3d191e5a2dc70afbab0ed5c93be404fc3a83a43d00fe5b04f8de0178 00:35:27.340 01:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:35:27.340 01:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:35:27.340 01:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.VYQ 00:35:27.340 01:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.VYQ 00:35:27.340 01:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.VYQ 00:35:27.340 01:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:35:27.340 01:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:35:27.340 01:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:35:27.340 01:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:35:27.340 01:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:35:27.340 01:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:35:27.340 01:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:35:27.340 01:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=fbc8734223fc5d20a929707d76a650a686f74c20aee7a9e1 00:35:27.340 01:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:35:27.340 01:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.WDR 00:35:27.340 01:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key fbc8734223fc5d20a929707d76a650a686f74c20aee7a9e1 0 00:35:27.340 01:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 fbc8734223fc5d20a929707d76a650a686f74c20aee7a9e1 0 
00:35:27.340 01:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:35:27.340 01:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:35:27.340 01:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=fbc8734223fc5d20a929707d76a650a686f74c20aee7a9e1 00:35:27.340 01:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:35:27.340 01:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:35:27.340 01:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.WDR 00:35:27.340 01:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.WDR 00:35:27.341 01:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.WDR 00:35:27.341 01:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:35:27.341 01:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:35:27.341 01:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:35:27.341 01:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:35:27.341 01:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:35:27.341 01:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:35:27.341 01:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:35:27.341 01:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=4547cdd6bcf909aa15f8f8d90acd57fead172555487797b2 00:35:27.341 01:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:35:27.341 01:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.2oF 00:35:27.341 01:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 4547cdd6bcf909aa15f8f8d90acd57fead172555487797b2 2 00:35:27.341 01:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 4547cdd6bcf909aa15f8f8d90acd57fead172555487797b2 2 00:35:27.341 01:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:35:27.341 01:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:35:27.341 01:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=4547cdd6bcf909aa15f8f8d90acd57fead172555487797b2 00:35:27.341 01:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:35:27.341 01:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:35:27.341 01:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.2oF 00:35:27.341 01:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.2oF 00:35:27.341 01:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.2oF 00:35:27.341 01:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:35:27.341 01:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:35:27.341 01:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:35:27.341 01:12:59 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:35:27.341 01:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:35:27.341 01:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:35:27.341 01:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:35:27.341 01:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=9c4eefac54e1006bd6d96b6f267d4f95 00:35:27.341 01:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:35:27.341 01:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.Iti 00:35:27.341 01:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 9c4eefac54e1006bd6d96b6f267d4f95 1 00:35:27.341 01:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 9c4eefac54e1006bd6d96b6f267d4f95 1 00:35:27.341 01:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:35:27.341 01:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:35:27.341 01:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=9c4eefac54e1006bd6d96b6f267d4f95 00:35:27.341 01:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:35:27.341 01:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:35:27.341 01:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.Iti 00:35:27.341 01:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.Iti 00:35:27.341 01:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.Iti 00:35:27.341 01:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:35:27.341 01:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:35:27.341 01:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:35:27.341 01:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:35:27.341 01:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:35:27.341 01:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:35:27.341 01:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:35:27.341 01:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=238bfcf429b71bab10144d9d744bebca 00:35:27.341 01:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:35:27.341 01:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.lCA 00:35:27.341 01:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 238bfcf429b71bab10144d9d744bebca 1 00:35:27.341 01:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 238bfcf429b71bab10144d9d744bebca 1 00:35:27.341 01:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:35:27.341 01:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:35:27.341 01:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # 
key=238bfcf429b71bab10144d9d744bebca 00:35:27.341 01:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:35:27.341 01:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:35:27.599 01:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.lCA 00:35:27.599 01:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.lCA 00:35:27.599 01:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.lCA 00:35:27.599 01:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:35:27.599 01:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:35:27.599 01:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:35:27.599 01:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:35:27.599 01:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:35:27.599 01:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:35:27.599 01:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:35:27.599 01:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=c02f9e71ad3a34144c24505b6f74b2c5d641881a1d4c9db0 00:35:27.599 01:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:35:27.599 01:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.oAe 00:35:27.599 01:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key c02f9e71ad3a34144c24505b6f74b2c5d641881a1d4c9db0 2 00:35:27.599 01:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 c02f9e71ad3a34144c24505b6f74b2c5d641881a1d4c9db0 2 00:35:27.599 01:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:35:27.599 01:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:35:27.599 01:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=c02f9e71ad3a34144c24505b6f74b2c5d641881a1d4c9db0 00:35:27.599 01:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:35:27.599 01:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:35:27.599 01:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.oAe 00:35:27.599 01:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.oAe 00:35:27.599 01:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.oAe 00:35:27.599 01:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:35:27.599 01:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:35:27.599 01:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:35:27.599 01:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:35:27.599 01:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:35:27.599 01:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:35:27.599 01:13:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:35:27.599 01:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=01f15678c0aedcb36fee47d7d85bcca3 00:35:27.599 01:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:35:27.599 01:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.sgn 00:35:27.599 01:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 01f15678c0aedcb36fee47d7d85bcca3 0 00:35:27.599 01:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 01f15678c0aedcb36fee47d7d85bcca3 0 00:35:27.599 01:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:35:27.599 01:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:35:27.599 01:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=01f15678c0aedcb36fee47d7d85bcca3 00:35:27.599 01:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:35:27.599 01:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:35:27.599 01:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.sgn 00:35:27.599 01:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.sgn 00:35:27.599 01:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.sgn 00:35:27.599 01:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:35:27.599 01:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:35:27.599 01:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:35:27.599 01:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:35:27.599 01:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:35:27.599 01:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:35:27.599 01:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:35:27.599 01:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=ca6cd1f3713f67761efecb7884a76f07edfc1797eb989f2ff47e7ee6bfc2baf0 00:35:27.599 01:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:35:27.599 01:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.ipL 00:35:27.599 01:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key ca6cd1f3713f67761efecb7884a76f07edfc1797eb989f2ff47e7ee6bfc2baf0 3 00:35:27.599 01:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 ca6cd1f3713f67761efecb7884a76f07edfc1797eb989f2ff47e7ee6bfc2baf0 3 00:35:27.599 01:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:35:27.599 01:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:35:27.599 01:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=ca6cd1f3713f67761efecb7884a76f07edfc1797eb989f2ff47e7ee6bfc2baf0 00:35:27.599 01:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:35:27.599 01:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@733 -- # python - 00:35:27.599 01:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.ipL 00:35:27.600 01:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.ipL 00:35:27.600 01:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.ipL 00:35:27.600 01:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:35:27.600 01:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 3119584 00:35:27.600 01:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 3119584 ']' 00:35:27.600 01:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:27.600 01:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:27.600 01:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:27.600 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:27.600 01:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:27.600 01:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:27.858 01:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:27.858 01:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:35:27.858 01:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:35:27.858 01:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.RO7 00:35:27.858 01:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:27.858 01:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:27.858 01:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:27.858 01:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.VYQ ]] 00:35:27.858 01:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.VYQ 00:35:27.858 01:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:27.858 01:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:27.858 01:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:27.858 01:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:35:27.858 01:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.WDR 00:35:27.858 01:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:27.858 01:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:27.858 01:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:27.858 01:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.2oF ]] 00:35:27.858 01:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 
/tmp/spdk.key-sha384.2oF 00:35:27.858 01:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:27.858 01:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:27.858 01:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:27.858 01:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:35:27.858 01:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.Iti 00:35:27.858 01:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:27.858 01:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:27.858 01:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:27.858 01:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.lCA ]] 00:35:27.858 01:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.lCA 00:35:27.858 01:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:27.858 01:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:27.858 01:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:27.858 01:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:35:27.858 01:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.oAe 00:35:27.858 01:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:28.116 01:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:28.116 01:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:28.116 01:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.sgn ]] 00:35:28.116 01:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.sgn 00:35:28.116 01:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:28.116 01:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:28.116 01:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:28.116 01:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:35:28.116 01:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.ipL 00:35:28.116 01:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:28.116 01:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:28.116 01:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:28.116 01:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:35:28.116 01:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:35:28.116 01:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:35:28.116 01:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:28.116 01:13:00 
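The xtrace above is gen_dhchap_key doing its work: xxd pulls random bytes from /dev/urandom, an inline python helper wraps the hex string into a DHHC-1 secret, the temp file is locked down with chmod 0600, and each resulting file is handed to the SPDK keyring via keyring_file_add_key. A minimal standalone sketch of the same flow, assuming scripts/rpc.py is on PATH; the key name is illustrative, and the formatting step (elided in the trace) is assumed to base64-encode the ASCII hex string plus a little-endian CRC32, which is consistent with the DHHC-1 values printed later in this log:

  key=$(xxd -p -c0 -l 16 /dev/urandom)            # 32 hex chars, as for the sha256 keys above
  keyfile=$(mktemp -t spdk.key-sha256.XXX)
  # Assumed equivalent of the traced "python -" step: DHHC-1:<digest-id>:<base64(hex-key + crc32)>:
  python3 -c 'import sys,base64,zlib; k=sys.argv[1].encode(); c=zlib.crc32(k).to_bytes(4,"little"); print("DHHC-1:01:%s:" % base64.b64encode(k+c).decode(), end="")' "$key" > "$keyfile"
  chmod 0600 "$keyfile"
  ./scripts/rpc.py keyring_file_add_key key0 "$keyfile"   # register the file-backed key with SPDK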
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:28.116 01:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:28.116 01:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:28.116 01:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:28.116 01:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:28.117 01:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:28.117 01:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:28.117 01:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:28.117 01:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:28.117 01:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:35:28.117 01:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:35:28.117 01:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:35:28.117 01:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:35:28.117 01:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:35:28.117 01:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:35:28.117 01:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # local block nvme 00:35:28.117 01:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:35:28.117 01:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@670 -- # modprobe nvmet 00:35:28.117 01:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:35:28.117 01:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:35:29.491 Waiting for block devices as requested 00:35:29.491 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:35:29.491 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:35:29.491 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:35:29.491 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:35:29.748 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:35:29.748 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:35:29.748 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:35:30.006 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:35:30.006 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:35:30.006 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:35:30.006 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:35:30.263 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:35:30.264 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:35:30.264 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:35:30.264 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:35:30.522 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:35:30.522 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:35:30.780 01:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:35:30.780 01:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:35:30.780 01:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:35:30.780 01:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:35:30.780 01:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:35:30.780 01:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:35:30.780 01:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:35:30.780 01:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:35:30.780 01:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:35:30.780 No valid GPT data, bailing 00:35:31.038 01:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:35:31.038 01:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:35:31.038 01:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:35:31.038 01:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:35:31.038 01:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:35:31.038 01:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:35:31.038 01:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:35:31.038 01:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:35:31.038 01:13:03 
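The mkdir calls above, together with the echo and ln -s steps traced just below, assemble a kernel NVMe-oF soft target over configfs and back it with the freshly reclaimed /dev/nvme0n1. The xtrace drops the redirection targets, so the attribute paths in this sketch are the standard nvmet configfs names and the exact mapping should be read as an assumption (the trace also writes SPDK-nqn.2024-02.io.spdk:cnode0 into a subsystem attribute, omitted here):

  nvmet=/sys/kernel/config/nvmet
  subsys=$nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
  modprobe nvmet
  mkdir "$subsys" "$subsys/namespaces/1" "$nvmet/ports/1"
  echo 1            > "$subsys/attr_allow_any_host"       # opened up here, restricted again by nvmet_auth_init
  echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"  # back the namespace with the local NVMe disk
  echo 1            > "$subsys/namespaces/1/enable"
  echo 10.0.0.1     > "$nvmet/ports/1/addr_traddr"        # the test's target address
  echo tcp          > "$nvmet/ports/1/addr_trtype"
  echo 4420         > "$nvmet/ports/1/addr_trsvcid"
  echo ipv4         > "$nvmet/ports/1/addr_adrfam"
  ln -s "$subsys" "$nvmet/ports/1/subsystems/"            # expose the subsystem on the TCP port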
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:35:31.038 01:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 1 00:35:31.038 01:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:35:31.038 01:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 1 00:35:31.038 01:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:35:31.038 01:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@700 -- # echo tcp 00:35:31.038 01:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@701 -- # echo 4420 00:35:31.038 01:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # echo ipv4 00:35:31.038 01:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:35:31.038 01:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:35:31.038 00:35:31.038 Discovery Log Number of Records 2, Generation counter 2 00:35:31.038 =====Discovery Log Entry 0====== 00:35:31.038 trtype: tcp 00:35:31.038 adrfam: ipv4 00:35:31.038 subtype: current discovery subsystem 00:35:31.038 treq: not specified, sq flow control disable supported 00:35:31.038 portid: 1 00:35:31.038 trsvcid: 4420 00:35:31.038 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:35:31.038 traddr: 10.0.0.1 00:35:31.038 eflags: none 00:35:31.038 sectype: none 00:35:31.038 =====Discovery Log Entry 1====== 00:35:31.038 trtype: tcp 00:35:31.038 adrfam: ipv4 00:35:31.038 subtype: nvme subsystem 00:35:31.038 treq: not specified, sq flow control disable supported 00:35:31.038 portid: 1 00:35:31.038 trsvcid: 4420 00:35:31.038 subnqn: nqn.2024-02.io.spdk:cnode0 00:35:31.038 traddr: 10.0.0.1 00:35:31.038 eflags: none 00:35:31.038 sectype: none 00:35:31.038 01:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:35:31.038 01:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:35:31.038 01:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:35:31.038 01:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:35:31.038 01:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:31.038 01:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:31.038 01:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:31.038 01:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:31.038 01:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmJjODczNDIyM2ZjNWQyMGE5Mjk3MDdkNzZhNjUwYTY4NmY3NGMyMGFlZTdhOWUxdfAhpA==: 00:35:31.038 01:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDU0N2NkZDZiY2Y5MDlhYTE1ZjhmOGQ5MGFjZDU3ZmVhZDE3MjU1NTQ4Nzc5N2IyUxWCHg==: 00:35:31.038 01:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:31.038 01:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@49 -- # echo ffdhe2048 00:35:31.038 01:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmJjODczNDIyM2ZjNWQyMGE5Mjk3MDdkNzZhNjUwYTY4NmY3NGMyMGFlZTdhOWUxdfAhpA==: 00:35:31.038 01:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDU0N2NkZDZiY2Y5MDlhYTE1ZjhmOGQ5MGFjZDU3ZmVhZDE3MjU1NTQ4Nzc5N2IyUxWCHg==: ]] 00:35:31.038 01:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDU0N2NkZDZiY2Y5MDlhYTE1ZjhmOGQ5MGFjZDU3ZmVhZDE3MjU1NTQ4Nzc5N2IyUxWCHg==: 00:35:31.038 01:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:35:31.038 01:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:35:31.038 01:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:35:31.038 01:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:35:31.038 01:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:35:31.038 01:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:31.038 01:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:35:31.038 01:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:35:31.038 01:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:31.038 01:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:31.038 01:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:35:31.039 01:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:31.039 01:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:31.039 01:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:31.039 01:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:31.039 01:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:31.039 01:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:31.039 01:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:31.039 01:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:31.039 01:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:31.039 01:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:31.039 01:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:31.039 01:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:31.039 01:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:31.039 01:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:31.039 01:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:31.039 01:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:31.039 01:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:31.297 nvme0n1 00:35:31.297 01:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:31.297 01:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:31.297 01:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:31.297 01:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:31.297 01:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:31.297 01:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:31.297 01:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:31.297 01:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:31.297 01:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:31.297 01:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:31.297 01:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:31.297 01:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:35:31.297 01:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:31.297 01:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:31.297 01:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:35:31.297 01:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:31.297 01:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:31.297 01:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:31.297 01:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:31.297 01:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGJiM2E5NTBjNjYxZGFkY2U5YzE2Y2ZmMzc2YjRjNzhH0Zjs: 00:35:31.297 01:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Y2UyMjBkZGMzZDE5MWU1YTJkYzcwYWZiYWIwZWQ1YzkzYmU0MDRmYzNhODNhNDNkMDBmZTViMDRmOGRlMDE3OG2R90Q=: 00:35:31.297 01:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:31.297 01:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:31.297 01:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGJiM2E5NTBjNjYxZGFkY2U5YzE2Y2ZmMzc2YjRjNzhH0Zjs: 00:35:31.297 01:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Y2UyMjBkZGMzZDE5MWU1YTJkYzcwYWZiYWIwZWQ1YzkzYmU0MDRmYzNhODNhNDNkMDBmZTViMDRmOGRlMDE3OG2R90Q=: ]] 00:35:31.297 01:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Y2UyMjBkZGMzZDE5MWU1YTJkYzcwYWZiYWIwZWQ1YzkzYmU0MDRmYzNhODNhNDNkMDBmZTViMDRmOGRlMDE3OG2R90Q=: 00:35:31.297 01:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 
00:35:31.297 01:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:31.297 01:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:31.297 01:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:31.297 01:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:31.297 01:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:31.297 01:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:35:31.297 01:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:31.297 01:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:31.297 01:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:31.297 01:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:31.297 01:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:31.297 01:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:31.297 01:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:31.297 01:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:31.297 01:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:31.297 01:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:31.297 01:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:31.297 01:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:31.297 01:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:31.297 01:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:31.297 01:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:31.297 01:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:31.297 01:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:31.556 nvme0n1 00:35:31.556 01:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:31.556 01:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:31.556 01:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:31.556 01:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:31.556 01:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:31.556 01:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:31.556 01:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:31.556 01:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:31.556 01:13:04 
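On the initiator side, every iteration follows the pattern visible above: bdev_nvme_set_options narrows the accepted DH-HMAC-CHAP digests and DH groups, bdev_nvme_attach_controller connects to the kernel target with --dhchap-key/--dhchap-ctrlr-key, bdev_nvme_get_controllers confirms nvme0 came up, and bdev_nvme_detach_controller tears it back down. The same calls issued directly through scripts/rpc.py (rpc_cmd in the suite is a thin wrapper around it; values copied from the trace):

  ./scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
  ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key1 --dhchap-ctrlr-key ckey1
  ./scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name'   # expect: nvme0
  ./scripts/rpc.py bdev_nvme_detach_controller nvme0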
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:31.556 01:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:31.556 01:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:31.556 01:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:31.556 01:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:35:31.556 01:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:31.556 01:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:31.556 01:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:31.556 01:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:31.556 01:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmJjODczNDIyM2ZjNWQyMGE5Mjk3MDdkNzZhNjUwYTY4NmY3NGMyMGFlZTdhOWUxdfAhpA==: 00:35:31.556 01:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDU0N2NkZDZiY2Y5MDlhYTE1ZjhmOGQ5MGFjZDU3ZmVhZDE3MjU1NTQ4Nzc5N2IyUxWCHg==: 00:35:31.556 01:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:31.556 01:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:31.556 01:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmJjODczNDIyM2ZjNWQyMGE5Mjk3MDdkNzZhNjUwYTY4NmY3NGMyMGFlZTdhOWUxdfAhpA==: 00:35:31.556 01:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDU0N2NkZDZiY2Y5MDlhYTE1ZjhmOGQ5MGFjZDU3ZmVhZDE3MjU1NTQ4Nzc5N2IyUxWCHg==: ]] 00:35:31.556 01:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDU0N2NkZDZiY2Y5MDlhYTE1ZjhmOGQ5MGFjZDU3ZmVhZDE3MjU1NTQ4Nzc5N2IyUxWCHg==: 00:35:31.556 01:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:35:31.556 01:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:31.556 01:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:31.556 01:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:31.556 01:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:31.556 01:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:31.556 01:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:35:31.556 01:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:31.556 01:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:31.556 01:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:31.556 01:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:31.556 01:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:31.556 01:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:31.556 01:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:31.556 01:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:31.556 01:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:31.556 01:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:31.556 01:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:31.556 01:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:31.556 01:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:31.556 01:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:31.556 01:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:31.556 01:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:31.556 01:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:31.556 nvme0n1 00:35:31.556 01:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:31.556 01:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:31.556 01:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:31.556 01:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:31.556 01:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:31.556 01:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:31.815 01:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:31.815 01:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:31.815 01:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:31.815 01:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:31.815 01:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:31.815 01:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:31.815 01:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:35:31.815 01:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:31.815 01:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:31.815 01:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:31.815 01:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:31.815 01:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OWM0ZWVmYWM1NGUxMDA2YmQ2ZDk2YjZmMjY3ZDRmOTWgGCwn: 00:35:31.815 01:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjM4YmZjZjQyOWI3MWJhYjEwMTQ0ZDlkNzQ0YmViY2GU1dcN: 00:35:31.815 01:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:31.815 01:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:31.815 01:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # 
echo DHHC-1:01:OWM0ZWVmYWM1NGUxMDA2YmQ2ZDk2YjZmMjY3ZDRmOTWgGCwn: 00:35:31.815 01:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjM4YmZjZjQyOWI3MWJhYjEwMTQ0ZDlkNzQ0YmViY2GU1dcN: ]] 00:35:31.815 01:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjM4YmZjZjQyOWI3MWJhYjEwMTQ0ZDlkNzQ0YmViY2GU1dcN: 00:35:31.815 01:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:35:31.815 01:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:31.815 01:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:31.815 01:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:31.815 01:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:31.815 01:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:31.815 01:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:35:31.815 01:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:31.815 01:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:31.815 01:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:31.815 01:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:31.815 01:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:31.815 01:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:31.815 01:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:31.815 01:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:31.815 01:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:31.815 01:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:31.815 01:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:31.815 01:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:31.815 01:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:31.815 01:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:31.815 01:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:31.815 01:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:31.815 01:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:31.815 nvme0n1 00:35:31.815 01:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:31.815 01:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:31.815 01:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:31.815 01:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # 
set +x 00:35:31.815 01:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:31.815 01:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:31.815 01:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:31.815 01:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:31.815 01:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:31.815 01:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:32.074 01:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:32.074 01:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:32.074 01:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:35:32.074 01:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:32.074 01:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:32.074 01:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:32.074 01:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:32.074 01:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YzAyZjllNzFhZDNhMzQxNDRjMjQ1MDViNmY3NGIyYzVkNjQxODgxYTFkNGM5ZGIw3oRY2g==: 00:35:32.074 01:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MDFmMTU2NzhjMGFlZGNiMzZmZWU0N2Q3ZDg1YmNjYTPZZRVL: 00:35:32.074 01:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:32.074 01:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:32.074 01:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzAyZjllNzFhZDNhMzQxNDRjMjQ1MDViNmY3NGIyYzVkNjQxODgxYTFkNGM5ZGIw3oRY2g==: 00:35:32.074 01:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MDFmMTU2NzhjMGFlZGNiMzZmZWU0N2Q3ZDg1YmNjYTPZZRVL: ]] 00:35:32.074 01:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MDFmMTU2NzhjMGFlZGNiMzZmZWU0N2Q3ZDg1YmNjYTPZZRVL: 00:35:32.074 01:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:35:32.074 01:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:32.074 01:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:32.074 01:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:32.074 01:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:32.074 01:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:32.074 01:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:35:32.074 01:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:32.074 01:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:32.074 01:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:32.074 01:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 
-- # get_main_ns_ip 00:35:32.074 01:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:32.074 01:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:32.074 01:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:32.074 01:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:32.074 01:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:32.074 01:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:32.074 01:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:32.074 01:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:32.074 01:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:32.074 01:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:32.074 01:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:32.074 01:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:32.074 01:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:32.074 nvme0n1 00:35:32.074 01:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:32.074 01:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:32.074 01:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:32.074 01:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:32.074 01:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:32.074 01:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:32.074 01:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:32.074 01:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:32.074 01:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:32.074 01:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:32.074 01:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:32.074 01:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:32.074 01:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:35:32.074 01:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:32.074 01:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:32.074 01:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:32.074 01:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:32.074 01:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Y2E2Y2QxZjM3MTNmNjc3NjFlZmVjYjc4ODRhNzZmMDdlZGZjMTc5N2ViOTg5ZjJmZjQ3ZTdlZTZiZmMyYmFmMNVoH+I=: 
00:35:32.074 01:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:32.074 01:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:32.074 01:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:32.074 01:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Y2E2Y2QxZjM3MTNmNjc3NjFlZmVjYjc4ODRhNzZmMDdlZGZjMTc5N2ViOTg5ZjJmZjQ3ZTdlZTZiZmMyYmFmMNVoH+I=: 00:35:32.074 01:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:32.074 01:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:35:32.074 01:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:32.074 01:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:32.074 01:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:32.074 01:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:32.074 01:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:32.074 01:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:35:32.074 01:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:32.074 01:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:32.074 01:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:32.074 01:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:32.074 01:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:32.074 01:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:32.074 01:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:32.074 01:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:32.074 01:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:32.074 01:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:32.074 01:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:32.074 01:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:32.074 01:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:32.074 01:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:32.074 01:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:32.074 01:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:32.336 01:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:32.336 nvme0n1 00:35:32.336 01:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:32.336 01:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:32.336 01:13:04 
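Before each attach attempt, nvmet_auth_set_key re-keys the kernel target for the allowed host: the traced echo calls push the HMAC name, the DH group, and the DHHC-1 host and controller secrets into the host entry created earlier under /sys/kernel/config/nvmet/hosts. The redirection targets are again elided by the xtrace, so the attribute names below (the ones the kernel's nvmet in-band auth support exposes) are an assumption about where each value lands; the secrets are copied from the keyid 0 pass in the trace:

  host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
  echo 'hmac(sha256)' > "$host/dhchap_hash"      # digest under test for this pass
  echo ffdhe2048      > "$host/dhchap_dhgroup"   # DH group under test
  echo 'DHHC-1:00:MGJiM2E5NTBjNjYxZGFkY2U5YzE2Y2ZmMzc2YjRjNzhH0Zjs:' > "$host/dhchap_key"
  echo 'DHHC-1:03:Y2UyMjBkZGMzZDE5MWU1YTJkYzcwYWZiYWIwZWQ1YzkzYmU0MDRmYzNhODNhNDNkMDBmZTViMDRmOGRlMDE3OG2R90Q=:' > "$host/dhchap_ctrl_key"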
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:32.336 01:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:32.336 01:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:32.336 01:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:32.336 01:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:32.336 01:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:32.336 01:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:32.336 01:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:32.336 01:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:32.336 01:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:32.336 01:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:32.336 01:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:35:32.336 01:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:32.336 01:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:32.336 01:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:32.336 01:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:32.336 01:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGJiM2E5NTBjNjYxZGFkY2U5YzE2Y2ZmMzc2YjRjNzhH0Zjs: 00:35:32.336 01:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Y2UyMjBkZGMzZDE5MWU1YTJkYzcwYWZiYWIwZWQ1YzkzYmU0MDRmYzNhODNhNDNkMDBmZTViMDRmOGRlMDE3OG2R90Q=: 00:35:32.336 01:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:32.336 01:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:32.336 01:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGJiM2E5NTBjNjYxZGFkY2U5YzE2Y2ZmMzc2YjRjNzhH0Zjs: 00:35:32.336 01:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Y2UyMjBkZGMzZDE5MWU1YTJkYzcwYWZiYWIwZWQ1YzkzYmU0MDRmYzNhODNhNDNkMDBmZTViMDRmOGRlMDE3OG2R90Q=: ]] 00:35:32.336 01:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Y2UyMjBkZGMzZDE5MWU1YTJkYzcwYWZiYWIwZWQ1YzkzYmU0MDRmYzNhODNhNDNkMDBmZTViMDRmOGRlMDE3OG2R90Q=: 00:35:32.336 01:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:35:32.336 01:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:32.336 01:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:32.336 01:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:32.336 01:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:32.336 01:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:32.336 01:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:35:32.336 
01:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:32.336 01:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:32.336 01:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:32.336 01:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:32.336 01:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:32.336 01:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:32.336 01:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:32.336 01:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:32.336 01:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:32.336 01:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:32.336 01:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:32.336 01:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:32.336 01:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:32.336 01:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:32.336 01:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:32.336 01:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:32.336 01:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:32.595 nvme0n1 00:35:32.595 01:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:32.595 01:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:32.595 01:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:32.595 01:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:32.595 01:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:32.595 01:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:32.595 01:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:32.595 01:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:32.595 01:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:32.595 01:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:32.595 01:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:32.595 01:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:32.595 01:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:35:32.595 01:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:32.595 01:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # 
digest=sha256 00:35:32.595 01:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:32.595 01:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:32.595 01:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmJjODczNDIyM2ZjNWQyMGE5Mjk3MDdkNzZhNjUwYTY4NmY3NGMyMGFlZTdhOWUxdfAhpA==: 00:35:32.595 01:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDU0N2NkZDZiY2Y5MDlhYTE1ZjhmOGQ5MGFjZDU3ZmVhZDE3MjU1NTQ4Nzc5N2IyUxWCHg==: 00:35:32.595 01:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:32.595 01:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:32.595 01:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmJjODczNDIyM2ZjNWQyMGE5Mjk3MDdkNzZhNjUwYTY4NmY3NGMyMGFlZTdhOWUxdfAhpA==: 00:35:32.595 01:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDU0N2NkZDZiY2Y5MDlhYTE1ZjhmOGQ5MGFjZDU3ZmVhZDE3MjU1NTQ4Nzc5N2IyUxWCHg==: ]] 00:35:32.595 01:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDU0N2NkZDZiY2Y5MDlhYTE1ZjhmOGQ5MGFjZDU3ZmVhZDE3MjU1NTQ4Nzc5N2IyUxWCHg==: 00:35:32.595 01:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:35:32.595 01:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:32.595 01:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:32.595 01:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:32.595 01:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:32.595 01:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:32.595 01:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:35:32.595 01:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:32.595 01:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:32.595 01:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:32.595 01:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:32.595 01:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:32.595 01:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:32.595 01:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:32.595 01:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:32.595 01:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:32.595 01:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:32.595 01:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:32.595 01:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:32.595 01:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:32.595 01:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:32.595 01:13:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:32.595 01:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:32.595 01:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:32.853 nvme0n1 00:35:32.853 01:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:32.853 01:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:32.853 01:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:32.853 01:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:32.853 01:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:32.853 01:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:32.853 01:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:32.853 01:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:32.853 01:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:32.853 01:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:32.853 01:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:32.853 01:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:32.853 01:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:35:32.853 01:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:32.853 01:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:32.853 01:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:32.853 01:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:32.853 01:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OWM0ZWVmYWM1NGUxMDA2YmQ2ZDk2YjZmMjY3ZDRmOTWgGCwn: 00:35:32.853 01:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjM4YmZjZjQyOWI3MWJhYjEwMTQ0ZDlkNzQ0YmViY2GU1dcN: 00:35:32.853 01:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:32.853 01:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:32.853 01:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OWM0ZWVmYWM1NGUxMDA2YmQ2ZDk2YjZmMjY3ZDRmOTWgGCwn: 00:35:32.853 01:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjM4YmZjZjQyOWI3MWJhYjEwMTQ0ZDlkNzQ0YmViY2GU1dcN: ]] 00:35:32.853 01:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjM4YmZjZjQyOWI3MWJhYjEwMTQ0ZDlkNzQ0YmViY2GU1dcN: 00:35:32.853 01:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:35:32.853 01:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:32.853 01:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:32.853 01:13:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:32.853 01:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:32.853 01:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:32.853 01:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:35:32.853 01:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:32.853 01:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:32.853 01:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:32.853 01:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:32.853 01:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:32.853 01:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:32.853 01:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:32.853 01:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:32.853 01:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:32.853 01:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:32.853 01:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:32.853 01:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:32.853 01:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:32.853 01:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:32.853 01:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:32.853 01:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:32.853 01:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:33.111 nvme0n1 00:35:33.111 01:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:33.111 01:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:33.111 01:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:33.111 01:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:33.111 01:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:33.111 01:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:33.111 01:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:33.111 01:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:33.111 01:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:33.111 01:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:33.111 01:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:33.111 01:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:33.111 01:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:35:33.111 01:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:33.111 01:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:33.111 01:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:33.111 01:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:33.111 01:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YzAyZjllNzFhZDNhMzQxNDRjMjQ1MDViNmY3NGIyYzVkNjQxODgxYTFkNGM5ZGIw3oRY2g==: 00:35:33.111 01:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MDFmMTU2NzhjMGFlZGNiMzZmZWU0N2Q3ZDg1YmNjYTPZZRVL: 00:35:33.111 01:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:33.111 01:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:33.111 01:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzAyZjllNzFhZDNhMzQxNDRjMjQ1MDViNmY3NGIyYzVkNjQxODgxYTFkNGM5ZGIw3oRY2g==: 00:35:33.111 01:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MDFmMTU2NzhjMGFlZGNiMzZmZWU0N2Q3ZDg1YmNjYTPZZRVL: ]] 00:35:33.111 01:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MDFmMTU2NzhjMGFlZGNiMzZmZWU0N2Q3ZDg1YmNjYTPZZRVL: 00:35:33.111 01:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:35:33.111 01:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:33.111 01:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:33.111 01:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:33.111 01:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:33.111 01:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:33.111 01:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:35:33.111 01:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:33.111 01:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:33.111 01:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:33.111 01:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:33.111 01:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:33.111 01:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:33.111 01:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:33.111 01:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:33.111 01:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:33.111 01:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:33.111 01:13:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:33.111 01:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:33.111 01:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:33.111 01:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:33.111 01:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:33.111 01:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:33.111 01:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:33.370 nvme0n1 00:35:33.370 01:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:33.370 01:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:33.370 01:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:33.370 01:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:33.370 01:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:33.370 01:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:33.370 01:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:33.370 01:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:33.370 01:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:33.370 01:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:33.370 01:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:33.370 01:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:33.370 01:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:35:33.370 01:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:33.370 01:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:33.370 01:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:33.370 01:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:33.370 01:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Y2E2Y2QxZjM3MTNmNjc3NjFlZmVjYjc4ODRhNzZmMDdlZGZjMTc5N2ViOTg5ZjJmZjQ3ZTdlZTZiZmMyYmFmMNVoH+I=: 00:35:33.370 01:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:33.370 01:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:33.370 01:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:33.370 01:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Y2E2Y2QxZjM3MTNmNjc3NjFlZmVjYjc4ODRhNzZmMDdlZGZjMTc5N2ViOTg5ZjJmZjQ3ZTdlZTZiZmMyYmFmMNVoH+I=: 00:35:33.370 01:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:33.370 01:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 
00:35:33.370 01:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:33.370 01:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:33.370 01:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:33.370 01:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:33.370 01:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:33.370 01:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:35:33.370 01:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:33.370 01:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:33.370 01:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:33.370 01:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:33.370 01:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:33.370 01:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:33.370 01:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:33.370 01:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:33.370 01:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:33.370 01:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:33.370 01:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:33.370 01:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:33.370 01:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:33.370 01:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:33.370 01:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:33.370 01:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:33.370 01:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:33.629 nvme0n1 00:35:33.629 01:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:33.629 01:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:33.629 01:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:33.629 01:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:33.629 01:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:33.629 01:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:33.629 01:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:33.629 01:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:33.629 01:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:35:33.629 01:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:33.629 01:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:33.629 01:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:33.629 01:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:33.629 01:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:35:33.629 01:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:33.629 01:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:33.629 01:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:33.629 01:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:33.629 01:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGJiM2E5NTBjNjYxZGFkY2U5YzE2Y2ZmMzc2YjRjNzhH0Zjs: 00:35:33.629 01:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Y2UyMjBkZGMzZDE5MWU1YTJkYzcwYWZiYWIwZWQ1YzkzYmU0MDRmYzNhODNhNDNkMDBmZTViMDRmOGRlMDE3OG2R90Q=: 00:35:33.629 01:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:33.629 01:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:33.629 01:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGJiM2E5NTBjNjYxZGFkY2U5YzE2Y2ZmMzc2YjRjNzhH0Zjs: 00:35:33.629 01:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Y2UyMjBkZGMzZDE5MWU1YTJkYzcwYWZiYWIwZWQ1YzkzYmU0MDRmYzNhODNhNDNkMDBmZTViMDRmOGRlMDE3OG2R90Q=: ]] 00:35:33.629 01:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Y2UyMjBkZGMzZDE5MWU1YTJkYzcwYWZiYWIwZWQ1YzkzYmU0MDRmYzNhODNhNDNkMDBmZTViMDRmOGRlMDE3OG2R90Q=: 00:35:33.629 01:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:35:33.629 01:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:33.629 01:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:33.629 01:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:33.629 01:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:33.629 01:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:33.629 01:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:35:33.629 01:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:33.629 01:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:33.629 01:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:33.629 01:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:33.629 01:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:33.629 01:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:33.629 01:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local 
-A ip_candidates 00:35:33.629 01:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:33.629 01:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:33.629 01:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:33.629 01:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:33.629 01:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:33.629 01:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:33.629 01:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:33.629 01:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:33.629 01:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:33.629 01:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:34.195 nvme0n1 00:35:34.195 01:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:34.195 01:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:34.195 01:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:34.195 01:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:34.195 01:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:34.195 01:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:34.195 01:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:34.195 01:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:34.195 01:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:34.195 01:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:34.195 01:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:34.195 01:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:34.195 01:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:35:34.195 01:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:34.195 01:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:34.195 01:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:34.195 01:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:34.195 01:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmJjODczNDIyM2ZjNWQyMGE5Mjk3MDdkNzZhNjUwYTY4NmY3NGMyMGFlZTdhOWUxdfAhpA==: 00:35:34.195 01:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDU0N2NkZDZiY2Y5MDlhYTE1ZjhmOGQ5MGFjZDU3ZmVhZDE3MjU1NTQ4Nzc5N2IyUxWCHg==: 00:35:34.195 01:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:34.195 01:13:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:34.195 01:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmJjODczNDIyM2ZjNWQyMGE5Mjk3MDdkNzZhNjUwYTY4NmY3NGMyMGFlZTdhOWUxdfAhpA==: 00:35:34.195 01:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDU0N2NkZDZiY2Y5MDlhYTE1ZjhmOGQ5MGFjZDU3ZmVhZDE3MjU1NTQ4Nzc5N2IyUxWCHg==: ]] 00:35:34.195 01:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDU0N2NkZDZiY2Y5MDlhYTE1ZjhmOGQ5MGFjZDU3ZmVhZDE3MjU1NTQ4Nzc5N2IyUxWCHg==: 00:35:34.196 01:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:35:34.196 01:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:34.196 01:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:34.196 01:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:34.196 01:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:34.196 01:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:34.196 01:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:35:34.196 01:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:34.196 01:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:34.196 01:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:34.196 01:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:34.196 01:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:34.196 01:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:34.196 01:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:34.196 01:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:34.196 01:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:34.196 01:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:34.196 01:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:34.196 01:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:34.196 01:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:34.196 01:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:34.196 01:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:34.196 01:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:34.196 01:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:34.454 nvme0n1 00:35:34.454 01:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:34.454 01:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r 
'.[].name' 00:35:34.454 01:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:34.454 01:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:34.454 01:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:34.454 01:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:34.454 01:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:34.454 01:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:34.454 01:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:34.454 01:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:34.454 01:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:34.454 01:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:34.454 01:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:35:34.454 01:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:34.454 01:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:34.454 01:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:34.454 01:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:34.454 01:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OWM0ZWVmYWM1NGUxMDA2YmQ2ZDk2YjZmMjY3ZDRmOTWgGCwn: 00:35:34.454 01:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjM4YmZjZjQyOWI3MWJhYjEwMTQ0ZDlkNzQ0YmViY2GU1dcN: 00:35:34.454 01:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:34.454 01:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:34.454 01:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OWM0ZWVmYWM1NGUxMDA2YmQ2ZDk2YjZmMjY3ZDRmOTWgGCwn: 00:35:34.454 01:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjM4YmZjZjQyOWI3MWJhYjEwMTQ0ZDlkNzQ0YmViY2GU1dcN: ]] 00:35:34.454 01:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjM4YmZjZjQyOWI3MWJhYjEwMTQ0ZDlkNzQ0YmViY2GU1dcN: 00:35:34.454 01:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:35:34.454 01:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:34.454 01:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:34.454 01:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:34.454 01:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:34.454 01:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:34.454 01:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:35:34.454 01:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:34.454 01:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:35:34.454 01:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:34.454 01:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:34.454 01:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:34.454 01:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:34.454 01:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:34.454 01:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:34.454 01:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:34.454 01:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:34.454 01:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:34.454 01:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:34.454 01:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:34.454 01:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:34.454 01:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:34.454 01:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:34.454 01:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:34.713 nvme0n1 00:35:34.713 01:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:34.713 01:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:34.713 01:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:34.713 01:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:34.713 01:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:34.713 01:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:34.713 01:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:34.713 01:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:34.713 01:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:34.713 01:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:34.713 01:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:34.713 01:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:34.713 01:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:35:34.713 01:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:34.713 01:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:34.713 01:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:34.713 01:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 
00:35:34.713 01:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YzAyZjllNzFhZDNhMzQxNDRjMjQ1MDViNmY3NGIyYzVkNjQxODgxYTFkNGM5ZGIw3oRY2g==: 00:35:34.713 01:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MDFmMTU2NzhjMGFlZGNiMzZmZWU0N2Q3ZDg1YmNjYTPZZRVL: 00:35:34.713 01:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:34.713 01:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:34.713 01:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzAyZjllNzFhZDNhMzQxNDRjMjQ1MDViNmY3NGIyYzVkNjQxODgxYTFkNGM5ZGIw3oRY2g==: 00:35:34.713 01:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MDFmMTU2NzhjMGFlZGNiMzZmZWU0N2Q3ZDg1YmNjYTPZZRVL: ]] 00:35:34.713 01:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MDFmMTU2NzhjMGFlZGNiMzZmZWU0N2Q3ZDg1YmNjYTPZZRVL: 00:35:34.713 01:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:35:34.713 01:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:34.713 01:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:34.713 01:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:34.713 01:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:34.713 01:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:34.713 01:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:35:34.713 01:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:34.713 01:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:34.713 01:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:34.713 01:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:34.713 01:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:34.713 01:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:34.713 01:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:34.713 01:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:34.713 01:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:34.713 01:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:34.713 01:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:34.713 01:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:34.713 01:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:34.713 01:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:34.972 01:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:34.972 01:13:07 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:34.972 01:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:35.231 nvme0n1 00:35:35.231 01:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:35.231 01:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:35.231 01:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:35.231 01:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:35.231 01:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:35.231 01:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:35.231 01:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:35.231 01:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:35.231 01:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:35.231 01:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:35.231 01:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:35.231 01:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:35.231 01:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:35:35.231 01:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:35.231 01:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:35.231 01:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:35.231 01:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:35.231 01:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Y2E2Y2QxZjM3MTNmNjc3NjFlZmVjYjc4ODRhNzZmMDdlZGZjMTc5N2ViOTg5ZjJmZjQ3ZTdlZTZiZmMyYmFmMNVoH+I=: 00:35:35.231 01:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:35.231 01:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:35.231 01:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:35.231 01:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Y2E2Y2QxZjM3MTNmNjc3NjFlZmVjYjc4ODRhNzZmMDdlZGZjMTc5N2ViOTg5ZjJmZjQ3ZTdlZTZiZmMyYmFmMNVoH+I=: 00:35:35.231 01:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:35.231 01:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:35:35.231 01:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:35.231 01:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:35.231 01:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:35.231 01:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:35.231 01:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:35.231 01:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe4096 00:35:35.231 01:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:35.231 01:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:35.231 01:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:35.231 01:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:35.231 01:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:35.231 01:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:35.231 01:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:35.231 01:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:35.231 01:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:35.231 01:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:35.231 01:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:35.231 01:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:35.231 01:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:35.231 01:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:35.231 01:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:35.231 01:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:35.231 01:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:35.491 nvme0n1 00:35:35.491 01:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:35.491 01:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:35.491 01:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:35.491 01:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:35.491 01:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:35.491 01:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:35.491 01:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:35.491 01:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:35.491 01:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:35.491 01:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:35.491 01:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:35.491 01:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:35.491 01:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:35.491 01:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:35:35.491 01:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 
-- # local digest dhgroup keyid key ckey 00:35:35.491 01:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:35.491 01:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:35.491 01:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:35.491 01:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGJiM2E5NTBjNjYxZGFkY2U5YzE2Y2ZmMzc2YjRjNzhH0Zjs: 00:35:35.491 01:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Y2UyMjBkZGMzZDE5MWU1YTJkYzcwYWZiYWIwZWQ1YzkzYmU0MDRmYzNhODNhNDNkMDBmZTViMDRmOGRlMDE3OG2R90Q=: 00:35:35.491 01:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:35.491 01:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:35.491 01:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGJiM2E5NTBjNjYxZGFkY2U5YzE2Y2ZmMzc2YjRjNzhH0Zjs: 00:35:35.491 01:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Y2UyMjBkZGMzZDE5MWU1YTJkYzcwYWZiYWIwZWQ1YzkzYmU0MDRmYzNhODNhNDNkMDBmZTViMDRmOGRlMDE3OG2R90Q=: ]] 00:35:35.491 01:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Y2UyMjBkZGMzZDE5MWU1YTJkYzcwYWZiYWIwZWQ1YzkzYmU0MDRmYzNhODNhNDNkMDBmZTViMDRmOGRlMDE3OG2R90Q=: 00:35:35.491 01:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:35:35.491 01:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:35.491 01:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:35.491 01:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:35.491 01:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:35.491 01:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:35.491 01:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:35:35.491 01:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:35.491 01:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:35.491 01:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:35.491 01:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:35.491 01:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:35.491 01:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:35.491 01:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:35.491 01:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:35.491 01:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:35.491 01:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:35.491 01:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:35.491 01:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:35.491 01:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 
]] 00:35:35.491 01:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:35.491 01:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:35.491 01:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:35.491 01:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:36.058 nvme0n1 00:35:36.058 01:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:36.058 01:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:36.058 01:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:36.058 01:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:36.058 01:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:36.058 01:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:36.058 01:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:36.058 01:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:36.058 01:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:36.058 01:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:36.058 01:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:36.058 01:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:36.058 01:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:35:36.058 01:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:36.058 01:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:36.058 01:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:36.059 01:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:36.059 01:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmJjODczNDIyM2ZjNWQyMGE5Mjk3MDdkNzZhNjUwYTY4NmY3NGMyMGFlZTdhOWUxdfAhpA==: 00:35:36.059 01:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDU0N2NkZDZiY2Y5MDlhYTE1ZjhmOGQ5MGFjZDU3ZmVhZDE3MjU1NTQ4Nzc5N2IyUxWCHg==: 00:35:36.059 01:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:36.059 01:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:36.059 01:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmJjODczNDIyM2ZjNWQyMGE5Mjk3MDdkNzZhNjUwYTY4NmY3NGMyMGFlZTdhOWUxdfAhpA==: 00:35:36.059 01:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDU0N2NkZDZiY2Y5MDlhYTE1ZjhmOGQ5MGFjZDU3ZmVhZDE3MjU1NTQ4Nzc5N2IyUxWCHg==: ]] 00:35:36.059 01:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDU0N2NkZDZiY2Y5MDlhYTE1ZjhmOGQ5MGFjZDU3ZmVhZDE3MjU1NTQ4Nzc5N2IyUxWCHg==: 00:35:36.059 01:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 
1 00:35:36.059 01:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:36.059 01:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:36.059 01:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:36.059 01:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:36.059 01:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:36.059 01:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:35:36.059 01:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:36.059 01:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:36.059 01:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:36.059 01:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:36.059 01:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:36.059 01:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:36.059 01:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:36.059 01:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:36.059 01:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:36.059 01:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:36.059 01:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:36.059 01:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:36.059 01:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:36.059 01:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:36.059 01:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:36.059 01:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:36.059 01:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:36.683 nvme0n1 00:35:36.683 01:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:36.683 01:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:36.683 01:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:36.683 01:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:36.683 01:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:36.683 01:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:36.683 01:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:36.683 01:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:36.683 01:13:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:36.683 01:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:36.683 01:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:36.683 01:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:36.683 01:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:35:36.683 01:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:36.683 01:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:36.683 01:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:36.683 01:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:36.683 01:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OWM0ZWVmYWM1NGUxMDA2YmQ2ZDk2YjZmMjY3ZDRmOTWgGCwn: 00:35:36.683 01:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjM4YmZjZjQyOWI3MWJhYjEwMTQ0ZDlkNzQ0YmViY2GU1dcN: 00:35:36.683 01:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:36.683 01:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:36.683 01:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OWM0ZWVmYWM1NGUxMDA2YmQ2ZDk2YjZmMjY3ZDRmOTWgGCwn: 00:35:36.683 01:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjM4YmZjZjQyOWI3MWJhYjEwMTQ0ZDlkNzQ0YmViY2GU1dcN: ]] 00:35:36.683 01:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjM4YmZjZjQyOWI3MWJhYjEwMTQ0ZDlkNzQ0YmViY2GU1dcN: 00:35:36.683 01:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:35:36.683 01:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:36.683 01:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:36.683 01:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:36.683 01:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:36.683 01:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:36.683 01:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:35:36.683 01:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:36.683 01:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:36.683 01:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:36.683 01:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:36.683 01:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:36.683 01:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:36.683 01:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:36.683 01:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:36.683 01:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:36.683 01:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:36.683 01:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:36.683 01:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:36.683 01:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:36.683 01:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:36.683 01:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:36.683 01:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:36.683 01:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:37.250 nvme0n1 00:35:37.250 01:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:37.250 01:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:37.250 01:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:37.250 01:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:37.250 01:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:37.250 01:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:37.250 01:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:37.250 01:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:37.250 01:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:37.250 01:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:37.250 01:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:37.250 01:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:37.250 01:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:35:37.250 01:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:37.250 01:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:37.250 01:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:37.250 01:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:37.250 01:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YzAyZjllNzFhZDNhMzQxNDRjMjQ1MDViNmY3NGIyYzVkNjQxODgxYTFkNGM5ZGIw3oRY2g==: 00:35:37.250 01:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MDFmMTU2NzhjMGFlZGNiMzZmZWU0N2Q3ZDg1YmNjYTPZZRVL: 00:35:37.250 01:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:37.250 01:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:37.250 01:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzAyZjllNzFhZDNhMzQxNDRjMjQ1MDViNmY3NGIyYzVkNjQxODgxYTFkNGM5ZGIw3oRY2g==: 00:35:37.250 
01:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MDFmMTU2NzhjMGFlZGNiMzZmZWU0N2Q3ZDg1YmNjYTPZZRVL: ]] 00:35:37.250 01:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MDFmMTU2NzhjMGFlZGNiMzZmZWU0N2Q3ZDg1YmNjYTPZZRVL: 00:35:37.250 01:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:35:37.250 01:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:37.250 01:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:37.250 01:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:37.250 01:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:37.250 01:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:37.250 01:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:35:37.250 01:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:37.250 01:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:37.250 01:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:37.250 01:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:37.250 01:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:37.250 01:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:37.250 01:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:37.250 01:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:37.250 01:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:37.250 01:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:37.250 01:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:37.250 01:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:37.250 01:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:37.250 01:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:37.250 01:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:37.250 01:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:37.250 01:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:37.815 nvme0n1 00:35:37.815 01:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:37.815 01:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:37.815 01:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:37.815 01:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:37.815 01:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:35:37.815 01:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:37.815 01:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:37.815 01:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:37.815 01:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:37.815 01:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:38.072 01:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:38.072 01:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:38.072 01:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:35:38.072 01:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:38.072 01:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:38.072 01:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:38.072 01:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:38.072 01:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Y2E2Y2QxZjM3MTNmNjc3NjFlZmVjYjc4ODRhNzZmMDdlZGZjMTc5N2ViOTg5ZjJmZjQ3ZTdlZTZiZmMyYmFmMNVoH+I=: 00:35:38.072 01:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:38.072 01:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:38.072 01:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:38.072 01:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Y2E2Y2QxZjM3MTNmNjc3NjFlZmVjYjc4ODRhNzZmMDdlZGZjMTc5N2ViOTg5ZjJmZjQ3ZTdlZTZiZmMyYmFmMNVoH+I=: 00:35:38.072 01:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:38.072 01:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:35:38.072 01:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:38.072 01:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:38.072 01:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:38.072 01:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:38.072 01:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:38.072 01:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:35:38.072 01:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:38.072 01:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:38.072 01:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:38.072 01:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:38.072 01:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:38.072 01:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:38.072 01:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@770 -- # local -A ip_candidates 00:35:38.072 01:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:38.072 01:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:38.072 01:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:38.072 01:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:38.072 01:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:38.072 01:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:38.072 01:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:38.072 01:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:38.072 01:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:38.072 01:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:38.660 nvme0n1 00:35:38.660 01:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:38.660 01:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:38.660 01:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:38.660 01:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:38.660 01:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:38.660 01:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:38.660 01:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:38.660 01:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:38.660 01:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:38.660 01:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:38.661 01:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:38.661 01:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:38.661 01:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:38.661 01:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:35:38.661 01:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:38.661 01:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:38.661 01:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:38.661 01:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:38.661 01:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGJiM2E5NTBjNjYxZGFkY2U5YzE2Y2ZmMzc2YjRjNzhH0Zjs: 00:35:38.661 01:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Y2UyMjBkZGMzZDE5MWU1YTJkYzcwYWZiYWIwZWQ1YzkzYmU0MDRmYzNhODNhNDNkMDBmZTViMDRmOGRlMDE3OG2R90Q=: 00:35:38.661 01:13:11 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:38.661 01:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:38.661 01:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGJiM2E5NTBjNjYxZGFkY2U5YzE2Y2ZmMzc2YjRjNzhH0Zjs: 00:35:38.661 01:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Y2UyMjBkZGMzZDE5MWU1YTJkYzcwYWZiYWIwZWQ1YzkzYmU0MDRmYzNhODNhNDNkMDBmZTViMDRmOGRlMDE3OG2R90Q=: ]] 00:35:38.661 01:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Y2UyMjBkZGMzZDE5MWU1YTJkYzcwYWZiYWIwZWQ1YzkzYmU0MDRmYzNhODNhNDNkMDBmZTViMDRmOGRlMDE3OG2R90Q=: 00:35:38.661 01:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:35:38.661 01:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:38.661 01:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:38.661 01:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:38.661 01:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:38.661 01:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:38.661 01:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:35:38.661 01:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:38.661 01:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:38.661 01:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:38.661 01:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:38.661 01:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:38.661 01:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:38.661 01:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:38.661 01:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:38.661 01:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:38.661 01:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:38.661 01:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:38.661 01:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:38.661 01:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:38.661 01:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:38.661 01:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:38.661 01:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:38.661 01:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:39.616 nvme0n1 00:35:39.616 01:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:39.616 01:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:39.616 01:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:39.616 01:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:39.616 01:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:39.616 01:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:39.616 01:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:39.616 01:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:39.616 01:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:39.616 01:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:39.616 01:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:39.616 01:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:39.616 01:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:35:39.616 01:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:39.616 01:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:39.616 01:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:39.616 01:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:39.616 01:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmJjODczNDIyM2ZjNWQyMGE5Mjk3MDdkNzZhNjUwYTY4NmY3NGMyMGFlZTdhOWUxdfAhpA==: 00:35:39.616 01:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDU0N2NkZDZiY2Y5MDlhYTE1ZjhmOGQ5MGFjZDU3ZmVhZDE3MjU1NTQ4Nzc5N2IyUxWCHg==: 00:35:39.616 01:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:39.616 01:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:39.616 01:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmJjODczNDIyM2ZjNWQyMGE5Mjk3MDdkNzZhNjUwYTY4NmY3NGMyMGFlZTdhOWUxdfAhpA==: 00:35:39.616 01:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDU0N2NkZDZiY2Y5MDlhYTE1ZjhmOGQ5MGFjZDU3ZmVhZDE3MjU1NTQ4Nzc5N2IyUxWCHg==: ]] 00:35:39.616 01:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDU0N2NkZDZiY2Y5MDlhYTE1ZjhmOGQ5MGFjZDU3ZmVhZDE3MjU1NTQ4Nzc5N2IyUxWCHg==: 00:35:39.616 01:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:35:39.616 01:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:39.616 01:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:39.616 01:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:39.616 01:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:39.616 01:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:39.616 01:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:35:39.616 01:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:39.616 01:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:39.616 01:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:39.616 01:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:39.616 01:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:39.616 01:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:39.616 01:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:39.616 01:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:39.616 01:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:39.616 01:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:39.616 01:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:39.616 01:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:39.616 01:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:39.616 01:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:39.617 01:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:39.617 01:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:39.617 01:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:40.551 nvme0n1 00:35:40.551 01:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:40.551 01:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:40.551 01:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:40.551 01:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:40.551 01:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:40.551 01:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:40.551 01:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:40.551 01:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:40.551 01:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:40.551 01:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:40.551 01:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:40.551 01:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:40.551 01:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:35:40.551 01:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:40.551 01:13:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:40.551 01:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:40.551 01:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:40.551 01:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OWM0ZWVmYWM1NGUxMDA2YmQ2ZDk2YjZmMjY3ZDRmOTWgGCwn: 00:35:40.551 01:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjM4YmZjZjQyOWI3MWJhYjEwMTQ0ZDlkNzQ0YmViY2GU1dcN: 00:35:40.551 01:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:40.551 01:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:40.551 01:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OWM0ZWVmYWM1NGUxMDA2YmQ2ZDk2YjZmMjY3ZDRmOTWgGCwn: 00:35:40.551 01:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjM4YmZjZjQyOWI3MWJhYjEwMTQ0ZDlkNzQ0YmViY2GU1dcN: ]] 00:35:40.551 01:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjM4YmZjZjQyOWI3MWJhYjEwMTQ0ZDlkNzQ0YmViY2GU1dcN: 00:35:40.551 01:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:35:40.551 01:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:40.551 01:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:40.551 01:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:40.551 01:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:40.551 01:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:40.551 01:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:35:40.551 01:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:40.551 01:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:40.551 01:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:40.551 01:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:40.551 01:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:40.551 01:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:40.551 01:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:40.551 01:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:40.551 01:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:40.551 01:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:40.551 01:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:40.551 01:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:40.551 01:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:40.551 01:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:40.551 01:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:40.551 01:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:40.551 01:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:41.484 nvme0n1 00:35:41.484 01:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:41.484 01:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:41.484 01:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:41.484 01:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:41.484 01:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:41.484 01:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:41.484 01:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:41.484 01:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:41.484 01:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:41.484 01:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:41.484 01:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:41.484 01:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:41.484 01:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:35:41.484 01:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:41.484 01:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:41.484 01:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:41.484 01:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:41.484 01:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YzAyZjllNzFhZDNhMzQxNDRjMjQ1MDViNmY3NGIyYzVkNjQxODgxYTFkNGM5ZGIw3oRY2g==: 00:35:41.484 01:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MDFmMTU2NzhjMGFlZGNiMzZmZWU0N2Q3ZDg1YmNjYTPZZRVL: 00:35:41.484 01:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:41.484 01:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:41.484 01:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzAyZjllNzFhZDNhMzQxNDRjMjQ1MDViNmY3NGIyYzVkNjQxODgxYTFkNGM5ZGIw3oRY2g==: 00:35:41.484 01:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MDFmMTU2NzhjMGFlZGNiMzZmZWU0N2Q3ZDg1YmNjYTPZZRVL: ]] 00:35:41.484 01:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MDFmMTU2NzhjMGFlZGNiMzZmZWU0N2Q3ZDg1YmNjYTPZZRVL: 00:35:41.484 01:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:35:41.484 01:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:41.485 01:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:41.485 01:13:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:41.485 01:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:41.485 01:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:41.485 01:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:35:41.485 01:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:41.485 01:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:41.485 01:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:41.485 01:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:41.485 01:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:41.485 01:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:41.485 01:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:41.485 01:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:41.485 01:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:41.485 01:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:41.485 01:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:41.485 01:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:41.485 01:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:41.485 01:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:41.485 01:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:41.485 01:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:41.485 01:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:42.444 nvme0n1 00:35:42.444 01:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:42.444 01:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:42.444 01:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:42.444 01:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:42.444 01:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:42.444 01:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:42.444 01:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:42.444 01:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:42.444 01:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:42.444 01:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:42.701 01:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:42.701 01:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:42.701 01:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:35:42.701 01:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:42.701 01:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:42.701 01:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:42.701 01:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:42.701 01:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Y2E2Y2QxZjM3MTNmNjc3NjFlZmVjYjc4ODRhNzZmMDdlZGZjMTc5N2ViOTg5ZjJmZjQ3ZTdlZTZiZmMyYmFmMNVoH+I=: 00:35:42.701 01:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:42.701 01:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:42.701 01:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:42.701 01:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Y2E2Y2QxZjM3MTNmNjc3NjFlZmVjYjc4ODRhNzZmMDdlZGZjMTc5N2ViOTg5ZjJmZjQ3ZTdlZTZiZmMyYmFmMNVoH+I=: 00:35:42.701 01:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:42.701 01:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:35:42.701 01:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:42.701 01:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:42.701 01:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:42.701 01:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:42.701 01:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:42.701 01:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:35:42.701 01:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:42.701 01:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:42.701 01:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:42.701 01:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:42.701 01:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:42.701 01:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:42.701 01:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:42.701 01:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:42.701 01:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:42.701 01:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:42.701 01:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:42.701 01:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:42.701 01:13:15 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:42.701 01:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:42.701 01:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:42.701 01:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:42.701 01:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:43.631 nvme0n1 00:35:43.631 01:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:43.631 01:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:43.631 01:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:43.631 01:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:43.631 01:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:43.631 01:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:43.631 01:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:43.631 01:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:43.631 01:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:43.631 01:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:43.631 01:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:43.631 01:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:35:43.631 01:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:43.631 01:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:43.631 01:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:35:43.631 01:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:43.631 01:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:43.631 01:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:43.631 01:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:43.631 01:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGJiM2E5NTBjNjYxZGFkY2U5YzE2Y2ZmMzc2YjRjNzhH0Zjs: 00:35:43.631 01:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Y2UyMjBkZGMzZDE5MWU1YTJkYzcwYWZiYWIwZWQ1YzkzYmU0MDRmYzNhODNhNDNkMDBmZTViMDRmOGRlMDE3OG2R90Q=: 00:35:43.631 01:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:43.631 01:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:43.631 01:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGJiM2E5NTBjNjYxZGFkY2U5YzE2Y2ZmMzc2YjRjNzhH0Zjs: 00:35:43.631 01:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Y2UyMjBkZGMzZDE5MWU1YTJkYzcwYWZiYWIwZWQ1YzkzYmU0MDRmYzNhODNhNDNkMDBmZTViMDRmOGRlMDE3OG2R90Q=: ]] 00:35:43.631 
01:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Y2UyMjBkZGMzZDE5MWU1YTJkYzcwYWZiYWIwZWQ1YzkzYmU0MDRmYzNhODNhNDNkMDBmZTViMDRmOGRlMDE3OG2R90Q=: 00:35:43.631 01:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:35:43.631 01:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:43.631 01:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:43.631 01:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:43.631 01:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:43.631 01:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:43.631 01:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:35:43.631 01:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:43.631 01:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:43.631 01:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:43.631 01:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:43.631 01:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:43.631 01:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:43.631 01:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:43.631 01:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:43.631 01:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:43.631 01:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:43.631 01:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:43.631 01:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:43.631 01:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:43.631 01:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:43.631 01:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:43.631 01:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:43.631 01:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:43.889 nvme0n1 00:35:43.889 01:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:43.889 01:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:43.889 01:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:43.889 01:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:43.889 01:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:43.889 01:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:43.889 01:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:43.889 01:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:43.889 01:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:43.889 01:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:43.889 01:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:43.889 01:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:43.889 01:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:35:43.889 01:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:43.889 01:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:43.889 01:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:43.889 01:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:43.889 01:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmJjODczNDIyM2ZjNWQyMGE5Mjk3MDdkNzZhNjUwYTY4NmY3NGMyMGFlZTdhOWUxdfAhpA==: 00:35:43.889 01:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDU0N2NkZDZiY2Y5MDlhYTE1ZjhmOGQ5MGFjZDU3ZmVhZDE3MjU1NTQ4Nzc5N2IyUxWCHg==: 00:35:43.889 01:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:43.889 01:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:43.889 01:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmJjODczNDIyM2ZjNWQyMGE5Mjk3MDdkNzZhNjUwYTY4NmY3NGMyMGFlZTdhOWUxdfAhpA==: 00:35:43.889 01:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDU0N2NkZDZiY2Y5MDlhYTE1ZjhmOGQ5MGFjZDU3ZmVhZDE3MjU1NTQ4Nzc5N2IyUxWCHg==: ]] 00:35:43.889 01:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDU0N2NkZDZiY2Y5MDlhYTE1ZjhmOGQ5MGFjZDU3ZmVhZDE3MjU1NTQ4Nzc5N2IyUxWCHg==: 00:35:43.889 01:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:35:43.889 01:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:43.889 01:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:43.889 01:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:43.889 01:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:43.889 01:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:43.889 01:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:35:43.889 01:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:43.889 01:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:43.889 01:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:43.889 01:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:43.889 01:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@769 -- # local ip 00:35:43.889 01:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:43.889 01:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:43.889 01:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:43.889 01:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:43.890 01:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:43.890 01:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:43.890 01:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:43.890 01:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:43.890 01:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:43.890 01:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:43.890 01:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:43.890 01:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:44.147 nvme0n1 00:35:44.147 01:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:44.147 01:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:44.147 01:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:44.147 01:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:44.147 01:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:44.147 01:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:44.147 01:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:44.147 01:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:44.147 01:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:44.147 01:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:44.147 01:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:44.147 01:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:44.147 01:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:35:44.147 01:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:44.147 01:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:44.147 01:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:44.147 01:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:44.147 01:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OWM0ZWVmYWM1NGUxMDA2YmQ2ZDk2YjZmMjY3ZDRmOTWgGCwn: 00:35:44.147 01:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:01:MjM4YmZjZjQyOWI3MWJhYjEwMTQ0ZDlkNzQ0YmViY2GU1dcN: 00:35:44.147 01:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:44.147 01:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:44.147 01:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OWM0ZWVmYWM1NGUxMDA2YmQ2ZDk2YjZmMjY3ZDRmOTWgGCwn: 00:35:44.147 01:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjM4YmZjZjQyOWI3MWJhYjEwMTQ0ZDlkNzQ0YmViY2GU1dcN: ]] 00:35:44.147 01:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjM4YmZjZjQyOWI3MWJhYjEwMTQ0ZDlkNzQ0YmViY2GU1dcN: 00:35:44.147 01:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:35:44.147 01:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:44.147 01:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:44.147 01:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:44.147 01:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:44.147 01:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:44.147 01:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:35:44.147 01:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:44.147 01:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:44.147 01:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:44.147 01:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:44.147 01:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:44.147 01:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:44.147 01:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:44.147 01:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:44.147 01:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:44.147 01:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:44.147 01:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:44.147 01:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:44.147 01:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:44.147 01:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:44.147 01:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:44.147 01:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:44.147 01:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:44.405 nvme0n1 00:35:44.405 01:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:44.405 01:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:44.405 01:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:44.405 01:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:44.405 01:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:44.405 01:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:44.405 01:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:44.405 01:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:44.405 01:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:44.405 01:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:44.405 01:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:44.405 01:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:44.405 01:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:35:44.405 01:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:44.405 01:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:44.405 01:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:44.405 01:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:44.405 01:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YzAyZjllNzFhZDNhMzQxNDRjMjQ1MDViNmY3NGIyYzVkNjQxODgxYTFkNGM5ZGIw3oRY2g==: 00:35:44.405 01:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MDFmMTU2NzhjMGFlZGNiMzZmZWU0N2Q3ZDg1YmNjYTPZZRVL: 00:35:44.405 01:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:44.405 01:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:44.405 01:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzAyZjllNzFhZDNhMzQxNDRjMjQ1MDViNmY3NGIyYzVkNjQxODgxYTFkNGM5ZGIw3oRY2g==: 00:35:44.405 01:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MDFmMTU2NzhjMGFlZGNiMzZmZWU0N2Q3ZDg1YmNjYTPZZRVL: ]] 00:35:44.405 01:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MDFmMTU2NzhjMGFlZGNiMzZmZWU0N2Q3ZDg1YmNjYTPZZRVL: 00:35:44.405 01:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:35:44.405 01:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:44.405 01:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:44.405 01:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:44.405 01:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:44.405 01:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:44.405 01:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:35:44.405 01:13:16 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:44.405 01:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:44.405 01:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:44.405 01:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:44.405 01:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:44.405 01:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:44.405 01:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:44.405 01:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:44.405 01:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:44.405 01:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:44.405 01:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:44.405 01:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:44.405 01:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:44.405 01:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:44.405 01:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:44.405 01:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:44.405 01:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:44.405 nvme0n1 00:35:44.405 01:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:44.405 01:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:44.405 01:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:44.405 01:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:44.405 01:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:44.405 01:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:44.663 01:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:44.663 01:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:44.663 01:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:44.663 01:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:44.663 01:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:44.663 01:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:44.663 01:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:35:44.663 01:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:44.663 01:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 
00:35:44.663 01:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:44.663 01:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:44.663 01:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Y2E2Y2QxZjM3MTNmNjc3NjFlZmVjYjc4ODRhNzZmMDdlZGZjMTc5N2ViOTg5ZjJmZjQ3ZTdlZTZiZmMyYmFmMNVoH+I=: 00:35:44.663 01:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:44.663 01:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:44.663 01:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:44.663 01:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Y2E2Y2QxZjM3MTNmNjc3NjFlZmVjYjc4ODRhNzZmMDdlZGZjMTc5N2ViOTg5ZjJmZjQ3ZTdlZTZiZmMyYmFmMNVoH+I=: 00:35:44.663 01:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:44.663 01:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:35:44.663 01:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:44.663 01:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:44.663 01:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:44.663 01:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:44.663 01:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:44.663 01:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:35:44.663 01:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:44.663 01:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:44.663 01:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:44.663 01:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:44.663 01:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:44.663 01:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:44.663 01:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:44.663 01:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:44.663 01:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:44.663 01:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:44.663 01:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:44.663 01:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:44.663 01:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:44.663 01:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:44.663 01:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:44.663 01:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:35:44.663 01:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:44.663 nvme0n1 00:35:44.663 01:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:44.663 01:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:44.663 01:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:44.663 01:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:44.663 01:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:44.663 01:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:44.663 01:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:44.663 01:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:44.663 01:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:44.663 01:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:44.922 01:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:44.922 01:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:44.922 01:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:44.922 01:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:35:44.922 01:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:44.922 01:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:44.922 01:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:44.922 01:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:44.922 01:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGJiM2E5NTBjNjYxZGFkY2U5YzE2Y2ZmMzc2YjRjNzhH0Zjs: 00:35:44.922 01:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Y2UyMjBkZGMzZDE5MWU1YTJkYzcwYWZiYWIwZWQ1YzkzYmU0MDRmYzNhODNhNDNkMDBmZTViMDRmOGRlMDE3OG2R90Q=: 00:35:44.922 01:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:44.922 01:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:44.922 01:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGJiM2E5NTBjNjYxZGFkY2U5YzE2Y2ZmMzc2YjRjNzhH0Zjs: 00:35:44.922 01:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Y2UyMjBkZGMzZDE5MWU1YTJkYzcwYWZiYWIwZWQ1YzkzYmU0MDRmYzNhODNhNDNkMDBmZTViMDRmOGRlMDE3OG2R90Q=: ]] 00:35:44.922 01:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Y2UyMjBkZGMzZDE5MWU1YTJkYzcwYWZiYWIwZWQ1YzkzYmU0MDRmYzNhODNhNDNkMDBmZTViMDRmOGRlMDE3OG2R90Q=: 00:35:44.922 01:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:35:44.922 01:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:44.922 01:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:44.922 01:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:44.922 01:13:17 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:44.922 01:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:44.922 01:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:35:44.922 01:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:44.922 01:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:44.922 01:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:44.922 01:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:44.922 01:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:44.922 01:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:44.922 01:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:44.922 01:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:44.922 01:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:44.922 01:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:44.922 01:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:44.922 01:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:44.922 01:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:44.922 01:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:44.922 01:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:44.922 01:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:44.922 01:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:44.922 nvme0n1 00:35:44.922 01:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:44.922 01:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:44.922 01:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:44.922 01:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:44.922 01:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:44.922 01:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:45.181 01:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:45.181 01:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:45.181 01:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:45.181 01:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:45.181 01:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:45.181 01:13:17 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:45.181 01:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:35:45.181 01:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:45.181 01:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:45.181 01:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:45.181 01:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:45.181 01:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmJjODczNDIyM2ZjNWQyMGE5Mjk3MDdkNzZhNjUwYTY4NmY3NGMyMGFlZTdhOWUxdfAhpA==: 00:35:45.181 01:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDU0N2NkZDZiY2Y5MDlhYTE1ZjhmOGQ5MGFjZDU3ZmVhZDE3MjU1NTQ4Nzc5N2IyUxWCHg==: 00:35:45.181 01:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:45.181 01:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:45.181 01:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmJjODczNDIyM2ZjNWQyMGE5Mjk3MDdkNzZhNjUwYTY4NmY3NGMyMGFlZTdhOWUxdfAhpA==: 00:35:45.181 01:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDU0N2NkZDZiY2Y5MDlhYTE1ZjhmOGQ5MGFjZDU3ZmVhZDE3MjU1NTQ4Nzc5N2IyUxWCHg==: ]] 00:35:45.181 01:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDU0N2NkZDZiY2Y5MDlhYTE1ZjhmOGQ5MGFjZDU3ZmVhZDE3MjU1NTQ4Nzc5N2IyUxWCHg==: 00:35:45.181 01:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:35:45.181 01:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:45.181 01:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:45.181 01:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:45.181 01:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:45.181 01:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:45.181 01:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:35:45.181 01:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:45.181 01:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:45.181 01:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:45.181 01:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:45.181 01:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:45.181 01:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:45.181 01:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:45.181 01:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:45.181 01:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:45.181 01:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:45.181 01:13:17 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:45.181 01:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:45.181 01:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:45.181 01:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:45.181 01:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:45.181 01:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:45.181 01:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:45.181 nvme0n1 00:35:45.181 01:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:45.181 01:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:45.181 01:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:45.181 01:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:45.181 01:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:45.181 01:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:45.440 01:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:45.440 01:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:45.440 01:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:45.440 01:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:45.440 01:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:45.440 01:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:45.440 01:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:35:45.440 01:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:45.440 01:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:45.440 01:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:45.440 01:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:45.440 01:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OWM0ZWVmYWM1NGUxMDA2YmQ2ZDk2YjZmMjY3ZDRmOTWgGCwn: 00:35:45.440 01:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjM4YmZjZjQyOWI3MWJhYjEwMTQ0ZDlkNzQ0YmViY2GU1dcN: 00:35:45.440 01:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:45.440 01:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:45.440 01:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OWM0ZWVmYWM1NGUxMDA2YmQ2ZDk2YjZmMjY3ZDRmOTWgGCwn: 00:35:45.440 01:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjM4YmZjZjQyOWI3MWJhYjEwMTQ0ZDlkNzQ0YmViY2GU1dcN: ]] 00:35:45.440 01:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:01:MjM4YmZjZjQyOWI3MWJhYjEwMTQ0ZDlkNzQ0YmViY2GU1dcN: 00:35:45.440 01:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:35:45.440 01:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:45.440 01:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:45.440 01:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:45.440 01:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:45.440 01:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:45.440 01:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:35:45.440 01:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:45.440 01:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:45.440 01:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:45.440 01:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:45.440 01:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:45.440 01:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:45.440 01:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:45.440 01:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:45.440 01:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:45.440 01:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:45.440 01:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:45.440 01:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:45.440 01:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:45.440 01:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:45.440 01:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:45.440 01:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:45.440 01:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:45.440 nvme0n1 00:35:45.440 01:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:45.440 01:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:45.440 01:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:45.440 01:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:45.440 01:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:45.440 01:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:45.698 01:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 
== \n\v\m\e\0 ]] 00:35:45.698 01:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:45.698 01:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:45.698 01:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:45.698 01:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:45.698 01:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:45.698 01:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:35:45.698 01:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:45.698 01:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:45.698 01:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:45.698 01:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:45.698 01:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YzAyZjllNzFhZDNhMzQxNDRjMjQ1MDViNmY3NGIyYzVkNjQxODgxYTFkNGM5ZGIw3oRY2g==: 00:35:45.698 01:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MDFmMTU2NzhjMGFlZGNiMzZmZWU0N2Q3ZDg1YmNjYTPZZRVL: 00:35:45.698 01:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:45.698 01:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:45.698 01:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzAyZjllNzFhZDNhMzQxNDRjMjQ1MDViNmY3NGIyYzVkNjQxODgxYTFkNGM5ZGIw3oRY2g==: 00:35:45.698 01:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MDFmMTU2NzhjMGFlZGNiMzZmZWU0N2Q3ZDg1YmNjYTPZZRVL: ]] 00:35:45.698 01:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MDFmMTU2NzhjMGFlZGNiMzZmZWU0N2Q3ZDg1YmNjYTPZZRVL: 00:35:45.698 01:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:35:45.698 01:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:45.698 01:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:45.698 01:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:45.698 01:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:45.698 01:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:45.698 01:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:35:45.698 01:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:45.698 01:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:45.698 01:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:45.698 01:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:45.699 01:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:45.699 01:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:45.699 01:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local 
-A ip_candidates 00:35:45.699 01:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:45.699 01:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:45.699 01:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:45.699 01:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:45.699 01:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:45.699 01:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:45.699 01:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:45.699 01:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:45.699 01:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:45.699 01:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:45.957 nvme0n1 00:35:45.957 01:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:45.957 01:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:45.957 01:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:45.957 01:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:45.957 01:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:45.957 01:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:45.957 01:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:45.957 01:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:45.957 01:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:45.957 01:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:45.957 01:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:45.957 01:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:45.957 01:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:35:45.957 01:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:45.957 01:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:45.957 01:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:45.957 01:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:45.957 01:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Y2E2Y2QxZjM3MTNmNjc3NjFlZmVjYjc4ODRhNzZmMDdlZGZjMTc5N2ViOTg5ZjJmZjQ3ZTdlZTZiZmMyYmFmMNVoH+I=: 00:35:45.957 01:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:45.957 01:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:45.957 01:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:45.957 
01:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Y2E2Y2QxZjM3MTNmNjc3NjFlZmVjYjc4ODRhNzZmMDdlZGZjMTc5N2ViOTg5ZjJmZjQ3ZTdlZTZiZmMyYmFmMNVoH+I=: 00:35:45.957 01:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:45.957 01:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:35:45.957 01:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:45.957 01:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:45.957 01:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:45.957 01:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:45.957 01:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:45.957 01:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:35:45.957 01:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:45.957 01:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:45.957 01:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:45.957 01:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:45.957 01:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:45.957 01:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:45.957 01:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:45.957 01:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:45.957 01:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:45.957 01:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:45.957 01:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:45.957 01:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:45.957 01:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:45.957 01:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:45.957 01:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:45.957 01:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:45.957 01:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:46.216 nvme0n1 00:35:46.216 01:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:46.216 01:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:46.216 01:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:46.216 01:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:46.216 01:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:46.216 
01:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:46.216 01:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:46.216 01:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:46.216 01:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:46.216 01:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:46.216 01:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:46.216 01:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:46.216 01:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:46.216 01:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:35:46.216 01:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:46.216 01:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:46.216 01:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:46.216 01:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:46.216 01:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGJiM2E5NTBjNjYxZGFkY2U5YzE2Y2ZmMzc2YjRjNzhH0Zjs: 00:35:46.216 01:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Y2UyMjBkZGMzZDE5MWU1YTJkYzcwYWZiYWIwZWQ1YzkzYmU0MDRmYzNhODNhNDNkMDBmZTViMDRmOGRlMDE3OG2R90Q=: 00:35:46.216 01:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:46.216 01:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:46.216 01:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGJiM2E5NTBjNjYxZGFkY2U5YzE2Y2ZmMzc2YjRjNzhH0Zjs: 00:35:46.216 01:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Y2UyMjBkZGMzZDE5MWU1YTJkYzcwYWZiYWIwZWQ1YzkzYmU0MDRmYzNhODNhNDNkMDBmZTViMDRmOGRlMDE3OG2R90Q=: ]] 00:35:46.216 01:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Y2UyMjBkZGMzZDE5MWU1YTJkYzcwYWZiYWIwZWQ1YzkzYmU0MDRmYzNhODNhNDNkMDBmZTViMDRmOGRlMDE3OG2R90Q=: 00:35:46.216 01:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:35:46.216 01:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:46.216 01:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:46.216 01:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:46.216 01:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:46.216 01:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:46.216 01:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:35:46.216 01:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:46.216 01:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:46.216 01:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:35:46.216 01:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:46.216 01:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:46.216 01:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:46.216 01:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:46.216 01:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:46.216 01:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:46.216 01:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:46.216 01:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:46.216 01:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:46.216 01:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:46.216 01:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:46.216 01:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:46.216 01:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:46.216 01:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:46.476 nvme0n1 00:35:46.476 01:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:46.476 01:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:46.476 01:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:46.476 01:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:46.476 01:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:46.476 01:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:46.476 01:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:46.476 01:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:46.476 01:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:46.476 01:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:46.476 01:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:46.476 01:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:46.476 01:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:35:46.476 01:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:46.476 01:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:46.476 01:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:46.476 01:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:46.476 01:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:ZmJjODczNDIyM2ZjNWQyMGE5Mjk3MDdkNzZhNjUwYTY4NmY3NGMyMGFlZTdhOWUxdfAhpA==: 00:35:46.476 01:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDU0N2NkZDZiY2Y5MDlhYTE1ZjhmOGQ5MGFjZDU3ZmVhZDE3MjU1NTQ4Nzc5N2IyUxWCHg==: 00:35:46.476 01:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:46.476 01:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:46.476 01:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmJjODczNDIyM2ZjNWQyMGE5Mjk3MDdkNzZhNjUwYTY4NmY3NGMyMGFlZTdhOWUxdfAhpA==: 00:35:46.476 01:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDU0N2NkZDZiY2Y5MDlhYTE1ZjhmOGQ5MGFjZDU3ZmVhZDE3MjU1NTQ4Nzc5N2IyUxWCHg==: ]] 00:35:46.476 01:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDU0N2NkZDZiY2Y5MDlhYTE1ZjhmOGQ5MGFjZDU3ZmVhZDE3MjU1NTQ4Nzc5N2IyUxWCHg==: 00:35:46.476 01:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:35:46.476 01:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:46.476 01:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:46.476 01:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:46.476 01:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:46.476 01:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:46.476 01:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:35:46.476 01:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:46.476 01:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:46.476 01:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:46.476 01:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:46.476 01:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:46.476 01:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:46.476 01:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:46.476 01:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:46.476 01:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:46.476 01:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:46.476 01:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:46.476 01:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:46.476 01:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:46.476 01:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:46.476 01:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:46.476 01:13:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:46.476 01:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:46.735 nvme0n1 00:35:46.735 01:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:46.735 01:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:46.735 01:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:46.735 01:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:46.735 01:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:46.735 01:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:46.735 01:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:46.735 01:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:46.735 01:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:46.735 01:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:46.994 01:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:46.994 01:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:46.994 01:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:35:46.994 01:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:46.994 01:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:46.994 01:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:46.994 01:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:46.994 01:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OWM0ZWVmYWM1NGUxMDA2YmQ2ZDk2YjZmMjY3ZDRmOTWgGCwn: 00:35:46.994 01:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjM4YmZjZjQyOWI3MWJhYjEwMTQ0ZDlkNzQ0YmViY2GU1dcN: 00:35:46.994 01:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:46.994 01:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:46.994 01:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OWM0ZWVmYWM1NGUxMDA2YmQ2ZDk2YjZmMjY3ZDRmOTWgGCwn: 00:35:46.994 01:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjM4YmZjZjQyOWI3MWJhYjEwMTQ0ZDlkNzQ0YmViY2GU1dcN: ]] 00:35:46.994 01:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjM4YmZjZjQyOWI3MWJhYjEwMTQ0ZDlkNzQ0YmViY2GU1dcN: 00:35:46.994 01:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:35:46.994 01:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:46.994 01:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:46.994 01:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:46.994 01:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:46.994 01:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:46.994 01:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:35:46.994 01:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:46.994 01:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:46.994 01:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:46.994 01:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:46.994 01:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:46.994 01:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:46.994 01:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:46.994 01:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:46.994 01:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:46.994 01:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:46.994 01:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:46.994 01:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:46.994 01:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:46.994 01:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:46.994 01:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:46.994 01:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:46.994 01:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:47.252 nvme0n1 00:35:47.253 01:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:47.253 01:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:47.253 01:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:47.253 01:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:47.253 01:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:47.253 01:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:47.253 01:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:47.253 01:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:47.253 01:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:47.253 01:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:47.253 01:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:47.253 01:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:47.253 01:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe4096 3 00:35:47.253 01:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:47.253 01:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:47.253 01:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:47.253 01:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:47.253 01:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YzAyZjllNzFhZDNhMzQxNDRjMjQ1MDViNmY3NGIyYzVkNjQxODgxYTFkNGM5ZGIw3oRY2g==: 00:35:47.253 01:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MDFmMTU2NzhjMGFlZGNiMzZmZWU0N2Q3ZDg1YmNjYTPZZRVL: 00:35:47.253 01:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:47.253 01:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:47.253 01:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzAyZjllNzFhZDNhMzQxNDRjMjQ1MDViNmY3NGIyYzVkNjQxODgxYTFkNGM5ZGIw3oRY2g==: 00:35:47.253 01:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MDFmMTU2NzhjMGFlZGNiMzZmZWU0N2Q3ZDg1YmNjYTPZZRVL: ]] 00:35:47.253 01:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MDFmMTU2NzhjMGFlZGNiMzZmZWU0N2Q3ZDg1YmNjYTPZZRVL: 00:35:47.253 01:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:35:47.253 01:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:47.253 01:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:47.253 01:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:47.253 01:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:47.253 01:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:47.253 01:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:35:47.253 01:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:47.253 01:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:47.253 01:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:47.253 01:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:47.253 01:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:47.253 01:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:47.253 01:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:47.253 01:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:47.253 01:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:47.253 01:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:47.253 01:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:47.253 01:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:47.253 01:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:47.253 01:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:47.253 01:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:47.253 01:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:47.253 01:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:47.512 nvme0n1 00:35:47.512 01:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:47.512 01:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:47.512 01:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:47.512 01:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:47.512 01:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:47.512 01:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:47.512 01:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:47.512 01:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:47.512 01:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:47.512 01:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:47.512 01:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:47.512 01:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:47.512 01:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:35:47.512 01:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:47.512 01:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:47.512 01:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:47.512 01:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:47.512 01:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Y2E2Y2QxZjM3MTNmNjc3NjFlZmVjYjc4ODRhNzZmMDdlZGZjMTc5N2ViOTg5ZjJmZjQ3ZTdlZTZiZmMyYmFmMNVoH+I=: 00:35:47.512 01:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:47.512 01:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:47.512 01:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:47.512 01:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Y2E2Y2QxZjM3MTNmNjc3NjFlZmVjYjc4ODRhNzZmMDdlZGZjMTc5N2ViOTg5ZjJmZjQ3ZTdlZTZiZmMyYmFmMNVoH+I=: 00:35:47.512 01:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:47.512 01:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:35:47.512 01:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:47.512 01:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:47.512 01:13:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:47.512 01:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:47.512 01:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:47.512 01:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:35:47.512 01:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:47.512 01:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:47.512 01:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:47.512 01:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:47.512 01:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:47.512 01:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:47.512 01:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:47.512 01:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:47.512 01:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:47.512 01:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:47.512 01:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:47.512 01:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:47.512 01:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:47.512 01:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:47.512 01:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:47.512 01:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:47.512 01:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:48.079 nvme0n1 00:35:48.079 01:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:48.079 01:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:48.079 01:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:48.079 01:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:48.079 01:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:48.079 01:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:48.079 01:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:48.079 01:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:48.079 01:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:48.079 01:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:48.079 01:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:48.079 01:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:48.079 01:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:48.079 01:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:35:48.079 01:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:48.079 01:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:48.079 01:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:48.079 01:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:48.079 01:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGJiM2E5NTBjNjYxZGFkY2U5YzE2Y2ZmMzc2YjRjNzhH0Zjs: 00:35:48.079 01:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Y2UyMjBkZGMzZDE5MWU1YTJkYzcwYWZiYWIwZWQ1YzkzYmU0MDRmYzNhODNhNDNkMDBmZTViMDRmOGRlMDE3OG2R90Q=: 00:35:48.079 01:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:48.079 01:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:48.079 01:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGJiM2E5NTBjNjYxZGFkY2U5YzE2Y2ZmMzc2YjRjNzhH0Zjs: 00:35:48.079 01:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Y2UyMjBkZGMzZDE5MWU1YTJkYzcwYWZiYWIwZWQ1YzkzYmU0MDRmYzNhODNhNDNkMDBmZTViMDRmOGRlMDE3OG2R90Q=: ]] 00:35:48.079 01:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Y2UyMjBkZGMzZDE5MWU1YTJkYzcwYWZiYWIwZWQ1YzkzYmU0MDRmYzNhODNhNDNkMDBmZTViMDRmOGRlMDE3OG2R90Q=: 00:35:48.079 01:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:35:48.079 01:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:48.079 01:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:48.079 01:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:48.079 01:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:48.079 01:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:48.079 01:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:35:48.079 01:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:48.079 01:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:48.079 01:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:48.079 01:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:48.079 01:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:48.079 01:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:48.080 01:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:48.080 01:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:48.080 01:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:48.080 01:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:48.080 01:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:48.080 01:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:48.080 01:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:48.080 01:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:48.080 01:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:48.080 01:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:48.080 01:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:48.647 nvme0n1 00:35:48.647 01:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:48.647 01:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:48.647 01:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:48.647 01:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:48.647 01:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:48.647 01:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:48.647 01:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:48.647 01:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:48.647 01:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:48.647 01:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:48.647 01:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:48.647 01:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:48.647 01:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:35:48.647 01:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:48.647 01:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:48.647 01:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:48.647 01:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:48.647 01:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmJjODczNDIyM2ZjNWQyMGE5Mjk3MDdkNzZhNjUwYTY4NmY3NGMyMGFlZTdhOWUxdfAhpA==: 00:35:48.647 01:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDU0N2NkZDZiY2Y5MDlhYTE1ZjhmOGQ5MGFjZDU3ZmVhZDE3MjU1NTQ4Nzc5N2IyUxWCHg==: 00:35:48.647 01:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:48.647 01:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:48.647 01:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:ZmJjODczNDIyM2ZjNWQyMGE5Mjk3MDdkNzZhNjUwYTY4NmY3NGMyMGFlZTdhOWUxdfAhpA==: 00:35:48.647 01:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDU0N2NkZDZiY2Y5MDlhYTE1ZjhmOGQ5MGFjZDU3ZmVhZDE3MjU1NTQ4Nzc5N2IyUxWCHg==: ]] 00:35:48.647 01:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDU0N2NkZDZiY2Y5MDlhYTE1ZjhmOGQ5MGFjZDU3ZmVhZDE3MjU1NTQ4Nzc5N2IyUxWCHg==: 00:35:48.647 01:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:35:48.647 01:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:48.647 01:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:48.647 01:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:48.647 01:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:48.647 01:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:48.647 01:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:35:48.647 01:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:48.647 01:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:48.647 01:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:48.647 01:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:48.647 01:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:48.647 01:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:48.647 01:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:48.647 01:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:48.647 01:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:48.647 01:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:48.647 01:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:48.647 01:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:48.647 01:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:48.647 01:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:48.647 01:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:48.647 01:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:48.647 01:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:49.214 nvme0n1 00:35:49.214 01:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:49.214 01:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:49.214 01:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:49.214 01:13:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:49.214 01:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:49.214 01:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:49.214 01:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:49.214 01:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:49.214 01:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:49.214 01:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:49.214 01:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:49.214 01:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:49.214 01:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:35:49.214 01:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:49.214 01:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:49.214 01:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:49.214 01:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:49.214 01:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OWM0ZWVmYWM1NGUxMDA2YmQ2ZDk2YjZmMjY3ZDRmOTWgGCwn: 00:35:49.214 01:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjM4YmZjZjQyOWI3MWJhYjEwMTQ0ZDlkNzQ0YmViY2GU1dcN: 00:35:49.214 01:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:49.214 01:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:49.214 01:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OWM0ZWVmYWM1NGUxMDA2YmQ2ZDk2YjZmMjY3ZDRmOTWgGCwn: 00:35:49.214 01:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjM4YmZjZjQyOWI3MWJhYjEwMTQ0ZDlkNzQ0YmViY2GU1dcN: ]] 00:35:49.214 01:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjM4YmZjZjQyOWI3MWJhYjEwMTQ0ZDlkNzQ0YmViY2GU1dcN: 00:35:49.214 01:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:35:49.214 01:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:49.214 01:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:49.214 01:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:49.214 01:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:49.214 01:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:49.214 01:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:35:49.214 01:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:49.214 01:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:49.214 01:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:49.214 01:13:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:49.214 01:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:49.214 01:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:49.214 01:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:49.214 01:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:49.214 01:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:49.214 01:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:49.214 01:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:49.214 01:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:49.214 01:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:49.214 01:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:49.214 01:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:49.214 01:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:49.214 01:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:49.780 nvme0n1 00:35:49.780 01:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:49.780 01:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:49.780 01:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:49.780 01:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:49.780 01:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:49.780 01:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:49.780 01:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:49.780 01:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:49.780 01:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:49.780 01:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:49.780 01:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:49.780 01:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:49.780 01:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:35:49.780 01:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:49.780 01:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:49.780 01:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:49.780 01:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:49.780 01:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:YzAyZjllNzFhZDNhMzQxNDRjMjQ1MDViNmY3NGIyYzVkNjQxODgxYTFkNGM5ZGIw3oRY2g==: 00:35:49.780 01:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MDFmMTU2NzhjMGFlZGNiMzZmZWU0N2Q3ZDg1YmNjYTPZZRVL: 00:35:49.780 01:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:49.780 01:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:49.780 01:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzAyZjllNzFhZDNhMzQxNDRjMjQ1MDViNmY3NGIyYzVkNjQxODgxYTFkNGM5ZGIw3oRY2g==: 00:35:49.780 01:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MDFmMTU2NzhjMGFlZGNiMzZmZWU0N2Q3ZDg1YmNjYTPZZRVL: ]] 00:35:49.780 01:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MDFmMTU2NzhjMGFlZGNiMzZmZWU0N2Q3ZDg1YmNjYTPZZRVL: 00:35:49.780 01:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:35:49.780 01:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:49.780 01:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:49.780 01:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:49.780 01:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:49.780 01:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:49.780 01:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:35:49.780 01:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:49.780 01:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:49.780 01:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:49.780 01:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:49.780 01:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:49.780 01:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:49.780 01:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:49.780 01:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:49.780 01:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:49.780 01:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:49.780 01:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:49.780 01:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:49.780 01:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:49.780 01:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:49.781 01:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:49.781 01:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:49.781 
01:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:50.347 nvme0n1 00:35:50.347 01:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:50.347 01:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:50.347 01:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:50.347 01:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:50.347 01:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:50.347 01:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:50.347 01:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:50.347 01:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:50.347 01:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:50.347 01:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:50.347 01:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:50.347 01:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:50.347 01:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:35:50.347 01:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:50.347 01:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:50.347 01:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:50.347 01:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:50.347 01:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Y2E2Y2QxZjM3MTNmNjc3NjFlZmVjYjc4ODRhNzZmMDdlZGZjMTc5N2ViOTg5ZjJmZjQ3ZTdlZTZiZmMyYmFmMNVoH+I=: 00:35:50.347 01:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:50.347 01:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:50.347 01:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:50.347 01:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Y2E2Y2QxZjM3MTNmNjc3NjFlZmVjYjc4ODRhNzZmMDdlZGZjMTc5N2ViOTg5ZjJmZjQ3ZTdlZTZiZmMyYmFmMNVoH+I=: 00:35:50.347 01:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:50.347 01:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:35:50.347 01:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:50.347 01:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:50.347 01:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:50.347 01:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:50.347 01:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:50.347 01:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:35:50.347 01:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:35:50.347 01:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:50.347 01:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:50.347 01:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:50.347 01:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:50.347 01:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:50.347 01:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:50.347 01:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:50.347 01:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:50.347 01:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:50.347 01:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:50.347 01:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:50.347 01:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:50.347 01:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:50.347 01:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:50.347 01:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:50.347 01:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:50.915 nvme0n1 00:35:50.915 01:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:50.915 01:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:50.915 01:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:50.915 01:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:50.915 01:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:50.915 01:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:50.915 01:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:50.915 01:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:50.915 01:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:50.915 01:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:50.915 01:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:50.915 01:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:50.915 01:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:50.915 01:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:35:50.915 01:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:50.915 01:13:23 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:50.915 01:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:50.915 01:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:50.915 01:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGJiM2E5NTBjNjYxZGFkY2U5YzE2Y2ZmMzc2YjRjNzhH0Zjs: 00:35:50.915 01:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Y2UyMjBkZGMzZDE5MWU1YTJkYzcwYWZiYWIwZWQ1YzkzYmU0MDRmYzNhODNhNDNkMDBmZTViMDRmOGRlMDE3OG2R90Q=: 00:35:50.915 01:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:50.915 01:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:50.915 01:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGJiM2E5NTBjNjYxZGFkY2U5YzE2Y2ZmMzc2YjRjNzhH0Zjs: 00:35:50.915 01:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Y2UyMjBkZGMzZDE5MWU1YTJkYzcwYWZiYWIwZWQ1YzkzYmU0MDRmYzNhODNhNDNkMDBmZTViMDRmOGRlMDE3OG2R90Q=: ]] 00:35:50.915 01:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Y2UyMjBkZGMzZDE5MWU1YTJkYzcwYWZiYWIwZWQ1YzkzYmU0MDRmYzNhODNhNDNkMDBmZTViMDRmOGRlMDE3OG2R90Q=: 00:35:50.915 01:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:35:50.915 01:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:50.915 01:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:50.915 01:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:50.915 01:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:50.915 01:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:50.915 01:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:35:50.915 01:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:50.915 01:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:50.915 01:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:50.915 01:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:50.915 01:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:50.915 01:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:50.915 01:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:50.915 01:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:50.915 01:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:50.915 01:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:50.915 01:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:50.915 01:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:50.915 01:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:50.915 01:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:50.915 01:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:50.915 01:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:50.915 01:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:51.850 nvme0n1 00:35:51.850 01:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:51.850 01:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:51.850 01:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:51.850 01:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:51.850 01:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:51.850 01:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:51.850 01:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:51.850 01:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:51.850 01:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:51.850 01:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:51.850 01:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:51.850 01:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:51.850 01:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:35:51.850 01:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:51.850 01:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:51.850 01:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:51.850 01:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:51.850 01:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmJjODczNDIyM2ZjNWQyMGE5Mjk3MDdkNzZhNjUwYTY4NmY3NGMyMGFlZTdhOWUxdfAhpA==: 00:35:51.850 01:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDU0N2NkZDZiY2Y5MDlhYTE1ZjhmOGQ5MGFjZDU3ZmVhZDE3MjU1NTQ4Nzc5N2IyUxWCHg==: 00:35:51.850 01:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:51.850 01:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:51.850 01:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmJjODczNDIyM2ZjNWQyMGE5Mjk3MDdkNzZhNjUwYTY4NmY3NGMyMGFlZTdhOWUxdfAhpA==: 00:35:51.850 01:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDU0N2NkZDZiY2Y5MDlhYTE1ZjhmOGQ5MGFjZDU3ZmVhZDE3MjU1NTQ4Nzc5N2IyUxWCHg==: ]] 00:35:51.850 01:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDU0N2NkZDZiY2Y5MDlhYTE1ZjhmOGQ5MGFjZDU3ZmVhZDE3MjU1NTQ4Nzc5N2IyUxWCHg==: 00:35:51.850 01:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:35:51.850 01:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:51.850 01:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:51.850 01:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:51.850 01:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:51.850 01:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:51.850 01:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:35:51.850 01:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:51.850 01:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:51.851 01:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:51.851 01:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:51.851 01:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:51.851 01:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:51.851 01:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:51.851 01:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:51.851 01:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:51.851 01:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:51.851 01:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:51.851 01:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:51.851 01:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:51.851 01:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:51.851 01:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:51.851 01:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:51.851 01:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:52.782 nvme0n1 00:35:52.783 01:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:52.783 01:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:52.783 01:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:52.783 01:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:52.783 01:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:52.783 01:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:53.040 01:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:53.040 01:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:53.040 01:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:35:53.040 01:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:53.040 01:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:53.040 01:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:53.040 01:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:35:53.040 01:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:53.040 01:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:53.040 01:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:53.040 01:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:53.040 01:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OWM0ZWVmYWM1NGUxMDA2YmQ2ZDk2YjZmMjY3ZDRmOTWgGCwn: 00:35:53.040 01:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjM4YmZjZjQyOWI3MWJhYjEwMTQ0ZDlkNzQ0YmViY2GU1dcN: 00:35:53.040 01:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:53.040 01:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:53.040 01:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OWM0ZWVmYWM1NGUxMDA2YmQ2ZDk2YjZmMjY3ZDRmOTWgGCwn: 00:35:53.040 01:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjM4YmZjZjQyOWI3MWJhYjEwMTQ0ZDlkNzQ0YmViY2GU1dcN: ]] 00:35:53.040 01:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjM4YmZjZjQyOWI3MWJhYjEwMTQ0ZDlkNzQ0YmViY2GU1dcN: 00:35:53.040 01:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:35:53.040 01:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:53.040 01:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:53.040 01:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:53.040 01:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:53.040 01:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:53.040 01:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:35:53.040 01:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:53.040 01:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:53.040 01:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:53.040 01:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:53.040 01:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:53.040 01:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:53.040 01:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:53.040 01:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:53.040 01:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:53.040 
01:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:53.040 01:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:53.040 01:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:53.040 01:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:53.040 01:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:53.040 01:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:53.040 01:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:53.040 01:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:53.973 nvme0n1 00:35:53.973 01:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:53.973 01:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:53.973 01:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:53.973 01:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:53.973 01:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:53.973 01:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:53.973 01:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:53.973 01:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:53.973 01:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:53.973 01:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:53.973 01:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:53.973 01:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:53.973 01:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:35:53.973 01:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:53.973 01:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:53.973 01:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:53.973 01:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:53.973 01:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YzAyZjllNzFhZDNhMzQxNDRjMjQ1MDViNmY3NGIyYzVkNjQxODgxYTFkNGM5ZGIw3oRY2g==: 00:35:53.973 01:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MDFmMTU2NzhjMGFlZGNiMzZmZWU0N2Q3ZDg1YmNjYTPZZRVL: 00:35:53.973 01:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:53.973 01:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:53.973 01:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzAyZjllNzFhZDNhMzQxNDRjMjQ1MDViNmY3NGIyYzVkNjQxODgxYTFkNGM5ZGIw3oRY2g==: 00:35:53.973 01:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:00:MDFmMTU2NzhjMGFlZGNiMzZmZWU0N2Q3ZDg1YmNjYTPZZRVL: ]] 00:35:53.973 01:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MDFmMTU2NzhjMGFlZGNiMzZmZWU0N2Q3ZDg1YmNjYTPZZRVL: 00:35:53.973 01:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:35:53.973 01:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:53.973 01:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:53.973 01:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:53.973 01:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:53.973 01:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:53.974 01:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:35:53.974 01:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:53.974 01:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:53.974 01:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:53.974 01:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:53.974 01:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:53.974 01:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:53.974 01:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:53.974 01:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:53.974 01:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:53.974 01:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:53.974 01:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:53.974 01:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:53.974 01:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:53.974 01:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:53.974 01:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:53.974 01:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:53.974 01:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:54.905 nvme0n1 00:35:54.905 01:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:54.905 01:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:54.905 01:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:54.905 01:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:54.905 01:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:54.905 01:13:27 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:54.905 01:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:54.905 01:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:54.905 01:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:54.905 01:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:54.905 01:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:54.905 01:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:54.905 01:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:35:54.905 01:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:54.905 01:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:54.905 01:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:54.905 01:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:54.905 01:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Y2E2Y2QxZjM3MTNmNjc3NjFlZmVjYjc4ODRhNzZmMDdlZGZjMTc5N2ViOTg5ZjJmZjQ3ZTdlZTZiZmMyYmFmMNVoH+I=: 00:35:54.905 01:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:54.905 01:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:54.905 01:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:54.905 01:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Y2E2Y2QxZjM3MTNmNjc3NjFlZmVjYjc4ODRhNzZmMDdlZGZjMTc5N2ViOTg5ZjJmZjQ3ZTdlZTZiZmMyYmFmMNVoH+I=: 00:35:54.905 01:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:54.905 01:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:35:54.905 01:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:54.905 01:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:54.905 01:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:54.905 01:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:54.905 01:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:54.905 01:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:35:54.905 01:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:54.905 01:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:54.905 01:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:54.905 01:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:54.905 01:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:54.905 01:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:54.905 01:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:54.905 01:13:27 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:54.905 01:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:54.905 01:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:54.905 01:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:54.905 01:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:54.905 01:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:54.905 01:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:54.905 01:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:54.905 01:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:54.905 01:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:55.837 nvme0n1 00:35:55.837 01:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:55.837 01:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:55.837 01:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:55.837 01:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:55.837 01:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:55.837 01:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:55.837 01:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:55.837 01:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:55.837 01:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:55.837 01:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:55.837 01:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:55.837 01:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:35:55.837 01:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:55.837 01:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:55.837 01:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:35:55.837 01:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:55.837 01:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:55.837 01:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:55.837 01:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:55.837 01:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGJiM2E5NTBjNjYxZGFkY2U5YzE2Y2ZmMzc2YjRjNzhH0Zjs: 00:35:55.837 01:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:Y2UyMjBkZGMzZDE5MWU1YTJkYzcwYWZiYWIwZWQ1YzkzYmU0MDRmYzNhODNhNDNkMDBmZTViMDRmOGRlMDE3OG2R90Q=: 00:35:55.837 01:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:55.837 01:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:55.837 01:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGJiM2E5NTBjNjYxZGFkY2U5YzE2Y2ZmMzc2YjRjNzhH0Zjs: 00:35:55.837 01:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Y2UyMjBkZGMzZDE5MWU1YTJkYzcwYWZiYWIwZWQ1YzkzYmU0MDRmYzNhODNhNDNkMDBmZTViMDRmOGRlMDE3OG2R90Q=: ]] 00:35:55.837 01:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Y2UyMjBkZGMzZDE5MWU1YTJkYzcwYWZiYWIwZWQ1YzkzYmU0MDRmYzNhODNhNDNkMDBmZTViMDRmOGRlMDE3OG2R90Q=: 00:35:55.837 01:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:35:55.837 01:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:55.837 01:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:55.837 01:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:55.837 01:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:55.837 01:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:55.837 01:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:35:55.837 01:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:55.837 01:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:55.837 01:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:55.837 01:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:55.837 01:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:55.837 01:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:55.837 01:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:55.837 01:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:55.837 01:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:55.837 01:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:55.837 01:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:55.837 01:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:55.837 01:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:55.837 01:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:55.837 01:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:55.837 01:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:55.837 01:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:35:56.095 nvme0n1 00:35:56.095 01:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:56.095 01:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:56.095 01:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:56.095 01:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:56.095 01:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:56.095 01:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:56.095 01:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:56.095 01:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:56.095 01:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:56.095 01:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:56.095 01:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:56.095 01:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:56.095 01:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:35:56.095 01:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:56.095 01:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:56.095 01:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:56.095 01:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:56.095 01:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmJjODczNDIyM2ZjNWQyMGE5Mjk3MDdkNzZhNjUwYTY4NmY3NGMyMGFlZTdhOWUxdfAhpA==: 00:35:56.095 01:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDU0N2NkZDZiY2Y5MDlhYTE1ZjhmOGQ5MGFjZDU3ZmVhZDE3MjU1NTQ4Nzc5N2IyUxWCHg==: 00:35:56.095 01:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:56.095 01:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:56.095 01:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmJjODczNDIyM2ZjNWQyMGE5Mjk3MDdkNzZhNjUwYTY4NmY3NGMyMGFlZTdhOWUxdfAhpA==: 00:35:56.095 01:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDU0N2NkZDZiY2Y5MDlhYTE1ZjhmOGQ5MGFjZDU3ZmVhZDE3MjU1NTQ4Nzc5N2IyUxWCHg==: ]] 00:35:56.095 01:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDU0N2NkZDZiY2Y5MDlhYTE1ZjhmOGQ5MGFjZDU3ZmVhZDE3MjU1NTQ4Nzc5N2IyUxWCHg==: 00:35:56.095 01:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:35:56.095 01:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:56.095 01:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:56.095 01:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:56.095 01:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:56.095 01:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:35:56.095 01:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:35:56.095 01:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:56.095 01:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:56.095 01:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:56.095 01:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:56.095 01:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:56.095 01:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:56.095 01:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:56.095 01:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:56.095 01:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:56.095 01:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:56.095 01:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:56.095 01:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:56.095 01:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:56.095 01:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:56.095 01:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:56.095 01:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:56.095 01:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:56.353 nvme0n1 00:35:56.353 01:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:56.353 01:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:56.353 01:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:56.353 01:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:56.353 01:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:56.353 01:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:56.353 01:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:56.353 01:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:56.353 01:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:56.353 01:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:56.353 01:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:56.353 01:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:56.353 01:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:35:56.353 
01:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:56.353 01:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:56.353 01:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:56.353 01:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:56.353 01:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OWM0ZWVmYWM1NGUxMDA2YmQ2ZDk2YjZmMjY3ZDRmOTWgGCwn: 00:35:56.353 01:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjM4YmZjZjQyOWI3MWJhYjEwMTQ0ZDlkNzQ0YmViY2GU1dcN: 00:35:56.353 01:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:56.353 01:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:56.353 01:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OWM0ZWVmYWM1NGUxMDA2YmQ2ZDk2YjZmMjY3ZDRmOTWgGCwn: 00:35:56.353 01:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjM4YmZjZjQyOWI3MWJhYjEwMTQ0ZDlkNzQ0YmViY2GU1dcN: ]] 00:35:56.353 01:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjM4YmZjZjQyOWI3MWJhYjEwMTQ0ZDlkNzQ0YmViY2GU1dcN: 00:35:56.353 01:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:35:56.353 01:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:56.353 01:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:56.353 01:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:56.353 01:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:56.353 01:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:56.353 01:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:35:56.353 01:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:56.353 01:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:56.353 01:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:56.353 01:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:56.353 01:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:56.353 01:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:56.353 01:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:56.353 01:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:56.353 01:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:56.353 01:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:56.353 01:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:56.353 01:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:56.353 01:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:56.353 01:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:56.353 01:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:56.353 01:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:56.353 01:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:56.611 nvme0n1 00:35:56.611 01:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:56.611 01:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:56.611 01:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:56.611 01:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:56.611 01:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:56.611 01:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:56.611 01:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:56.611 01:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:56.611 01:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:56.611 01:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:56.611 01:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:56.611 01:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:56.611 01:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:35:56.611 01:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:56.611 01:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:56.611 01:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:56.611 01:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:56.611 01:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YzAyZjllNzFhZDNhMzQxNDRjMjQ1MDViNmY3NGIyYzVkNjQxODgxYTFkNGM5ZGIw3oRY2g==: 00:35:56.611 01:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MDFmMTU2NzhjMGFlZGNiMzZmZWU0N2Q3ZDg1YmNjYTPZZRVL: 00:35:56.611 01:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:56.611 01:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:56.611 01:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzAyZjllNzFhZDNhMzQxNDRjMjQ1MDViNmY3NGIyYzVkNjQxODgxYTFkNGM5ZGIw3oRY2g==: 00:35:56.611 01:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MDFmMTU2NzhjMGFlZGNiMzZmZWU0N2Q3ZDg1YmNjYTPZZRVL: ]] 00:35:56.611 01:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MDFmMTU2NzhjMGFlZGNiMzZmZWU0N2Q3ZDg1YmNjYTPZZRVL: 00:35:56.611 01:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:35:56.611 01:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:56.611 
01:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:56.611 01:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:56.611 01:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:56.611 01:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:56.611 01:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:35:56.611 01:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:56.611 01:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:56.611 01:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:56.612 01:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:56.612 01:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:56.612 01:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:56.612 01:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:56.612 01:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:56.612 01:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:56.612 01:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:56.612 01:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:56.612 01:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:56.612 01:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:56.612 01:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:56.612 01:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:56.612 01:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:56.612 01:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:56.870 nvme0n1 00:35:56.870 01:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:56.870 01:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:56.870 01:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:56.870 01:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:56.870 01:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:56.870 01:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:56.870 01:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:56.870 01:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:56.870 01:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:56.870 01:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:35:56.870 01:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:56.870 01:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:56.870 01:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:35:56.870 01:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:56.870 01:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:56.870 01:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:56.870 01:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:56.870 01:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Y2E2Y2QxZjM3MTNmNjc3NjFlZmVjYjc4ODRhNzZmMDdlZGZjMTc5N2ViOTg5ZjJmZjQ3ZTdlZTZiZmMyYmFmMNVoH+I=: 00:35:56.870 01:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:56.870 01:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:56.870 01:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:56.870 01:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Y2E2Y2QxZjM3MTNmNjc3NjFlZmVjYjc4ODRhNzZmMDdlZGZjMTc5N2ViOTg5ZjJmZjQ3ZTdlZTZiZmMyYmFmMNVoH+I=: 00:35:56.870 01:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:56.870 01:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:35:56.871 01:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:56.871 01:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:56.871 01:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:56.871 01:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:56.871 01:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:56.871 01:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:35:56.871 01:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:56.871 01:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:56.871 01:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:56.871 01:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:56.871 01:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:56.871 01:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:56.871 01:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:56.871 01:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:56.871 01:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:56.871 01:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:56.871 01:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:56.871 01:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:56.871 01:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:56.871 01:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:56.871 01:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:56.871 01:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:56.871 01:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:56.871 nvme0n1 00:35:56.871 01:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:56.871 01:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:56.871 01:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:56.871 01:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:57.129 01:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:57.129 01:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:57.129 01:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:57.129 01:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:57.129 01:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:57.129 01:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:57.129 01:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:57.129 01:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:57.129 01:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:57.129 01:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:35:57.129 01:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:57.129 01:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:57.129 01:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:57.129 01:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:57.129 01:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGJiM2E5NTBjNjYxZGFkY2U5YzE2Y2ZmMzc2YjRjNzhH0Zjs: 00:35:57.129 01:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Y2UyMjBkZGMzZDE5MWU1YTJkYzcwYWZiYWIwZWQ1YzkzYmU0MDRmYzNhODNhNDNkMDBmZTViMDRmOGRlMDE3OG2R90Q=: 00:35:57.129 01:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:57.129 01:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:57.129 01:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGJiM2E5NTBjNjYxZGFkY2U5YzE2Y2ZmMzc2YjRjNzhH0Zjs: 00:35:57.130 01:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Y2UyMjBkZGMzZDE5MWU1YTJkYzcwYWZiYWIwZWQ1YzkzYmU0MDRmYzNhODNhNDNkMDBmZTViMDRmOGRlMDE3OG2R90Q=: ]] 00:35:57.130 01:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:Y2UyMjBkZGMzZDE5MWU1YTJkYzcwYWZiYWIwZWQ1YzkzYmU0MDRmYzNhODNhNDNkMDBmZTViMDRmOGRlMDE3OG2R90Q=: 00:35:57.130 01:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:35:57.130 01:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:57.130 01:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:57.130 01:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:57.130 01:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:57.130 01:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:57.130 01:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:35:57.130 01:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:57.130 01:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:57.130 01:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:57.130 01:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:57.130 01:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:57.130 01:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:57.130 01:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:57.130 01:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:57.130 01:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:57.130 01:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:57.130 01:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:57.130 01:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:57.130 01:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:57.130 01:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:57.130 01:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:57.130 01:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:57.130 01:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:57.388 nvme0n1 00:35:57.388 01:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:57.388 01:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:57.388 01:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:57.388 01:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:57.388 01:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:57.388 01:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:57.388 
01:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:57.388 01:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:57.388 01:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:57.388 01:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:57.388 01:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:57.388 01:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:57.388 01:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:35:57.388 01:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:57.388 01:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:57.388 01:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:57.388 01:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:57.389 01:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmJjODczNDIyM2ZjNWQyMGE5Mjk3MDdkNzZhNjUwYTY4NmY3NGMyMGFlZTdhOWUxdfAhpA==: 00:35:57.389 01:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDU0N2NkZDZiY2Y5MDlhYTE1ZjhmOGQ5MGFjZDU3ZmVhZDE3MjU1NTQ4Nzc5N2IyUxWCHg==: 00:35:57.389 01:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:57.389 01:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:57.389 01:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmJjODczNDIyM2ZjNWQyMGE5Mjk3MDdkNzZhNjUwYTY4NmY3NGMyMGFlZTdhOWUxdfAhpA==: 00:35:57.389 01:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDU0N2NkZDZiY2Y5MDlhYTE1ZjhmOGQ5MGFjZDU3ZmVhZDE3MjU1NTQ4Nzc5N2IyUxWCHg==: ]] 00:35:57.389 01:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDU0N2NkZDZiY2Y5MDlhYTE1ZjhmOGQ5MGFjZDU3ZmVhZDE3MjU1NTQ4Nzc5N2IyUxWCHg==: 00:35:57.389 01:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:35:57.389 01:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:57.389 01:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:57.389 01:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:57.389 01:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:57.389 01:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:57.389 01:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:35:57.389 01:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:57.389 01:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:57.389 01:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:57.389 01:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:57.389 01:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:57.389 01:13:29 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:57.389 01:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:57.389 01:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:57.389 01:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:57.389 01:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:57.389 01:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:57.389 01:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:57.389 01:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:57.389 01:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:57.389 01:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:57.389 01:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:57.389 01:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:57.648 nvme0n1 00:35:57.648 01:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:57.648 01:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:57.648 01:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:57.648 01:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:57.648 01:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:57.648 01:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:57.648 01:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:57.648 01:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:57.648 01:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:57.648 01:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:57.648 01:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:57.648 01:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:57.648 01:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:35:57.648 01:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:57.648 01:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:57.648 01:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:57.648 01:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:57.648 01:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OWM0ZWVmYWM1NGUxMDA2YmQ2ZDk2YjZmMjY3ZDRmOTWgGCwn: 00:35:57.648 01:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjM4YmZjZjQyOWI3MWJhYjEwMTQ0ZDlkNzQ0YmViY2GU1dcN: 00:35:57.648 01:13:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:57.648 01:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:57.648 01:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OWM0ZWVmYWM1NGUxMDA2YmQ2ZDk2YjZmMjY3ZDRmOTWgGCwn: 00:35:57.648 01:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjM4YmZjZjQyOWI3MWJhYjEwMTQ0ZDlkNzQ0YmViY2GU1dcN: ]] 00:35:57.648 01:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjM4YmZjZjQyOWI3MWJhYjEwMTQ0ZDlkNzQ0YmViY2GU1dcN: 00:35:57.648 01:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:35:57.648 01:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:57.648 01:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:57.648 01:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:57.648 01:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:57.648 01:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:57.648 01:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:35:57.648 01:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:57.648 01:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:57.648 01:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:57.648 01:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:57.648 01:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:57.648 01:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:57.648 01:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:57.648 01:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:57.648 01:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:57.648 01:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:57.648 01:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:57.648 01:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:57.648 01:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:57.648 01:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:57.648 01:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:57.648 01:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:57.648 01:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:57.907 nvme0n1 00:35:57.907 01:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:57.907 01:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:57.907 01:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:57.907 01:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:57.907 01:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:57.907 01:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:57.907 01:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:57.907 01:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:57.907 01:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:57.907 01:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:57.907 01:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:57.907 01:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:57.907 01:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:35:57.907 01:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:57.907 01:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:57.907 01:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:57.907 01:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:57.907 01:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YzAyZjllNzFhZDNhMzQxNDRjMjQ1MDViNmY3NGIyYzVkNjQxODgxYTFkNGM5ZGIw3oRY2g==: 00:35:57.907 01:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MDFmMTU2NzhjMGFlZGNiMzZmZWU0N2Q3ZDg1YmNjYTPZZRVL: 00:35:57.907 01:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:57.907 01:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:57.907 01:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzAyZjllNzFhZDNhMzQxNDRjMjQ1MDViNmY3NGIyYzVkNjQxODgxYTFkNGM5ZGIw3oRY2g==: 00:35:57.907 01:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MDFmMTU2NzhjMGFlZGNiMzZmZWU0N2Q3ZDg1YmNjYTPZZRVL: ]] 00:35:57.907 01:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MDFmMTU2NzhjMGFlZGNiMzZmZWU0N2Q3ZDg1YmNjYTPZZRVL: 00:35:57.907 01:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:35:57.907 01:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:57.907 01:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:57.907 01:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:57.907 01:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:57.907 01:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:57.907 01:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:35:57.907 01:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:57.907 01:13:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:57.907 01:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:57.907 01:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:57.907 01:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:57.907 01:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:57.907 01:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:57.907 01:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:57.908 01:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:57.908 01:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:57.908 01:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:57.908 01:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:57.908 01:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:57.908 01:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:57.908 01:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:57.908 01:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:57.908 01:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:58.167 nvme0n1 00:35:58.167 01:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:58.167 01:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:58.167 01:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:58.167 01:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:58.167 01:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:58.167 01:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:58.167 01:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:58.167 01:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:58.167 01:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:58.167 01:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:58.167 01:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:58.167 01:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:58.167 01:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:35:58.167 01:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:58.167 01:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:58.167 01:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:58.167 
01:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:58.167 01:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Y2E2Y2QxZjM3MTNmNjc3NjFlZmVjYjc4ODRhNzZmMDdlZGZjMTc5N2ViOTg5ZjJmZjQ3ZTdlZTZiZmMyYmFmMNVoH+I=: 00:35:58.167 01:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:58.167 01:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:58.167 01:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:58.167 01:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Y2E2Y2QxZjM3MTNmNjc3NjFlZmVjYjc4ODRhNzZmMDdlZGZjMTc5N2ViOTg5ZjJmZjQ3ZTdlZTZiZmMyYmFmMNVoH+I=: 00:35:58.167 01:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:58.167 01:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:35:58.167 01:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:58.167 01:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:58.167 01:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:58.167 01:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:58.167 01:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:58.167 01:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:35:58.167 01:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:58.167 01:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:58.167 01:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:58.167 01:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:58.167 01:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:58.167 01:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:58.167 01:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:58.167 01:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:58.167 01:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:58.167 01:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:58.167 01:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:58.167 01:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:58.168 01:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:58.168 01:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:58.168 01:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:58.168 01:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:58.168 01:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
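Each iteration logged above exercises the same host-side DH-HMAC-CHAP flow: restrict the bdev_nvme layer to the digest/dhgroup pair under test, attach to the target with the matching host key (and bidirectional controller key where one exists), confirm the controller came up, then detach before the next combination. A minimal sketch of that sequence, assuming a target already provisioned by the test's nvmet_auth_set_key step and key names (key0/ckey0) already registered as in this run; the rpc.py invocation path is an assumption, since the log itself drives these RPCs through the harness's rpc_cmd wrapper:

    # Allow only the digest/dhgroup pair under test (values taken from the log above).
    ./scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072

    # Attach with the host key and (optional) controller key; names assumed pre-registered.
    ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0

    # Verify the controller appeared, then tear it down for the next iteration.
    ./scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name'   # expect: nvme0
    ./scripts/rpc.py bdev_nvme_detach_controller nvme0

The log continues below with the ffdhe4096 group for keyid 4 and the remaining key/digest combinations.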
00:35:58.426 nvme0n1 00:35:58.427 01:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:58.427 01:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:58.427 01:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:58.427 01:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:58.427 01:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:58.427 01:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:58.427 01:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:58.427 01:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:58.427 01:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:58.427 01:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:58.427 01:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:58.427 01:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:58.427 01:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:58.427 01:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:35:58.427 01:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:58.427 01:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:58.427 01:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:58.427 01:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:58.427 01:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGJiM2E5NTBjNjYxZGFkY2U5YzE2Y2ZmMzc2YjRjNzhH0Zjs: 00:35:58.427 01:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Y2UyMjBkZGMzZDE5MWU1YTJkYzcwYWZiYWIwZWQ1YzkzYmU0MDRmYzNhODNhNDNkMDBmZTViMDRmOGRlMDE3OG2R90Q=: 00:35:58.427 01:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:58.427 01:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:58.427 01:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGJiM2E5NTBjNjYxZGFkY2U5YzE2Y2ZmMzc2YjRjNzhH0Zjs: 00:35:58.427 01:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Y2UyMjBkZGMzZDE5MWU1YTJkYzcwYWZiYWIwZWQ1YzkzYmU0MDRmYzNhODNhNDNkMDBmZTViMDRmOGRlMDE3OG2R90Q=: ]] 00:35:58.427 01:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Y2UyMjBkZGMzZDE5MWU1YTJkYzcwYWZiYWIwZWQ1YzkzYmU0MDRmYzNhODNhNDNkMDBmZTViMDRmOGRlMDE3OG2R90Q=: 00:35:58.427 01:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:35:58.427 01:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:58.427 01:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:58.427 01:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:58.427 01:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:58.427 01:13:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:58.427 01:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:35:58.427 01:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:58.427 01:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:58.427 01:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:58.427 01:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:58.427 01:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:58.427 01:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:58.427 01:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:58.427 01:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:58.427 01:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:58.427 01:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:58.427 01:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:58.427 01:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:58.427 01:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:58.427 01:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:58.427 01:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:58.427 01:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:58.427 01:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:58.686 nvme0n1 00:35:58.686 01:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:58.686 01:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:58.686 01:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:58.686 01:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:58.686 01:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:58.686 01:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:58.686 01:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:58.686 01:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:58.686 01:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:58.686 01:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:58.686 01:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:58.686 01:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:58.686 01:13:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:35:58.686 01:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:58.686 01:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:58.686 01:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:58.686 01:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:58.686 01:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmJjODczNDIyM2ZjNWQyMGE5Mjk3MDdkNzZhNjUwYTY4NmY3NGMyMGFlZTdhOWUxdfAhpA==: 00:35:58.686 01:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDU0N2NkZDZiY2Y5MDlhYTE1ZjhmOGQ5MGFjZDU3ZmVhZDE3MjU1NTQ4Nzc5N2IyUxWCHg==: 00:35:58.686 01:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:58.686 01:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:58.686 01:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmJjODczNDIyM2ZjNWQyMGE5Mjk3MDdkNzZhNjUwYTY4NmY3NGMyMGFlZTdhOWUxdfAhpA==: 00:35:58.686 01:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDU0N2NkZDZiY2Y5MDlhYTE1ZjhmOGQ5MGFjZDU3ZmVhZDE3MjU1NTQ4Nzc5N2IyUxWCHg==: ]] 00:35:58.686 01:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDU0N2NkZDZiY2Y5MDlhYTE1ZjhmOGQ5MGFjZDU3ZmVhZDE3MjU1NTQ4Nzc5N2IyUxWCHg==: 00:35:58.686 01:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:35:58.686 01:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:58.686 01:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:58.686 01:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:58.686 01:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:58.686 01:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:58.686 01:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:35:58.686 01:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:58.686 01:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:58.686 01:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:58.686 01:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:58.686 01:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:58.686 01:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:58.686 01:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:58.686 01:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:58.686 01:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:58.686 01:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:58.686 01:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:58.686 01:13:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:58.686 01:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:58.686 01:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:58.686 01:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:58.686 01:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:58.686 01:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:58.945 nvme0n1 00:35:58.945 01:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:58.945 01:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:58.945 01:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:58.945 01:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:58.945 01:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:59.203 01:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:59.203 01:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:59.203 01:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:59.203 01:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:59.203 01:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:59.203 01:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:59.203 01:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:59.203 01:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:35:59.203 01:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:59.203 01:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:59.203 01:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:59.203 01:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:59.203 01:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OWM0ZWVmYWM1NGUxMDA2YmQ2ZDk2YjZmMjY3ZDRmOTWgGCwn: 00:35:59.203 01:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjM4YmZjZjQyOWI3MWJhYjEwMTQ0ZDlkNzQ0YmViY2GU1dcN: 00:35:59.203 01:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:59.203 01:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:59.203 01:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OWM0ZWVmYWM1NGUxMDA2YmQ2ZDk2YjZmMjY3ZDRmOTWgGCwn: 00:35:59.203 01:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjM4YmZjZjQyOWI3MWJhYjEwMTQ0ZDlkNzQ0YmViY2GU1dcN: ]] 00:35:59.203 01:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjM4YmZjZjQyOWI3MWJhYjEwMTQ0ZDlkNzQ0YmViY2GU1dcN: 00:35:59.203 01:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:35:59.203 01:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:59.203 01:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:59.203 01:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:59.203 01:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:59.203 01:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:59.203 01:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:35:59.203 01:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:59.203 01:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:59.203 01:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:59.203 01:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:59.203 01:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:59.203 01:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:59.203 01:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:59.203 01:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:59.203 01:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:59.203 01:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:59.203 01:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:59.203 01:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:59.203 01:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:59.203 01:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:59.203 01:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:59.203 01:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:59.203 01:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:59.462 nvme0n1 00:35:59.462 01:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:59.462 01:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:59.462 01:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:59.462 01:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:59.462 01:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:59.462 01:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:59.462 01:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:59.462 01:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:35:59.462 01:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:59.462 01:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:59.462 01:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:59.462 01:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:59.462 01:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:35:59.462 01:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:59.462 01:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:59.462 01:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:59.462 01:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:59.462 01:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YzAyZjllNzFhZDNhMzQxNDRjMjQ1MDViNmY3NGIyYzVkNjQxODgxYTFkNGM5ZGIw3oRY2g==: 00:35:59.462 01:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MDFmMTU2NzhjMGFlZGNiMzZmZWU0N2Q3ZDg1YmNjYTPZZRVL: 00:35:59.462 01:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:59.462 01:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:59.462 01:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzAyZjllNzFhZDNhMzQxNDRjMjQ1MDViNmY3NGIyYzVkNjQxODgxYTFkNGM5ZGIw3oRY2g==: 00:35:59.462 01:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MDFmMTU2NzhjMGFlZGNiMzZmZWU0N2Q3ZDg1YmNjYTPZZRVL: ]] 00:35:59.462 01:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MDFmMTU2NzhjMGFlZGNiMzZmZWU0N2Q3ZDg1YmNjYTPZZRVL: 00:35:59.462 01:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:35:59.462 01:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:59.462 01:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:59.462 01:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:59.462 01:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:59.462 01:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:59.462 01:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:35:59.462 01:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:59.462 01:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:59.462 01:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:59.462 01:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:59.462 01:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:59.462 01:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:59.462 01:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:59.462 01:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:59.462 01:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:59.462 01:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:59.462 01:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:59.462 01:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:59.462 01:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:59.462 01:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:59.462 01:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:59.462 01:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:59.462 01:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:59.721 nvme0n1 00:35:59.721 01:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:59.721 01:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:59.721 01:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:59.721 01:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:59.721 01:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:59.721 01:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:59.721 01:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:59.721 01:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:59.721 01:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:59.721 01:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:59.979 01:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:59.979 01:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:59.979 01:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:35:59.979 01:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:59.979 01:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:59.979 01:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:59.979 01:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:59.979 01:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Y2E2Y2QxZjM3MTNmNjc3NjFlZmVjYjc4ODRhNzZmMDdlZGZjMTc5N2ViOTg5ZjJmZjQ3ZTdlZTZiZmMyYmFmMNVoH+I=: 00:35:59.979 01:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:59.979 01:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:59.979 01:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:59.979 01:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:Y2E2Y2QxZjM3MTNmNjc3NjFlZmVjYjc4ODRhNzZmMDdlZGZjMTc5N2ViOTg5ZjJmZjQ3ZTdlZTZiZmMyYmFmMNVoH+I=: 00:35:59.979 01:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:59.979 01:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:35:59.979 01:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:59.979 01:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:59.979 01:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:59.979 01:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:59.979 01:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:59.979 01:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:35:59.979 01:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:59.979 01:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:59.979 01:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:59.980 01:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:59.980 01:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:59.980 01:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:59.980 01:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:59.980 01:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:59.980 01:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:59.980 01:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:59.980 01:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:59.980 01:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:59.980 01:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:59.980 01:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:59.980 01:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:59.980 01:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:59.980 01:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:00.238 nvme0n1 00:36:00.238 01:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:00.238 01:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:00.238 01:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:00.238 01:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:00.238 01:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:00.238 01:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:00.238 01:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:00.238 01:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:00.238 01:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:00.238 01:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:00.238 01:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:00.238 01:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:36:00.238 01:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:00.238 01:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:36:00.238 01:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:00.238 01:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:00.238 01:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:00.238 01:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:36:00.238 01:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGJiM2E5NTBjNjYxZGFkY2U5YzE2Y2ZmMzc2YjRjNzhH0Zjs: 00:36:00.238 01:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Y2UyMjBkZGMzZDE5MWU1YTJkYzcwYWZiYWIwZWQ1YzkzYmU0MDRmYzNhODNhNDNkMDBmZTViMDRmOGRlMDE3OG2R90Q=: 00:36:00.238 01:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:00.238 01:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:00.238 01:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGJiM2E5NTBjNjYxZGFkY2U5YzE2Y2ZmMzc2YjRjNzhH0Zjs: 00:36:00.238 01:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Y2UyMjBkZGMzZDE5MWU1YTJkYzcwYWZiYWIwZWQ1YzkzYmU0MDRmYzNhODNhNDNkMDBmZTViMDRmOGRlMDE3OG2R90Q=: ]] 00:36:00.238 01:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Y2UyMjBkZGMzZDE5MWU1YTJkYzcwYWZiYWIwZWQ1YzkzYmU0MDRmYzNhODNhNDNkMDBmZTViMDRmOGRlMDE3OG2R90Q=: 00:36:00.238 01:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:36:00.238 01:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:00.238 01:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:00.238 01:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:36:00.238 01:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:36:00.238 01:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:00.238 01:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:36:00.238 01:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:00.238 01:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:00.238 01:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:00.238 01:13:32 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:00.238 01:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:00.238 01:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:00.238 01:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:00.238 01:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:00.238 01:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:00.238 01:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:00.238 01:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:00.238 01:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:00.238 01:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:00.238 01:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:00.238 01:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:00.238 01:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:00.238 01:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:00.805 nvme0n1 00:36:00.805 01:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:00.805 01:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:00.805 01:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:00.805 01:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:00.805 01:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:00.805 01:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:00.805 01:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:00.805 01:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:00.805 01:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:00.805 01:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:00.805 01:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:00.805 01:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:00.805 01:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:36:00.805 01:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:00.805 01:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:00.805 01:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:00.805 01:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:00.805 01:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:ZmJjODczNDIyM2ZjNWQyMGE5Mjk3MDdkNzZhNjUwYTY4NmY3NGMyMGFlZTdhOWUxdfAhpA==: 00:36:00.805 01:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDU0N2NkZDZiY2Y5MDlhYTE1ZjhmOGQ5MGFjZDU3ZmVhZDE3MjU1NTQ4Nzc5N2IyUxWCHg==: 00:36:00.805 01:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:00.805 01:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:00.805 01:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmJjODczNDIyM2ZjNWQyMGE5Mjk3MDdkNzZhNjUwYTY4NmY3NGMyMGFlZTdhOWUxdfAhpA==: 00:36:00.805 01:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDU0N2NkZDZiY2Y5MDlhYTE1ZjhmOGQ5MGFjZDU3ZmVhZDE3MjU1NTQ4Nzc5N2IyUxWCHg==: ]] 00:36:00.805 01:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDU0N2NkZDZiY2Y5MDlhYTE1ZjhmOGQ5MGFjZDU3ZmVhZDE3MjU1NTQ4Nzc5N2IyUxWCHg==: 00:36:00.805 01:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:36:00.805 01:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:00.805 01:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:00.805 01:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:36:00.805 01:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:00.805 01:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:00.805 01:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:36:00.805 01:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:00.805 01:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:00.805 01:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:00.805 01:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:00.805 01:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:00.805 01:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:00.805 01:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:00.805 01:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:00.805 01:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:00.805 01:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:00.805 01:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:00.805 01:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:00.805 01:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:00.805 01:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:00.805 01:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:00.805 01:13:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:00.805 01:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:01.371 nvme0n1 00:36:01.371 01:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:01.371 01:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:01.371 01:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:01.371 01:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:01.371 01:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:01.371 01:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:01.371 01:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:01.371 01:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:01.371 01:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:01.371 01:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:01.371 01:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:01.372 01:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:01.372 01:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:36:01.372 01:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:01.372 01:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:01.372 01:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:01.372 01:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:01.372 01:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OWM0ZWVmYWM1NGUxMDA2YmQ2ZDk2YjZmMjY3ZDRmOTWgGCwn: 00:36:01.372 01:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjM4YmZjZjQyOWI3MWJhYjEwMTQ0ZDlkNzQ0YmViY2GU1dcN: 00:36:01.372 01:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:01.372 01:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:01.372 01:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OWM0ZWVmYWM1NGUxMDA2YmQ2ZDk2YjZmMjY3ZDRmOTWgGCwn: 00:36:01.372 01:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjM4YmZjZjQyOWI3MWJhYjEwMTQ0ZDlkNzQ0YmViY2GU1dcN: ]] 00:36:01.372 01:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjM4YmZjZjQyOWI3MWJhYjEwMTQ0ZDlkNzQ0YmViY2GU1dcN: 00:36:01.372 01:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:36:01.372 01:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:01.372 01:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:01.372 01:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:36:01.372 01:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:36:01.372 01:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:01.372 01:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:36:01.372 01:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:01.372 01:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:01.372 01:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:01.372 01:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:01.372 01:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:01.372 01:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:01.372 01:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:01.372 01:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:01.372 01:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:01.372 01:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:01.372 01:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:01.372 01:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:01.372 01:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:01.372 01:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:01.372 01:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:01.372 01:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:01.372 01:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:01.938 nvme0n1 00:36:01.938 01:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:01.938 01:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:01.938 01:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:01.938 01:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:01.938 01:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:01.938 01:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:01.938 01:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:01.938 01:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:01.938 01:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:01.938 01:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:01.938 01:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:01.938 01:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:01.938 01:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe6144 3 00:36:01.938 01:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:01.938 01:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:01.938 01:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:01.938 01:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:36:01.938 01:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YzAyZjllNzFhZDNhMzQxNDRjMjQ1MDViNmY3NGIyYzVkNjQxODgxYTFkNGM5ZGIw3oRY2g==: 00:36:01.938 01:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MDFmMTU2NzhjMGFlZGNiMzZmZWU0N2Q3ZDg1YmNjYTPZZRVL: 00:36:01.938 01:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:01.938 01:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:01.938 01:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzAyZjllNzFhZDNhMzQxNDRjMjQ1MDViNmY3NGIyYzVkNjQxODgxYTFkNGM5ZGIw3oRY2g==: 00:36:01.938 01:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MDFmMTU2NzhjMGFlZGNiMzZmZWU0N2Q3ZDg1YmNjYTPZZRVL: ]] 00:36:01.938 01:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MDFmMTU2NzhjMGFlZGNiMzZmZWU0N2Q3ZDg1YmNjYTPZZRVL: 00:36:01.938 01:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:36:01.938 01:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:01.938 01:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:01.938 01:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:36:01.938 01:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:36:01.938 01:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:01.938 01:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:36:01.938 01:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:01.938 01:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:02.196 01:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:02.196 01:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:02.196 01:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:02.196 01:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:02.196 01:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:02.196 01:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:02.196 01:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:02.196 01:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:02.196 01:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:02.196 01:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:02.196 01:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:02.196 01:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:02.196 01:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:36:02.196 01:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:02.196 01:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:02.454 nvme0n1 00:36:02.454 01:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:02.454 01:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:02.454 01:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:02.454 01:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:02.454 01:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:02.454 01:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:02.712 01:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:02.712 01:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:02.712 01:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:02.712 01:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:02.712 01:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:02.712 01:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:02.712 01:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:36:02.712 01:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:02.712 01:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:02.712 01:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:02.712 01:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:36:02.712 01:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Y2E2Y2QxZjM3MTNmNjc3NjFlZmVjYjc4ODRhNzZmMDdlZGZjMTc5N2ViOTg5ZjJmZjQ3ZTdlZTZiZmMyYmFmMNVoH+I=: 00:36:02.712 01:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:36:02.712 01:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:02.712 01:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:02.712 01:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Y2E2Y2QxZjM3MTNmNjc3NjFlZmVjYjc4ODRhNzZmMDdlZGZjMTc5N2ViOTg5ZjJmZjQ3ZTdlZTZiZmMyYmFmMNVoH+I=: 00:36:02.712 01:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:36:02.712 01:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:36:02.712 01:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:02.712 01:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:02.712 01:13:35 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:36:02.712 01:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:36:02.712 01:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:02.712 01:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:36:02.712 01:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:02.712 01:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:02.712 01:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:02.712 01:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:02.712 01:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:02.712 01:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:02.712 01:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:02.712 01:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:02.712 01:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:02.712 01:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:02.712 01:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:02.712 01:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:02.712 01:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:02.712 01:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:02.712 01:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:36:02.712 01:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:02.712 01:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:03.274 nvme0n1 00:36:03.274 01:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:03.274 01:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:03.274 01:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:03.274 01:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:03.274 01:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:03.274 01:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:03.274 01:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:03.274 01:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:03.274 01:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:03.274 01:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:03.274 01:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:03.274 01:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:36:03.274 01:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:03.274 01:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:36:03.274 01:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:03.274 01:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:03.274 01:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:03.274 01:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:36:03.274 01:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGJiM2E5NTBjNjYxZGFkY2U5YzE2Y2ZmMzc2YjRjNzhH0Zjs: 00:36:03.274 01:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Y2UyMjBkZGMzZDE5MWU1YTJkYzcwYWZiYWIwZWQ1YzkzYmU0MDRmYzNhODNhNDNkMDBmZTViMDRmOGRlMDE3OG2R90Q=: 00:36:03.274 01:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:03.274 01:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:03.274 01:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGJiM2E5NTBjNjYxZGFkY2U5YzE2Y2ZmMzc2YjRjNzhH0Zjs: 00:36:03.274 01:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Y2UyMjBkZGMzZDE5MWU1YTJkYzcwYWZiYWIwZWQ1YzkzYmU0MDRmYzNhODNhNDNkMDBmZTViMDRmOGRlMDE3OG2R90Q=: ]] 00:36:03.274 01:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Y2UyMjBkZGMzZDE5MWU1YTJkYzcwYWZiYWIwZWQ1YzkzYmU0MDRmYzNhODNhNDNkMDBmZTViMDRmOGRlMDE3OG2R90Q=: 00:36:03.274 01:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:36:03.274 01:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:03.274 01:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:03.274 01:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:03.274 01:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:36:03.274 01:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:03.274 01:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:36:03.274 01:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:03.274 01:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:03.274 01:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:03.274 01:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:03.274 01:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:03.274 01:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:03.274 01:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:03.274 01:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:03.274 01:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:03.274 01:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:03.274 01:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:03.274 01:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:03.274 01:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:03.274 01:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:03.274 01:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:03.274 01:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:03.274 01:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:04.204 nvme0n1 00:36:04.204 01:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:04.204 01:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:04.204 01:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:04.204 01:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:04.204 01:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:04.204 01:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:04.204 01:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:04.204 01:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:04.204 01:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:04.204 01:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:04.204 01:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:04.204 01:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:04.204 01:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:36:04.204 01:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:04.204 01:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:04.204 01:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:04.204 01:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:04.204 01:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmJjODczNDIyM2ZjNWQyMGE5Mjk3MDdkNzZhNjUwYTY4NmY3NGMyMGFlZTdhOWUxdfAhpA==: 00:36:04.204 01:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDU0N2NkZDZiY2Y5MDlhYTE1ZjhmOGQ5MGFjZDU3ZmVhZDE3MjU1NTQ4Nzc5N2IyUxWCHg==: 00:36:04.204 01:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:04.204 01:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:04.204 01:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:ZmJjODczNDIyM2ZjNWQyMGE5Mjk3MDdkNzZhNjUwYTY4NmY3NGMyMGFlZTdhOWUxdfAhpA==: 00:36:04.204 01:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDU0N2NkZDZiY2Y5MDlhYTE1ZjhmOGQ5MGFjZDU3ZmVhZDE3MjU1NTQ4Nzc5N2IyUxWCHg==: ]] 00:36:04.204 01:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDU0N2NkZDZiY2Y5MDlhYTE1ZjhmOGQ5MGFjZDU3ZmVhZDE3MjU1NTQ4Nzc5N2IyUxWCHg==: 00:36:04.204 01:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:36:04.204 01:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:04.204 01:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:04.204 01:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:04.204 01:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:04.204 01:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:04.204 01:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:36:04.204 01:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:04.204 01:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:04.204 01:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:04.205 01:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:04.205 01:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:04.205 01:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:04.205 01:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:04.205 01:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:04.205 01:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:04.205 01:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:04.205 01:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:04.205 01:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:04.205 01:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:04.205 01:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:04.205 01:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:04.205 01:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:04.205 01:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:05.138 nvme0n1 00:36:05.138 01:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:05.138 01:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:05.138 01:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:05.138 01:13:37 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:05.138 01:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:05.395 01:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:05.395 01:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:05.395 01:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:05.395 01:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:05.395 01:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:05.395 01:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:05.395 01:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:05.395 01:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:36:05.395 01:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:05.395 01:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:05.395 01:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:05.395 01:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:05.395 01:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OWM0ZWVmYWM1NGUxMDA2YmQ2ZDk2YjZmMjY3ZDRmOTWgGCwn: 00:36:05.395 01:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjM4YmZjZjQyOWI3MWJhYjEwMTQ0ZDlkNzQ0YmViY2GU1dcN: 00:36:05.395 01:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:05.395 01:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:05.395 01:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OWM0ZWVmYWM1NGUxMDA2YmQ2ZDk2YjZmMjY3ZDRmOTWgGCwn: 00:36:05.395 01:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjM4YmZjZjQyOWI3MWJhYjEwMTQ0ZDlkNzQ0YmViY2GU1dcN: ]] 00:36:05.395 01:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjM4YmZjZjQyOWI3MWJhYjEwMTQ0ZDlkNzQ0YmViY2GU1dcN: 00:36:05.395 01:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:36:05.395 01:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:05.395 01:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:05.395 01:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:05.395 01:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:36:05.395 01:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:05.395 01:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:36:05.395 01:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:05.395 01:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:05.395 01:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:05.395 01:13:37 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:05.395 01:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:05.395 01:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:05.395 01:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:05.395 01:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:05.395 01:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:05.395 01:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:05.395 01:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:05.395 01:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:05.395 01:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:05.395 01:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:05.395 01:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:05.395 01:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:05.395 01:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:06.328 nvme0n1 00:36:06.328 01:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:06.328 01:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:06.328 01:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:06.328 01:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:06.328 01:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:06.328 01:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:06.328 01:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:06.328 01:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:06.328 01:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:06.328 01:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:06.328 01:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:06.328 01:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:06.328 01:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:36:06.328 01:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:06.328 01:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:06.328 01:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:06.328 01:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:36:06.328 01:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:YzAyZjllNzFhZDNhMzQxNDRjMjQ1MDViNmY3NGIyYzVkNjQxODgxYTFkNGM5ZGIw3oRY2g==: 00:36:06.328 01:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MDFmMTU2NzhjMGFlZGNiMzZmZWU0N2Q3ZDg1YmNjYTPZZRVL: 00:36:06.328 01:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:06.328 01:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:06.328 01:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzAyZjllNzFhZDNhMzQxNDRjMjQ1MDViNmY3NGIyYzVkNjQxODgxYTFkNGM5ZGIw3oRY2g==: 00:36:06.328 01:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MDFmMTU2NzhjMGFlZGNiMzZmZWU0N2Q3ZDg1YmNjYTPZZRVL: ]] 00:36:06.328 01:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MDFmMTU2NzhjMGFlZGNiMzZmZWU0N2Q3ZDg1YmNjYTPZZRVL: 00:36:06.328 01:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:36:06.328 01:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:06.328 01:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:06.328 01:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:06.328 01:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:36:06.328 01:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:06.328 01:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:36:06.328 01:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:06.328 01:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:06.328 01:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:06.328 01:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:06.328 01:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:06.328 01:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:06.329 01:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:06.329 01:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:06.329 01:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:06.329 01:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:06.329 01:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:06.329 01:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:06.329 01:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:06.329 01:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:06.329 01:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:36:06.329 01:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:06.329 
01:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:07.264 nvme0n1 00:36:07.264 01:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:07.264 01:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:07.264 01:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:07.264 01:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:07.264 01:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:07.264 01:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:07.264 01:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:07.264 01:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:07.264 01:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:07.264 01:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:07.264 01:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:07.264 01:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:07.264 01:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:36:07.264 01:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:07.264 01:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:07.264 01:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:07.264 01:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:36:07.264 01:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Y2E2Y2QxZjM3MTNmNjc3NjFlZmVjYjc4ODRhNzZmMDdlZGZjMTc5N2ViOTg5ZjJmZjQ3ZTdlZTZiZmMyYmFmMNVoH+I=: 00:36:07.264 01:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:36:07.264 01:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:07.264 01:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:07.264 01:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Y2E2Y2QxZjM3MTNmNjc3NjFlZmVjYjc4ODRhNzZmMDdlZGZjMTc5N2ViOTg5ZjJmZjQ3ZTdlZTZiZmMyYmFmMNVoH+I=: 00:36:07.264 01:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:36:07.264 01:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:36:07.264 01:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:07.264 01:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:07.264 01:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:07.264 01:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:36:07.264 01:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:07.264 01:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:36:07.264 01:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:36:07.264 01:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:07.264 01:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:07.264 01:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:07.264 01:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:07.264 01:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:07.264 01:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:07.264 01:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:07.264 01:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:07.264 01:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:07.264 01:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:07.264 01:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:07.264 01:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:07.264 01:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:07.264 01:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:36:07.264 01:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:07.264 01:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:08.233 nvme0n1 00:36:08.233 01:13:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:08.233 01:13:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:08.233 01:13:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:08.233 01:13:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:08.233 01:13:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:08.233 01:13:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:08.233 01:13:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:08.233 01:13:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:08.233 01:13:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:08.233 01:13:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:08.233 01:13:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:08.233 01:13:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:36:08.233 01:13:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:08.233 01:13:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:08.233 01:13:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:08.233 01:13:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 
-- # keyid=1 00:36:08.233 01:13:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmJjODczNDIyM2ZjNWQyMGE5Mjk3MDdkNzZhNjUwYTY4NmY3NGMyMGFlZTdhOWUxdfAhpA==: 00:36:08.233 01:13:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDU0N2NkZDZiY2Y5MDlhYTE1ZjhmOGQ5MGFjZDU3ZmVhZDE3MjU1NTQ4Nzc5N2IyUxWCHg==: 00:36:08.233 01:13:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:08.233 01:13:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:08.233 01:13:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmJjODczNDIyM2ZjNWQyMGE5Mjk3MDdkNzZhNjUwYTY4NmY3NGMyMGFlZTdhOWUxdfAhpA==: 00:36:08.233 01:13:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDU0N2NkZDZiY2Y5MDlhYTE1ZjhmOGQ5MGFjZDU3ZmVhZDE3MjU1NTQ4Nzc5N2IyUxWCHg==: ]] 00:36:08.233 01:13:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDU0N2NkZDZiY2Y5MDlhYTE1ZjhmOGQ5MGFjZDU3ZmVhZDE3MjU1NTQ4Nzc5N2IyUxWCHg==: 00:36:08.233 01:13:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:36:08.233 01:13:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:08.233 01:13:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:08.511 01:13:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:08.511 01:13:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:36:08.511 01:13:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:08.511 01:13:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:08.511 01:13:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:08.511 01:13:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:08.511 01:13:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:08.511 01:13:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:08.511 01:13:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:08.511 01:13:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:08.511 01:13:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:08.511 01:13:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:08.511 01:13:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:36:08.511 01:13:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:36:08.511 01:13:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:36:08.511 01:13:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:36:08.511 01:13:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:08.511 01:13:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@644 -- # type -t rpc_cmd 00:36:08.511 01:13:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:08.511 01:13:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:36:08.511 01:13:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:08.511 01:13:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:08.511 request: 00:36:08.511 { 00:36:08.511 "name": "nvme0", 00:36:08.511 "trtype": "tcp", 00:36:08.511 "traddr": "10.0.0.1", 00:36:08.511 "adrfam": "ipv4", 00:36:08.511 "trsvcid": "4420", 00:36:08.511 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:36:08.511 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:36:08.511 "prchk_reftag": false, 00:36:08.511 "prchk_guard": false, 00:36:08.511 "hdgst": false, 00:36:08.511 "ddgst": false, 00:36:08.511 "allow_unrecognized_csi": false, 00:36:08.511 "method": "bdev_nvme_attach_controller", 00:36:08.511 "req_id": 1 00:36:08.511 } 00:36:08.511 Got JSON-RPC error response 00:36:08.511 response: 00:36:08.511 { 00:36:08.511 "code": -5, 00:36:08.511 "message": "Input/output error" 00:36:08.511 } 00:36:08.511 01:13:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:36:08.511 01:13:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:36:08.511 01:13:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:36:08.511 01:13:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:36:08.511 01:13:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:36:08.511 01:13:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:36:08.511 01:13:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:08.511 01:13:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:08.511 01:13:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:36:08.511 01:13:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:08.511 01:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:36:08.511 01:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:36:08.511 01:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:08.511 01:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:08.511 01:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:08.511 01:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:08.511 01:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:08.511 01:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:08.511 01:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:08.511 01:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:08.511 01:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 
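The -5 (Input/output error) response above is the negative-path pattern this test leans on: the host deliberately attaches with no DHCHAP key (and, in the attempts that follow, with the wrong one), the target refuses authentication, the NOT helper from autotest_common.sh asserts that rpc_cmd failed, and bdev_nvme_get_controllers | jq length confirms nothing was left attached. A rough standalone sketch of the same check, written directly against rpc.py instead of the rpc_cmd/NOT wrappers used in the trace (the rpc.py path is an assumption; addresses, NQNs and RPC flags are copied from the log):

    #!/usr/bin/env bash
    set -e
    rpc=./scripts/rpc.py   # assumed location of the SPDK RPC client

    # Attach WITHOUT --dhchap-key against a subsystem that requires DH-HMAC-CHAP;
    # the call is expected to fail with JSON-RPC error -5 (Input/output error).
    if "$rpc" bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
            -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0; then
        echo "unauthenticated attach unexpectedly succeeded" >&2
        exit 1
    fi

    # A failed attach must not leave a stale controller behind.
    [[ $("$rpc" bdev_nvme_get_controllers | jq length) -eq 0 ]]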
00:36:08.511 01:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:08.511 01:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:36:08.511 01:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:36:08.511 01:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:36:08.511 01:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:36:08.511 01:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:08.511 01:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:36:08.511 01:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:08.511 01:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:36:08.511 01:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:08.511 01:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:08.511 request: 00:36:08.511 { 00:36:08.511 "name": "nvme0", 00:36:08.511 "trtype": "tcp", 00:36:08.511 "traddr": "10.0.0.1", 00:36:08.511 "adrfam": "ipv4", 00:36:08.511 "trsvcid": "4420", 00:36:08.511 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:36:08.511 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:36:08.511 "prchk_reftag": false, 00:36:08.511 "prchk_guard": false, 00:36:08.511 "hdgst": false, 00:36:08.511 "ddgst": false, 00:36:08.511 "dhchap_key": "key2", 00:36:08.511 "allow_unrecognized_csi": false, 00:36:08.511 "method": "bdev_nvme_attach_controller", 00:36:08.511 "req_id": 1 00:36:08.511 } 00:36:08.511 Got JSON-RPC error response 00:36:08.511 response: 00:36:08.511 { 00:36:08.511 "code": -5, 00:36:08.511 "message": "Input/output error" 00:36:08.511 } 00:36:08.511 01:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:36:08.512 01:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:36:08.512 01:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:36:08.512 01:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:36:08.512 01:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:36:08.512 01:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:36:08.512 01:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:36:08.512 01:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:08.512 01:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:08.512 01:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:08.512 01:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 
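For contrast, the successful connect_authenticate iterations earlier in the trace (one per digest/dhgroup/keyid combination) reduce to a handful of RPCs. A condensed sketch of a single iteration, with key names and NQNs taken from the log (key1/ckey1 are DH-HMAC-CHAP keys registered with the SPDK keyring earlier in the test, outside this excerpt; the surrounding loop from host/auth.sh is omitted):

    # One positive-path iteration of connect_authenticate, roughly as traced above.
    rpc=./scripts/rpc.py           # assumed path to rpc.py
    digest=sha512 dhgroup=ffdhe8192 keyid=1

    # Restrict the initiator to the digest/dhgroup pair under test.
    "$rpc" bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

    # Attach with the host key and (when configured) the controller key for this keyid.
    "$rpc" bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key "key${keyid}" --dhchap-ctrlr-key "ckey${keyid}"

    # Authentication succeeded if the controller is visible; detach before the next iteration.
    [[ $("$rpc" bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
    "$rpc" bdev_nvme_detach_controller nvme0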
00:36:08.512 01:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:36:08.512 01:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:08.512 01:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:08.512 01:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:08.512 01:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:08.512 01:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:08.512 01:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:08.512 01:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:08.512 01:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:08.512 01:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:08.512 01:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:08.512 01:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:36:08.512 01:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:36:08.512 01:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:36:08.512 01:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:36:08.512 01:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:08.512 01:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:36:08.512 01:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:08.512 01:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:36:08.512 01:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:08.512 01:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:08.773 request: 00:36:08.773 { 00:36:08.773 "name": "nvme0", 00:36:08.773 "trtype": "tcp", 00:36:08.773 "traddr": "10.0.0.1", 00:36:08.773 "adrfam": "ipv4", 00:36:08.773 "trsvcid": "4420", 00:36:08.773 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:36:08.773 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:36:08.773 "prchk_reftag": false, 00:36:08.773 "prchk_guard": false, 00:36:08.773 "hdgst": false, 00:36:08.773 "ddgst": false, 00:36:08.773 "dhchap_key": "key1", 00:36:08.773 "dhchap_ctrlr_key": "ckey2", 00:36:08.773 "allow_unrecognized_csi": false, 00:36:08.773 "method": "bdev_nvme_attach_controller", 00:36:08.773 "req_id": 1 00:36:08.773 } 00:36:08.773 Got JSON-RPC error response 00:36:08.773 response: 00:36:08.773 { 00:36:08.773 "code": -5, 00:36:08.773 "message": "Input/output 
error" 00:36:08.773 } 00:36:08.773 01:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:36:08.773 01:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:36:08.773 01:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:36:08.773 01:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:36:08.773 01:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:36:08.773 01:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # get_main_ns_ip 00:36:08.773 01:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:08.773 01:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:08.773 01:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:08.773 01:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:08.773 01:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:08.773 01:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:08.773 01:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:08.773 01:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:08.773 01:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:08.773 01:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:08.773 01:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:36:08.773 01:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:08.773 01:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:08.773 nvme0n1 00:36:08.773 01:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:08.773 01:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:36:08.773 01:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:08.773 01:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:08.773 01:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:08.773 01:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:08.773 01:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OWM0ZWVmYWM1NGUxMDA2YmQ2ZDk2YjZmMjY3ZDRmOTWgGCwn: 00:36:08.773 01:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjM4YmZjZjQyOWI3MWJhYjEwMTQ0ZDlkNzQ0YmViY2GU1dcN: 00:36:08.773 01:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:08.773 01:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:08.773 01:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OWM0ZWVmYWM1NGUxMDA2YmQ2ZDk2YjZmMjY3ZDRmOTWgGCwn: 00:36:08.773 01:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjM4YmZjZjQyOWI3MWJhYjEwMTQ0ZDlkNzQ0YmViY2GU1dcN: ]] 00:36:08.773 01:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjM4YmZjZjQyOWI3MWJhYjEwMTQ0ZDlkNzQ0YmViY2GU1dcN: 00:36:08.773 01:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:08.773 01:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:08.773 01:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:08.773 01:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:08.773 01:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:36:08.773 01:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:36:08.773 01:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:08.774 01:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:09.032 01:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:09.032 01:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:09.032 01:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:36:09.032 01:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:36:09.032 01:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:36:09.032 01:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:36:09.032 01:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:09.032 01:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:36:09.032 01:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:09.032 01:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:36:09.032 01:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:09.032 01:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:09.032 request: 00:36:09.032 { 00:36:09.032 "name": "nvme0", 00:36:09.032 "dhchap_key": "key1", 00:36:09.032 "dhchap_ctrlr_key": "ckey2", 00:36:09.032 "method": "bdev_nvme_set_keys", 00:36:09.032 "req_id": 1 00:36:09.032 } 00:36:09.032 Got JSON-RPC error response 00:36:09.032 response: 00:36:09.032 { 00:36:09.032 "code": -13, 00:36:09.032 "message": "Permission denied" 00:36:09.032 } 00:36:09.032 01:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:36:09.032 01:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:36:09.032 01:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:36:09.032 01:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:36:09.032 01:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( 
!es == 0 )) 00:36:09.032 01:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:36:09.032 01:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:36:09.032 01:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:09.032 01:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:09.032 01:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:09.032 01:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:36:09.032 01:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:36:09.967 01:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:36:09.967 01:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:36:09.967 01:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:09.967 01:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:09.967 01:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:09.967 01:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:36:09.967 01:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:36:11.340 01:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:36:11.340 01:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:36:11.340 01:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:11.340 01:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:11.341 01:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:11.341 01:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:36:11.341 01:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:36:11.341 01:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:11.341 01:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:11.341 01:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:11.341 01:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:11.341 01:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmJjODczNDIyM2ZjNWQyMGE5Mjk3MDdkNzZhNjUwYTY4NmY3NGMyMGFlZTdhOWUxdfAhpA==: 00:36:11.341 01:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDU0N2NkZDZiY2Y5MDlhYTE1ZjhmOGQ5MGFjZDU3ZmVhZDE3MjU1NTQ4Nzc5N2IyUxWCHg==: 00:36:11.341 01:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:11.341 01:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:11.341 01:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmJjODczNDIyM2ZjNWQyMGE5Mjk3MDdkNzZhNjUwYTY4NmY3NGMyMGFlZTdhOWUxdfAhpA==: 00:36:11.341 01:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDU0N2NkZDZiY2Y5MDlhYTE1ZjhmOGQ5MGFjZDU3ZmVhZDE3MjU1NTQ4Nzc5N2IyUxWCHg==: ]] 00:36:11.341 01:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:02:NDU0N2NkZDZiY2Y5MDlhYTE1ZjhmOGQ5MGFjZDU3ZmVhZDE3MjU1NTQ4Nzc5N2IyUxWCHg==: 00:36:11.341 01:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # get_main_ns_ip 00:36:11.341 01:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:11.341 01:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:11.341 01:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:11.341 01:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:11.341 01:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:11.341 01:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:11.341 01:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:11.341 01:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:11.341 01:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:11.341 01:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:11.341 01:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:36:11.341 01:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:11.341 01:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:11.341 nvme0n1 00:36:11.341 01:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:11.341 01:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:36:11.341 01:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:11.341 01:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:11.341 01:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:11.341 01:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:11.341 01:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OWM0ZWVmYWM1NGUxMDA2YmQ2ZDk2YjZmMjY3ZDRmOTWgGCwn: 00:36:11.341 01:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjM4YmZjZjQyOWI3MWJhYjEwMTQ0ZDlkNzQ0YmViY2GU1dcN: 00:36:11.341 01:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:11.341 01:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:11.341 01:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OWM0ZWVmYWM1NGUxMDA2YmQ2ZDk2YjZmMjY3ZDRmOTWgGCwn: 00:36:11.341 01:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjM4YmZjZjQyOWI3MWJhYjEwMTQ0ZDlkNzQ0YmViY2GU1dcN: ]] 00:36:11.341 01:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjM4YmZjZjQyOWI3MWJhYjEwMTQ0ZDlkNzQ0YmViY2GU1dcN: 00:36:11.341 01:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:36:11.341 01:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@652 -- # local es=0 00:36:11.341 01:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:36:11.341 01:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:36:11.341 01:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:11.341 01:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:36:11.341 01:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:11.341 01:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:36:11.341 01:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:11.341 01:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:11.341 request: 00:36:11.341 { 00:36:11.341 "name": "nvme0", 00:36:11.341 "dhchap_key": "key2", 00:36:11.341 "dhchap_ctrlr_key": "ckey1", 00:36:11.341 "method": "bdev_nvme_set_keys", 00:36:11.341 "req_id": 1 00:36:11.341 } 00:36:11.341 Got JSON-RPC error response 00:36:11.341 response: 00:36:11.341 { 00:36:11.341 "code": -13, 00:36:11.341 "message": "Permission denied" 00:36:11.341 } 00:36:11.341 01:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:36:11.341 01:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:36:11.341 01:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:36:11.341 01:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:36:11.341 01:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:36:11.341 01:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:36:11.341 01:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:36:11.341 01:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:11.341 01:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:11.341 01:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:11.341 01:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:36:11.341 01:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:36:12.717 01:13:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:36:12.717 01:13:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:12.717 01:13:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:12.717 01:13:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:36:12.717 01:13:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:12.717 01:13:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:36:12.717 01:13:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:36:12.717 01:13:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:36:12.717 01:13:45 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:36:12.717 01:13:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:36:12.717 01:13:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:36:12.717 01:13:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:36:12.717 01:13:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:36:12.717 01:13:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:36:12.717 01:13:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:36:12.717 rmmod nvme_tcp 00:36:12.717 rmmod nvme_fabrics 00:36:12.717 01:13:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:36:12.717 01:13:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:36:12.717 01:13:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:36:12.717 01:13:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@517 -- # '[' -n 3119584 ']' 00:36:12.717 01:13:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@518 -- # killprocess 3119584 00:36:12.717 01:13:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # '[' -z 3119584 ']' 00:36:12.717 01:13:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # kill -0 3119584 00:36:12.717 01:13:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # uname 00:36:12.717 01:13:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:12.717 01:13:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3119584 00:36:12.717 01:13:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:36:12.717 01:13:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:36:12.717 01:13:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3119584' 00:36:12.717 killing process with pid 3119584 00:36:12.717 01:13:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@973 -- # kill 3119584 00:36:12.717 01:13:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@978 -- # wait 3119584 00:36:13.652 01:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:36:13.652 01:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:36:13.652 01:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:36:13.652 01:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # iptr 00:36:13.652 01:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-save 00:36:13.652 01:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:36:13.652 01:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-restore 00:36:13.652 01:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:36:13.652 01:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:36:13.652 01:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:13.652 01:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 15> /dev/null' 00:36:13.652 01:13:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:15.561 01:13:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:36:15.561 01:13:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:36:15.561 01:13:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:36:15.561 01:13:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:36:15.561 01:13:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:36:15.561 01:13:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # echo 0 00:36:15.561 01:13:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:36:15.561 01:13:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:36:15.561 01:13:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:36:15.561 01:13:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:36:15.561 01:13:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:36:15.561 01:13:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:36:15.561 01:13:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:36:16.939 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:36:16.939 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:36:16.939 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:36:16.939 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:36:16.939 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:36:16.939 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:36:16.939 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:36:16.939 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:36:16.939 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:36:16.939 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:36:16.939 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:36:16.939 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:36:16.939 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:36:16.939 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:36:16.939 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:36:16.939 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:36:17.875 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:36:17.875 01:13:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.RO7 /tmp/spdk.key-null.WDR /tmp/spdk.key-sha256.Iti /tmp/spdk.key-sha384.oAe /tmp/spdk.key-sha512.ipL /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:36:17.875 01:13:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:36:18.809 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:36:18.809 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:36:18.809 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 
00:36:18.809 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:36:18.809 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:36:18.809 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:36:18.809 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:36:18.809 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:36:18.809 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:36:18.809 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:36:18.809 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:36:18.809 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:36:18.809 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:36:18.809 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:36:18.809 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:36:18.809 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:36:18.809 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:36:19.068 00:36:19.068 real 0m55.214s 00:36:19.068 user 0m53.366s 00:36:19.068 sys 0m6.264s 00:36:19.068 01:13:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:19.068 01:13:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:19.068 ************************************ 00:36:19.068 END TEST nvmf_auth_host 00:36:19.068 ************************************ 00:36:19.068 01:13:51 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 00:36:19.068 01:13:51 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:36:19.068 01:13:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:36:19.068 01:13:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:19.068 01:13:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:36:19.068 ************************************ 00:36:19.068 START TEST nvmf_digest 00:36:19.068 ************************************ 00:36:19.068 01:13:51 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:36:19.327 * Looking for test storage... 
00:36:19.327 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:36:19.327 01:13:51 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:36:19.327 01:13:51 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1711 -- # lcov --version 00:36:19.327 01:13:51 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:36:19.327 01:13:51 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:36:19.327 01:13:51 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:19.327 01:13:51 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:19.327 01:13:51 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:19.327 01:13:51 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:36:19.327 01:13:51 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:36:19.327 01:13:51 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:36:19.327 01:13:51 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # read -ra ver2 00:36:19.327 01:13:51 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 00:36:19.327 01:13:51 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:36:19.327 01:13:51 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:36:19.327 01:13:51 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:19.327 01:13:51 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:36:19.327 01:13:51 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:36:19.327 01:13:51 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:19.327 01:13:51 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:36:19.327 01:13:51 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:36:19.327 01:13:51 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:36:19.327 01:13:51 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:19.327 01:13:51 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:36:19.327 01:13:51 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:36:19.327 01:13:51 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:36:19.327 01:13:51 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:36:19.327 01:13:51 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:19.327 01:13:51 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:36:19.327 01:13:51 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:36:19.327 01:13:51 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:19.327 01:13:51 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:19.327 01:13:51 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 00:36:19.327 01:13:51 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:19.327 01:13:51 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:36:19.327 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:19.327 --rc genhtml_branch_coverage=1 00:36:19.327 --rc genhtml_function_coverage=1 00:36:19.327 --rc genhtml_legend=1 00:36:19.327 --rc geninfo_all_blocks=1 00:36:19.327 --rc geninfo_unexecuted_blocks=1 00:36:19.327 00:36:19.327 ' 00:36:19.327 01:13:51 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:36:19.327 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:19.327 --rc genhtml_branch_coverage=1 00:36:19.327 --rc genhtml_function_coverage=1 00:36:19.327 --rc genhtml_legend=1 00:36:19.327 --rc geninfo_all_blocks=1 00:36:19.327 --rc geninfo_unexecuted_blocks=1 00:36:19.327 00:36:19.327 ' 00:36:19.327 01:13:51 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:36:19.327 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:19.327 --rc genhtml_branch_coverage=1 00:36:19.327 --rc genhtml_function_coverage=1 00:36:19.327 --rc genhtml_legend=1 00:36:19.327 --rc geninfo_all_blocks=1 00:36:19.327 --rc geninfo_unexecuted_blocks=1 00:36:19.327 00:36:19.327 ' 00:36:19.327 01:13:51 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:36:19.327 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:19.327 --rc genhtml_branch_coverage=1 00:36:19.327 --rc genhtml_function_coverage=1 00:36:19.327 --rc genhtml_legend=1 00:36:19.327 --rc geninfo_all_blocks=1 00:36:19.327 --rc geninfo_unexecuted_blocks=1 00:36:19.327 00:36:19.327 ' 00:36:19.327 01:13:51 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:19.327 01:13:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:36:19.327 01:13:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:19.327 01:13:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:19.327 
01:13:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:19.327 01:13:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:19.327 01:13:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:19.327 01:13:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:19.327 01:13:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:19.327 01:13:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:19.327 01:13:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:19.327 01:13:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:19.327 01:13:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:36:19.327 01:13:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:36:19.327 01:13:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:19.327 01:13:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:19.327 01:13:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:19.327 01:13:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:19.327 01:13:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:19.327 01:13:51 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:36:19.327 01:13:51 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:19.327 01:13:51 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:19.327 01:13:51 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:19.327 01:13:51 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:19.327 01:13:51 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:19.327 01:13:51 nvmf_tcp.nvmf_host.nvmf_digest -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:19.328 01:13:51 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:36:19.328 01:13:51 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:19.328 01:13:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # : 0 00:36:19.328 01:13:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:19.328 01:13:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:19.328 01:13:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:19.328 01:13:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:19.328 01:13:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:19.328 01:13:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:36:19.328 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:36:19.328 01:13:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:19.328 01:13:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:19.328 01:13:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:19.328 01:13:51 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:36:19.328 01:13:51 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:36:19.328 01:13:51 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:36:19.328 01:13:51 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:36:19.328 01:13:51 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:36:19.328 01:13:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:36:19.328 01:13:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:19.328 01:13:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@476 -- # prepare_net_devs 00:36:19.328 01:13:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # local -g is_hw=no 00:36:19.328 01:13:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # remove_spdk_ns 00:36:19.328 01:13:51 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:19.328 01:13:51 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:19.328 01:13:51 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:19.328 01:13:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:36:19.328 01:13:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:36:19.328 01:13:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@309 -- # xtrace_disable 00:36:19.328 01:13:51 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:36:21.858 01:13:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:21.858 01:13:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # pci_devs=() 00:36:21.858 01:13:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # local -a pci_devs 00:36:21.858 01:13:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # pci_net_devs=() 00:36:21.858 01:13:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:36:21.858 01:13:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # pci_drivers=() 00:36:21.858 01:13:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # local -A pci_drivers 00:36:21.858 01:13:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # net_devs=() 00:36:21.858 01:13:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # local -ga net_devs 00:36:21.858 01:13:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # e810=() 00:36:21.858 01:13:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # local -ga e810 00:36:21.858 01:13:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # x722=() 00:36:21.858 01:13:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # local -ga x722 00:36:21.858 01:13:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # mlx=() 00:36:21.858 01:13:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # local -ga mlx 00:36:21.858 01:13:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:21.858 01:13:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:21.858 01:13:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:21.858 01:13:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:21.858 01:13:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:21.858 01:13:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:21.858 01:13:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:21.858 01:13:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:36:21.858 01:13:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:21.858 01:13:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:21.858 01:13:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:21.858 
01:13:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:21.858 01:13:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:36:21.858 01:13:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:36:21.858 01:13:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:36:21.858 01:13:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:36:21.858 01:13:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:36:21.858 01:13:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:36:21.858 01:13:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:21.858 01:13:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:36:21.858 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:36:21.858 01:13:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:21.858 01:13:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:21.858 01:13:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:21.858 01:13:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:21.858 01:13:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:21.858 01:13:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:21.858 01:13:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:36:21.858 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:36:21.858 01:13:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:21.858 01:13:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:21.858 01:13:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:21.858 01:13:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:21.858 01:13:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:21.858 01:13:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:36:21.858 01:13:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:36:21.858 01:13:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:36:21.858 01:13:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:21.858 01:13:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:21.858 01:13:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:21.858 01:13:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:21.858 01:13:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:21.858 01:13:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:21.858 01:13:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:21.858 01:13:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:36:21.858 Found net devices under 0000:0a:00.0: cvl_0_0 
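(Illustrative aside, not part of the captured trace: the gather_supported_nvmf_pci_devs logic above maps each E810 PCI function to its kernel net device by globbing sysfs. A minimal standalone sketch of that lookup, usable only on a host that actually has the device, with the PCI address taken from this run and everything else illustrative:

    pci=0000:0a:00.0                                   # E810 port reported above (vendor 0x8086, device 0x159b)
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # one glob match per netdev bound to this function
    pci_net_devs=("${pci_net_devs[@]##*/}")            # strip the sysfs prefix, keep only the interface names
    echo "Found net devices under $pci: ${pci_net_devs[*]}"   # prints cvl_0_0 on this host

The same expansion is what produces the "Found net devices under ..." lines in the trace.)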
00:36:21.858 01:13:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:21.858 01:13:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:21.858 01:13:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:21.858 01:13:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:21.858 01:13:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:21.858 01:13:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:21.858 01:13:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:21.858 01:13:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:21.858 01:13:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:36:21.858 Found net devices under 0000:0a:00.1: cvl_0_1 00:36:21.858 01:13:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:21.858 01:13:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:36:21.858 01:13:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # is_hw=yes 00:36:21.858 01:13:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:36:21.858 01:13:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:36:21.858 01:13:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:36:21.858 01:13:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:36:21.858 01:13:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:21.858 01:13:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:21.858 01:13:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:21.858 01:13:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:36:21.858 01:13:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:21.858 01:13:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:21.858 01:13:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:36:21.858 01:13:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:36:21.858 01:13:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:21.858 01:13:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:21.858 01:13:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:36:21.858 01:13:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:36:21.858 01:13:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:36:21.858 01:13:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:21.858 01:13:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:21.858 01:13:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr 
add 10.0.0.2/24 dev cvl_0_0 00:36:21.858 01:13:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:36:21.858 01:13:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:21.858 01:13:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:21.858 01:13:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:21.858 01:13:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:36:21.858 01:13:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:36:21.858 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:21.858 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.249 ms 00:36:21.858 00:36:21.858 --- 10.0.0.2 ping statistics --- 00:36:21.858 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:21.858 rtt min/avg/max/mdev = 0.249/0.249/0.249/0.000 ms 00:36:21.858 01:13:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:21.858 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:36:21.858 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.073 ms 00:36:21.858 00:36:21.858 --- 10.0.0.1 ping statistics --- 00:36:21.858 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:21.858 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:36:21.858 01:13:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:21.858 01:13:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@450 -- # return 0 00:36:21.859 01:13:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:36:21.859 01:13:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:21.859 01:13:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:36:21.859 01:13:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:36:21.859 01:13:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:21.859 01:13:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:36:21.859 01:13:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:36:21.859 01:13:54 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:36:21.859 01:13:54 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:36:21.859 01:13:54 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:36:21.859 01:13:54 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:36:21.859 01:13:54 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:21.859 01:13:54 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:36:21.859 ************************************ 00:36:21.859 START TEST nvmf_digest_clean 00:36:21.859 ************************************ 00:36:21.859 01:13:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1129 -- # run_digest 00:36:21.859 01:13:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@120 -- # local dsa_initiator 00:36:21.859 01:13:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:36:21.859 01:13:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:36:21.859 01:13:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:36:21.859 01:13:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:36:21.859 01:13:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:36:21.859 01:13:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:21.859 01:13:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:36:21.859 01:13:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@509 -- # nvmfpid=3129739 00:36:21.859 01:13:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:36:21.859 01:13:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@510 -- # waitforlisten 3129739 00:36:21.859 01:13:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 3129739 ']' 00:36:21.859 01:13:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:21.859 01:13:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:21.859 01:13:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:21.859 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:21.859 01:13:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:21.859 01:13:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:36:21.859 [2024-12-06 01:13:54.307485] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 24.03.0 initialization... 00:36:21.859 [2024-12-06 01:13:54.307648] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:21.859 [2024-12-06 01:13:54.452991] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:22.116 [2024-12-06 01:13:54.585152] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:22.117 [2024-12-06 01:13:54.585245] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:22.117 [2024-12-06 01:13:54.585287] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:22.117 [2024-12-06 01:13:54.585326] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:22.117 [2024-12-06 01:13:54.585356] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
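(Illustrative aside, not part of the captured trace: the nvmfappstart sequence just traced reduces to the commands below. The namespace name and binary path are the ones from this run; finishing initialization with framework_start_init against the default /var/tmp/spdk.sock is an assumption implied by --wait-for-rpc, since the trace itself only shows the launch and waitforlisten:

    # start the target inside the test namespace with framework init deferred (--wait-for-rpc)
    sudo ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc &
    nvmfpid=$!
    # once the app is listening on the default RPC socket, complete initialization (assumed step)
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init

The digest test then configures the null0 bdev and the TCP listener over the same RPC socket, as the following trace lines show.)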
00:36:22.117 [2024-12-06 01:13:54.587136] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:22.681 01:13:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:22.681 01:13:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:36:22.681 01:13:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:36:22.681 01:13:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:22.681 01:13:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:36:22.681 01:13:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:22.681 01:13:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:36:22.681 01:13:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:36:22.681 01:13:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:36:22.681 01:13:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:22.681 01:13:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:36:23.247 null0 00:36:23.247 [2024-12-06 01:13:55.726721] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:23.247 [2024-12-06 01:13:55.751034] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:23.247 01:13:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:23.247 01:13:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:36:23.247 01:13:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:36:23.247 01:13:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:36:23.247 01:13:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:36:23.247 01:13:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:36:23.247 01:13:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:36:23.247 01:13:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:36:23.247 01:13:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3129896 00:36:23.247 01:13:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:36:23.247 01:13:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3129896 /var/tmp/bperf.sock 00:36:23.247 01:13:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 3129896 ']' 00:36:23.247 01:13:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:36:23.247 01:13:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:36:23.247 01:13:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:36:23.247 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:36:23.247 01:13:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:23.247 01:13:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:36:23.247 [2024-12-06 01:13:55.847606] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 24.03.0 initialization... 00:36:23.247 [2024-12-06 01:13:55.847767] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3129896 ] 00:36:23.506 [2024-12-06 01:13:56.006900] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:23.506 [2024-12-06 01:13:56.147443] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:24.438 01:13:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:24.438 01:13:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:36:24.438 01:13:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:36:24.438 01:13:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:36:24.438 01:13:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:36:25.003 01:13:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:36:25.003 01:13:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:36:25.261 nvme0n1 00:36:25.261 01:13:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:36:25.261 01:13:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:36:25.519 Running I/O for 2 seconds... 
00:36:27.389 14007.00 IOPS, 54.71 MiB/s [2024-12-06T00:14:00.098Z] 14495.50 IOPS, 56.62 MiB/s 00:36:27.389 Latency(us) 00:36:27.389 [2024-12-06T00:14:00.098Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:27.389 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:36:27.390 nvme0n1 : 2.01 14515.62 56.70 0.00 0.00 8805.29 4199.16 17961.72 00:36:27.390 [2024-12-06T00:14:00.099Z] =================================================================================================================== 00:36:27.390 [2024-12-06T00:14:00.099Z] Total : 14515.62 56.70 0.00 0.00 8805.29 4199.16 17961.72 00:36:27.390 { 00:36:27.390 "results": [ 00:36:27.390 { 00:36:27.390 "job": "nvme0n1", 00:36:27.390 "core_mask": "0x2", 00:36:27.390 "workload": "randread", 00:36:27.390 "status": "finished", 00:36:27.390 "queue_depth": 128, 00:36:27.390 "io_size": 4096, 00:36:27.390 "runtime": 2.006046, 00:36:27.390 "iops": 14515.61928290777, 00:36:27.390 "mibps": 56.701637823858476, 00:36:27.390 "io_failed": 0, 00:36:27.390 "io_timeout": 0, 00:36:27.390 "avg_latency_us": 8805.287867384539, 00:36:27.390 "min_latency_us": 4199.158518518519, 00:36:27.390 "max_latency_us": 17961.71851851852 00:36:27.390 } 00:36:27.390 ], 00:36:27.390 "core_count": 1 00:36:27.390 } 00:36:27.390 01:14:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:36:27.390 01:14:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:36:27.390 01:14:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:36:27.390 01:14:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:36:27.390 | select(.opcode=="crc32c") 00:36:27.390 | "\(.module_name) \(.executed)"' 00:36:27.390 01:14:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:36:27.648 01:14:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:36:27.648 01:14:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:36:27.648 01:14:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:36:27.648 01:14:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:36:27.648 01:14:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3129896 00:36:27.648 01:14:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 3129896 ']' 00:36:27.648 01:14:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 3129896 00:36:27.648 01:14:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:36:27.648 01:14:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:27.648 01:14:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3129896 00:36:27.648 01:14:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:36:27.648 01:14:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' 
reactor_1 = sudo ']' 00:36:27.648 01:14:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3129896' 00:36:27.648 killing process with pid 3129896 00:36:27.648 01:14:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 3129896 00:36:27.648 Received shutdown signal, test time was about 2.000000 seconds 00:36:27.648 00:36:27.648 Latency(us) 00:36:27.648 [2024-12-06T00:14:00.357Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:27.648 [2024-12-06T00:14:00.357Z] =================================================================================================================== 00:36:27.648 [2024-12-06T00:14:00.357Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:36:27.648 01:14:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 3129896 00:36:28.585 01:14:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:36:28.585 01:14:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:36:28.585 01:14:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:36:28.585 01:14:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:36:28.585 01:14:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:36:28.585 01:14:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:36:28.585 01:14:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:36:28.585 01:14:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3130558 00:36:28.585 01:14:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:36:28.585 01:14:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3130558 /var/tmp/bperf.sock 00:36:28.585 01:14:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 3130558 ']' 00:36:28.585 01:14:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:36:28.585 01:14:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:28.585 01:14:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:36:28.585 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:36:28.585 01:14:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:28.585 01:14:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:36:28.844 [2024-12-06 01:14:01.333953] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 24.03.0 initialization... 
00:36:28.844 [2024-12-06 01:14:01.334080] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3130558 ] 00:36:28.844 I/O size of 131072 is greater than zero copy threshold (65536). 00:36:28.844 Zero copy mechanism will not be used. 00:36:28.844 [2024-12-06 01:14:01.480674] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:29.103 [2024-12-06 01:14:01.621593] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:29.669 01:14:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:29.669 01:14:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:36:29.669 01:14:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:36:29.669 01:14:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:36:29.669 01:14:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:36:30.236 01:14:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:36:30.236 01:14:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:36:30.802 nvme0n1 00:36:30.803 01:14:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:36:30.803 01:14:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:36:30.803 I/O size of 131072 is greater than zero copy threshold (65536). 00:36:30.803 Zero copy mechanism will not be used. 00:36:30.803 Running I/O for 2 seconds... 
00:36:33.113 4252.00 IOPS, 531.50 MiB/s [2024-12-06T00:14:05.822Z] 4312.50 IOPS, 539.06 MiB/s 00:36:33.113 Latency(us) 00:36:33.113 [2024-12-06T00:14:05.822Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:33.113 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:36:33.113 nvme0n1 : 2.00 4311.07 538.88 0.00 0.00 3704.55 1165.08 8689.59 00:36:33.113 [2024-12-06T00:14:05.822Z] =================================================================================================================== 00:36:33.113 [2024-12-06T00:14:05.822Z] Total : 4311.07 538.88 0.00 0.00 3704.55 1165.08 8689.59 00:36:33.113 { 00:36:33.113 "results": [ 00:36:33.113 { 00:36:33.113 "job": "nvme0n1", 00:36:33.113 "core_mask": "0x2", 00:36:33.113 "workload": "randread", 00:36:33.113 "status": "finished", 00:36:33.113 "queue_depth": 16, 00:36:33.113 "io_size": 131072, 00:36:33.113 "runtime": 2.004376, 00:36:33.113 "iops": 4311.067384562577, 00:36:33.113 "mibps": 538.8834230703221, 00:36:33.113 "io_failed": 0, 00:36:33.113 "io_timeout": 0, 00:36:33.113 "avg_latency_us": 3704.553882395299, 00:36:33.113 "min_latency_us": 1165.0844444444444, 00:36:33.113 "max_latency_us": 8689.588148148148 00:36:33.113 } 00:36:33.113 ], 00:36:33.113 "core_count": 1 00:36:33.113 } 00:36:33.113 01:14:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:36:33.113 01:14:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:36:33.113 01:14:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:36:33.113 01:14:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:36:33.113 | select(.opcode=="crc32c") 00:36:33.113 | "\(.module_name) \(.executed)"' 00:36:33.113 01:14:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:36:33.113 01:14:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:36:33.113 01:14:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:36:33.113 01:14:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:36:33.113 01:14:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:36:33.113 01:14:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3130558 00:36:33.113 01:14:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 3130558 ']' 00:36:33.113 01:14:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 3130558 00:36:33.113 01:14:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:36:33.113 01:14:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:33.113 01:14:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3130558 00:36:33.113 01:14:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:36:33.113 01:14:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' 
reactor_1 = sudo ']' 00:36:33.113 01:14:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3130558' 00:36:33.113 killing process with pid 3130558 00:36:33.113 01:14:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 3130558 00:36:33.113 Received shutdown signal, test time was about 2.000000 seconds 00:36:33.113 00:36:33.113 Latency(us) 00:36:33.113 [2024-12-06T00:14:05.822Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:33.113 [2024-12-06T00:14:05.822Z] =================================================================================================================== 00:36:33.113 [2024-12-06T00:14:05.822Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:36:33.113 01:14:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 3130558 00:36:34.048 01:14:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:36:34.048 01:14:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:36:34.048 01:14:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:36:34.048 01:14:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:36:34.048 01:14:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:36:34.048 01:14:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:36:34.048 01:14:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:36:34.048 01:14:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3131229 00:36:34.048 01:14:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3131229 /var/tmp/bperf.sock 00:36:34.048 01:14:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:36:34.048 01:14:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 3131229 ']' 00:36:34.048 01:14:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:36:34.048 01:14:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:34.048 01:14:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:36:34.048 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:36:34.048 01:14:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:34.048 01:14:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:36:34.048 [2024-12-06 01:14:06.755267] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 24.03.0 initialization... 
00:36:34.048 [2024-12-06 01:14:06.755389] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3131229 ] 00:36:34.307 [2024-12-06 01:14:06.901477] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:34.564 [2024-12-06 01:14:07.036754] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:35.169 01:14:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:35.169 01:14:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:36:35.169 01:14:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:36:35.169 01:14:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:36:35.169 01:14:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:36:35.733 01:14:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:36:35.733 01:14:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:36:36.298 nvme0n1 00:36:36.298 01:14:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:36:36.298 01:14:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:36:36.298 Running I/O for 2 seconds... 
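[editor's note] The xtrace above shows the full initiator-side sequence for one clean-digest run (randwrite, 4 KiB, queue depth 128). A condensed, standalone sketch of the same sequence, with the paths, socket and target address taken from this log (assuming an SPDK checkout at ./spdk; the test itself waits for the RPC socket with waitforlisten before issuing RPCs):

  BPERF_SOCK=/var/tmp/bperf.sock
  RPC="./spdk/scripts/rpc.py -s $BPERF_SOCK"
  # start bdevperf paused (-z --wait-for-rpc); -m 2 is core mask 0x2, i.e. core 1,
  # matching the "Core Mask 0x2" in the results and "Reactor started on core 1" above
  ./spdk/build/examples/bdevperf -m 2 -r "$BPERF_SOCK" -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc &
  # once the socket is listening, finish init and attach the NVMe-oF/TCP controller
  # with data digest enabled (--ddgst), exactly as in the RPC lines above
  $RPC framework_start_init
  $RPC bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  # kick off the timed workload ("Running I/O for 2 seconds..." above)
  ./spdk/examples/bdev/bdevperf/bdevperf.py -s "$BPERF_SOCK" perform_tests

The MiB/s column in the results that follow is just IOPS times IO size: 14065.37 x 4096 B / 2^20 is approximately 54.94 MiB/s, which matches the reported mibps.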
00:36:38.169 14238.00 IOPS, 55.62 MiB/s [2024-12-06T00:14:10.878Z] 14059.00 IOPS, 54.92 MiB/s 00:36:38.169 Latency(us) 00:36:38.169 [2024-12-06T00:14:10.878Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:38.169 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:36:38.169 nvme0n1 : 2.01 14065.37 54.94 0.00 0.00 9073.71 3980.71 16311.18 00:36:38.169 [2024-12-06T00:14:10.878Z] =================================================================================================================== 00:36:38.169 [2024-12-06T00:14:10.878Z] Total : 14065.37 54.94 0.00 0.00 9073.71 3980.71 16311.18 00:36:38.169 { 00:36:38.169 "results": [ 00:36:38.169 { 00:36:38.169 "job": "nvme0n1", 00:36:38.169 "core_mask": "0x2", 00:36:38.169 "workload": "randwrite", 00:36:38.169 "status": "finished", 00:36:38.169 "queue_depth": 128, 00:36:38.169 "io_size": 4096, 00:36:38.169 "runtime": 2.010469, 00:36:38.169 "iops": 14065.374795632262, 00:36:38.169 "mibps": 54.942870295438524, 00:36:38.169 "io_failed": 0, 00:36:38.169 "io_timeout": 0, 00:36:38.169 "avg_latency_us": 9073.708921527794, 00:36:38.169 "min_latency_us": 3980.705185185185, 00:36:38.169 "max_latency_us": 16311.182222222222 00:36:38.169 } 00:36:38.169 ], 00:36:38.169 "core_count": 1 00:36:38.169 } 00:36:38.169 01:14:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:36:38.169 01:14:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:36:38.169 01:14:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:36:38.169 01:14:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:36:38.169 | select(.opcode=="crc32c") 00:36:38.169 | "\(.module_name) \(.executed)"' 00:36:38.169 01:14:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:36:38.736 01:14:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:36:38.736 01:14:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:36:38.736 01:14:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:36:38.736 01:14:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:36:38.736 01:14:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3131229 00:36:38.736 01:14:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 3131229 ']' 00:36:38.736 01:14:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 3131229 00:36:38.736 01:14:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:36:38.736 01:14:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:38.736 01:14:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3131229 00:36:38.736 01:14:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:36:38.736 01:14:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # 
'[' reactor_1 = sudo ']' 00:36:38.736 01:14:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3131229' 00:36:38.736 killing process with pid 3131229 00:36:38.736 01:14:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 3131229 00:36:38.736 Received shutdown signal, test time was about 2.000000 seconds 00:36:38.736 00:36:38.736 Latency(us) 00:36:38.736 [2024-12-06T00:14:11.445Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:38.736 [2024-12-06T00:14:11.445Z] =================================================================================================================== 00:36:38.736 [2024-12-06T00:14:11.445Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:36:38.736 01:14:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 3131229 00:36:39.672 01:14:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:36:39.672 01:14:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:36:39.672 01:14:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:36:39.672 01:14:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:36:39.672 01:14:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:36:39.672 01:14:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:36:39.672 01:14:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:36:39.672 01:14:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3131888 00:36:39.672 01:14:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:36:39.672 01:14:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3131888 /var/tmp/bperf.sock 00:36:39.672 01:14:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 3131888 ']' 00:36:39.672 01:14:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:36:39.672 01:14:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:39.672 01:14:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:36:39.672 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:36:39.672 01:14:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:39.672 01:14:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:36:39.672 [2024-12-06 01:14:12.164464] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 24.03.0 initialization... 
00:36:39.672 [2024-12-06 01:14:12.164608] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3131888 ] 00:36:39.672 I/O size of 131072 is greater than zero copy threshold (65536). 00:36:39.672 Zero copy mechanism will not be used. 00:36:39.672 [2024-12-06 01:14:12.304953] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:39.929 [2024-12-06 01:14:12.440481] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:40.495 01:14:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:40.495 01:14:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:36:40.495 01:14:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:36:40.495 01:14:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:36:40.495 01:14:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:36:41.104 01:14:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:36:41.104 01:14:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:36:41.380 nvme0n1 00:36:41.380 01:14:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:36:41.638 01:14:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:36:41.638 I/O size of 131072 is greater than zero copy threshold (65536). 00:36:41.638 Zero copy mechanism will not be used. 00:36:41.638 Running I/O for 2 seconds... 
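[editor's note] Every run in this suite ends with the same crc32c accounting check, visible after each results block above as the read -r acc_module acc_executed / jq pair: the digest work must actually have been executed, and by the expected module (software here, since these runs use scan_dsa=false). A minimal standalone sketch of that check against the bperf socket used in this log:

  # Read "<module> <executed>" for the crc32c opcode from bdevperf's accel stats
  # and require at least one execution by the software module (sketch of digest.sh's check).
  read -r acc_module acc_executed < <(
    ./spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats \
      | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"')
  (( acc_executed > 0 )) && [[ $acc_module == software ]] \
    && echo "crc32c was computed in software ($acc_executed operations)"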
00:36:43.505 4510.00 IOPS, 563.75 MiB/s [2024-12-06T00:14:16.214Z] 4582.00 IOPS, 572.75 MiB/s 00:36:43.505 Latency(us) 00:36:43.505 [2024-12-06T00:14:16.214Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:43.505 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:36:43.505 nvme0n1 : 2.00 4580.37 572.55 0.00 0.00 3482.65 2730.67 8398.32 00:36:43.505 [2024-12-06T00:14:16.214Z] =================================================================================================================== 00:36:43.505 [2024-12-06T00:14:16.214Z] Total : 4580.37 572.55 0.00 0.00 3482.65 2730.67 8398.32 00:36:43.505 { 00:36:43.505 "results": [ 00:36:43.505 { 00:36:43.505 "job": "nvme0n1", 00:36:43.505 "core_mask": "0x2", 00:36:43.505 "workload": "randwrite", 00:36:43.505 "status": "finished", 00:36:43.505 "queue_depth": 16, 00:36:43.505 "io_size": 131072, 00:36:43.505 "runtime": 2.004205, 00:36:43.505 "iops": 4580.3697725532065, 00:36:43.505 "mibps": 572.5462215691508, 00:36:43.505 "io_failed": 0, 00:36:43.505 "io_timeout": 0, 00:36:43.505 "avg_latency_us": 3482.6483750504317, 00:36:43.505 "min_latency_us": 2730.6666666666665, 00:36:43.505 "max_latency_us": 8398.317037037037 00:36:43.505 } 00:36:43.505 ], 00:36:43.505 "core_count": 1 00:36:43.505 } 00:36:43.764 01:14:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:36:43.764 01:14:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:36:43.764 01:14:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:36:43.764 01:14:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:36:43.764 | select(.opcode=="crc32c") 00:36:43.764 | "\(.module_name) \(.executed)"' 00:36:43.764 01:14:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:36:44.022 01:14:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:36:44.022 01:14:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:36:44.022 01:14:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:36:44.022 01:14:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:36:44.022 01:14:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3131888 00:36:44.022 01:14:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 3131888 ']' 00:36:44.022 01:14:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 3131888 00:36:44.022 01:14:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:36:44.022 01:14:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:44.022 01:14:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3131888 00:36:44.022 01:14:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:36:44.022 01:14:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # 
'[' reactor_1 = sudo ']' 00:36:44.022 01:14:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3131888' 00:36:44.022 killing process with pid 3131888 00:36:44.022 01:14:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 3131888 00:36:44.022 Received shutdown signal, test time was about 2.000000 seconds 00:36:44.022 00:36:44.022 Latency(us) 00:36:44.022 [2024-12-06T00:14:16.731Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:44.022 [2024-12-06T00:14:16.731Z] =================================================================================================================== 00:36:44.022 [2024-12-06T00:14:16.731Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:36:44.022 01:14:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 3131888 00:36:44.958 01:14:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 3129739 00:36:44.958 01:14:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 3129739 ']' 00:36:44.958 01:14:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 3129739 00:36:44.958 01:14:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:36:44.958 01:14:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:44.958 01:14:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3129739 00:36:44.958 01:14:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:36:44.958 01:14:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:36:44.958 01:14:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3129739' 00:36:44.958 killing process with pid 3129739 00:36:44.958 01:14:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 3129739 00:36:44.958 01:14:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 3129739 00:36:45.894 00:36:45.894 real 0m24.315s 00:36:45.894 user 0m47.799s 00:36:45.894 sys 0m4.667s 00:36:45.894 01:14:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:45.894 01:14:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:36:45.894 ************************************ 00:36:45.894 END TEST nvmf_digest_clean 00:36:45.894 ************************************ 00:36:45.894 01:14:18 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:36:45.894 01:14:18 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:36:45.894 01:14:18 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:45.894 01:14:18 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:36:45.894 ************************************ 00:36:45.894 START TEST nvmf_digest_error 00:36:45.894 ************************************ 00:36:45.894 01:14:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1129 -- # 
run_digest_error 00:36:45.894 01:14:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:36:45.894 01:14:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:36:45.894 01:14:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:45.894 01:14:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:45.894 01:14:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@509 -- # nvmfpid=3132593 00:36:45.894 01:14:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:36:45.894 01:14:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@510 -- # waitforlisten 3132593 00:36:45.894 01:14:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 3132593 ']' 00:36:45.894 01:14:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:45.894 01:14:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:45.894 01:14:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:45.894 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:45.894 01:14:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:45.894 01:14:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:46.153 [2024-12-06 01:14:18.663559] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 24.03.0 initialization... 00:36:46.153 [2024-12-06 01:14:18.663693] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:46.153 [2024-12-06 01:14:18.801704] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:46.412 [2024-12-06 01:14:18.933673] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:46.412 [2024-12-06 01:14:18.933770] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:46.412 [2024-12-06 01:14:18.933809] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:46.412 [2024-12-06 01:14:18.933866] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:46.412 [2024-12-06 01:14:18.933898] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
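[editor's note] The target for the error-path test is launched with -e 0xFFFF (all tracepoint groups enabled) and --wait-for-rpc, which is why it prints the trace-capture hint above. If you want to snapshot that trace while the test runs, the commands suggested by the log would look roughly like this (the build/bin path is an assumption based on the default SPDK build layout; the shm name nvmf_trace.0 and the -s nvmf -i 0 arguments are quoted from the notices above):

  # decode a snapshot of the running target's tracepoints (instance id 0)
  ./spdk/build/bin/spdk_trace -s nvmf -i 0 > nvmf_trace.txt
  # or keep the raw shared-memory file for offline analysis, as the log suggests
  cp /dev/shm/nvmf_trace.0 .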
00:36:46.412 [2024-12-06 01:14:18.935634] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:46.978 01:14:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:46.978 01:14:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:36:46.978 01:14:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:36:46.978 01:14:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:46.978 01:14:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:47.236 01:14:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:47.236 01:14:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:36:47.236 01:14:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:47.236 01:14:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:47.236 [2024-12-06 01:14:19.714499] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:36:47.236 01:14:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:47.236 01:14:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:36:47.236 01:14:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:36:47.236 01:14:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:47.236 01:14:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:47.495 null0 00:36:47.495 [2024-12-06 01:14:20.119417] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:47.495 [2024-12-06 01:14:20.143743] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:47.495 01:14:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:47.495 01:14:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:36:47.495 01:14:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:36:47.495 01:14:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:36:47.495 01:14:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:36:47.495 01:14:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:36:47.495 01:14:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3132871 00:36:47.495 01:14:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:36:47.495 01:14:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3132871 /var/tmp/bperf.sock 00:36:47.495 01:14:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 3132871 ']' 
00:36:47.495 01:14:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:36:47.495 01:14:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:47.495 01:14:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:36:47.495 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:36:47.495 01:14:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:47.495 01:14:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:47.753 [2024-12-06 01:14:20.237691] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 24.03.0 initialization... 00:36:47.754 [2024-12-06 01:14:20.237845] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3132871 ] 00:36:47.754 [2024-12-06 01:14:20.382720] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:48.012 [2024-12-06 01:14:20.520089] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:48.578 01:14:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:48.578 01:14:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:36:48.578 01:14:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:36:48.578 01:14:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:36:48.836 01:14:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:36:48.836 01:14:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:48.836 01:14:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:48.836 01:14:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:48.836 01:14:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:36:48.836 01:14:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:36:49.403 nvme0n1 00:36:49.403 01:14:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:36:49.403 01:14:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:49.403 01:14:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 
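[editor's note] What turns this into an error-path test is the target-side accel configuration seen above: crc32c was assigned to the accel "error" module when the target started (the accel_rpc.c notice earlier), injection is kept disabled while bdevperf connects, and corruption is switched on just before the timed run. The data digests the target produces then stop matching, the host logs the "data digest error" lines that follow, and each affected read completes with a transient transport error and is retried, since bdev_nvme_set_options was given --bdev-retry-count -1 above. A condensed sketch of the target-side RPC sequence, reproducing the commands as issued through rpc_cmd in this log (the -o/-t/-i arguments are copied verbatim; framework_start_init is inferred from the --wait-for-rpc launch):

  RPC=./spdk/scripts/rpc.py                         # target's default /var/tmp/spdk.sock
  $RPC accel_assign_opc -o crc32c -m error          # route crc32c through the error-injection module
  $RPC framework_start_init                         # finish startup of the --wait-for-rpc target
  # ... null0 bdev, TCP transport and the 10.0.0.2:4420 listener are created as shown earlier ...
  $RPC accel_error_inject_error -o crc32c -t disable          # clean state while the host attaches
  $RPC accel_error_inject_error -o crc32c -t corrupt -i 256   # then corrupt crc32c results for the run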
00:36:49.403 01:14:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:49.403 01:14:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:36:49.403 01:14:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:36:49.403 Running I/O for 2 seconds... 00:36:49.662 [2024-12-06 01:14:22.132298] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:49.662 [2024-12-06 01:14:22.132380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:25021 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:49.662 [2024-12-06 01:14:22.132414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:49.662 [2024-12-06 01:14:22.153106] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:49.662 [2024-12-06 01:14:22.153157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20355 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:49.662 [2024-12-06 01:14:22.153187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:49.662 [2024-12-06 01:14:22.170401] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:49.662 [2024-12-06 01:14:22.170451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22011 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:49.662 [2024-12-06 01:14:22.170481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:49.662 [2024-12-06 01:14:22.190326] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:49.662 [2024-12-06 01:14:22.190386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:14131 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:49.662 [2024-12-06 01:14:22.190416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:49.662 [2024-12-06 01:14:22.208131] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:49.662 [2024-12-06 01:14:22.208180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:23683 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:49.662 [2024-12-06 01:14:22.208211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:49.662 [2024-12-06 01:14:22.226773] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:49.662 [2024-12-06 01:14:22.226833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:10813 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:49.662 [2024-12-06 01:14:22.226866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:49.662 [2024-12-06 01:14:22.243533] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:49.662 [2024-12-06 01:14:22.243583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:7670 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:49.662 [2024-12-06 01:14:22.243613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:49.662 [2024-12-06 01:14:22.262179] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:49.662 [2024-12-06 01:14:22.262236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:11255 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:49.662 [2024-12-06 01:14:22.262267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:49.662 [2024-12-06 01:14:22.282452] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:49.662 [2024-12-06 01:14:22.282502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:16125 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:49.662 [2024-12-06 01:14:22.282532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:49.662 [2024-12-06 01:14:22.307890] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:49.662 [2024-12-06 01:14:22.307947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:10461 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:49.662 [2024-12-06 01:14:22.307973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:49.662 [2024-12-06 01:14:22.325710] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:49.662 [2024-12-06 01:14:22.325760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:10558 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:49.662 [2024-12-06 01:14:22.325789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:49.662 [2024-12-06 01:14:22.344355] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:49.662 [2024-12-06 01:14:22.344404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:11001 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:49.662 [2024-12-06 01:14:22.344444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:49.662 [2024-12-06 01:14:22.364695] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:49.662 [2024-12-06 01:14:22.364744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:11082 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:49.662 
[2024-12-06 01:14:22.364774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:49.921 [2024-12-06 01:14:22.381680] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:49.921 [2024-12-06 01:14:22.381731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:350 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:49.921 [2024-12-06 01:14:22.381760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:49.921 [2024-12-06 01:14:22.403123] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:49.921 [2024-12-06 01:14:22.403171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:7050 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:49.921 [2024-12-06 01:14:22.403201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:49.921 [2024-12-06 01:14:22.423679] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:49.921 [2024-12-06 01:14:22.423734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:25229 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:49.921 [2024-12-06 01:14:22.423764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:49.921 [2024-12-06 01:14:22.439066] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:49.921 [2024-12-06 01:14:22.439114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:568 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:49.921 [2024-12-06 01:14:22.439144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:49.921 [2024-12-06 01:14:22.460463] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:49.921 [2024-12-06 01:14:22.460512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:13805 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:49.921 [2024-12-06 01:14:22.460542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:49.921 [2024-12-06 01:14:22.474476] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:49.921 [2024-12-06 01:14:22.474524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:16963 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:49.921 [2024-12-06 01:14:22.474553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:49.921 [2024-12-06 01:14:22.493646] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:49.921 [2024-12-06 01:14:22.493694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:27 nsid:1 lba:17462 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:49.921 [2024-12-06 01:14:22.493723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:49.921 [2024-12-06 01:14:22.514652] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:49.921 [2024-12-06 01:14:22.514710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:3400 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:49.921 [2024-12-06 01:14:22.514741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:49.921 [2024-12-06 01:14:22.529123] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:49.921 [2024-12-06 01:14:22.529171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:12005 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:49.921 [2024-12-06 01:14:22.529200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:49.921 [2024-12-06 01:14:22.549747] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:49.921 [2024-12-06 01:14:22.549795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:7806 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:49.921 [2024-12-06 01:14:22.549835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:49.921 [2024-12-06 01:14:22.567270] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:49.921 [2024-12-06 01:14:22.567318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:21160 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:49.921 [2024-12-06 01:14:22.567348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:49.921 [2024-12-06 01:14:22.587729] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:49.921 [2024-12-06 01:14:22.587778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:6312 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:49.921 [2024-12-06 01:14:22.587809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:49.921 [2024-12-06 01:14:22.603564] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:49.921 [2024-12-06 01:14:22.603612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6353 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:49.921 [2024-12-06 01:14:22.603642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:49.921 [2024-12-06 01:14:22.621556] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:49.921 [2024-12-06 
01:14:22.621605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:8964 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:49.922 [2024-12-06 01:14:22.621634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:50.180 [2024-12-06 01:14:22.642407] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:50.180 [2024-12-06 01:14:22.642456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24518 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.180 [2024-12-06 01:14:22.642486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:50.180 [2024-12-06 01:14:22.659735] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:50.180 [2024-12-06 01:14:22.659783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:14650 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.180 [2024-12-06 01:14:22.659830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:50.180 [2024-12-06 01:14:22.678330] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:50.180 [2024-12-06 01:14:22.678381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:12910 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.180 [2024-12-06 01:14:22.678411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:50.180 [2024-12-06 01:14:22.701017] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:50.180 [2024-12-06 01:14:22.701065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:10829 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.180 [2024-12-06 01:14:22.701112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:50.180 [2024-12-06 01:14:22.720077] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:50.181 [2024-12-06 01:14:22.720125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24228 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.181 [2024-12-06 01:14:22.720155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:50.181 [2024-12-06 01:14:22.736782] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:50.181 [2024-12-06 01:14:22.736841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:6796 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.181 [2024-12-06 01:14:22.736873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:50.181 [2024-12-06 01:14:22.756525] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:50.181 [2024-12-06 01:14:22.756587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3749 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.181 [2024-12-06 01:14:22.756620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:50.181 [2024-12-06 01:14:22.772536] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:50.181 [2024-12-06 01:14:22.772585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:11392 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.181 [2024-12-06 01:14:22.772614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:50.181 [2024-12-06 01:14:22.792269] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:50.181 [2024-12-06 01:14:22.792318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:8581 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.181 [2024-12-06 01:14:22.792348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:50.181 [2024-12-06 01:14:22.808987] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:50.181 [2024-12-06 01:14:22.809035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:16605 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.181 [2024-12-06 01:14:22.809064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:50.181 [2024-12-06 01:14:22.826798] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:50.181 [2024-12-06 01:14:22.826865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:22164 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.181 [2024-12-06 01:14:22.826896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:50.181 [2024-12-06 01:14:22.847036] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:50.181 [2024-12-06 01:14:22.847085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:3443 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.181 [2024-12-06 01:14:22.847116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:50.181 [2024-12-06 01:14:22.863751] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:50.181 [2024-12-06 01:14:22.863799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:9180 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.181 [2024-12-06 01:14:22.863844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:50.181 [2024-12-06 01:14:22.882203] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:50.181 [2024-12-06 01:14:22.882252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:24349 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.181 [2024-12-06 01:14:22.882280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:50.440 [2024-12-06 01:14:22.901719] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:50.440 [2024-12-06 01:14:22.901775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:12165 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.440 [2024-12-06 01:14:22.901805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:50.440 [2024-12-06 01:14:22.919380] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:50.440 [2024-12-06 01:14:22.919429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:1711 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.440 [2024-12-06 01:14:22.919459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:50.440 [2024-12-06 01:14:22.939599] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:50.440 [2024-12-06 01:14:22.939647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:7246 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.440 [2024-12-06 01:14:22.939676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:50.440 [2024-12-06 01:14:22.957278] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:50.440 [2024-12-06 01:14:22.957327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23788 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.440 [2024-12-06 01:14:22.957357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:50.440 [2024-12-06 01:14:22.974341] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:50.440 [2024-12-06 01:14:22.974390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:21749 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.440 [2024-12-06 01:14:22.974419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:50.440 [2024-12-06 01:14:22.991527] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:50.440 [2024-12-06 01:14:22.991583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:2412 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.440 [2024-12-06 01:14:22.991617] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:50.440 [2024-12-06 01:14:23.009071] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:50.440 [2024-12-06 01:14:23.009119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:25261 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.440 [2024-12-06 01:14:23.009148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:50.440 [2024-12-06 01:14:23.028370] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:50.440 [2024-12-06 01:14:23.028419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:18584 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.440 [2024-12-06 01:14:23.028449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:50.440 [2024-12-06 01:14:23.045570] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:50.440 [2024-12-06 01:14:23.045618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8845 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.440 [2024-12-06 01:14:23.045647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:50.440 [2024-12-06 01:14:23.062316] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:50.440 [2024-12-06 01:14:23.062365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:8337 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.440 [2024-12-06 01:14:23.062395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:50.440 [2024-12-06 01:14:23.084246] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:50.440 [2024-12-06 01:14:23.084295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:11774 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.440 [2024-12-06 01:14:23.084324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:50.440 [2024-12-06 01:14:23.099693] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:50.440 [2024-12-06 01:14:23.099742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:4386 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.440 [2024-12-06 01:14:23.099772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:50.440 13584.00 IOPS, 53.06 MiB/s [2024-12-06T00:14:23.149Z] [2024-12-06 01:14:23.124329] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:50.440 [2024-12-06 01:14:23.124378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:17 nsid:1 lba:22058 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.440 [2024-12-06 01:14:23.124408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:50.440 [2024-12-06 01:14:23.144120] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:50.441 [2024-12-06 01:14:23.144189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:21172 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.441 [2024-12-06 01:14:23.144224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:50.700 [2024-12-06 01:14:23.160629] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:50.700 [2024-12-06 01:14:23.160677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:24178 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.700 [2024-12-06 01:14:23.160706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:50.700 [2024-12-06 01:14:23.179695] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:50.700 [2024-12-06 01:14:23.179743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:10297 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.700 [2024-12-06 01:14:23.179772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:50.700 [2024-12-06 01:14:23.196365] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:50.700 [2024-12-06 01:14:23.196415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:5592 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.700 [2024-12-06 01:14:23.196445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:50.700 [2024-12-06 01:14:23.213946] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:50.700 [2024-12-06 01:14:23.213994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:11295 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.700 [2024-12-06 01:14:23.214023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:50.700 [2024-12-06 01:14:23.232264] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:50.700 [2024-12-06 01:14:23.232311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:18288 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.700 [2024-12-06 01:14:23.232341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:50.700 [2024-12-06 01:14:23.254659] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:50.700 
[2024-12-06 01:14:23.254708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:14397 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.700 [2024-12-06 01:14:23.254737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:50.700 [2024-12-06 01:14:23.270682] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:50.700 [2024-12-06 01:14:23.270731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:16850 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.700 [2024-12-06 01:14:23.270760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:50.700 [2024-12-06 01:14:23.290571] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:50.700 [2024-12-06 01:14:23.290621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:16584 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.700 [2024-12-06 01:14:23.290659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:50.700 [2024-12-06 01:14:23.310822] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:50.700 [2024-12-06 01:14:23.310870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14730 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.700 [2024-12-06 01:14:23.310899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:50.700 [2024-12-06 01:14:23.328421] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:50.700 [2024-12-06 01:14:23.328470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:21053 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.700 [2024-12-06 01:14:23.328500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:50.700 [2024-12-06 01:14:23.346554] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:50.700 [2024-12-06 01:14:23.346603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:22148 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.700 [2024-12-06 01:14:23.346633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:50.700 [2024-12-06 01:14:23.365182] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:50.700 [2024-12-06 01:14:23.365231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:8790 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.701 [2024-12-06 01:14:23.365260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:50.701 [2024-12-06 01:14:23.383039] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:50.701 [2024-12-06 01:14:23.383096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:8322 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.701 [2024-12-06 01:14:23.383134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:50.701 [2024-12-06 01:14:23.406454] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:50.701 [2024-12-06 01:14:23.406504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:4088 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.701 [2024-12-06 01:14:23.406534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:50.959 [2024-12-06 01:14:23.427636] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:50.959 [2024-12-06 01:14:23.427687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:16007 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.959 [2024-12-06 01:14:23.427716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:50.959 [2024-12-06 01:14:23.452957] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:50.959 [2024-12-06 01:14:23.453007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:18979 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.959 [2024-12-06 01:14:23.453037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:50.959 [2024-12-06 01:14:23.472430] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:50.959 [2024-12-06 01:14:23.472496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:332 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.959 [2024-12-06 01:14:23.472528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:50.960 [2024-12-06 01:14:23.494451] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:50.960 [2024-12-06 01:14:23.494515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:18911 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.960 [2024-12-06 01:14:23.494551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:50.960 [2024-12-06 01:14:23.513087] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:50.960 [2024-12-06 01:14:23.513142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:6314 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.960 [2024-12-06 01:14:23.513181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:50.960 [2024-12-06 01:14:23.529591] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:50.960 [2024-12-06 01:14:23.529640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:20275 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.960 [2024-12-06 01:14:23.529670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:50.960 [2024-12-06 01:14:23.547988] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:50.960 [2024-12-06 01:14:23.548041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:18446 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.960 [2024-12-06 01:14:23.548073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:50.960 [2024-12-06 01:14:23.565718] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:50.960 [2024-12-06 01:14:23.565768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:9488 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.960 [2024-12-06 01:14:23.565797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:50.960 [2024-12-06 01:14:23.583363] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:50.960 [2024-12-06 01:14:23.583412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:16870 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.960 [2024-12-06 01:14:23.583442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:50.960 [2024-12-06 01:14:23.603121] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:50.960 [2024-12-06 01:14:23.603170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:9280 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.960 [2024-12-06 01:14:23.603200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:50.960 [2024-12-06 01:14:23.618731] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:50.960 [2024-12-06 01:14:23.618779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:17944 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.960 [2024-12-06 01:14:23.618809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:50.960 [2024-12-06 01:14:23.637690] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:50.960 [2024-12-06 01:14:23.637738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:19803 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.960 [2024-12-06 01:14:23.637768] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:50.960 [2024-12-06 01:14:23.654462] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:50.960 [2024-12-06 01:14:23.654510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:7016 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.960 [2024-12-06 01:14:23.654539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:51.219 [2024-12-06 01:14:23.672411] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:51.219 [2024-12-06 01:14:23.672459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:20914 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:51.219 [2024-12-06 01:14:23.672489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:51.219 [2024-12-06 01:14:23.689534] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:51.219 [2024-12-06 01:14:23.689584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:25584 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:51.219 [2024-12-06 01:14:23.689614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:51.219 [2024-12-06 01:14:23.710206] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:51.219 [2024-12-06 01:14:23.710255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:4927 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:51.219 [2024-12-06 01:14:23.710284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:51.219 [2024-12-06 01:14:23.725640] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:51.219 [2024-12-06 01:14:23.725690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:10379 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:51.219 [2024-12-06 01:14:23.725719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:51.219 [2024-12-06 01:14:23.743238] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:51.219 [2024-12-06 01:14:23.743287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18333 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:51.219 [2024-12-06 01:14:23.743317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:51.219 [2024-12-06 01:14:23.765714] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:51.219 [2024-12-06 01:14:23.765763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:19272 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:51.219 [2024-12-06 01:14:23.765792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:51.219 [2024-12-06 01:14:23.787533] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:51.219 [2024-12-06 01:14:23.787595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:9710 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:51.219 [2024-12-06 01:14:23.787625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:51.219 [2024-12-06 01:14:23.806263] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:51.219 [2024-12-06 01:14:23.806312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:10458 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:51.219 [2024-12-06 01:14:23.806342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:51.219 [2024-12-06 01:14:23.823886] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:51.219 [2024-12-06 01:14:23.823933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:23500 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:51.219 [2024-12-06 01:14:23.823963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:51.219 [2024-12-06 01:14:23.840514] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:51.219 [2024-12-06 01:14:23.840563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:8453 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:51.219 [2024-12-06 01:14:23.840591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:51.219 [2024-12-06 01:14:23.861254] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:51.219 [2024-12-06 01:14:23.861303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:5443 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:51.219 [2024-12-06 01:14:23.861333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:51.219 [2024-12-06 01:14:23.876601] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:51.219 [2024-12-06 01:14:23.876650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15889 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:51.219 [2024-12-06 01:14:23.876680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:51.219 [2024-12-06 01:14:23.894596] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:51.219 [2024-12-06 01:14:23.894645] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:22454 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:51.219 [2024-12-06 01:14:23.894692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:51.219 [2024-12-06 01:14:23.913493] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:51.219 [2024-12-06 01:14:23.913542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:10042 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:51.219 [2024-12-06 01:14:23.913572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:51.478 [2024-12-06 01:14:23.930912] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:51.478 [2024-12-06 01:14:23.930961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:23506 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:51.478 [2024-12-06 01:14:23.930989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:51.478 [2024-12-06 01:14:23.948538] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:51.478 [2024-12-06 01:14:23.948587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:2588 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:51.478 [2024-12-06 01:14:23.948616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:51.478 [2024-12-06 01:14:23.965858] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:51.478 [2024-12-06 01:14:23.965906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:2689 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:51.478 [2024-12-06 01:14:23.965934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:51.478 [2024-12-06 01:14:23.983470] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:51.478 [2024-12-06 01:14:23.983520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2857 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:51.478 [2024-12-06 01:14:23.983549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:51.478 [2024-12-06 01:14:24.000750] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:51.478 [2024-12-06 01:14:24.000799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:5335 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:51.478 [2024-12-06 01:14:24.000838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:51.479 [2024-12-06 01:14:24.018161] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error 
on tqpair=(0x6150001f2a00) 00:36:51.479 [2024-12-06 01:14:24.018210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:277 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:51.479 [2024-12-06 01:14:24.018239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:51.479 [2024-12-06 01:14:24.035613] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:51.479 [2024-12-06 01:14:24.035671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:8965 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:51.479 [2024-12-06 01:14:24.035714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:51.479 [2024-12-06 01:14:24.052844] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:51.479 [2024-12-06 01:14:24.052892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:1863 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:51.479 [2024-12-06 01:14:24.052921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:51.479 [2024-12-06 01:14:24.070257] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:51.479 [2024-12-06 01:14:24.070309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:1604 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:51.479 [2024-12-06 01:14:24.070339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:51.479 [2024-12-06 01:14:24.087551] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:51.479 [2024-12-06 01:14:24.087616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:24393 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:51.479 [2024-12-06 01:14:24.087647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:51.479 [2024-12-06 01:14:24.104758] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:51.479 [2024-12-06 01:14:24.104824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:7627 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:51.479 [2024-12-06 01:14:24.104865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:51.479 13678.50 IOPS, 53.43 MiB/s 00:36:51.479 Latency(us) 00:36:51.479 [2024-12-06T00:14:24.188Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:51.479 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:36:51.479 nvme0n1 : 2.01 13707.52 53.54 0.00 0.00 9322.59 4684.61 32234.00 00:36:51.479 [2024-12-06T00:14:24.188Z] =================================================================================================================== 00:36:51.479 [2024-12-06T00:14:24.188Z] Total : 13707.52 53.54 0.00 
0.00 9322.59 4684.61 32234.00 00:36:51.479 { 00:36:51.479 "results": [ 00:36:51.479 { 00:36:51.479 "job": "nvme0n1", 00:36:51.479 "core_mask": "0x2", 00:36:51.479 "workload": "randread", 00:36:51.479 "status": "finished", 00:36:51.479 "queue_depth": 128, 00:36:51.479 "io_size": 4096, 00:36:51.479 "runtime": 2.007366, 00:36:51.479 "iops": 13707.515221439438, 00:36:51.479 "mibps": 53.544981333747806, 00:36:51.479 "io_failed": 0, 00:36:51.479 "io_timeout": 0, 00:36:51.479 "avg_latency_us": 9322.593298552221, 00:36:51.479 "min_latency_us": 4684.61037037037, 00:36:51.479 "max_latency_us": 32234.002962962964 00:36:51.479 } 00:36:51.479 ], 00:36:51.479 "core_count": 1 00:36:51.479 } 00:36:51.479 01:14:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:36:51.479 01:14:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:36:51.479 01:14:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:36:51.479 01:14:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:36:51.479 | .driver_specific 00:36:51.479 | .nvme_error 00:36:51.479 | .status_code 00:36:51.479 | .command_transient_transport_error' 00:36:51.737 01:14:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 107 > 0 )) 00:36:51.738 01:14:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3132871 00:36:51.738 01:14:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 3132871 ']' 00:36:51.738 01:14:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 3132871 00:36:51.738 01:14:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:36:51.738 01:14:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:51.738 01:14:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3132871 00:36:51.997 01:14:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:36:51.997 01:14:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:36:51.997 01:14:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3132871' 00:36:51.997 killing process with pid 3132871 00:36:51.997 01:14:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 3132871 00:36:51.997 Received shutdown signal, test time was about 2.000000 seconds 00:36:51.997 00:36:51.997 Latency(us) 00:36:51.997 [2024-12-06T00:14:24.706Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:51.997 [2024-12-06T00:14:24.706Z] =================================================================================================================== 00:36:51.997 [2024-12-06T00:14:24.706Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:36:51.997 01:14:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 3132871 00:36:52.931 01:14:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # 
run_bperf_err randread 131072 16 00:36:52.931 01:14:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:36:52.931 01:14:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:36:52.931 01:14:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:36:52.931 01:14:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:36:52.931 01:14:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3133413 00:36:52.931 01:14:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:36:52.931 01:14:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3133413 /var/tmp/bperf.sock 00:36:52.931 01:14:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 3133413 ']' 00:36:52.931 01:14:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:36:52.931 01:14:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:52.931 01:14:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:36:52.931 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:36:52.931 01:14:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:52.931 01:14:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:52.931 [2024-12-06 01:14:25.475661] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 24.03.0 initialization... 00:36:52.931 [2024-12-06 01:14:25.475789] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3133413 ] 00:36:52.931 I/O size of 131072 is greater than zero copy threshold (65536). 00:36:52.931 Zero copy mechanism will not be used. 
00:36:52.931 [2024-12-06 01:14:25.618382] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:53.189 [2024-12-06 01:14:25.758459] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:54.122 01:14:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:54.122 01:14:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:36:54.122 01:14:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:36:54.123 01:14:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:36:54.123 01:14:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:36:54.123 01:14:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:54.123 01:14:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:54.123 01:14:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:54.123 01:14:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:36:54.123 01:14:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:36:54.689 nvme0n1 00:36:54.689 01:14:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:36:54.689 01:14:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:54.689 01:14:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:54.689 01:14:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:54.689 01:14:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:36:54.689 01:14:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:36:54.947 I/O size of 131072 is greater than zero copy threshold (65536). 00:36:54.947 Zero copy mechanism will not be used. 00:36:54.947 Running I/O for 2 seconds... 
00:36:54.947 [2024-12-06 01:14:27.415042] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:54.947 [2024-12-06 01:14:27.415158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:54.947 [2024-12-06 01:14:27.415196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:54.947 [2024-12-06 01:14:27.422380] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:54.947 [2024-12-06 01:14:27.422430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:54.947 [2024-12-06 01:14:27.422460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:54.947 [2024-12-06 01:14:27.428976] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:54.947 [2024-12-06 01:14:27.429018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:54.947 [2024-12-06 01:14:27.429046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:54.947 [2024-12-06 01:14:27.435444] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:54.947 [2024-12-06 01:14:27.435492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:54.947 [2024-12-06 01:14:27.435521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:54.947 [2024-12-06 01:14:27.442034] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:54.947 [2024-12-06 01:14:27.442075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:54.947 [2024-12-06 01:14:27.442120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:54.947 [2024-12-06 01:14:27.448525] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:54.947 [2024-12-06 01:14:27.448571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:54.947 [2024-12-06 01:14:27.448601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:54.947 [2024-12-06 01:14:27.454986] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:54.947 [2024-12-06 01:14:27.455028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:54.947 [2024-12-06 01:14:27.455054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:54.947 [2024-12-06 01:14:27.461377] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:54.947 [2024-12-06 01:14:27.461423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:54.947 [2024-12-06 01:14:27.461452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:54.947 [2024-12-06 01:14:27.467936] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:54.947 [2024-12-06 01:14:27.467975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:54.947 [2024-12-06 01:14:27.468015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:54.947 [2024-12-06 01:14:27.474299] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:54.947 [2024-12-06 01:14:27.474362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:54.947 [2024-12-06 01:14:27.474391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:54.947 [2024-12-06 01:14:27.480797] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:54.947 [2024-12-06 01:14:27.480868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:54.947 [2024-12-06 01:14:27.480896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:54.947 [2024-12-06 01:14:27.487355] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:54.947 [2024-12-06 01:14:27.487402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:54.947 [2024-12-06 01:14:27.487432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:54.947 [2024-12-06 01:14:27.494448] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:54.947 [2024-12-06 01:14:27.494494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:54.947 [2024-12-06 01:14:27.494523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:54.947 [2024-12-06 01:14:27.500954] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:54.947 [2024-12-06 01:14:27.500997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:54.947 
[2024-12-06 01:14:27.501023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:54.947 [2024-12-06 01:14:27.506975] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:54.947 [2024-12-06 01:14:27.507025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:54.947 [2024-12-06 01:14:27.507052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:54.947 [2024-12-06 01:14:27.511323] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:54.947 [2024-12-06 01:14:27.511369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:54.947 [2024-12-06 01:14:27.511398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:54.947 [2024-12-06 01:14:27.518823] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:54.947 [2024-12-06 01:14:27.518884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:54.947 [2024-12-06 01:14:27.518909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:54.947 [2024-12-06 01:14:27.527235] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:54.947 [2024-12-06 01:14:27.527284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:54.947 [2024-12-06 01:14:27.527314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:54.947 [2024-12-06 01:14:27.534528] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:54.947 [2024-12-06 01:14:27.534575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:54.947 [2024-12-06 01:14:27.534603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:54.947 [2024-12-06 01:14:27.541831] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:54.947 [2024-12-06 01:14:27.541891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:54.947 [2024-12-06 01:14:27.541916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:54.947 [2024-12-06 01:14:27.549231] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:54.947 [2024-12-06 01:14:27.549279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:0 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:54.947 [2024-12-06 01:14:27.549308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:54.947 [2024-12-06 01:14:27.556519] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:54.947 [2024-12-06 01:14:27.556565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:54.947 [2024-12-06 01:14:27.556594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:54.947 [2024-12-06 01:14:27.563691] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:54.947 [2024-12-06 01:14:27.563738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:54.947 [2024-12-06 01:14:27.563768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:54.947 [2024-12-06 01:14:27.570559] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:54.947 [2024-12-06 01:14:27.570606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:54.947 [2024-12-06 01:14:27.570636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:54.947 [2024-12-06 01:14:27.578644] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:54.947 [2024-12-06 01:14:27.578691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:54.947 [2024-12-06 01:14:27.578720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:54.947 [2024-12-06 01:14:27.586980] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:54.947 [2024-12-06 01:14:27.587021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:54.947 [2024-12-06 01:14:27.587047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:54.947 [2024-12-06 01:14:27.594385] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:54.947 [2024-12-06 01:14:27.594432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:54.947 [2024-12-06 01:14:27.594462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:54.947 [2024-12-06 01:14:27.602228] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:54.947 [2024-12-06 
01:14:27.602276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:54.947 [2024-12-06 01:14:27.602305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:54.947 [2024-12-06 01:14:27.608530] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:54.947 [2024-12-06 01:14:27.608577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:54.947 [2024-12-06 01:14:27.608607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:54.947 [2024-12-06 01:14:27.614672] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:54.947 [2024-12-06 01:14:27.614717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:54.947 [2024-12-06 01:14:27.614746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:54.947 [2024-12-06 01:14:27.621018] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:54.948 [2024-12-06 01:14:27.621059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:54.948 [2024-12-06 01:14:27.621084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:54.948 [2024-12-06 01:14:27.627358] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:54.948 [2024-12-06 01:14:27.627417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:54.948 [2024-12-06 01:14:27.627447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:54.948 [2024-12-06 01:14:27.633665] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:54.948 [2024-12-06 01:14:27.633711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:54.948 [2024-12-06 01:14:27.633740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:54.948 [2024-12-06 01:14:27.640055] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:54.948 [2024-12-06 01:14:27.640094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:54.948 [2024-12-06 01:14:27.640136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:54.948 [2024-12-06 01:14:27.646409] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:54.948 [2024-12-06 01:14:27.646455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:54.948 [2024-12-06 01:14:27.646485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:54.948 [2024-12-06 01:14:27.652701] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:54.948 [2024-12-06 01:14:27.652747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:54.948 [2024-12-06 01:14:27.652776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:55.207 [2024-12-06 01:14:27.659265] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.207 [2024-12-06 01:14:27.659312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.207 [2024-12-06 01:14:27.659341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:55.207 [2024-12-06 01:14:27.665704] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.207 [2024-12-06 01:14:27.665750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.207 [2024-12-06 01:14:27.665780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:55.207 [2024-12-06 01:14:27.671940] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.207 [2024-12-06 01:14:27.671982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.207 [2024-12-06 01:14:27.672008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:55.207 [2024-12-06 01:14:27.678210] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.207 [2024-12-06 01:14:27.678257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.207 [2024-12-06 01:14:27.678286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:55.207 [2024-12-06 01:14:27.684560] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.207 [2024-12-06 01:14:27.684606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.207 [2024-12-06 01:14:27.684635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:55.207 
[2024-12-06 01:14:27.690934] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.207 [2024-12-06 01:14:27.690975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.207 [2024-12-06 01:14:27.691001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:55.207 [2024-12-06 01:14:27.697277] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.207 [2024-12-06 01:14:27.697323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.207 [2024-12-06 01:14:27.697352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:55.207 [2024-12-06 01:14:27.703638] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.208 [2024-12-06 01:14:27.703684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.208 [2024-12-06 01:14:27.703713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:55.208 [2024-12-06 01:14:27.710100] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.208 [2024-12-06 01:14:27.710166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.208 [2024-12-06 01:14:27.710195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:55.208 [2024-12-06 01:14:27.716500] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.208 [2024-12-06 01:14:27.716546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.208 [2024-12-06 01:14:27.716574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:55.208 [2024-12-06 01:14:27.722922] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.208 [2024-12-06 01:14:27.722960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.208 [2024-12-06 01:14:27.722984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:55.208 [2024-12-06 01:14:27.729230] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.208 [2024-12-06 01:14:27.729277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.208 [2024-12-06 01:14:27.729306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:55.208 [2024-12-06 01:14:27.735546] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.208 [2024-12-06 01:14:27.735594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.208 [2024-12-06 01:14:27.735633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:55.208 [2024-12-06 01:14:27.741397] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.208 [2024-12-06 01:14:27.741447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.208 [2024-12-06 01:14:27.741477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:55.208 [2024-12-06 01:14:27.745400] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.208 [2024-12-06 01:14:27.745442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.208 [2024-12-06 01:14:27.745468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:55.208 [2024-12-06 01:14:27.750526] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.208 [2024-12-06 01:14:27.750572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.208 [2024-12-06 01:14:27.750602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:55.208 [2024-12-06 01:14:27.755174] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.208 [2024-12-06 01:14:27.755220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.208 [2024-12-06 01:14:27.755249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:55.208 [2024-12-06 01:14:27.759036] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.208 [2024-12-06 01:14:27.759077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.208 [2024-12-06 01:14:27.759113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:55.208 [2024-12-06 01:14:27.764575] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.208 [2024-12-06 01:14:27.764620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.208 [2024-12-06 01:14:27.764648] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:55.208 [2024-12-06 01:14:27.771030] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.208 [2024-12-06 01:14:27.771085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.208 [2024-12-06 01:14:27.771121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:55.208 [2024-12-06 01:14:27.777498] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.208 [2024-12-06 01:14:27.777545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.208 [2024-12-06 01:14:27.777574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:55.208 [2024-12-06 01:14:27.783980] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.208 [2024-12-06 01:14:27.784019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.208 [2024-12-06 01:14:27.784043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:55.208 [2024-12-06 01:14:27.790326] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.208 [2024-12-06 01:14:27.790372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.208 [2024-12-06 01:14:27.790401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:55.208 [2024-12-06 01:14:27.796673] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.208 [2024-12-06 01:14:27.796718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.208 [2024-12-06 01:14:27.796747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:55.208 [2024-12-06 01:14:27.803024] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.208 [2024-12-06 01:14:27.803066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.208 [2024-12-06 01:14:27.803092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:55.208 [2024-12-06 01:14:27.809351] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.208 [2024-12-06 01:14:27.809397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22048 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.208 [2024-12-06 01:14:27.809426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:55.208 [2024-12-06 01:14:27.815668] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.208 [2024-12-06 01:14:27.815715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.208 [2024-12-06 01:14:27.815744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:55.208 [2024-12-06 01:14:27.822068] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.208 [2024-12-06 01:14:27.822109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.208 [2024-12-06 01:14:27.822134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:55.208 [2024-12-06 01:14:27.828195] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.208 [2024-12-06 01:14:27.828240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.208 [2024-12-06 01:14:27.828269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:55.208 [2024-12-06 01:14:27.834488] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.208 [2024-12-06 01:14:27.834533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.208 [2024-12-06 01:14:27.834572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:55.208 [2024-12-06 01:14:27.840810] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.208 [2024-12-06 01:14:27.840879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.208 [2024-12-06 01:14:27.840905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:55.208 [2024-12-06 01:14:27.847182] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.208 [2024-12-06 01:14:27.847229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.208 [2024-12-06 01:14:27.847258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:55.208 [2024-12-06 01:14:27.853609] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.208 [2024-12-06 01:14:27.853655] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.208 [2024-12-06 01:14:27.853684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:55.208 [2024-12-06 01:14:27.859946] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.208 [2024-12-06 01:14:27.860002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.208 [2024-12-06 01:14:27.860042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:55.209 [2024-12-06 01:14:27.866223] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.209 [2024-12-06 01:14:27.866269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.209 [2024-12-06 01:14:27.866298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:55.209 [2024-12-06 01:14:27.872611] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.209 [2024-12-06 01:14:27.872656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.209 [2024-12-06 01:14:27.872686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:55.209 [2024-12-06 01:14:27.879034] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.209 [2024-12-06 01:14:27.879073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.209 [2024-12-06 01:14:27.879097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:55.209 [2024-12-06 01:14:27.885549] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.209 [2024-12-06 01:14:27.885595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.209 [2024-12-06 01:14:27.885625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:55.209 [2024-12-06 01:14:27.893277] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.209 [2024-12-06 01:14:27.893325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.209 [2024-12-06 01:14:27.893354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:55.209 [2024-12-06 01:14:27.902069] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x6150001f2a00) 00:36:55.209 [2024-12-06 01:14:27.902126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.209 [2024-12-06 01:14:27.902166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:55.209 [2024-12-06 01:14:27.909764] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.209 [2024-12-06 01:14:27.909812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.209 [2024-12-06 01:14:27.909867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:55.468 [2024-12-06 01:14:27.917683] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.468 [2024-12-06 01:14:27.917733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.468 [2024-12-06 01:14:27.917763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:55.468 [2024-12-06 01:14:27.923838] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.468 [2024-12-06 01:14:27.923899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.468 [2024-12-06 01:14:27.923926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:55.468 [2024-12-06 01:14:27.930133] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.468 [2024-12-06 01:14:27.930180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.468 [2024-12-06 01:14:27.930209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:55.468 [2024-12-06 01:14:27.936533] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.468 [2024-12-06 01:14:27.936579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.468 [2024-12-06 01:14:27.936608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:55.469 [2024-12-06 01:14:27.942794] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.469 [2024-12-06 01:14:27.942864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.469 [2024-12-06 01:14:27.942891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:55.469 [2024-12-06 01:14:27.949068] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.469 [2024-12-06 01:14:27.949129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.469 [2024-12-06 01:14:27.949168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:55.469 [2024-12-06 01:14:27.955436] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.469 [2024-12-06 01:14:27.955484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.469 [2024-12-06 01:14:27.955513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:55.469 [2024-12-06 01:14:27.961902] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.469 [2024-12-06 01:14:27.961958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.469 [2024-12-06 01:14:27.961985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:55.469 [2024-12-06 01:14:27.968118] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.469 [2024-12-06 01:14:27.968175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.469 [2024-12-06 01:14:27.968217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:55.469 [2024-12-06 01:14:27.974881] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.469 [2024-12-06 01:14:27.974929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.469 [2024-12-06 01:14:27.974957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:55.469 [2024-12-06 01:14:27.981489] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.469 [2024-12-06 01:14:27.981537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.469 [2024-12-06 01:14:27.981566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:55.469 [2024-12-06 01:14:27.988069] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.469 [2024-12-06 01:14:27.988127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.469 [2024-12-06 01:14:27.988168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:55.469 [2024-12-06 01:14:27.995060] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.469 [2024-12-06 01:14:27.995119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.469 [2024-12-06 01:14:27.995144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:55.469 [2024-12-06 01:14:28.001408] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.469 [2024-12-06 01:14:28.001455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.469 [2024-12-06 01:14:28.001485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:55.469 [2024-12-06 01:14:28.007779] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.469 [2024-12-06 01:14:28.007844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.469 [2024-12-06 01:14:28.007888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:55.469 [2024-12-06 01:14:28.014131] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.469 [2024-12-06 01:14:28.014179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.469 [2024-12-06 01:14:28.014208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:55.469 [2024-12-06 01:14:28.020514] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.469 [2024-12-06 01:14:28.020561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.469 [2024-12-06 01:14:28.020590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:55.469 [2024-12-06 01:14:28.026994] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.469 [2024-12-06 01:14:28.027035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.469 [2024-12-06 01:14:28.027102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:55.469 [2024-12-06 01:14:28.033311] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.469 [2024-12-06 01:14:28.033358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.469 [2024-12-06 01:14:28.033388] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:55.469 [2024-12-06 01:14:28.039667] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.469 [2024-12-06 01:14:28.039714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.469 [2024-12-06 01:14:28.039744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:55.469 [2024-12-06 01:14:28.046105] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.469 [2024-12-06 01:14:28.046147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.469 [2024-12-06 01:14:28.046191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:55.469 [2024-12-06 01:14:28.052447] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.469 [2024-12-06 01:14:28.052504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.469 [2024-12-06 01:14:28.052534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:55.469 [2024-12-06 01:14:28.058771] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.469 [2024-12-06 01:14:28.058827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.469 [2024-12-06 01:14:28.058881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:55.469 [2024-12-06 01:14:28.065203] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.469 [2024-12-06 01:14:28.065249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.469 [2024-12-06 01:14:28.065279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:55.469 [2024-12-06 01:14:28.071528] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.469 [2024-12-06 01:14:28.071584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.469 [2024-12-06 01:14:28.071613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:55.469 [2024-12-06 01:14:28.077999] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.469 [2024-12-06 01:14:28.078040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15872 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:36:55.469 [2024-12-06 01:14:28.078080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:55.469 [2024-12-06 01:14:28.084333] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.469 [2024-12-06 01:14:28.084380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.469 [2024-12-06 01:14:28.084409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:55.469 [2024-12-06 01:14:28.090454] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.469 [2024-12-06 01:14:28.090501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.469 [2024-12-06 01:14:28.090531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:55.469 [2024-12-06 01:14:28.096703] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.469 [2024-12-06 01:14:28.096750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.469 [2024-12-06 01:14:28.096779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:55.469 [2024-12-06 01:14:28.103076] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.469 [2024-12-06 01:14:28.103135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.469 [2024-12-06 01:14:28.103165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:55.469 [2024-12-06 01:14:28.109260] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.470 [2024-12-06 01:14:28.109305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.470 [2024-12-06 01:14:28.109334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:55.470 [2024-12-06 01:14:28.115555] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.470 [2024-12-06 01:14:28.115615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.470 [2024-12-06 01:14:28.115645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:55.470 [2024-12-06 01:14:28.121903] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.470 [2024-12-06 01:14:28.121942] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.470 [2024-12-06 01:14:28.121966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:55.470 [2024-12-06 01:14:28.128253] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.470 [2024-12-06 01:14:28.128300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.470 [2024-12-06 01:14:28.128331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:55.470 [2024-12-06 01:14:28.134604] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.470 [2024-12-06 01:14:28.134650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.470 [2024-12-06 01:14:28.134680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:55.470 [2024-12-06 01:14:28.140952] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.470 [2024-12-06 01:14:28.140994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.470 [2024-12-06 01:14:28.141021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:55.470 [2024-12-06 01:14:28.147287] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.470 [2024-12-06 01:14:28.147334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.470 [2024-12-06 01:14:28.147363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:55.470 [2024-12-06 01:14:28.153731] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.470 [2024-12-06 01:14:28.153777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.470 [2024-12-06 01:14:28.153807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:55.470 [2024-12-06 01:14:28.160719] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.470 [2024-12-06 01:14:28.160765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.470 [2024-12-06 01:14:28.160795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:55.470 [2024-12-06 01:14:28.167316] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x6150001f2a00) 00:36:55.470 [2024-12-06 01:14:28.167363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.470 [2024-12-06 01:14:28.167402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:55.470 [2024-12-06 01:14:28.173777] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.470 [2024-12-06 01:14:28.173835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.470 [2024-12-06 01:14:28.173883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:55.729 [2024-12-06 01:14:28.179965] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.729 [2024-12-06 01:14:28.180008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.729 [2024-12-06 01:14:28.180036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:55.729 [2024-12-06 01:14:28.186382] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.729 [2024-12-06 01:14:28.186429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.729 [2024-12-06 01:14:28.186459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:55.729 [2024-12-06 01:14:28.192638] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.729 [2024-12-06 01:14:28.192683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.729 [2024-12-06 01:14:28.192711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:55.729 [2024-12-06 01:14:28.199197] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.729 [2024-12-06 01:14:28.199244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.729 [2024-12-06 01:14:28.199274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:55.729 [2024-12-06 01:14:28.205591] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.729 [2024-12-06 01:14:28.205638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.729 [2024-12-06 01:14:28.205667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:55.729 [2024-12-06 01:14:28.211959] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.729 [2024-12-06 01:14:28.212017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.729 [2024-12-06 01:14:28.212043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:55.729 [2024-12-06 01:14:28.218338] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.729 [2024-12-06 01:14:28.218385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.729 [2024-12-06 01:14:28.218414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:55.729 [2024-12-06 01:14:28.224687] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.729 [2024-12-06 01:14:28.224744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.729 [2024-12-06 01:14:28.224774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:55.729 [2024-12-06 01:14:28.231015] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.729 [2024-12-06 01:14:28.231068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.729 [2024-12-06 01:14:28.231093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:55.729 [2024-12-06 01:14:28.237311] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.729 [2024-12-06 01:14:28.237357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.729 [2024-12-06 01:14:28.237387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:55.729 [2024-12-06 01:14:28.243525] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.729 [2024-12-06 01:14:28.243571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.729 [2024-12-06 01:14:28.243600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:55.729 [2024-12-06 01:14:28.249824] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.729 [2024-12-06 01:14:28.249890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.729 [2024-12-06 01:14:28.249917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:55.729 [2024-12-06 01:14:28.256252] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.729 [2024-12-06 01:14:28.256299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.729 [2024-12-06 01:14:28.256328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:55.729 [2024-12-06 01:14:28.262695] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.730 [2024-12-06 01:14:28.262742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.730 [2024-12-06 01:14:28.262770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:55.730 [2024-12-06 01:14:28.269093] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.730 [2024-12-06 01:14:28.269151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.730 [2024-12-06 01:14:28.269192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:55.730 [2024-12-06 01:14:28.275497] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.730 [2024-12-06 01:14:28.275549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.730 [2024-12-06 01:14:28.275578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:55.730 [2024-12-06 01:14:28.282017] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.730 [2024-12-06 01:14:28.282074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.730 [2024-12-06 01:14:28.282100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:55.730 [2024-12-06 01:14:28.288283] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.730 [2024-12-06 01:14:28.288330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.730 [2024-12-06 01:14:28.288360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:55.730 [2024-12-06 01:14:28.294761] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.730 [2024-12-06 01:14:28.294807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.730 [2024-12-06 01:14:28.294848] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:55.730 [2024-12-06 01:14:28.301217] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.730 [2024-12-06 01:14:28.301282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.730 [2024-12-06 01:14:28.301311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:55.730 [2024-12-06 01:14:28.307566] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.730 [2024-12-06 01:14:28.307613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.730 [2024-12-06 01:14:28.307642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:55.730 [2024-12-06 01:14:28.313919] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.730 [2024-12-06 01:14:28.313976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.730 [2024-12-06 01:14:28.314017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:55.730 [2024-12-06 01:14:28.320241] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.730 [2024-12-06 01:14:28.320286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.730 [2024-12-06 01:14:28.320315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:55.730 [2024-12-06 01:14:28.327416] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.730 [2024-12-06 01:14:28.327464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.730 [2024-12-06 01:14:28.327494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:55.730 [2024-12-06 01:14:28.335613] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.730 [2024-12-06 01:14:28.335670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.730 [2024-12-06 01:14:28.335701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:55.730 [2024-12-06 01:14:28.343870] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.730 [2024-12-06 01:14:28.343928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:36:55.730 [2024-12-06 01:14:28.343956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:55.730 [2024-12-06 01:14:28.351105] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.730 [2024-12-06 01:14:28.351164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.730 [2024-12-06 01:14:28.351195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:55.730 [2024-12-06 01:14:28.358490] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.730 [2024-12-06 01:14:28.358538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.730 [2024-12-06 01:14:28.358568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:55.730 [2024-12-06 01:14:28.365969] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.730 [2024-12-06 01:14:28.366011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.730 [2024-12-06 01:14:28.366038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:55.730 [2024-12-06 01:14:28.371807] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.730 [2024-12-06 01:14:28.371876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.730 [2024-12-06 01:14:28.371903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:55.730 [2024-12-06 01:14:28.376898] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.730 [2024-12-06 01:14:28.376940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.730 [2024-12-06 01:14:28.376968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:55.730 [2024-12-06 01:14:28.384301] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.730 [2024-12-06 01:14:28.384350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.730 [2024-12-06 01:14:28.384380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:55.730 [2024-12-06 01:14:28.392075] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.730 [2024-12-06 01:14:28.392119] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.730 [2024-12-06 01:14:28.392147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:55.730 [2024-12-06 01:14:28.400324] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.730 [2024-12-06 01:14:28.400373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.730 [2024-12-06 01:14:28.400403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:55.730 [2024-12-06 01:14:28.407603] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.730 [2024-12-06 01:14:28.407650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.730 [2024-12-06 01:14:28.407680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:55.730 4724.00 IOPS, 590.50 MiB/s [2024-12-06T00:14:28.439Z] [2024-12-06 01:14:28.416485] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.730 [2024-12-06 01:14:28.416534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.730 [2024-12-06 01:14:28.416564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:55.730 [2024-12-06 01:14:28.423308] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.730 [2024-12-06 01:14:28.423356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.730 [2024-12-06 01:14:28.423385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:55.730 [2024-12-06 01:14:28.429647] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.730 [2024-12-06 01:14:28.429695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.730 [2024-12-06 01:14:28.429725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:55.730 [2024-12-06 01:14:28.434160] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.730 [2024-12-06 01:14:28.434201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.730 [2024-12-06 01:14:28.434229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:55.991 [2024-12-06 01:14:28.439067] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.991 [2024-12-06 01:14:28.439109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.991 [2024-12-06 01:14:28.439154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:55.991 [2024-12-06 01:14:28.445113] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.991 [2024-12-06 01:14:28.445170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.991 [2024-12-06 01:14:28.445201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:55.991 [2024-12-06 01:14:28.451404] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.991 [2024-12-06 01:14:28.451466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.991 [2024-12-06 01:14:28.451496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:55.991 [2024-12-06 01:14:28.457663] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.991 [2024-12-06 01:14:28.457710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.991 [2024-12-06 01:14:28.457740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:55.991 [2024-12-06 01:14:28.464025] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.991 [2024-12-06 01:14:28.464067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.991 [2024-12-06 01:14:28.464094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:55.991 [2024-12-06 01:14:28.470464] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.991 [2024-12-06 01:14:28.470511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.991 [2024-12-06 01:14:28.470540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:55.991 [2024-12-06 01:14:28.476944] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.991 [2024-12-06 01:14:28.476988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.991 [2024-12-06 01:14:28.477030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:55.991 [2024-12-06 01:14:28.483420] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.991 [2024-12-06 01:14:28.483466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.991 [2024-12-06 01:14:28.483496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:55.991 [2024-12-06 01:14:28.489790] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.991 [2024-12-06 01:14:28.489860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.991 [2024-12-06 01:14:28.489888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:55.991 [2024-12-06 01:14:28.496204] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.991 [2024-12-06 01:14:28.496252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.991 [2024-12-06 01:14:28.496281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:55.991 [2024-12-06 01:14:28.502430] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.991 [2024-12-06 01:14:28.502477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.991 [2024-12-06 01:14:28.502506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:55.991 [2024-12-06 01:14:28.508690] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.991 [2024-12-06 01:14:28.508736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.991 [2024-12-06 01:14:28.508766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:55.991 [2024-12-06 01:14:28.514999] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.991 [2024-12-06 01:14:28.515041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.991 [2024-12-06 01:14:28.515067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:55.991 [2024-12-06 01:14:28.521454] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.991 [2024-12-06 01:14:28.521502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.991 [2024-12-06 01:14:28.521531] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:55.991 [2024-12-06 01:14:28.527757] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.991 [2024-12-06 01:14:28.527803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.991 [2024-12-06 01:14:28.527856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:55.991 [2024-12-06 01:14:28.534173] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.991 [2024-12-06 01:14:28.534220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.991 [2024-12-06 01:14:28.534249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:55.991 [2024-12-06 01:14:28.540599] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.991 [2024-12-06 01:14:28.540645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.991 [2024-12-06 01:14:28.540674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:55.991 [2024-12-06 01:14:28.547046] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.991 [2024-12-06 01:14:28.547088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.991 [2024-12-06 01:14:28.547114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:55.991 [2024-12-06 01:14:28.553402] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.991 [2024-12-06 01:14:28.553447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.991 [2024-12-06 01:14:28.553476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:55.991 [2024-12-06 01:14:28.559865] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.991 [2024-12-06 01:14:28.559908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.991 [2024-12-06 01:14:28.559959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:55.991 [2024-12-06 01:14:28.566084] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.991 [2024-12-06 01:14:28.566141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22368 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:36:55.991 [2024-12-06 01:14:28.566172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:55.991 [2024-12-06 01:14:28.572480] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.991 [2024-12-06 01:14:28.572527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.992 [2024-12-06 01:14:28.572556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:55.992 [2024-12-06 01:14:28.578976] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.992 [2024-12-06 01:14:28.579018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.992 [2024-12-06 01:14:28.579044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:55.992 [2024-12-06 01:14:28.585315] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.992 [2024-12-06 01:14:28.585361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.992 [2024-12-06 01:14:28.585389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:55.992 [2024-12-06 01:14:28.591534] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.992 [2024-12-06 01:14:28.591580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.992 [2024-12-06 01:14:28.591610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:55.992 [2024-12-06 01:14:28.597872] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.992 [2024-12-06 01:14:28.597915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.992 [2024-12-06 01:14:28.597941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:55.992 [2024-12-06 01:14:28.604093] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.992 [2024-12-06 01:14:28.604153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.992 [2024-12-06 01:14:28.604181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:55.992 [2024-12-06 01:14:28.610449] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.992 [2024-12-06 01:14:28.610496] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.992 [2024-12-06 01:14:28.610525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:55.992 [2024-12-06 01:14:28.616908] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.992 [2024-12-06 01:14:28.616949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.992 [2024-12-06 01:14:28.616976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:55.992 [2024-12-06 01:14:28.623434] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.992 [2024-12-06 01:14:28.623480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.992 [2024-12-06 01:14:28.623509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:55.992 [2024-12-06 01:14:28.629767] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.992 [2024-12-06 01:14:28.629822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.992 [2024-12-06 01:14:28.629872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:55.992 [2024-12-06 01:14:28.636048] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.992 [2024-12-06 01:14:28.636088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.992 [2024-12-06 01:14:28.636126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:55.992 [2024-12-06 01:14:28.642406] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.992 [2024-12-06 01:14:28.642452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.992 [2024-12-06 01:14:28.642481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:55.992 [2024-12-06 01:14:28.648725] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.992 [2024-12-06 01:14:28.648770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.992 [2024-12-06 01:14:28.648800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:55.992 [2024-12-06 01:14:28.655063] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x6150001f2a00) 00:36:55.992 [2024-12-06 01:14:28.655123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.992 [2024-12-06 01:14:28.655152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:55.992 [2024-12-06 01:14:28.661293] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.992 [2024-12-06 01:14:28.661339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.992 [2024-12-06 01:14:28.661367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:55.992 [2024-12-06 01:14:28.667809] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.992 [2024-12-06 01:14:28.667877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.992 [2024-12-06 01:14:28.667913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:55.992 [2024-12-06 01:14:28.674144] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.992 [2024-12-06 01:14:28.674190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.992 [2024-12-06 01:14:28.674219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:55.992 [2024-12-06 01:14:28.680458] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.992 [2024-12-06 01:14:28.680503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.992 [2024-12-06 01:14:28.680531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:55.992 [2024-12-06 01:14:28.686714] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.992 [2024-12-06 01:14:28.686760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.992 [2024-12-06 01:14:28.686788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:55.992 [2024-12-06 01:14:28.692992] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.992 [2024-12-06 01:14:28.693033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.992 [2024-12-06 01:14:28.693060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:56.253 [2024-12-06 01:14:28.698926] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:56.253 [2024-12-06 01:14:28.698971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.253 [2024-12-06 01:14:28.698997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:56.253 [2024-12-06 01:14:28.702757] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:56.253 [2024-12-06 01:14:28.702802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.253 [2024-12-06 01:14:28.702841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:56.253 [2024-12-06 01:14:28.709053] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:56.253 [2024-12-06 01:14:28.709093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.253 [2024-12-06 01:14:28.709135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:56.253 [2024-12-06 01:14:28.715359] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:56.253 [2024-12-06 01:14:28.715405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.253 [2024-12-06 01:14:28.715434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:56.253 [2024-12-06 01:14:28.721744] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:56.253 [2024-12-06 01:14:28.721792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.253 [2024-12-06 01:14:28.721829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:56.253 [2024-12-06 01:14:28.728108] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:56.253 [2024-12-06 01:14:28.728167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.253 [2024-12-06 01:14:28.728196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:56.253 [2024-12-06 01:14:28.734477] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:56.253 [2024-12-06 01:14:28.734524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.253 [2024-12-06 01:14:28.734554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:56.253 [2024-12-06 01:14:28.740803] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:56.253 [2024-12-06 01:14:28.740873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.253 [2024-12-06 01:14:28.740899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:56.253 [2024-12-06 01:14:28.747363] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:56.253 [2024-12-06 01:14:28.747410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.253 [2024-12-06 01:14:28.747439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:56.253 [2024-12-06 01:14:28.754291] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:56.253 [2024-12-06 01:14:28.754340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.253 [2024-12-06 01:14:28.754369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:56.253 [2024-12-06 01:14:28.762277] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:56.253 [2024-12-06 01:14:28.762326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.253 [2024-12-06 01:14:28.762357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:56.253 [2024-12-06 01:14:28.769766] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:56.253 [2024-12-06 01:14:28.769823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.253 [2024-12-06 01:14:28.769880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:56.253 [2024-12-06 01:14:28.776959] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:56.253 [2024-12-06 01:14:28.776999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.253 [2024-12-06 01:14:28.777034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:56.253 [2024-12-06 01:14:28.784630] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:56.253 [2024-12-06 01:14:28.784678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.254 [2024-12-06 01:14:28.784709] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:56.254 [2024-12-06 01:14:28.792365] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:56.254 [2024-12-06 01:14:28.792415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.254 [2024-12-06 01:14:28.792445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:56.254 [2024-12-06 01:14:28.799827] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:56.254 [2024-12-06 01:14:28.799886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.254 [2024-12-06 01:14:28.799911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:56.254 [2024-12-06 01:14:28.807410] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:56.254 [2024-12-06 01:14:28.807458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.254 [2024-12-06 01:14:28.807488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:56.254 [2024-12-06 01:14:28.814900] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:56.254 [2024-12-06 01:14:28.814957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.254 [2024-12-06 01:14:28.814982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:56.254 [2024-12-06 01:14:28.822344] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:56.254 [2024-12-06 01:14:28.822405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.254 [2024-12-06 01:14:28.822435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:56.254 [2024-12-06 01:14:28.830246] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:56.254 [2024-12-06 01:14:28.830294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.254 [2024-12-06 01:14:28.830324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:56.254 [2024-12-06 01:14:28.837989] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:56.254 [2024-12-06 01:14:28.838030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:22816 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:36:56.254 [2024-12-06 01:14:28.838055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:56.254 [2024-12-06 01:14:28.845360] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:56.254 [2024-12-06 01:14:28.845418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.254 [2024-12-06 01:14:28.845448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:56.254 [2024-12-06 01:14:28.853112] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:56.254 [2024-12-06 01:14:28.853169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.254 [2024-12-06 01:14:28.853201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:56.254 [2024-12-06 01:14:28.860645] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:56.254 [2024-12-06 01:14:28.860692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.254 [2024-12-06 01:14:28.860722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:56.254 [2024-12-06 01:14:28.868399] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:56.254 [2024-12-06 01:14:28.868447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.254 [2024-12-06 01:14:28.868499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:56.254 [2024-12-06 01:14:28.874986] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:56.254 [2024-12-06 01:14:28.875030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.254 [2024-12-06 01:14:28.875057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:56.254 [2024-12-06 01:14:28.879920] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:56.254 [2024-12-06 01:14:28.879962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.254 [2024-12-06 01:14:28.879987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:56.254 [2024-12-06 01:14:28.887674] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:56.254 [2024-12-06 01:14:28.887722] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.254 [2024-12-06 01:14:28.887751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:56.254 [2024-12-06 01:14:28.895146] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:56.254 [2024-12-06 01:14:28.895199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.254 [2024-12-06 01:14:28.895229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:56.254 [2024-12-06 01:14:28.900500] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:56.254 [2024-12-06 01:14:28.900546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.254 [2024-12-06 01:14:28.900585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:56.254 [2024-12-06 01:14:28.906298] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:56.254 [2024-12-06 01:14:28.906345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.254 [2024-12-06 01:14:28.906375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:56.254 [2024-12-06 01:14:28.912060] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:56.254 [2024-12-06 01:14:28.912124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.254 [2024-12-06 01:14:28.912154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:56.254 [2024-12-06 01:14:28.916616] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:56.254 [2024-12-06 01:14:28.916661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.254 [2024-12-06 01:14:28.916691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:56.254 [2024-12-06 01:14:28.922077] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:56.254 [2024-12-06 01:14:28.922139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.254 [2024-12-06 01:14:28.922165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:56.254 [2024-12-06 01:14:28.927029] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x6150001f2a00) 00:36:56.254 [2024-12-06 01:14:28.927071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.254 [2024-12-06 01:14:28.927097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:56.254 [2024-12-06 01:14:28.932498] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:56.254 [2024-12-06 01:14:28.932544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.254 [2024-12-06 01:14:28.932574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:56.254 [2024-12-06 01:14:28.937946] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:56.254 [2024-12-06 01:14:28.937988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.254 [2024-12-06 01:14:28.938016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:56.254 [2024-12-06 01:14:28.943964] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:56.254 [2024-12-06 01:14:28.944005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.254 [2024-12-06 01:14:28.944033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:56.254 [2024-12-06 01:14:28.949313] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:56.254 [2024-12-06 01:14:28.949372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.254 [2024-12-06 01:14:28.949403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:56.254 [2024-12-06 01:14:28.955449] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:56.254 [2024-12-06 01:14:28.955495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.254 [2024-12-06 01:14:28.955525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:56.254 [2024-12-06 01:14:28.960094] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:56.255 [2024-12-06 01:14:28.960154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.255 [2024-12-06 01:14:28.960183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:56.514 [2024-12-06 01:14:28.966796] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:56.514 [2024-12-06 01:14:28.966868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.514 [2024-12-06 01:14:28.966896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:56.514 [2024-12-06 01:14:28.974733] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:56.514 [2024-12-06 01:14:28.974780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.514 [2024-12-06 01:14:28.974810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:56.514 [2024-12-06 01:14:28.983544] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:56.514 [2024-12-06 01:14:28.983592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.514 [2024-12-06 01:14:28.983622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:56.514 [2024-12-06 01:14:28.991110] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:56.514 [2024-12-06 01:14:28.991173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.514 [2024-12-06 01:14:28.991203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:56.515 [2024-12-06 01:14:28.998148] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:56.515 [2024-12-06 01:14:28.998195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.515 [2024-12-06 01:14:28.998224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:56.515 [2024-12-06 01:14:29.005285] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:56.515 [2024-12-06 01:14:29.005333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.515 [2024-12-06 01:14:29.005380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:56.515 [2024-12-06 01:14:29.012539] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:56.515 [2024-12-06 01:14:29.012585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.515 [2024-12-06 01:14:29.012615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:56.515 [2024-12-06 01:14:29.019892] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:56.515 [2024-12-06 01:14:29.019933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.515 [2024-12-06 01:14:29.019957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:56.515 [2024-12-06 01:14:29.027246] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:56.515 [2024-12-06 01:14:29.027293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.515 [2024-12-06 01:14:29.027322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:56.515 [2024-12-06 01:14:29.034591] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:56.515 [2024-12-06 01:14:29.034638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.515 [2024-12-06 01:14:29.034667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:56.515 [2024-12-06 01:14:29.042012] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:56.515 [2024-12-06 01:14:29.042055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.515 [2024-12-06 01:14:29.042081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:56.515 [2024-12-06 01:14:29.049466] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:56.515 [2024-12-06 01:14:29.049513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.515 [2024-12-06 01:14:29.049543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:56.515 [2024-12-06 01:14:29.056806] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:56.515 [2024-12-06 01:14:29.056876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.515 [2024-12-06 01:14:29.056902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:56.515 [2024-12-06 01:14:29.064134] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:56.515 [2024-12-06 01:14:29.064197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.515 [2024-12-06 01:14:29.064227] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:56.515 [2024-12-06 01:14:29.071389] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:56.515 [2024-12-06 01:14:29.071445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.515 [2024-12-06 01:14:29.071476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:56.515 [2024-12-06 01:14:29.078711] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:56.515 [2024-12-06 01:14:29.078758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.515 [2024-12-06 01:14:29.078787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:56.515 [2024-12-06 01:14:29.086279] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:56.515 [2024-12-06 01:14:29.086327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.515 [2024-12-06 01:14:29.086357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:56.515 [2024-12-06 01:14:29.093408] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:56.515 [2024-12-06 01:14:29.093455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.515 [2024-12-06 01:14:29.093485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:56.515 [2024-12-06 01:14:29.099862] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:56.515 [2024-12-06 01:14:29.099903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.515 [2024-12-06 01:14:29.099928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:56.515 [2024-12-06 01:14:29.106159] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:56.515 [2024-12-06 01:14:29.106206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.515 [2024-12-06 01:14:29.106235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:56.515 [2024-12-06 01:14:29.112460] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:56.515 [2024-12-06 01:14:29.112506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3424 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:36:56.515 [2024-12-06 01:14:29.112535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:56.515 [2024-12-06 01:14:29.119280] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:56.515 [2024-12-06 01:14:29.119327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.515 [2024-12-06 01:14:29.119357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:56.515 [2024-12-06 01:14:29.125910] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:56.515 [2024-12-06 01:14:29.125952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.515 [2024-12-06 01:14:29.125985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:56.515 [2024-12-06 01:14:29.132283] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:56.515 [2024-12-06 01:14:29.132330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.515 [2024-12-06 01:14:29.132359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:56.515 [2024-12-06 01:14:29.138677] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:56.515 [2024-12-06 01:14:29.138723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.515 [2024-12-06 01:14:29.138752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:56.515 [2024-12-06 01:14:29.145164] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:56.515 [2024-12-06 01:14:29.145224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.515 [2024-12-06 01:14:29.145253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:56.515 [2024-12-06 01:14:29.151757] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:56.515 [2024-12-06 01:14:29.151831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.515 [2024-12-06 01:14:29.151877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:56.515 [2024-12-06 01:14:29.158577] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:56.515 [2024-12-06 01:14:29.158624] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.515 [2024-12-06 01:14:29.158654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:56.515 [2024-12-06 01:14:29.164303] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:56.515 [2024-12-06 01:14:29.164345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.515 [2024-12-06 01:14:29.164372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:56.515 [2024-12-06 01:14:29.169755] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:56.515 [2024-12-06 01:14:29.169797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.515 [2024-12-06 01:14:29.169832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:56.515 [2024-12-06 01:14:29.176782] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:56.516 [2024-12-06 01:14:29.176848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.516 [2024-12-06 01:14:29.176874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:56.516 [2024-12-06 01:14:29.183697] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:56.516 [2024-12-06 01:14:29.183745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.516 [2024-12-06 01:14:29.183770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:56.516 [2024-12-06 01:14:29.191544] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:56.516 [2024-12-06 01:14:29.191592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.516 [2024-12-06 01:14:29.191621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:56.516 [2024-12-06 01:14:29.199549] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:56.516 [2024-12-06 01:14:29.199597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.516 [2024-12-06 01:14:29.199627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:56.516 [2024-12-06 01:14:29.208036] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x6150001f2a00) 00:36:56.516 [2024-12-06 01:14:29.208078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.516 [2024-12-06 01:14:29.208122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:56.516 [2024-12-06 01:14:29.216531] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:56.516 [2024-12-06 01:14:29.216579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.516 [2024-12-06 01:14:29.216609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:56.775 [2024-12-06 01:14:29.223441] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:56.775 [2024-12-06 01:14:29.223491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.775 [2024-12-06 01:14:29.223522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:56.775 [2024-12-06 01:14:29.230617] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:56.775 [2024-12-06 01:14:29.230665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.775 [2024-12-06 01:14:29.230694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:56.775 [2024-12-06 01:14:29.238190] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:56.775 [2024-12-06 01:14:29.238237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.775 [2024-12-06 01:14:29.238267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:56.775 [2024-12-06 01:14:29.245897] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:56.775 [2024-12-06 01:14:29.245943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.775 [2024-12-06 01:14:29.245985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:56.775 [2024-12-06 01:14:29.253431] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:56.775 [2024-12-06 01:14:29.253478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.775 [2024-12-06 01:14:29.253507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:56.775 [2024-12-06 01:14:29.260933] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:56.775 [2024-12-06 01:14:29.260975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.775 [2024-12-06 01:14:29.261016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:56.775 [2024-12-06 01:14:29.268338] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:56.775 [2024-12-06 01:14:29.268385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.775 [2024-12-06 01:14:29.268414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:56.775 [2024-12-06 01:14:29.275360] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:56.775 [2024-12-06 01:14:29.275407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.775 [2024-12-06 01:14:29.275437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:56.775 [2024-12-06 01:14:29.283497] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:56.776 [2024-12-06 01:14:29.283544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.776 [2024-12-06 01:14:29.283574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:56.776 [2024-12-06 01:14:29.292213] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:56.776 [2024-12-06 01:14:29.292262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.776 [2024-12-06 01:14:29.292292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:56.776 [2024-12-06 01:14:29.299629] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:56.776 [2024-12-06 01:14:29.299675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.776 [2024-12-06 01:14:29.299704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:56.776 [2024-12-06 01:14:29.307018] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:56.776 [2024-12-06 01:14:29.307060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.776 [2024-12-06 01:14:29.307101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:56.776 [2024-12-06 01:14:29.314313] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:56.776 [2024-12-06 01:14:29.314373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.776 [2024-12-06 01:14:29.314404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:56.776 [2024-12-06 01:14:29.321653] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:56.776 [2024-12-06 01:14:29.321699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.776 [2024-12-06 01:14:29.321729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:56.776 [2024-12-06 01:14:29.328956] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:56.776 [2024-12-06 01:14:29.328998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.776 [2024-12-06 01:14:29.329025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:56.776 [2024-12-06 01:14:29.336328] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:56.776 [2024-12-06 01:14:29.336376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.776 [2024-12-06 01:14:29.336406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:56.776 [2024-12-06 01:14:29.343664] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:56.776 [2024-12-06 01:14:29.343710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.776 [2024-12-06 01:14:29.343739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:56.776 [2024-12-06 01:14:29.351167] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:56.776 [2024-12-06 01:14:29.351217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.776 [2024-12-06 01:14:29.351246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:56.776 [2024-12-06 01:14:29.358411] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:56.776 [2024-12-06 01:14:29.358459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.776 [2024-12-06 01:14:29.358488] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:56.776 [2024-12-06 01:14:29.365683] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:56.776 [2024-12-06 01:14:29.365730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.776 [2024-12-06 01:14:29.365760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:56.776 [2024-12-06 01:14:29.372858] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:56.776 [2024-12-06 01:14:29.372920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.776 [2024-12-06 01:14:29.372962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:56.776 [2024-12-06 01:14:29.380232] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:56.776 [2024-12-06 01:14:29.380279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.776 [2024-12-06 01:14:29.380309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:56.776 [2024-12-06 01:14:29.387607] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:56.776 [2024-12-06 01:14:29.387652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.776 [2024-12-06 01:14:29.387682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:56.776 [2024-12-06 01:14:29.396274] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:56.776 [2024-12-06 01:14:29.396323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.776 [2024-12-06 01:14:29.396352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:56.776 [2024-12-06 01:14:29.404942] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:56.776 [2024-12-06 01:14:29.404985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.776 [2024-12-06 01:14:29.405012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:56.776 4649.50 IOPS, 581.19 MiB/s [2024-12-06T00:14:29.485Z] [2024-12-06 01:14:29.415896] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:56.776 [2024-12-06 01:14:29.415939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:12 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.776 [2024-12-06 01:14:29.415965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:56.776 00:36:56.776 Latency(us) 00:36:56.776 [2024-12-06T00:14:29.485Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:56.776 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:36:56.776 nvme0n1 : 2.01 4645.55 580.69 0.00 0.00 3436.57 1116.54 12621.75 00:36:56.776 [2024-12-06T00:14:29.485Z] =================================================================================================================== 00:36:56.776 [2024-12-06T00:14:29.485Z] Total : 4645.55 580.69 0.00 0.00 3436.57 1116.54 12621.75 00:36:56.776 { 00:36:56.776 "results": [ 00:36:56.776 { 00:36:56.776 "job": "nvme0n1", 00:36:56.776 "core_mask": "0x2", 00:36:56.776 "workload": "randread", 00:36:56.776 "status": "finished", 00:36:56.776 "queue_depth": 16, 00:36:56.776 "io_size": 131072, 00:36:56.776 "runtime": 2.005143, 00:36:56.776 "iops": 4645.553957997011, 00:36:56.776 "mibps": 580.6942447496264, 00:36:56.776 "io_failed": 0, 00:36:56.776 "io_timeout": 0, 00:36:56.776 "avg_latency_us": 3436.571116120952, 00:36:56.776 "min_latency_us": 1116.5392592592593, 00:36:56.776 "max_latency_us": 12621.748148148148 00:36:56.776 } 00:36:56.776 ], 00:36:56.776 "core_count": 1 00:36:56.776 } 00:36:56.776 01:14:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:36:56.776 01:14:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:36:56.776 01:14:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:36:56.776 01:14:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:36:56.776 | .driver_specific 00:36:56.776 | .nvme_error 00:36:56.776 | .status_code 00:36:56.776 | .command_transient_transport_error' 00:36:57.035 01:14:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 301 > 0 )) 00:36:57.035 01:14:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3133413 00:36:57.035 01:14:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 3133413 ']' 00:36:57.035 01:14:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 3133413 00:36:57.035 01:14:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:36:57.035 01:14:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:57.035 01:14:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3133413 00:36:57.035 01:14:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:36:57.035 01:14:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:36:57.035 01:14:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3133413' 00:36:57.035 killing process with pid 3133413 
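For reference, the pass/fail check traced just above reduces to one RPC plus a jq filter: bdev_get_iostat is queried for nvme0n1 and the count of completions with the TRANSIENT TRANSPORT ERROR status (00/22) is pulled out of driver_specific.nvme_error (301 in this run). The summary above is also self-consistent with the 131072-byte IO size, since 4645.55 IOPS at 128 KiB per IO is 4645.55 / 8 ≈ 580.69 MiB/s. A minimal standalone sketch of the get_transient_errcount readback, assuming an SPDK checkout's rpc.py, the bperf socket used in this run, and a controller created with NVMe error statistics enabled (bdev_nvme_set_options --nvme-error-stat, as the trace for the following pass shows):

  #!/usr/bin/env bash
  # Hypothetical standalone version of the get_transient_errcount helper traced above.
  rpc=./scripts/rpc.py            # placeholder path; this run uses the jenkins workspace copy
  sock=/var/tmp/bperf.sock        # bdevperf RPC socket from the trace above

  errs=$("$rpc" -s "$sock" bdev_get_iostat -b nvme0n1 \
    | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')

  # The digest-error test only passes if at least one corrupted digest was
  # surfaced as a transient transport error (this run counted 301).
  (( errs > 0 )) || exit 1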
00:36:57.035 01:14:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 3133413 00:36:57.035 Received shutdown signal, test time was about 2.000000 seconds 00:36:57.035 00:36:57.035 Latency(us) 00:36:57.035 [2024-12-06T00:14:29.744Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:57.035 [2024-12-06T00:14:29.744Z] =================================================================================================================== 00:36:57.035 [2024-12-06T00:14:29.744Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:36:57.035 01:14:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 3133413 00:36:58.411 01:14:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:36:58.411 01:14:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:36:58.411 01:14:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:36:58.411 01:14:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:36:58.411 01:14:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:36:58.411 01:14:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3134075 00:36:58.411 01:14:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:36:58.411 01:14:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3134075 /var/tmp/bperf.sock 00:36:58.411 01:14:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 3134075 ']' 00:36:58.411 01:14:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:36:58.411 01:14:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:58.411 01:14:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:36:58.411 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:36:58.411 01:14:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:58.411 01:14:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:58.411 [2024-12-06 01:14:30.773238] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 24.03.0 initialization... 
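The next pass, run_bperf_err randwrite 4096 128, repeats the pattern with a write workload: bdevperf is launched on core 1 (-m 2) with a 4096-byte IO size, queue depth 128, a 2-second run time, and -z so it idles until it has been configured over RPC, and the script then blocks until the RPC socket is listening. A rough sketch of that launch, with the waitforlisten helper from autotest_common.sh approximated by a simple poll loop (an assumption, not the helper's actual implementation):

  # Launch bdevperf in "wait for RPC" mode, as in the trace above.
  bdevperf=./build/examples/bdevperf   # placeholder path; this run uses the jenkins workspace build
  sock=/var/tmp/bperf.sock
  "$bdevperf" -m 2 -r "$sock" -w randwrite -o 4096 -t 2 -q 128 -z &
  bperfpid=$!

  # Stand-in for waitforlisten: poll until the UNIX-domain RPC socket appears.
  for _ in $(seq 1 100); do
    [[ -S "$sock" ]] && break
    sleep 0.1
  done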
00:36:58.411 [2024-12-06 01:14:30.773373] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3134075 ] 00:36:58.411 [2024-12-06 01:14:30.919053] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:58.411 [2024-12-06 01:14:31.058944] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:59.348 01:14:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:59.348 01:14:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:36:59.348 01:14:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:36:59.348 01:14:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:36:59.348 01:14:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:36:59.348 01:14:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:59.348 01:14:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:59.348 01:14:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:59.348 01:14:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:36:59.348 01:14:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:36:59.914 nvme0n1 00:36:59.914 01:14:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:36:59.914 01:14:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:59.914 01:14:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:59.914 01:14:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:59.914 01:14:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:36:59.914 01:14:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:36:59.914 Running I/O for 2 seconds... 
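Everything between the bdevperf start and "Running I/O for 2 seconds..." above is configuration: NVMe error statistics and the retry count are set on the initiator, any previous crc32c error injection is cleared, the controller is attached over TCP with data digest enabled (--ddgst), crc32c corruption is re-armed with the arguments shown in the trace, and only then is the queued job started with perform_tests. Note that in the trace the accel_error_inject_error calls go through rpc_cmd (the default application socket, i.e. the nvmf target in this test) while the bdev_nvme_* calls go through the bperf socket. Collected in one place, under the same placeholder-path assumptions as the sketches above:

  # Initiator (bdevperf) side, over the bperf socket:
  bperf_rpc="./scripts/rpc.py -s /var/tmp/bperf.sock"
  $bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

  # Target side (rpc_cmd in the trace uses the application's default socket):
  rpc="./scripts/rpc.py"
  $rpc accel_error_inject_error -o crc32c -t disable      # clear any stale injection

  # Initiator: attach the TCP controller with data digest enabled on its qpairs.
  $bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

  # Target: re-arm crc32c corruption with the arguments from the trace.
  $rpc accel_error_inject_error -o crc32c -t corrupt -i 256

  # Start the queued randwrite job; each corrupted digest then surfaces as a
  # "data digest error" / TRANSIENT TRANSPORT ERROR (00/22) completion like those below.
  ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests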
00:37:00.174 [2024-12-06 01:14:32.641634] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e2c28 00:37:00.174 [2024-12-06 01:14:32.644282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:18904 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:00.174 [2024-12-06 01:14:32.644356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:37:00.174 [2024-12-06 01:14:32.657834] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f2510 00:37:00.174 [2024-12-06 01:14:32.659371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:6918 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:00.174 [2024-12-06 01:14:32.659434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:37:00.174 [2024-12-06 01:14:32.676122] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e99d8 00:37:00.174 [2024-12-06 01:14:32.678505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:2339 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:00.174 [2024-12-06 01:14:32.678555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:37:00.174 [2024-12-06 01:14:32.691490] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f7970 00:37:00.174 [2024-12-06 01:14:32.693208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:23892 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:00.174 [2024-12-06 01:14:32.693266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:00.174 [2024-12-06 01:14:32.706993] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e6738 00:37:00.174 [2024-12-06 01:14:32.708619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:9782 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:00.174 [2024-12-06 01:14:32.708676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:37:00.174 [2024-12-06 01:14:32.724477] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173fa7d8 00:37:00.174 [2024-12-06 01:14:32.725780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:3196 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:00.174 [2024-12-06 01:14:32.725829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:37:00.174 [2024-12-06 01:14:32.743285] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e0ea0 00:37:00.174 [2024-12-06 01:14:32.745953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:8333 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:00.174 [2024-12-06 01:14:32.746007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:57 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:37:00.174 [2024-12-06 01:14:32.754576] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173ed4e8 00:37:00.174 [2024-12-06 01:14:32.755800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:2193 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:00.174 [2024-12-06 01:14:32.755872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:37:00.174 [2024-12-06 01:14:32.771278] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173ef270 00:37:00.174 [2024-12-06 01:14:32.772271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:9548 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:00.174 [2024-12-06 01:14:32.772310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:37:00.174 [2024-12-06 01:14:32.790030] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f0788 00:37:00.174 [2024-12-06 01:14:32.792470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:15137 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:00.174 [2024-12-06 01:14:32.792524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:37:00.174 [2024-12-06 01:14:32.806163] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e9e10 00:37:00.174 [2024-12-06 01:14:32.808632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:22438 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:00.174 [2024-12-06 01:14:32.808676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:37:00.174 [2024-12-06 01:14:32.817300] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173fa3a0 00:37:00.174 [2024-12-06 01:14:32.818489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:21283 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:00.174 [2024-12-06 01:14:32.818532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:37:00.174 [2024-12-06 01:14:32.837353] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f0ff8 00:37:00.174 [2024-12-06 01:14:32.839604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:1110 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:00.174 [2024-12-06 01:14:32.839647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:37:00.174 [2024-12-06 01:14:32.853839] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e0ea0 00:37:00.174 [2024-12-06 01:14:32.856089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:19436 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:00.174 [2024-12-06 01:14:32.856131] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:37:00.174 [2024-12-06 01:14:32.867948] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173ea248 00:37:00.174 [2024-12-06 01:14:32.870641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:15364 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:00.174 [2024-12-06 01:14:32.870684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:37:00.435 [2024-12-06 01:14:32.884912] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f92c0 00:37:00.435 [2024-12-06 01:14:32.886318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:12057 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:00.435 [2024-12-06 01:14:32.886361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:37:00.435 [2024-12-06 01:14:32.901562] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173fda78 00:37:00.435 [2024-12-06 01:14:32.903611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20672 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:00.435 [2024-12-06 01:14:32.903654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:37:00.435 [2024-12-06 01:14:32.916850] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e6738 00:37:00.435 [2024-12-06 01:14:32.918714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:12677 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:00.435 [2024-12-06 01:14:32.918758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:37:00.435 [2024-12-06 01:14:32.933458] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e6fa8 00:37:00.435 [2024-12-06 01:14:32.935113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:19049 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:00.435 [2024-12-06 01:14:32.935156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:00.435 [2024-12-06 01:14:32.949845] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f3a28 00:37:00.435 [2024-12-06 01:14:32.950913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:12083 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:00.435 [2024-12-06 01:14:32.950956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:37:00.435 [2024-12-06 01:14:32.966376] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f31b8 00:37:00.435 [2024-12-06 01:14:32.968059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:6692 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:37:00.435 [2024-12-06 01:14:32.968102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:00.435 [2024-12-06 01:14:32.982692] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173fc560 00:37:00.435 [2024-12-06 01:14:32.983753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:7747 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:00.435 [2024-12-06 01:14:32.983796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:37:00.435 [2024-12-06 01:14:33.000963] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173fda78 00:37:00.435 [2024-12-06 01:14:33.003284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:13511 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:00.435 [2024-12-06 01:14:33.003328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:37:00.435 [2024-12-06 01:14:33.015785] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e88f8 00:37:00.435 [2024-12-06 01:14:33.017453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:21728 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:00.435 [2024-12-06 01:14:33.017496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:37:00.435 [2024-12-06 01:14:33.030209] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e4578 00:37:00.435 [2024-12-06 01:14:33.032678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:9295 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:00.435 [2024-12-06 01:14:33.032722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:00.435 [2024-12-06 01:14:33.046362] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f6458 00:37:00.435 [2024-12-06 01:14:33.048243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:13870 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:00.435 [2024-12-06 01:14:33.048286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:00.435 [2024-12-06 01:14:33.062064] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e1b48 00:37:00.435 [2024-12-06 01:14:33.063528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:17477 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:00.435 [2024-12-06 01:14:33.063571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:00.435 [2024-12-06 01:14:33.077170] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e88f8 00:37:00.435 [2024-12-06 01:14:33.078630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:10 nsid:1 lba:4671 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:00.435 [2024-12-06 01:14:33.078673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:37:00.435 [2024-12-06 01:14:33.093319] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f2948 00:37:00.435 [2024-12-06 01:14:33.094796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:6459 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:00.435 [2024-12-06 01:14:33.094856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:37:00.435 [2024-12-06 01:14:33.111938] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e0630 00:37:00.435 [2024-12-06 01:14:33.114052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:253 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:00.435 [2024-12-06 01:14:33.114095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:37:00.435 [2024-12-06 01:14:33.127701] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e6300 00:37:00.435 [2024-12-06 01:14:33.130036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:10036 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:00.435 [2024-12-06 01:14:33.130078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:37:00.436 [2024-12-06 01:14:33.139503] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173dfdc0 00:37:00.436 [2024-12-06 01:14:33.140546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:4128 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:00.436 [2024-12-06 01:14:33.140588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:37:00.696 [2024-12-06 01:14:33.157119] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f7538 00:37:00.696 [2024-12-06 01:14:33.158393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:23317 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:00.696 [2024-12-06 01:14:33.158437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:37:00.696 [2024-12-06 01:14:33.171951] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e8088 00:37:00.696 [2024-12-06 01:14:33.173219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:14361 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:00.696 [2024-12-06 01:14:33.173262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:37:00.696 [2024-12-06 01:14:33.188086] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173ec408 00:37:00.696 [2024-12-06 01:14:33.189354] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:24344 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:00.696 [2024-12-06 01:14:33.189397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:37:00.696 [2024-12-06 01:14:33.206872] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173ecc78 00:37:00.696 [2024-12-06 01:14:33.208794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:19459 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:00.696 [2024-12-06 01:14:33.208844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:37:00.696 [2024-12-06 01:14:33.220021] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e23b8 00:37:00.696 [2024-12-06 01:14:33.221035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:7736 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:00.696 [2024-12-06 01:14:33.221077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:37:00.696 [2024-12-06 01:14:33.236292] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173feb58 00:37:00.696 [2024-12-06 01:14:33.237149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:9367 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:00.696 [2024-12-06 01:14:33.237192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:37:00.696 [2024-12-06 01:14:33.252140] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e23b8 00:37:00.696 [2024-12-06 01:14:33.253414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:4773 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:00.696 [2024-12-06 01:14:33.253456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:37:00.696 [2024-12-06 01:14:33.272274] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f7970 00:37:00.696 [2024-12-06 01:14:33.274793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:21015 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:00.696 [2024-12-06 01:14:33.274844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:37:00.696 [2024-12-06 01:14:33.283546] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e4140 00:37:00.696 [2024-12-06 01:14:33.284576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:17199 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:00.696 [2024-12-06 01:14:33.284618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:00.696 [2024-12-06 01:14:33.300138] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with 
pdu=0x2000173df118 00:37:00.697 [2024-12-06 01:14:33.301618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:13632 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:00.697 [2024-12-06 01:14:33.301660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:37:00.697 [2024-12-06 01:14:33.316437] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f4298 00:37:00.697 [2024-12-06 01:14:33.317323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:9149 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:00.697 [2024-12-06 01:14:33.317366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:37:00.697 [2024-12-06 01:14:33.332783] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e0ea0 00:37:00.697 [2024-12-06 01:14:33.334282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:6972 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:00.697 [2024-12-06 01:14:33.334325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:37:00.697 [2024-12-06 01:14:33.352687] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f20d8 00:37:00.697 [2024-12-06 01:14:33.355088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:5303 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:00.697 [2024-12-06 01:14:33.355130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:37:00.697 [2024-12-06 01:14:33.364474] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e7c50 00:37:00.697 [2024-12-06 01:14:33.365589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:20620 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:00.697 [2024-12-06 01:14:33.365639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:37:00.697 [2024-12-06 01:14:33.382110] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f6020 00:37:00.697 [2024-12-06 01:14:33.383445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:7543 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:00.697 [2024-12-06 01:14:33.383487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:37:00.697 [2024-12-06 01:14:33.399812] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e1710 00:37:00.697 [2024-12-06 01:14:33.401991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:21248 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:00.697 [2024-12-06 01:14:33.402033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:37:00.956 [2024-12-06 01:14:33.415963] tcp.c:2241:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e3d08 00:37:00.956 [2024-12-06 01:14:33.418135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:16826 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:00.956 [2024-12-06 01:14:33.418178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:37:00.956 [2024-12-06 01:14:33.427793] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f1430 00:37:00.956 [2024-12-06 01:14:33.428884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:20129 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:00.956 [2024-12-06 01:14:33.428926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:37:00.956 [2024-12-06 01:14:33.448716] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173ddc00 00:37:00.956 [2024-12-06 01:14:33.451108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:15890 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:00.956 [2024-12-06 01:14:33.451151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:37:00.956 [2024-12-06 01:14:33.460538] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173fc128 00:37:00.956 [2024-12-06 01:14:33.461669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:23972 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:00.956 [2024-12-06 01:14:33.461712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:37:00.956 [2024-12-06 01:14:33.480072] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173df988 00:37:00.956 [2024-12-06 01:14:33.481855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:25428 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:00.956 [2024-12-06 01:14:33.481898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:37:00.956 [2024-12-06 01:14:33.498141] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e4578 00:37:00.956 [2024-12-06 01:14:33.500757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:19288 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:00.956 [2024-12-06 01:14:33.500799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:37:00.956 [2024-12-06 01:14:33.509936] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e0ea0 00:37:00.956 [2024-12-06 01:14:33.511290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:7428 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:00.956 [2024-12-06 01:14:33.511331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:37:00.956 
[2024-12-06 01:14:33.527623] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e88f8 00:37:00.956 [2024-12-06 01:14:33.529216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:23063 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:00.956 [2024-12-06 01:14:33.529259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:37:00.956 [2024-12-06 01:14:33.545451] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f2d80 00:37:00.956 [2024-12-06 01:14:33.547850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:20175 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:00.956 [2024-12-06 01:14:33.547894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:37:00.956 [2024-12-06 01:14:33.560294] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173ef6a8 00:37:00.956 [2024-12-06 01:14:33.562013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:20535 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:00.956 [2024-12-06 01:14:33.562056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:37:00.956 [2024-12-06 01:14:33.577865] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e9168 00:37:00.956 [2024-12-06 01:14:33.580468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:8433 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:00.956 [2024-12-06 01:14:33.580510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:37:00.956 [2024-12-06 01:14:33.589137] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173fc998 00:37:00.956 [2024-12-06 01:14:33.590250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:10232 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:00.956 [2024-12-06 01:14:33.590292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:37:00.956 [2024-12-06 01:14:33.604180] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e01f8 00:37:00.956 [2024-12-06 01:14:33.605295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:1961 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:00.956 [2024-12-06 01:14:33.605337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:37:00.956 [2024-12-06 01:14:33.621812] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173ebb98 00:37:00.956 [2024-12-06 01:14:33.624857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:6775 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:00.956 [2024-12-06 01:14:33.624900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:62 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:37:00.956 15753.00 IOPS, 61.54 MiB/s [2024-12-06T00:14:33.665Z] [2024-12-06 01:14:33.636607] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f5be8 00:37:00.956 [2024-12-06 01:14:33.637944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:16812 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:00.956 [2024-12-06 01:14:33.637993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:37:00.956 [2024-12-06 01:14:33.656753] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f46d0 00:37:00.956 [2024-12-06 01:14:33.659196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:409 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:00.956 [2024-12-06 01:14:33.659240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:37:01.215 [2024-12-06 01:14:33.672264] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f9b30 00:37:01.215 [2024-12-06 01:14:33.673842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:11662 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:01.215 [2024-12-06 01:14:33.673885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:37:01.215 [2024-12-06 01:14:33.687076] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f46d0 00:37:01.215 [2024-12-06 01:14:33.688652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:18686 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:01.215 [2024-12-06 01:14:33.688694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:37:01.215 [2024-12-06 01:14:33.704909] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f7100 00:37:01.215 [2024-12-06 01:14:33.706697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:2615 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:01.215 [2024-12-06 01:14:33.706741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:37:01.215 [2024-12-06 01:14:33.720153] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f4298 00:37:01.215 [2024-12-06 01:14:33.721934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:25077 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:01.215 [2024-12-06 01:14:33.721978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:37:01.215 [2024-12-06 01:14:33.735156] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173ef270 00:37:01.215 [2024-12-06 01:14:33.736281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:15958 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:01.215 
[2024-12-06 01:14:33.736325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:37:01.215 [2024-12-06 01:14:33.751333] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173feb58 00:37:01.215 [2024-12-06 01:14:33.752268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:22117 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:01.215 [2024-12-06 01:14:33.752312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:37:01.215 [2024-12-06 01:14:33.769641] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173ecc78 00:37:01.215 [2024-12-06 01:14:33.771620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:12999 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:01.215 [2024-12-06 01:14:33.771663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:37:01.215 [2024-12-06 01:14:33.783156] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f96f8 00:37:01.215 [2024-12-06 01:14:33.784077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25280 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:01.215 [2024-12-06 01:14:33.784119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:37:01.215 [2024-12-06 01:14:33.798406] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e3d08 00:37:01.215 [2024-12-06 01:14:33.799737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2063 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:01.215 [2024-12-06 01:14:33.799779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:37:01.215 [2024-12-06 01:14:33.818421] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e4140 00:37:01.215 [2024-12-06 01:14:33.820625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:23282 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:01.215 [2024-12-06 01:14:33.820667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:37:01.215 [2024-12-06 01:14:33.833323] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f20d8 00:37:01.215 [2024-12-06 01:14:33.834888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:6848 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:01.215 [2024-12-06 01:14:33.834931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:37:01.215 [2024-12-06 01:14:33.849496] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173fef90 00:37:01.215 [2024-12-06 01:14:33.850895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:7767 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:01.215 [2024-12-06 01:14:33.850937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:37:01.215 [2024-12-06 01:14:33.867808] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e4de8 00:37:01.215 [2024-12-06 01:14:33.870453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:21235 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:01.215 [2024-12-06 01:14:33.870495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:37:01.215 [2024-12-06 01:14:33.879629] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e7818 00:37:01.215 [2024-12-06 01:14:33.881003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:1694 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:01.215 [2024-12-06 01:14:33.881045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:37:01.215 [2024-12-06 01:14:33.895962] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e3d08 00:37:01.215 [2024-12-06 01:14:33.897323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:22196 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:01.215 [2024-12-06 01:14:33.897365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:37:01.215 [2024-12-06 01:14:33.916141] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e1f80 00:37:01.215 [2024-12-06 01:14:33.918622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:2506 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:01.215 [2024-12-06 01:14:33.918665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:37:01.474 [2024-12-06 01:14:33.931365] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e6fa8 00:37:01.474 [2024-12-06 01:14:33.933209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:18287 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:01.474 [2024-12-06 01:14:33.933252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:37:01.474 [2024-12-06 01:14:33.948178] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e6738 00:37:01.474 [2024-12-06 01:14:33.950457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:8976 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:01.474 [2024-12-06 01:14:33.950500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:37:01.474 [2024-12-06 01:14:33.963516] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173df988 00:37:01.474 [2024-12-06 01:14:33.965640] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:11879 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:01.474 [2024-12-06 01:14:33.965683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:37:01.474 [2024-12-06 01:14:33.980198] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e95a0 00:37:01.474 [2024-12-06 01:14:33.982071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:9943 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:01.474 [2024-12-06 01:14:33.982121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:37:01.474 [2024-12-06 01:14:33.995213] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173fcdd0 00:37:01.474 [2024-12-06 01:14:33.996427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:23582 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:01.474 [2024-12-06 01:14:33.996471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:37:01.474 [2024-12-06 01:14:34.012988] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f5be8 00:37:01.474 [2024-12-06 01:14:34.015061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:9631 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:01.474 [2024-12-06 01:14:34.015111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:37:01.474 [2024-12-06 01:14:34.027839] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f4b08 00:37:01.474 [2024-12-06 01:14:34.029255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:21197 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:01.474 [2024-12-06 01:14:34.029297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:37:01.475 [2024-12-06 01:14:34.045522] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f57b0 00:37:01.475 [2024-12-06 01:14:34.047801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:24049 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:01.475 [2024-12-06 01:14:34.047854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:37:01.475 [2024-12-06 01:14:34.058288] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173dece0 00:37:01.475 [2024-12-06 01:14:34.059704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:20428 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:01.475 [2024-12-06 01:14:34.059754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:37:01.475 [2024-12-06 01:14:34.075950] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with 
pdu=0x2000173f5be8 00:37:01.475 [2024-12-06 01:14:34.077588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:18772 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:01.475 [2024-12-06 01:14:34.077631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:37:01.475 [2024-12-06 01:14:34.092256] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f8618 00:37:01.475 [2024-12-06 01:14:34.094104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:4105 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:01.475 [2024-12-06 01:14:34.094146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:37:01.475 [2024-12-06 01:14:34.108343] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e95a0 00:37:01.475 [2024-12-06 01:14:34.110205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:13365 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:01.475 [2024-12-06 01:14:34.110247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:37:01.475 [2024-12-06 01:14:34.124647] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173eee38 00:37:01.475 [2024-12-06 01:14:34.126709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:9075 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:01.475 [2024-12-06 01:14:34.126752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:37:01.475 [2024-12-06 01:14:34.137715] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e6fa8 00:37:01.475 [2024-12-06 01:14:34.138906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:5347 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:01.475 [2024-12-06 01:14:34.138948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:37:01.475 [2024-12-06 01:14:34.154002] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f7da8 00:37:01.475 [2024-12-06 01:14:34.155000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:2926 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:01.475 [2024-12-06 01:14:34.155043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:37:01.475 [2024-12-06 01:14:34.169870] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e6fa8 00:37:01.475 [2024-12-06 01:14:34.171267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:16756 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:01.475 [2024-12-06 01:14:34.171310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:37:01.733 [2024-12-06 01:14:34.186030] tcp.c:2241:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e3498 00:37:01.733 [2024-12-06 01:14:34.187450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:12352 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:01.733 [2024-12-06 01:14:34.187493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:37:01.733 [2024-12-06 01:14:34.202429] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173ee5c8 00:37:01.733 [2024-12-06 01:14:34.204046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:9399 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:01.733 [2024-12-06 01:14:34.204090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:37:01.733 [2024-12-06 01:14:34.217503] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e88f8 00:37:01.733 [2024-12-06 01:14:34.219118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:7553 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:01.733 [2024-12-06 01:14:34.219161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:37:01.733 [2024-12-06 01:14:34.235190] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173eb760 00:37:01.733 [2024-12-06 01:14:34.237036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:10795 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:01.733 [2024-12-06 01:14:34.237079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:37:01.733 [2024-12-06 01:14:34.249852] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e1710 00:37:01.733 [2024-12-06 01:14:34.252458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:22926 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:01.733 [2024-12-06 01:14:34.252501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:37:01.733 [2024-12-06 01:14:34.266007] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e6b70 00:37:01.733 [2024-12-06 01:14:34.268029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:23782 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:01.733 [2024-12-06 01:14:34.268072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:37:01.733 [2024-12-06 01:14:34.281731] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f7970 00:37:01.733 [2024-12-06 01:14:34.283343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:13301 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:01.733 [2024-12-06 01:14:34.283385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:37:01.733 
[2024-12-06 01:14:34.296823] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f5378 00:37:01.733 [2024-12-06 01:14:34.298413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:7564 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:01.733 [2024-12-06 01:14:34.298454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:37:01.733 [2024-12-06 01:14:34.312994] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e84c0 00:37:01.733 [2024-12-06 01:14:34.314836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:7912 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:01.733 [2024-12-06 01:14:34.314878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:37:01.733 [2024-12-06 01:14:34.328994] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173fda78 00:37:01.734 [2024-12-06 01:14:34.330389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:15160 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:01.734 [2024-12-06 01:14:34.330439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:37:01.734 [2024-12-06 01:14:34.347076] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e38d0 00:37:01.734 [2024-12-06 01:14:34.349290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19372 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:01.734 [2024-12-06 01:14:34.349333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:37:01.734 [2024-12-06 01:14:34.361876] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173de8a8 00:37:01.734 [2024-12-06 01:14:34.363458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:885 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:01.734 [2024-12-06 01:14:34.363500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:37:01.734 [2024-12-06 01:14:34.377507] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173eaab8 00:37:01.734 [2024-12-06 01:14:34.379095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:23050 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:01.734 [2024-12-06 01:14:34.379138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:37:01.734 [2024-12-06 01:14:34.393765] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e88f8 00:37:01.734 [2024-12-06 01:14:34.395174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:8485 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:01.734 [2024-12-06 01:14:34.395217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:125 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:37:01.734 [2024-12-06 01:14:34.409964] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173de038 00:37:01.734 [2024-12-06 01:14:34.411751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:12988 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:01.734 [2024-12-06 01:14:34.411793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:37:01.734 [2024-12-06 01:14:34.424599] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f3a28 00:37:01.734 [2024-12-06 01:14:34.427167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:5750 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:01.734 [2024-12-06 01:14:34.427209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:37:01.734 [2024-12-06 01:14:34.439399] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173fb480 00:37:01.734 [2024-12-06 01:14:34.440573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:8415 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:01.734 [2024-12-06 01:14:34.440616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:37:01.992 [2024-12-06 01:14:34.454317] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173dfdc0 00:37:01.993 [2024-12-06 01:14:34.455483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:22995 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:01.993 [2024-12-06 01:14:34.455525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:37:01.993 [2024-12-06 01:14:34.473761] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173ee190 00:37:01.993 [2024-12-06 01:14:34.475585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:9830 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:01.993 [2024-12-06 01:14:34.475629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:37:01.993 [2024-12-06 01:14:34.489600] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173de470 00:37:01.993 [2024-12-06 01:14:34.491630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:463 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:01.993 [2024-12-06 01:14:34.491672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:37:01.993 [2024-12-06 01:14:34.504416] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e5a90 00:37:01.993 [2024-12-06 01:14:34.505791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:21380 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:01.993 [2024-12-06 01:14:34.505843] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:37:01.993 [2024-12-06 01:14:34.524144] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e4140 00:37:01.993 [2024-12-06 01:14:34.526809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:7194 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:01.993 [2024-12-06 01:14:34.526860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:37:01.993 [2024-12-06 01:14:34.535416] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173eaab8 00:37:01.993 [2024-12-06 01:14:34.536611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:14624 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:01.993 [2024-12-06 01:14:34.536653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:37:01.993 [2024-12-06 01:14:34.550453] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173de038 00:37:01.993 [2024-12-06 01:14:34.551630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:4975 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:01.993 [2024-12-06 01:14:34.551672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:37:01.993 [2024-12-06 01:14:34.568151] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173ea680 00:37:01.993 [2024-12-06 01:14:34.569556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:22083 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:01.993 [2024-12-06 01:14:34.569599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:37:01.993 [2024-12-06 01:14:34.584458] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f0bc0 00:37:01.993 [2024-12-06 01:14:34.586056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:14466 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:01.993 [2024-12-06 01:14:34.586099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:37:01.993 [2024-12-06 01:14:34.600552] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f0788 00:37:01.993 [2024-12-06 01:14:34.602177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:19595 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:01.993 [2024-12-06 01:14:34.602220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:37:01.993 [2024-12-06 01:14:34.617353] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173eee38 00:37:01.993 [2024-12-06 01:14:34.619408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:11765 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:01.993 [2024-12-06 
01:14:34.619450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:003f p:0 m:0 dnr:0
00:37:01.993 15857.00 IOPS, 61.94 MiB/s
00:37:01.993 Latency(us)
00:37:01.993 [2024-12-06T00:14:34.702Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:37:01.993 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:37:01.993 nvme0n1 : 2.01 15872.26 62.00 0.00 0.00 8054.02 3810.80 21165.70
00:37:01.993 [2024-12-06T00:14:34.702Z] ===================================================================================================================
00:37:01.993 [2024-12-06T00:14:34.702Z] Total : 15872.26 62.00 0.00 0.00 8054.02 3810.80 21165.70
00:37:01.993 {
00:37:01.993 "results": [
00:37:01.993 {
00:37:01.993 "job": "nvme0n1",
00:37:01.993 "core_mask": "0x2",
00:37:01.993 "workload": "randwrite",
00:37:01.993 "status": "finished",
00:37:01.993 "queue_depth": 128,
00:37:01.993 "io_size": 4096,
00:37:01.993 "runtime": 2.006142,
00:37:01.993 "iops": 15872.256300899937,
00:37:01.993 "mibps": 62.00100117539038,
00:37:01.993 "io_failed": 0,
00:37:01.993 "io_timeout": 0,
00:37:01.993 "avg_latency_us": 8054.023856867357,
00:37:01.993 "min_latency_us": 3810.797037037037,
00:37:01.993 "max_latency_us": 21165.70074074074
00:37:01.993 }
00:37:01.993 ],
00:37:01.993 "core_count": 1
00:37:01.993 }
00:37:01.993 01:14:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:37:01.993 01:14:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:37:01.993 01:14:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:37:01.993 01:14:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:37:01.993 | .driver_specific
00:37:01.993 | .nvme_error
00:37:01.993 | .status_code
00:37:01.993 | .command_transient_transport_error'
00:37:02.252 01:14:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 124 > 0 ))
00:37:02.252 01:14:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3134075
00:37:02.252 01:14:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 3134075 ']'
00:37:02.252 01:14:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 3134075
00:37:02.252 01:14:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname
00:37:02.252 01:14:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:37:02.252 01:14:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3134075
00:37:02.512 01:14:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:37:02.512 01:14:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:37:02.512 01:14:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3134075'
00:37:02.512 killing process with pid 3134075
00:37:02.512 01:14:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error --
common/autotest_common.sh@973 -- # kill 3134075
00:37:02.512 Received shutdown signal, test time was about 2.000000 seconds
00:37:02.512
00:37:02.512 Latency(us)
00:37:02.512 [2024-12-06T00:14:35.221Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:37:02.512 [2024-12-06T00:14:35.221Z] ===================================================================================================================
00:37:02.512 [2024-12-06T00:14:35.221Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:37:02.512 01:14:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 3134075
00:37:03.449 01:14:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16
00:37:03.449 01:14:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:37:03.449 01:14:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:37:03.449 01:14:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:37:03.449 01:14:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:37:03.449 01:14:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3134626
00:37:03.449 01:14:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z
00:37:03.449 01:14:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3134626 /var/tmp/bperf.sock
00:37:03.449 01:14:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 3134626 ']'
00:37:03.449 01:14:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock
00:37:03.449 01:14:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100
00:37:03.449 01:14:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:37:03.449 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:37:03.449 01:14:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable
00:37:03.449 01:14:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:37:03.449 [2024-12-06 01:14:36.020504] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 24.03.0 initialization...
00:37:03.449 [2024-12-06 01:14:36.020629] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3134626 ]
00:37:03.449 I/O size of 131072 is greater than zero copy threshold (65536).
00:37:03.449 Zero copy mechanism will not be used.
00:37:03.708 [2024-12-06 01:14:36.162810] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:03.709 [2024-12-06 01:14:36.284601] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:37:04.642 01:14:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:04.642 01:14:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:37:04.642 01:14:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:37:04.642 01:14:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:37:04.642 01:14:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:37:04.642 01:14:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:04.642 01:14:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:37:04.642 01:14:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:04.642 01:14:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:37:04.642 01:14:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:37:05.211 nvme0n1 00:37:05.211 01:14:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:37:05.211 01:14:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:05.211 01:14:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:37:05.211 01:14:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:05.211 01:14:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:37:05.211 01:14:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:37:05.211 I/O size of 131072 is greater than zero copy threshold (65536). 00:37:05.211 Zero copy mechanism will not be used. 00:37:05.211 Running I/O for 2 seconds... 
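The shell trace above compresses the digest-error recipe for this run: the bdevperf application on /var/tmp/bperf.sock is told to keep NVMe error statistics, accel_error_inject_error arms the crc32c path to return corrupted digests, the controller is attached with data digests enabled (--ddgst), and perform_tests drives write I/O for two seconds. Each corrupted digest comes back as a COMMAND TRANSIENT TRANSPORT ERROR (00/22), which is what the stream of *NOTICE* completions below shows, and the pass criterion is simply that the per-bdev counter of those errors is non-zero (the earlier run asserted (( 124 > 0 ))). A minimal sketch of that sequence, assuming the bdevperf app is already listening on /var/tmp/bperf.sock and that rpc_cmd in the trace talks to the nvmf target's default RPC socket:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    # bdevperf side: keep per-bdev NVMe error statistics and never give up on retries
    $rpc -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

    # target side (assumed default socket): make the accel crc32c path return corrupted results, -i 32 as in the trace
    $rpc accel_error_inject_error -o crc32c -t corrupt -i 32

    # attach with data digest enabled, then run the queued 2-second workload
    $rpc -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
        -s /var/tmp/bperf.sock perform_tests

    # pass criterion: the transient transport error counter must be non-zero
    errs=$($rpc -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 |
        jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
    (( errs > 0 ))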
00:37:05.211 [2024-12-06 01:14:37.903478] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:05.211 [2024-12-06 01:14:37.903638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:05.211 [2024-12-06 01:14:37.903695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:05.211 [2024-12-06 01:14:37.912046] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:05.211 [2024-12-06 01:14:37.912217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:05.211 [2024-12-06 01:14:37.912265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:05.472 [2024-12-06 01:14:37.919691] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:05.472 [2024-12-06 01:14:37.919858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:05.472 [2024-12-06 01:14:37.919915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:05.472 [2024-12-06 01:14:37.927242] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:05.472 [2024-12-06 01:14:37.927392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:05.472 [2024-12-06 01:14:37.927436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:05.472 [2024-12-06 01:14:37.934774] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:05.472 [2024-12-06 01:14:37.934942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:05.472 [2024-12-06 01:14:37.934982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:05.472 [2024-12-06 01:14:37.942204] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:05.472 [2024-12-06 01:14:37.942354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:05.472 [2024-12-06 01:14:37.942398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:05.472 [2024-12-06 01:14:37.949580] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:05.472 [2024-12-06 01:14:37.949738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:05.472 [2024-12-06 01:14:37.949781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:05.472 [2024-12-06 01:14:37.957038] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:05.472 [2024-12-06 01:14:37.957189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:05.472 [2024-12-06 01:14:37.957232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:05.472 [2024-12-06 01:14:37.964419] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:05.472 [2024-12-06 01:14:37.964566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:05.472 [2024-12-06 01:14:37.964610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:05.472 [2024-12-06 01:14:37.971692] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:05.472 [2024-12-06 01:14:37.971840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:05.472 [2024-12-06 01:14:37.971898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:05.472 [2024-12-06 01:14:37.979056] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:05.472 [2024-12-06 01:14:37.979224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:05.472 [2024-12-06 01:14:37.979267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:05.472 [2024-12-06 01:14:37.986457] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:05.472 [2024-12-06 01:14:37.986604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:05.472 [2024-12-06 01:14:37.986648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:05.472 [2024-12-06 01:14:37.993777] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:05.472 [2024-12-06 01:14:37.993947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:05.472 [2024-12-06 01:14:37.993986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:05.472 [2024-12-06 01:14:38.001045] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:05.472 [2024-12-06 01:14:38.001201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:05.472 [2024-12-06 01:14:38.001244] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:05.472 [2024-12-06 01:14:38.008296] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:05.472 [2024-12-06 01:14:38.008442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:05.472 [2024-12-06 01:14:38.008486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:05.472 [2024-12-06 01:14:38.015404] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:05.472 [2024-12-06 01:14:38.015545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:05.472 [2024-12-06 01:14:38.015588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:05.472 [2024-12-06 01:14:38.022891] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:05.472 [2024-12-06 01:14:38.023026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:05.472 [2024-12-06 01:14:38.023065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:05.472 [2024-12-06 01:14:38.030147] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:05.472 [2024-12-06 01:14:38.030304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:05.472 [2024-12-06 01:14:38.030347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:05.472 [2024-12-06 01:14:38.037469] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:05.472 [2024-12-06 01:14:38.037619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:05.472 [2024-12-06 01:14:38.037661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:05.472 [2024-12-06 01:14:38.044896] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:05.472 [2024-12-06 01:14:38.045024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:05.472 [2024-12-06 01:14:38.045063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:05.472 [2024-12-06 01:14:38.052031] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:05.472 [2024-12-06 01:14:38.052168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:37:05.472 [2024-12-06 01:14:38.052211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:05.472 [2024-12-06 01:14:38.059401] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:05.472 [2024-12-06 01:14:38.059553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:05.472 [2024-12-06 01:14:38.059597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:05.472 [2024-12-06 01:14:38.066738] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:05.472 [2024-12-06 01:14:38.066897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:05.472 [2024-12-06 01:14:38.066936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:05.472 [2024-12-06 01:14:38.074218] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:05.472 [2024-12-06 01:14:38.074366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:05.472 [2024-12-06 01:14:38.074420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:05.472 [2024-12-06 01:14:38.081537] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:05.472 [2024-12-06 01:14:38.081662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:05.472 [2024-12-06 01:14:38.081704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:05.472 [2024-12-06 01:14:38.088785] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:05.472 [2024-12-06 01:14:38.088969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:05.472 [2024-12-06 01:14:38.089008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:05.472 [2024-12-06 01:14:38.096337] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:05.472 [2024-12-06 01:14:38.096568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:05.472 [2024-12-06 01:14:38.096611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:05.472 [2024-12-06 01:14:38.104876] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:05.472 [2024-12-06 01:14:38.105028] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:05.472 [2024-12-06 01:14:38.105067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:05.472 [2024-12-06 01:14:38.113573] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:05.472 [2024-12-06 01:14:38.113779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:05.472 [2024-12-06 01:14:38.113830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:05.472 [2024-12-06 01:14:38.122248] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:05.472 [2024-12-06 01:14:38.122371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:05.472 [2024-12-06 01:14:38.122415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:05.472 [2024-12-06 01:14:38.130446] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:05.472 [2024-12-06 01:14:38.130670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:05.472 [2024-12-06 01:14:38.130713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:05.472 [2024-12-06 01:14:38.139344] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:05.472 [2024-12-06 01:14:38.139547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:05.472 [2024-12-06 01:14:38.139590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:05.472 [2024-12-06 01:14:38.148284] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:05.472 [2024-12-06 01:14:38.148427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:05.472 [2024-12-06 01:14:38.148471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:05.472 [2024-12-06 01:14:38.157278] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:05.472 [2024-12-06 01:14:38.157456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:05.472 [2024-12-06 01:14:38.157499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:05.472 [2024-12-06 01:14:38.166046] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:05.472 [2024-12-06 
01:14:38.166165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:05.472 [2024-12-06 01:14:38.166223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:05.472 [2024-12-06 01:14:38.173865] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:05.472 [2024-12-06 01:14:38.174064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:05.472 [2024-12-06 01:14:38.174103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:05.732 [2024-12-06 01:14:38.181397] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:05.732 [2024-12-06 01:14:38.181620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:05.732 [2024-12-06 01:14:38.181663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:05.732 [2024-12-06 01:14:38.188954] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:05.732 [2024-12-06 01:14:38.189180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:05.732 [2024-12-06 01:14:38.189224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:05.732 [2024-12-06 01:14:38.196434] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:05.732 [2024-12-06 01:14:38.196632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:05.732 [2024-12-06 01:14:38.196675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:05.732 [2024-12-06 01:14:38.204782] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:05.732 [2024-12-06 01:14:38.204923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:05.732 [2024-12-06 01:14:38.204962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:05.732 [2024-12-06 01:14:38.212205] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:05.732 [2024-12-06 01:14:38.212346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:05.732 [2024-12-06 01:14:38.212401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:05.732 [2024-12-06 01:14:38.219500] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:05.732 [2024-12-06 01:14:38.219645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:05.732 [2024-12-06 01:14:38.219688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:05.732 [2024-12-06 01:14:38.226956] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:05.732 [2024-12-06 01:14:38.227067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:05.732 [2024-12-06 01:14:38.227106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:05.732 [2024-12-06 01:14:38.234896] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:05.732 [2024-12-06 01:14:38.235073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:05.732 [2024-12-06 01:14:38.235127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:05.732 [2024-12-06 01:14:38.243356] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:05.732 [2024-12-06 01:14:38.243483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:05.732 [2024-12-06 01:14:38.243527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:05.732 [2024-12-06 01:14:38.251559] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:05.732 [2024-12-06 01:14:38.251672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:05.732 [2024-12-06 01:14:38.251715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:05.732 [2024-12-06 01:14:38.259623] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:05.732 [2024-12-06 01:14:38.259739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:05.732 [2024-12-06 01:14:38.259781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:05.732 [2024-12-06 01:14:38.267781] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:05.732 [2024-12-06 01:14:38.267958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:05.732 [2024-12-06 01:14:38.267998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:05.732 [2024-12-06 01:14:38.275418] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:05.732 [2024-12-06 01:14:38.275542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:05.732 [2024-12-06 01:14:38.275601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:05.732 [2024-12-06 01:14:38.283218] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:05.732 [2024-12-06 01:14:38.283350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:05.732 [2024-12-06 01:14:38.283392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:05.732 [2024-12-06 01:14:38.291112] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:05.732 [2024-12-06 01:14:38.291281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:05.732 [2024-12-06 01:14:38.291325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:05.732 [2024-12-06 01:14:38.298406] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:05.732 [2024-12-06 01:14:38.298534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:05.732 [2024-12-06 01:14:38.298576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:05.732 [2024-12-06 01:14:38.305963] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:05.732 [2024-12-06 01:14:38.306063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:05.732 [2024-12-06 01:14:38.306103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:05.732 [2024-12-06 01:14:38.313387] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:05.732 [2024-12-06 01:14:38.313519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:05.733 [2024-12-06 01:14:38.313561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:05.733 [2024-12-06 01:14:38.320634] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:05.733 [2024-12-06 01:14:38.320745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:05.733 [2024-12-06 01:14:38.320786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 
sqhd:0042 p:0 m:0 dnr:0 00:37:05.733 [2024-12-06 01:14:38.327989] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:05.733 [2024-12-06 01:14:38.328089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:05.733 [2024-12-06 01:14:38.328128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:05.733 [2024-12-06 01:14:38.335475] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:05.733 [2024-12-06 01:14:38.335581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:05.733 [2024-12-06 01:14:38.335629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:05.733 [2024-12-06 01:14:38.342764] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:05.733 [2024-12-06 01:14:38.342908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:05.733 [2024-12-06 01:14:38.342947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:05.733 [2024-12-06 01:14:38.350091] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:05.733 [2024-12-06 01:14:38.350220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:05.733 [2024-12-06 01:14:38.350263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:05.733 [2024-12-06 01:14:38.357522] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:05.733 [2024-12-06 01:14:38.357628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:05.733 [2024-12-06 01:14:38.357669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:05.733 [2024-12-06 01:14:38.364964] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:05.733 [2024-12-06 01:14:38.365064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:05.733 [2024-12-06 01:14:38.365103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:05.733 [2024-12-06 01:14:38.372401] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:05.733 [2024-12-06 01:14:38.372524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:05.733 [2024-12-06 01:14:38.372567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:05.733 [2024-12-06 01:14:38.379793] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:05.733 [2024-12-06 01:14:38.379945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:05.733 [2024-12-06 01:14:38.379983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:05.733 [2024-12-06 01:14:38.387087] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:05.733 [2024-12-06 01:14:38.387222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:05.733 [2024-12-06 01:14:38.387265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:05.733 [2024-12-06 01:14:38.394686] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:05.733 [2024-12-06 01:14:38.394800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:05.733 [2024-12-06 01:14:38.394868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:05.733 [2024-12-06 01:14:38.402050] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:05.733 [2024-12-06 01:14:38.402150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:05.733 [2024-12-06 01:14:38.402188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:05.733 [2024-12-06 01:14:38.409350] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:05.733 [2024-12-06 01:14:38.409486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:05.733 [2024-12-06 01:14:38.409527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:05.733 [2024-12-06 01:14:38.416539] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:05.733 [2024-12-06 01:14:38.416668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:05.733 [2024-12-06 01:14:38.416711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:05.733 [2024-12-06 01:14:38.423616] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:05.733 [2024-12-06 01:14:38.423771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:05.733 [2024-12-06 
01:14:38.423823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:05.733 [2024-12-06 01:14:38.431234] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:05.733 [2024-12-06 01:14:38.431365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:05.733 [2024-12-06 01:14:38.431408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:05.733 [2024-12-06 01:14:38.438700] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:05.733 [2024-12-06 01:14:38.438833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:05.733 [2024-12-06 01:14:38.438889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:05.991 [2024-12-06 01:14:38.446430] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:05.991 [2024-12-06 01:14:38.446555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:05.991 [2024-12-06 01:14:38.446598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:05.991 [2024-12-06 01:14:38.453923] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:05.991 [2024-12-06 01:14:38.454051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:05.991 [2024-12-06 01:14:38.454101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:05.991 [2024-12-06 01:14:38.461041] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:05.991 [2024-12-06 01:14:38.461161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:05.991 [2024-12-06 01:14:38.461199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:05.991 [2024-12-06 01:14:38.468382] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:05.991 [2024-12-06 01:14:38.468504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:05.992 [2024-12-06 01:14:38.468546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:05.992 [2024-12-06 01:14:38.475845] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:05.992 [2024-12-06 01:14:38.475970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8160 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:05.992 [2024-12-06 01:14:38.476009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:05.992 [2024-12-06 01:14:38.483272] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:05.992 [2024-12-06 01:14:38.483392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:05.992 [2024-12-06 01:14:38.483435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:05.992 [2024-12-06 01:14:38.490918] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:05.992 [2024-12-06 01:14:38.491016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:05.992 [2024-12-06 01:14:38.491055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:05.992 [2024-12-06 01:14:38.498806] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:05.992 [2024-12-06 01:14:38.499012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:05.992 [2024-12-06 01:14:38.499051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:05.992 [2024-12-06 01:14:38.507321] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:05.992 [2024-12-06 01:14:38.507509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:05.992 [2024-12-06 01:14:38.507553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:05.992 [2024-12-06 01:14:38.514705] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:05.992 [2024-12-06 01:14:38.514920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:05.992 [2024-12-06 01:14:38.514959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:05.992 [2024-12-06 01:14:38.522049] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:05.992 [2024-12-06 01:14:38.522200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:05.992 [2024-12-06 01:14:38.522239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:05.992 [2024-12-06 01:14:38.529253] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:05.992 [2024-12-06 01:14:38.529455] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:05.992 [2024-12-06 01:14:38.529497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:05.992 [2024-12-06 01:14:38.536586] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:05.992 [2024-12-06 01:14:38.536784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:05.992 [2024-12-06 01:14:38.536868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:05.992 [2024-12-06 01:14:38.543893] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:05.992 [2024-12-06 01:14:38.544113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:05.992 [2024-12-06 01:14:38.544151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:05.992 [2024-12-06 01:14:38.551116] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:05.992 [2024-12-06 01:14:38.551305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:05.992 [2024-12-06 01:14:38.551347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:05.992 [2024-12-06 01:14:38.558364] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:05.992 [2024-12-06 01:14:38.558582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:05.992 [2024-12-06 01:14:38.558625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:05.992 [2024-12-06 01:14:38.565704] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:05.992 [2024-12-06 01:14:38.565876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:05.992 [2024-12-06 01:14:38.565915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:05.992 [2024-12-06 01:14:38.573040] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:05.992 [2024-12-06 01:14:38.573222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:05.992 [2024-12-06 01:14:38.573266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:05.992 [2024-12-06 01:14:38.580351] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with 
pdu=0x2000173ff3c8 00:37:05.992 [2024-12-06 01:14:38.580542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:05.992 [2024-12-06 01:14:38.580586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:05.992 [2024-12-06 01:14:38.587571] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:05.992 [2024-12-06 01:14:38.587834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:05.992 [2024-12-06 01:14:38.587888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:05.992 [2024-12-06 01:14:38.594992] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:05.992 [2024-12-06 01:14:38.595155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:05.992 [2024-12-06 01:14:38.595205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:05.992 [2024-12-06 01:14:38.602225] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:05.992 [2024-12-06 01:14:38.602425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:05.992 [2024-12-06 01:14:38.602468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:05.992 [2024-12-06 01:14:38.609439] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:05.992 [2024-12-06 01:14:38.609647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:05.992 [2024-12-06 01:14:38.609690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:05.992 [2024-12-06 01:14:38.616557] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:05.992 [2024-12-06 01:14:38.616760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:05.992 [2024-12-06 01:14:38.616803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:05.992 [2024-12-06 01:14:38.624416] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:05.992 [2024-12-06 01:14:38.624617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:05.992 [2024-12-06 01:14:38.624660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:05.992 [2024-12-06 01:14:38.632521] tcp.c:2241:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:05.992 [2024-12-06 01:14:38.632633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:05.992 [2024-12-06 01:14:38.632675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:05.992 [2024-12-06 01:14:38.640031] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:05.992 [2024-12-06 01:14:38.640134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:05.992 [2024-12-06 01:14:38.640173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:05.992 [2024-12-06 01:14:38.647262] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:05.992 [2024-12-06 01:14:38.647409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:05.992 [2024-12-06 01:14:38.647452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:05.992 [2024-12-06 01:14:38.654967] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:05.992 [2024-12-06 01:14:38.655071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:05.992 [2024-12-06 01:14:38.655110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:05.992 [2024-12-06 01:14:38.662203] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:05.992 [2024-12-06 01:14:38.662350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:05.993 [2024-12-06 01:14:38.662401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:05.993 [2024-12-06 01:14:38.669652] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:05.993 [2024-12-06 01:14:38.669777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:05.993 [2024-12-06 01:14:38.669831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:05.993 [2024-12-06 01:14:38.676997] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:05.993 [2024-12-06 01:14:38.677110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:05.993 [2024-12-06 01:14:38.677149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:05.993 
[2024-12-06 01:14:38.684526] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:05.993 [2024-12-06 01:14:38.684637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:05.993 [2024-12-06 01:14:38.684678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:05.993 [2024-12-06 01:14:38.691977] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:05.993 [2024-12-06 01:14:38.692073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:05.993 [2024-12-06 01:14:38.692112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:06.251 [2024-12-06 01:14:38.699527] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:06.251 [2024-12-06 01:14:38.699646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.251 [2024-12-06 01:14:38.699697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:06.251 [2024-12-06 01:14:38.706684] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:06.251 [2024-12-06 01:14:38.706837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.251 [2024-12-06 01:14:38.706894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:06.251 [2024-12-06 01:14:38.713881] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:06.251 [2024-12-06 01:14:38.713984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.251 [2024-12-06 01:14:38.714022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:06.251 [2024-12-06 01:14:38.721532] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:06.251 [2024-12-06 01:14:38.721647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.251 [2024-12-06 01:14:38.721689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:06.251 [2024-12-06 01:14:38.729029] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:06.251 [2024-12-06 01:14:38.729139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.251 [2024-12-06 01:14:38.729178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:06.251 [2024-12-06 01:14:38.736329] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:06.251 [2024-12-06 01:14:38.736446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.251 [2024-12-06 01:14:38.736488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:06.251 [2024-12-06 01:14:38.743751] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:06.251 [2024-12-06 01:14:38.743896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.251 [2024-12-06 01:14:38.743935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:06.251 [2024-12-06 01:14:38.751142] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:06.251 [2024-12-06 01:14:38.751275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.251 [2024-12-06 01:14:38.751322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:06.251 [2024-12-06 01:14:38.758473] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:06.251 [2024-12-06 01:14:38.758587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.251 [2024-12-06 01:14:38.758630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:06.251 [2024-12-06 01:14:38.766131] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:06.251 [2024-12-06 01:14:38.766281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.251 [2024-12-06 01:14:38.766324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:06.251 [2024-12-06 01:14:38.773413] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:06.251 [2024-12-06 01:14:38.773522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.251 [2024-12-06 01:14:38.773567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:06.251 [2024-12-06 01:14:38.780994] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:06.251 [2024-12-06 01:14:38.781096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.251 [2024-12-06 01:14:38.781134] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:06.251 [2024-12-06 01:14:38.788420] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:06.251 [2024-12-06 01:14:38.788541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.251 [2024-12-06 01:14:38.788583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:06.251 [2024-12-06 01:14:38.795883] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:06.251 [2024-12-06 01:14:38.796001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.251 [2024-12-06 01:14:38.796040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:06.251 [2024-12-06 01:14:38.803073] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:06.251 [2024-12-06 01:14:38.803233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.251 [2024-12-06 01:14:38.803276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:06.251 [2024-12-06 01:14:38.810301] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:06.251 [2024-12-06 01:14:38.810439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.251 [2024-12-06 01:14:38.810481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:06.251 [2024-12-06 01:14:38.817430] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:06.251 [2024-12-06 01:14:38.817573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.251 [2024-12-06 01:14:38.817616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:06.251 [2024-12-06 01:14:38.824605] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:06.251 [2024-12-06 01:14:38.824728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.251 [2024-12-06 01:14:38.824770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:06.251 [2024-12-06 01:14:38.831626] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:06.251 [2024-12-06 01:14:38.831839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:37:06.251 [2024-12-06 01:14:38.831895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:06.251 [2024-12-06 01:14:38.838261] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:06.251 [2024-12-06 01:14:38.838703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.251 [2024-12-06 01:14:38.838745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:06.251 [2024-12-06 01:14:38.845293] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:06.251 [2024-12-06 01:14:38.845659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.251 [2024-12-06 01:14:38.845701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:06.251 [2024-12-06 01:14:38.852286] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:06.251 [2024-12-06 01:14:38.852655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.251 [2024-12-06 01:14:38.852706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:06.251 [2024-12-06 01:14:38.859148] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:06.251 [2024-12-06 01:14:38.859479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.251 [2024-12-06 01:14:38.859522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:06.251 [2024-12-06 01:14:38.865507] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:06.251 [2024-12-06 01:14:38.865808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.251 [2024-12-06 01:14:38.865872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:06.251 [2024-12-06 01:14:38.871657] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:06.251 [2024-12-06 01:14:38.871997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.251 [2024-12-06 01:14:38.872035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:06.251 [2024-12-06 01:14:38.877946] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:06.251 [2024-12-06 01:14:38.878193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:0 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.251 [2024-12-06 01:14:38.878231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:06.251 [2024-12-06 01:14:38.884234] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:06.251 [2024-12-06 01:14:38.884531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.251 [2024-12-06 01:14:38.884574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:06.251 [2024-12-06 01:14:38.890718] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:06.251 [2024-12-06 01:14:38.891025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.251 [2024-12-06 01:14:38.891064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:06.251 [2024-12-06 01:14:38.897428] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:06.251 [2024-12-06 01:14:38.897716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.251 [2024-12-06 01:14:38.897759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:06.251 4134.00 IOPS, 516.75 MiB/s [2024-12-06T00:14:38.960Z] [2024-12-06 01:14:38.906154] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:06.251 [2024-12-06 01:14:38.906271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.251 [2024-12-06 01:14:38.906313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:06.251 [2024-12-06 01:14:38.913454] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:06.251 [2024-12-06 01:14:38.913582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.251 [2024-12-06 01:14:38.913623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:06.251 [2024-12-06 01:14:38.920496] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:06.251 [2024-12-06 01:14:38.920608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.251 [2024-12-06 01:14:38.920650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:06.251 [2024-12-06 01:14:38.927796] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with 
pdu=0x2000173ff3c8 00:37:06.251 [2024-12-06 01:14:38.927970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.251 [2024-12-06 01:14:38.928009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:06.251 [2024-12-06 01:14:38.935291] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:06.251 [2024-12-06 01:14:38.935491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.251 [2024-12-06 01:14:38.935533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:06.251 [2024-12-06 01:14:38.943135] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:06.251 [2024-12-06 01:14:38.943245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.251 [2024-12-06 01:14:38.943288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:06.251 [2024-12-06 01:14:38.951378] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:06.251 [2024-12-06 01:14:38.951495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.251 [2024-12-06 01:14:38.951537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:06.510 [2024-12-06 01:14:38.959100] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:06.510 [2024-12-06 01:14:38.959240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.510 [2024-12-06 01:14:38.959283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:06.510 [2024-12-06 01:14:38.967026] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:06.510 [2024-12-06 01:14:38.967161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.510 [2024-12-06 01:14:38.967200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:06.510 [2024-12-06 01:14:38.974320] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:06.510 [2024-12-06 01:14:38.974439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.510 [2024-12-06 01:14:38.974491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:06.510 [2024-12-06 01:14:38.981680] tcp.c:2241:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:06.510 [2024-12-06 01:14:38.981807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.510 [2024-12-06 01:14:38.981874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:06.510 [2024-12-06 01:14:38.989002] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:06.510 [2024-12-06 01:14:38.989109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.510 [2024-12-06 01:14:38.989147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:06.510 [2024-12-06 01:14:38.996417] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:06.510 [2024-12-06 01:14:38.996530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.510 [2024-12-06 01:14:38.996573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:06.511 [2024-12-06 01:14:39.003974] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:06.511 [2024-12-06 01:14:39.004085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.511 [2024-12-06 01:14:39.004124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:06.511 [2024-12-06 01:14:39.011518] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:06.511 [2024-12-06 01:14:39.011635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.511 [2024-12-06 01:14:39.011677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:06.511 [2024-12-06 01:14:39.019033] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:06.511 [2024-12-06 01:14:39.019140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.511 [2024-12-06 01:14:39.019179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:06.511 [2024-12-06 01:14:39.026290] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:06.511 [2024-12-06 01:14:39.026411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.511 [2024-12-06 01:14:39.026454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:06.511 
[2024-12-06 01:14:39.033461] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:06.511 [2024-12-06 01:14:39.033575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.511 [2024-12-06 01:14:39.033617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:06.511 [2024-12-06 01:14:39.040947] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:06.511 [2024-12-06 01:14:39.041049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.511 [2024-12-06 01:14:39.041088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:06.511 [2024-12-06 01:14:39.048244] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:06.511 [2024-12-06 01:14:39.048354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.511 [2024-12-06 01:14:39.048396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:06.511 [2024-12-06 01:14:39.055591] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:06.511 [2024-12-06 01:14:39.055717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.511 [2024-12-06 01:14:39.055761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:06.511 [2024-12-06 01:14:39.062799] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:06.511 [2024-12-06 01:14:39.062947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.511 [2024-12-06 01:14:39.062985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:06.511 [2024-12-06 01:14:39.069885] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:06.511 [2024-12-06 01:14:39.070010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.511 [2024-12-06 01:14:39.070049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:06.511 [2024-12-06 01:14:39.076913] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:06.511 [2024-12-06 01:14:39.077033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.511 [2024-12-06 01:14:39.077072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:06.511 [2024-12-06 01:14:39.084001] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:06.511 [2024-12-06 01:14:39.084122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.511 [2024-12-06 01:14:39.084160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:06.511 [2024-12-06 01:14:39.091389] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:06.511 [2024-12-06 01:14:39.091538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.511 [2024-12-06 01:14:39.091580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:06.511 [2024-12-06 01:14:39.098614] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:06.511 [2024-12-06 01:14:39.098765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.511 [2024-12-06 01:14:39.098825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:06.511 [2024-12-06 01:14:39.105937] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:06.511 [2024-12-06 01:14:39.106058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.511 [2024-12-06 01:14:39.106096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:06.511 [2024-12-06 01:14:39.113097] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:06.511 [2024-12-06 01:14:39.113257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.511 [2024-12-06 01:14:39.113299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:06.511 [2024-12-06 01:14:39.120438] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:06.511 [2024-12-06 01:14:39.120586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.511 [2024-12-06 01:14:39.120629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:06.511 [2024-12-06 01:14:39.127564] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:06.511 [2024-12-06 01:14:39.127694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.511 [2024-12-06 01:14:39.127737] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:06.511 [2024-12-06 01:14:39.134767] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:06.511 [2024-12-06 01:14:39.134932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.511 [2024-12-06 01:14:39.134970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:06.511 [2024-12-06 01:14:39.142023] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:06.511 [2024-12-06 01:14:39.142158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.511 [2024-12-06 01:14:39.142197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:06.511 [2024-12-06 01:14:39.149283] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:06.511 [2024-12-06 01:14:39.149400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.511 [2024-12-06 01:14:39.149444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:06.511 [2024-12-06 01:14:39.156494] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:06.511 [2024-12-06 01:14:39.156629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.511 [2024-12-06 01:14:39.156671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:06.511 [2024-12-06 01:14:39.163665] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:06.511 [2024-12-06 01:14:39.163794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.511 [2024-12-06 01:14:39.163860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:06.511 [2024-12-06 01:14:39.171297] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:06.511 [2024-12-06 01:14:39.171413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.511 [2024-12-06 01:14:39.171454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:06.511 [2024-12-06 01:14:39.178811] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:06.511 [2024-12-06 01:14:39.178939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:37:06.511 [2024-12-06 01:14:39.178978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:06.511 [2024-12-06 01:14:39.186352] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:06.511 [2024-12-06 01:14:39.186460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.511 [2024-12-06 01:14:39.186503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:06.511 [2024-12-06 01:14:39.193884] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:06.511 [2024-12-06 01:14:39.194008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.511 [2024-12-06 01:14:39.194046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:06.511 [2024-12-06 01:14:39.201783] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:06.511 [2024-12-06 01:14:39.201926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.511 [2024-12-06 01:14:39.201964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:06.511 [2024-12-06 01:14:39.209151] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:06.511 [2024-12-06 01:14:39.209270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.511 [2024-12-06 01:14:39.209312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:06.511 [2024-12-06 01:14:39.216664] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:06.511 [2024-12-06 01:14:39.216780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.511 [2024-12-06 01:14:39.216831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:06.811 [2024-12-06 01:14:39.224267] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:06.811 [2024-12-06 01:14:39.224414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.811 [2024-12-06 01:14:39.224456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:06.811 [2024-12-06 01:14:39.231645] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:06.811 [2024-12-06 01:14:39.231770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:0 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.811 [2024-12-06 01:14:39.231813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:06.811 [2024-12-06 01:14:39.238959] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:06.811 [2024-12-06 01:14:39.239062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.811 [2024-12-06 01:14:39.239100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:06.811 [2024-12-06 01:14:39.246435] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:06.811 [2024-12-06 01:14:39.246551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.811 [2024-12-06 01:14:39.246595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:06.811 [2024-12-06 01:14:39.253743] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:06.811 [2024-12-06 01:14:39.253877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.811 [2024-12-06 01:14:39.253916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:06.811 [2024-12-06 01:14:39.261021] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:06.811 [2024-12-06 01:14:39.261117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.811 [2024-12-06 01:14:39.261155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:06.811 [2024-12-06 01:14:39.268297] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:06.812 [2024-12-06 01:14:39.268425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.812 [2024-12-06 01:14:39.268468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:06.812 [2024-12-06 01:14:39.275362] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:06.812 [2024-12-06 01:14:39.275499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.812 [2024-12-06 01:14:39.275541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:06.812 [2024-12-06 01:14:39.282500] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:06.812 [2024-12-06 01:14:39.282622] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.812 [2024-12-06 01:14:39.282664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:06.812 [2024-12-06 01:14:39.289641] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:06.812 [2024-12-06 01:14:39.289796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.812 [2024-12-06 01:14:39.289864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:06.812 [2024-12-06 01:14:39.297075] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:06.812 [2024-12-06 01:14:39.297200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.812 [2024-12-06 01:14:39.297243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:06.812 [2024-12-06 01:14:39.304457] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:06.812 [2024-12-06 01:14:39.304574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.812 [2024-12-06 01:14:39.304618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:06.812 [2024-12-06 01:14:39.311897] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:06.812 [2024-12-06 01:14:39.312018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.812 [2024-12-06 01:14:39.312057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:06.812 [2024-12-06 01:14:39.319427] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:06.812 [2024-12-06 01:14:39.319533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.812 [2024-12-06 01:14:39.319574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:06.812 [2024-12-06 01:14:39.326769] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:06.812 [2024-12-06 01:14:39.326922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.812 [2024-12-06 01:14:39.326960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:06.812 [2024-12-06 01:14:39.334075] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) 
with pdu=0x2000173ff3c8 00:37:06.812 [2024-12-06 01:14:39.334197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.812 [2024-12-06 01:14:39.334241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:06.812 [2024-12-06 01:14:39.341574] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:06.812 [2024-12-06 01:14:39.341691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.812 [2024-12-06 01:14:39.341733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:06.812 [2024-12-06 01:14:39.348935] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:06.812 [2024-12-06 01:14:39.349036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.812 [2024-12-06 01:14:39.349074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:06.812 [2024-12-06 01:14:39.356285] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:06.812 [2024-12-06 01:14:39.356428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.812 [2024-12-06 01:14:39.356471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:06.812 [2024-12-06 01:14:39.363294] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:06.812 [2024-12-06 01:14:39.363422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.812 [2024-12-06 01:14:39.363465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:06.812 [2024-12-06 01:14:39.370803] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:06.812 [2024-12-06 01:14:39.370954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.812 [2024-12-06 01:14:39.370992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:06.812 [2024-12-06 01:14:39.378109] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:06.812 [2024-12-06 01:14:39.378246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.812 [2024-12-06 01:14:39.378288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:06.812 [2024-12-06 01:14:39.385756] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:06.812 [2024-12-06 01:14:39.385907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.812 [2024-12-06 01:14:39.385951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:06.812 [2024-12-06 01:14:39.393405] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:06.812 [2024-12-06 01:14:39.393547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.812 [2024-12-06 01:14:39.393589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:06.812 [2024-12-06 01:14:39.401192] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:06.812 [2024-12-06 01:14:39.401329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.812 [2024-12-06 01:14:39.401372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:06.812 [2024-12-06 01:14:39.408369] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:06.812 [2024-12-06 01:14:39.408504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.812 [2024-12-06 01:14:39.408543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:06.812 [2024-12-06 01:14:39.415626] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:06.812 [2024-12-06 01:14:39.415762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.812 [2024-12-06 01:14:39.415834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:06.812 [2024-12-06 01:14:39.422766] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:06.812 [2024-12-06 01:14:39.422944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.812 [2024-12-06 01:14:39.422983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:06.812 [2024-12-06 01:14:39.430370] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:06.812 [2024-12-06 01:14:39.430499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.812 [2024-12-06 01:14:39.430542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 
sqhd:0022 p:0 m:0 dnr:0 00:37:06.812 [2024-12-06 01:14:39.438039] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:06.812 [2024-12-06 01:14:39.438135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.812 [2024-12-06 01:14:39.438174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:06.812 [2024-12-06 01:14:39.445582] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:06.812 [2024-12-06 01:14:39.445699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.812 [2024-12-06 01:14:39.445742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:06.812 [2024-12-06 01:14:39.452945] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:06.812 [2024-12-06 01:14:39.453050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.812 [2024-12-06 01:14:39.453088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:06.812 [2024-12-06 01:14:39.460620] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:06.812 [2024-12-06 01:14:39.460768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.813 [2024-12-06 01:14:39.460811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:06.813 [2024-12-06 01:14:39.468229] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:06.813 [2024-12-06 01:14:39.468347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.813 [2024-12-06 01:14:39.468390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:06.813 [2024-12-06 01:14:39.475682] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:06.813 [2024-12-06 01:14:39.475805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.813 [2024-12-06 01:14:39.475874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:06.813 [2024-12-06 01:14:39.483156] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:06.813 [2024-12-06 01:14:39.483285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.813 [2024-12-06 01:14:39.483327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:06.813 [2024-12-06 01:14:39.490565] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:06.813 [2024-12-06 01:14:39.490680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.813 [2024-12-06 01:14:39.490721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:06.813 [2024-12-06 01:14:39.497655] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:06.813 [2024-12-06 01:14:39.497775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.813 [2024-12-06 01:14:39.497829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:06.813 [2024-12-06 01:14:39.504893] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:06.813 [2024-12-06 01:14:39.505002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.813 [2024-12-06 01:14:39.505041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:06.813 [2024-12-06 01:14:39.512419] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:06.813 [2024-12-06 01:14:39.512536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.813 [2024-12-06 01:14:39.512580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:07.073 [2024-12-06 01:14:39.519891] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:07.073 [2024-12-06 01:14:39.520001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.073 [2024-12-06 01:14:39.520039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:07.073 [2024-12-06 01:14:39.527305] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:07.073 [2024-12-06 01:14:39.527432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.073 [2024-12-06 01:14:39.527476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:07.073 [2024-12-06 01:14:39.534766] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:07.073 [2024-12-06 01:14:39.534904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.073 [2024-12-06 
01:14:39.534949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:07.073 [2024-12-06 01:14:39.542330] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:07.073 [2024-12-06 01:14:39.542444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.073 [2024-12-06 01:14:39.542500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:07.073 [2024-12-06 01:14:39.550046] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:07.073 [2024-12-06 01:14:39.550153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.073 [2024-12-06 01:14:39.550191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:07.073 [2024-12-06 01:14:39.557440] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:07.073 [2024-12-06 01:14:39.557577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.073 [2024-12-06 01:14:39.557618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:07.073 [2024-12-06 01:14:39.564926] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:07.073 [2024-12-06 01:14:39.565035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.073 [2024-12-06 01:14:39.565074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:07.073 [2024-12-06 01:14:39.572398] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:07.073 [2024-12-06 01:14:39.572508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.073 [2024-12-06 01:14:39.572551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:07.073 [2024-12-06 01:14:39.580018] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:07.073 [2024-12-06 01:14:39.580119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.073 [2024-12-06 01:14:39.580158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:07.073 [2024-12-06 01:14:39.587485] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:07.073 [2024-12-06 01:14:39.587601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9440 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.073 [2024-12-06 01:14:39.587642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:07.073 [2024-12-06 01:14:39.594534] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:07.073 [2024-12-06 01:14:39.594666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.073 [2024-12-06 01:14:39.594709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:07.073 [2024-12-06 01:14:39.601669] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:07.073 [2024-12-06 01:14:39.601802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.073 [2024-12-06 01:14:39.601871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:07.073 [2024-12-06 01:14:39.608836] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:07.074 [2024-12-06 01:14:39.608985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.074 [2024-12-06 01:14:39.609024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:07.074 [2024-12-06 01:14:39.615938] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:07.074 [2024-12-06 01:14:39.616061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.074 [2024-12-06 01:14:39.616100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:07.074 [2024-12-06 01:14:39.623125] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:07.074 [2024-12-06 01:14:39.623262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.074 [2024-12-06 01:14:39.623304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:07.074 [2024-12-06 01:14:39.630635] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:07.074 [2024-12-06 01:14:39.630756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.074 [2024-12-06 01:14:39.630799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:07.074 [2024-12-06 01:14:39.638424] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:07.074 [2024-12-06 01:14:39.638575] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.074 [2024-12-06 01:14:39.638618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:07.074 [2024-12-06 01:14:39.645675] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:07.074 [2024-12-06 01:14:39.645781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.074 [2024-12-06 01:14:39.645830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:07.074 [2024-12-06 01:14:39.653309] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:07.074 [2024-12-06 01:14:39.653413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.074 [2024-12-06 01:14:39.653455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:07.074 [2024-12-06 01:14:39.660606] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:07.074 [2024-12-06 01:14:39.660720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.074 [2024-12-06 01:14:39.660765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:07.074 [2024-12-06 01:14:39.668096] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:07.074 [2024-12-06 01:14:39.668217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.074 [2024-12-06 01:14:39.668259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:07.074 [2024-12-06 01:14:39.675442] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:07.074 [2024-12-06 01:14:39.675563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.074 [2024-12-06 01:14:39.675605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:07.074 [2024-12-06 01:14:39.682948] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:07.074 [2024-12-06 01:14:39.683045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.074 [2024-12-06 01:14:39.683085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:07.074 [2024-12-06 01:14:39.690436] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with 
pdu=0x2000173ff3c8 00:37:07.074 [2024-12-06 01:14:39.690542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.074 [2024-12-06 01:14:39.690583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:07.074 [2024-12-06 01:14:39.698104] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:07.074 [2024-12-06 01:14:39.698250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.074 [2024-12-06 01:14:39.698292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:07.074 [2024-12-06 01:14:39.705645] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:07.074 [2024-12-06 01:14:39.705762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.074 [2024-12-06 01:14:39.705804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:07.074 [2024-12-06 01:14:39.713229] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:07.074 [2024-12-06 01:14:39.713340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.074 [2024-12-06 01:14:39.713381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:07.074 [2024-12-06 01:14:39.720806] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:07.074 [2024-12-06 01:14:39.720967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.074 [2024-12-06 01:14:39.721005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:07.074 [2024-12-06 01:14:39.728553] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:07.074 [2024-12-06 01:14:39.728659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.074 [2024-12-06 01:14:39.728706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:07.074 [2024-12-06 01:14:39.736219] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:07.074 [2024-12-06 01:14:39.736329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.074 [2024-12-06 01:14:39.736387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:07.074 [2024-12-06 01:14:39.743837] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:07.074 [2024-12-06 01:14:39.743998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.074 [2024-12-06 01:14:39.744037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:07.074 [2024-12-06 01:14:39.751259] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:07.074 [2024-12-06 01:14:39.751406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.074 [2024-12-06 01:14:39.751450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:07.074 [2024-12-06 01:14:39.758634] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:07.074 [2024-12-06 01:14:39.758766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.074 [2024-12-06 01:14:39.758809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:07.074 [2024-12-06 01:14:39.766076] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:07.074 [2024-12-06 01:14:39.766206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.074 [2024-12-06 01:14:39.766244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:07.074 [2024-12-06 01:14:39.773343] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:07.074 [2024-12-06 01:14:39.773479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.074 [2024-12-06 01:14:39.773522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:07.074 [2024-12-06 01:14:39.780583] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:07.333 [2024-12-06 01:14:39.780727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.333 [2024-12-06 01:14:39.780772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:07.333 [2024-12-06 01:14:39.787727] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:07.333 [2024-12-06 01:14:39.787892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.333 [2024-12-06 01:14:39.787931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 
sqhd:0022 p:0 m:0 dnr:0 00:37:07.333 [2024-12-06 01:14:39.794986] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:07.333 [2024-12-06 01:14:39.795118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.333 [2024-12-06 01:14:39.795157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:07.333 [2024-12-06 01:14:39.802351] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:07.333 [2024-12-06 01:14:39.802496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.333 [2024-12-06 01:14:39.802538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:07.333 [2024-12-06 01:14:39.809529] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:07.333 [2024-12-06 01:14:39.809672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.333 [2024-12-06 01:14:39.809715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:07.333 [2024-12-06 01:14:39.816859] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:07.333 [2024-12-06 01:14:39.816990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.333 [2024-12-06 01:14:39.817028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:07.333 [2024-12-06 01:14:39.823952] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:07.333 [2024-12-06 01:14:39.824080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.333 [2024-12-06 01:14:39.824119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:07.333 [2024-12-06 01:14:39.831114] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:07.333 [2024-12-06 01:14:39.831271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.333 [2024-12-06 01:14:39.831314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:07.334 [2024-12-06 01:14:39.838211] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:07.334 [2024-12-06 01:14:39.838359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.334 [2024-12-06 01:14:39.838402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:07.334 [2024-12-06 01:14:39.845451] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:07.334 [2024-12-06 01:14:39.845593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.334 [2024-12-06 01:14:39.845637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:07.334 [2024-12-06 01:14:39.852672] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:07.334 [2024-12-06 01:14:39.852825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.334 [2024-12-06 01:14:39.852881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:07.334 [2024-12-06 01:14:39.859984] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:07.334 [2024-12-06 01:14:39.860119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.334 [2024-12-06 01:14:39.860168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:07.334 [2024-12-06 01:14:39.867246] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:07.334 [2024-12-06 01:14:39.867390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.334 [2024-12-06 01:14:39.867432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:07.334 [2024-12-06 01:14:39.874454] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:07.334 [2024-12-06 01:14:39.874598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.334 [2024-12-06 01:14:39.874641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:07.334 [2024-12-06 01:14:39.881650] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:07.334 [2024-12-06 01:14:39.881795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.334 [2024-12-06 01:14:39.881863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:07.334 [2024-12-06 01:14:39.888822] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:07.334 [2024-12-06 01:14:39.888972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.334 [2024-12-06 
01:14:39.889010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:07.334 [2024-12-06 01:14:39.896055] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:07.334 [2024-12-06 01:14:39.896190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.334 [2024-12-06 01:14:39.896229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:07.334 4169.00 IOPS, 521.12 MiB/s [2024-12-06T00:14:40.043Z] [2024-12-06 01:14:39.904701] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:07.334 [2024-12-06 01:14:39.904874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.334 [2024-12-06 01:14:39.904915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:07.334 00:37:07.334 Latency(us) 00:37:07.334 [2024-12-06T00:14:40.043Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:07.334 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:37:07.334 nvme0n1 : 2.01 4165.71 520.71 0.00 0.00 3829.89 2924.85 11068.30 00:37:07.334 [2024-12-06T00:14:40.043Z] =================================================================================================================== 00:37:07.334 [2024-12-06T00:14:40.043Z] Total : 4165.71 520.71 0.00 0.00 3829.89 2924.85 11068.30 00:37:07.334 { 00:37:07.334 "results": [ 00:37:07.334 { 00:37:07.334 "job": "nvme0n1", 00:37:07.334 "core_mask": "0x2", 00:37:07.334 "workload": "randwrite", 00:37:07.334 "status": "finished", 00:37:07.334 "queue_depth": 16, 00:37:07.334 "io_size": 131072, 00:37:07.334 "runtime": 2.005659, 00:37:07.334 "iops": 4165.713114741838, 00:37:07.334 "mibps": 520.7141393427297, 00:37:07.334 "io_failed": 0, 00:37:07.334 "io_timeout": 0, 00:37:07.334 "avg_latency_us": 3829.888280869739, 00:37:07.334 "min_latency_us": 2924.8474074074074, 00:37:07.334 "max_latency_us": 11068.302222222223 00:37:07.334 } 00:37:07.334 ], 00:37:07.334 "core_count": 1 00:37:07.334 } 00:37:07.334 01:14:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:37:07.334 01:14:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:37:07.334 01:14:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:37:07.334 | .driver_specific 00:37:07.334 | .nvme_error 00:37:07.334 | .status_code 00:37:07.334 | .command_transient_transport_error' 00:37:07.334 01:14:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:37:07.593 01:14:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 270 > 0 )) 00:37:07.593 01:14:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3134626 00:37:07.593 01:14:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 3134626 
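(Editor's sketch, not part of the captured log.) The digest.sh trace above is the point where the test verifies that the injected data-digest corruptions actually surfaced as transient transport errors: get_transient_errcount issues bdev_get_iostat over the bperf RPC socket and extracts the command_transient_transport_error counter with the jq filter shown, then requires it to be non-zero (270 in this run). A condensed, assumption-laden rendering of that check, reusing only the paths, socket, filter, and bdev name visible in the trace:

  # illustrative re-statement of digest.sh's helper; the real script wraps rpc.py in bperf_rpc
  get_transient_errcount() {
      local bdev=$1
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
          bdev_get_iostat -b "$bdev" \
          | jq -r '.bdevs[0]
              | .driver_specific
              | .nvme_error
              | .status_code
              | .command_transient_transport_error'
  }
  # in this run the counter came back as 270, so the assertion below succeeds
  (( $(get_transient_errcount nvme0n1) > 0 ))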
']' 00:37:07.593 01:14:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 3134626 00:37:07.593 01:14:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:37:07.593 01:14:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:07.593 01:14:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3134626 00:37:07.593 01:14:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:37:07.593 01:14:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:37:07.593 01:14:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3134626' 00:37:07.593 killing process with pid 3134626 00:37:07.593 01:14:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 3134626 00:37:07.593 Received shutdown signal, test time was about 2.000000 seconds 00:37:07.593 00:37:07.593 Latency(us) 00:37:07.593 [2024-12-06T00:14:40.302Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:07.593 [2024-12-06T00:14:40.302Z] =================================================================================================================== 00:37:07.593 [2024-12-06T00:14:40.302Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:37:07.593 01:14:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 3134626 00:37:08.528 01:14:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 3132593 00:37:08.528 01:14:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 3132593 ']' 00:37:08.528 01:14:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 3132593 00:37:08.528 01:14:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:37:08.528 01:14:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:08.528 01:14:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3132593 00:37:08.528 01:14:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:37:08.528 01:14:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:37:08.528 01:14:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3132593' 00:37:08.528 killing process with pid 3132593 00:37:08.528 01:14:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 3132593 00:37:08.528 01:14:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 3132593 00:37:09.951 00:37:09.951 real 0m23.707s 00:37:09.951 user 0m46.763s 00:37:09.951 sys 0m4.670s 00:37:09.951 01:14:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:09.951 01:14:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:37:09.951 ************************************ 00:37:09.951 END 
TEST nvmf_digest_error 00:37:09.951 ************************************ 00:37:09.951 01:14:42 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:37:09.951 01:14:42 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:37:09.951 01:14:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@516 -- # nvmfcleanup 00:37:09.951 01:14:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # sync 00:37:09.951 01:14:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:37:09.951 01:14:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set +e 00:37:09.951 01:14:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # for i in {1..20} 00:37:09.951 01:14:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:37:09.951 rmmod nvme_tcp 00:37:09.951 rmmod nvme_fabrics 00:37:09.951 rmmod nvme_keyring 00:37:09.951 01:14:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:37:09.951 01:14:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@128 -- # set -e 00:37:09.951 01:14:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # return 0 00:37:09.951 01:14:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@517 -- # '[' -n 3132593 ']' 00:37:09.951 01:14:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@518 -- # killprocess 3132593 00:37:09.951 01:14:42 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # '[' -z 3132593 ']' 00:37:09.951 01:14:42 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@958 -- # kill -0 3132593 00:37:09.951 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (3132593) - No such process 00:37:09.951 01:14:42 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@981 -- # echo 'Process with pid 3132593 is not found' 00:37:09.951 Process with pid 3132593 is not found 00:37:09.951 01:14:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:37:09.951 01:14:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:37:09.951 01:14:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:37:09.951 01:14:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # iptr 00:37:09.951 01:14:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-save 00:37:09.951 01:14:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:37:09.951 01:14:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-restore 00:37:09.951 01:14:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:37:09.951 01:14:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@302 -- # remove_spdk_ns 00:37:09.951 01:14:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:09.951 01:14:42 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:09.951 01:14:42 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:11.880 01:14:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:37:11.880 00:37:11.880 real 0m52.660s 00:37:11.880 user 1m35.564s 00:37:11.880 sys 0m10.982s 00:37:11.880 01:14:44 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:11.880 01:14:44 
nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:37:11.880 ************************************ 00:37:11.880 END TEST nvmf_digest 00:37:11.880 ************************************ 00:37:11.880 01:14:44 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:37:11.880 01:14:44 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 0 -eq 1 ]] 00:37:11.880 01:14:44 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ phy == phy ]] 00:37:11.880 01:14:44 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@47 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:37:11.880 01:14:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:37:11.880 01:14:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:11.880 01:14:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:37:11.880 ************************************ 00:37:11.880 START TEST nvmf_bdevperf 00:37:11.880 ************************************ 00:37:11.880 01:14:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:37:11.880 * Looking for test storage... 00:37:11.880 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:37:11.880 01:14:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:37:11.880 01:14:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1711 -- # lcov --version 00:37:11.880 01:14:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:37:12.139 01:14:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:37:12.139 01:14:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:12.139 01:14:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:12.139 01:14:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:12.139 01:14:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:37:12.139 01:14:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:37:12.139 01:14:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:37:12.139 01:14:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:37:12.139 01:14:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:37:12.139 01:14:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:37:12.139 01:14:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:37:12.139 01:14:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:12.139 01:14:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:37:12.139 01:14:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@345 -- # : 1 00:37:12.139 01:14:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:12.139 01:14:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:37:12.139 01:14:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:37:12.139 01:14:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=1 00:37:12.139 01:14:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:12.139 01:14:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 1 00:37:12.139 01:14:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:37:12.139 01:14:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:37:12.139 01:14:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=2 00:37:12.139 01:14:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:12.139 01:14:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 2 00:37:12.139 01:14:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:37:12.139 01:14:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:12.139 01:14:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:12.139 01:14:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # return 0 00:37:12.139 01:14:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:12.139 01:14:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:37:12.139 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:12.139 --rc genhtml_branch_coverage=1 00:37:12.139 --rc genhtml_function_coverage=1 00:37:12.139 --rc genhtml_legend=1 00:37:12.139 --rc geninfo_all_blocks=1 00:37:12.139 --rc geninfo_unexecuted_blocks=1 00:37:12.139 00:37:12.139 ' 00:37:12.139 01:14:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:37:12.139 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:12.139 --rc genhtml_branch_coverage=1 00:37:12.139 --rc genhtml_function_coverage=1 00:37:12.139 --rc genhtml_legend=1 00:37:12.139 --rc geninfo_all_blocks=1 00:37:12.139 --rc geninfo_unexecuted_blocks=1 00:37:12.139 00:37:12.139 ' 00:37:12.139 01:14:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:37:12.139 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:12.139 --rc genhtml_branch_coverage=1 00:37:12.139 --rc genhtml_function_coverage=1 00:37:12.139 --rc genhtml_legend=1 00:37:12.139 --rc geninfo_all_blocks=1 00:37:12.139 --rc geninfo_unexecuted_blocks=1 00:37:12.139 00:37:12.139 ' 00:37:12.139 01:14:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:37:12.139 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:12.139 --rc genhtml_branch_coverage=1 00:37:12.139 --rc genhtml_function_coverage=1 00:37:12.139 --rc genhtml_legend=1 00:37:12.139 --rc geninfo_all_blocks=1 00:37:12.139 --rc geninfo_unexecuted_blocks=1 00:37:12.139 00:37:12.139 ' 00:37:12.139 01:14:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:12.139 01:14:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:37:12.139 01:14:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:12.139 01:14:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:12.139 01:14:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:12.139 01:14:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:12.139 01:14:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:12.139 01:14:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:12.139 01:14:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:12.139 01:14:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:12.139 01:14:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:12.139 01:14:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:12.139 01:14:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:37:12.139 01:14:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:37:12.139 01:14:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:12.139 01:14:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:12.139 01:14:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:12.139 01:14:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:12.139 01:14:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:12.139 01:14:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@15 -- # shopt -s extglob 00:37:12.139 01:14:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:12.139 01:14:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:12.139 01:14:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:12.139 01:14:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:12.140 01:14:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:12.140 01:14:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:12.140 01:14:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:37:12.140 01:14:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:12.140 01:14:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@51 -- # : 0 00:37:12.140 01:14:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:12.140 01:14:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:12.140 01:14:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:12.140 01:14:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:12.140 01:14:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:12.140 01:14:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:37:12.140 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:37:12.140 01:14:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:12.140 01:14:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:12.140 01:14:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:12.140 01:14:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:37:12.140 01:14:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:37:12.140 01:14:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:37:12.140 01:14:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@469 -- # '[' -z tcp ']' 00:37:12.140 01:14:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:12.140 01:14:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@476 -- # prepare_net_devs 00:37:12.140 01:14:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:37:12.140 01:14:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:37:12.140 01:14:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:12.140 01:14:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:12.140 01:14:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:12.140 01:14:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:37:12.140 01:14:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:37:12.140 01:14:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@309 -- # xtrace_disable 00:37:12.140 01:14:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:14.671 01:14:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:14.671 01:14:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # pci_devs=() 00:37:14.671 01:14:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # local -a pci_devs 00:37:14.671 01:14:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:37:14.671 01:14:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:37:14.671 01:14:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # pci_drivers=() 00:37:14.671 01:14:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:37:14.671 01:14:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # net_devs=() 00:37:14.671 01:14:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # local -ga net_devs 00:37:14.671 01:14:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # e810=() 00:37:14.671 01:14:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # local -ga e810 00:37:14.671 01:14:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # x722=() 00:37:14.671 01:14:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # local -ga x722 00:37:14.671 01:14:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # mlx=() 00:37:14.671 01:14:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # local -ga mlx 00:37:14.671 01:14:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:14.671 01:14:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:14.671 01:14:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:14.671 01:14:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:14.671 01:14:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:14.671 01:14:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:14.671 01:14:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:14.671 01:14:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:37:14.671 01:14:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:14.671 01:14:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:14.671 01:14:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:14.671 01:14:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:14.671 01:14:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:37:14.671 01:14:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:37:14.671 01:14:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:37:14.671 01:14:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:37:14.671 01:14:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:37:14.671 01:14:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:37:14.671 01:14:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:14.671 01:14:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:37:14.671 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:37:14.671 01:14:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:14.671 01:14:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:14.671 01:14:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:14.671 01:14:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:14.671 01:14:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:14.671 01:14:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:14.671 01:14:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:37:14.671 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:37:14.671 01:14:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:14.671 01:14:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:14.671 01:14:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:14.671 01:14:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:14.671 01:14:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:14.671 01:14:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:37:14.671 01:14:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:37:14.671 01:14:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:37:14.671 01:14:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:14.671 01:14:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:14.671 01:14:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
00:37:14.671 01:14:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:14.671 01:14:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:14.671 01:14:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:14.671 01:14:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:14.671 01:14:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:37:14.671 Found net devices under 0000:0a:00.0: cvl_0_0 00:37:14.671 01:14:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:14.671 01:14:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:14.671 01:14:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:14.671 01:14:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:14.671 01:14:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:14.671 01:14:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:14.671 01:14:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:14.671 01:14:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:14.671 01:14:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:37:14.671 Found net devices under 0000:0a:00.1: cvl_0_1 00:37:14.671 01:14:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:14.671 01:14:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:37:14.671 01:14:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # is_hw=yes 00:37:14.671 01:14:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:37:14.671 01:14:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:37:14.671 01:14:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:37:14.671 01:14:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:37:14.671 01:14:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:14.671 01:14:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:14.671 01:14:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:37:14.671 01:14:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:37:14.671 01:14:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:37:14.671 01:14:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:37:14.671 01:14:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:37:14.671 01:14:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:37:14.671 01:14:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:37:14.671 01:14:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:37:14.671 01:14:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:37:14.671 01:14:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:37:14.671 01:14:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:37:14.671 01:14:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:37:14.671 01:14:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:37:14.672 01:14:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:37:14.672 01:14:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:37:14.672 01:14:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:37:14.672 01:14:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:37:14.672 01:14:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:37:14.672 01:14:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:37:14.672 01:14:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:37:14.672 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:37:14.672 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.377 ms 00:37:14.672 00:37:14.672 --- 10.0.0.2 ping statistics --- 00:37:14.672 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:14.672 rtt min/avg/max/mdev = 0.377/0.377/0.377/0.000 ms 00:37:14.672 01:14:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:37:14.672 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:37:14.672 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.132 ms 00:37:14.672 00:37:14.672 --- 10.0.0.1 ping statistics --- 00:37:14.672 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:14.672 rtt min/avg/max/mdev = 0.132/0.132/0.132/0.000 ms 00:37:14.672 01:14:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:14.672 01:14:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@450 -- # return 0 00:37:14.672 01:14:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:37:14.672 01:14:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:14.672 01:14:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:37:14.672 01:14:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:37:14.672 01:14:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:14.672 01:14:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:37:14.672 01:14:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:37:14.672 01:14:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:37:14.672 01:14:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:37:14.672 01:14:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:37:14.672 01:14:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:14.672 01:14:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:14.672 01:14:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=3137361 00:37:14.672 01:14:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:37:14.672 01:14:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 3137361 00:37:14.672 01:14:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 3137361 ']' 00:37:14.672 01:14:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:14.672 01:14:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:14.672 01:14:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:14.672 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:14.672 01:14:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:14.672 01:14:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:14.672 [2024-12-06 01:14:47.045760] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 24.03.0 initialization... 
00:37:14.672 [2024-12-06 01:14:47.045952] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:14.672 [2024-12-06 01:14:47.222162] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:37:14.672 [2024-12-06 01:14:47.366998] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:14.672 [2024-12-06 01:14:47.367092] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:14.672 [2024-12-06 01:14:47.367117] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:14.672 [2024-12-06 01:14:47.367141] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:14.672 [2024-12-06 01:14:47.367160] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:37:14.672 [2024-12-06 01:14:47.369891] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:37:14.672 [2024-12-06 01:14:47.369947] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:37:14.672 [2024-12-06 01:14:47.369951] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:37:15.606 01:14:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:15.606 01:14:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:37:15.606 01:14:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:37:15.606 01:14:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable 00:37:15.606 01:14:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:15.606 01:14:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:15.606 01:14:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:37:15.606 01:14:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:15.606 01:14:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:15.606 [2024-12-06 01:14:48.061611] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:15.606 01:14:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:15.606 01:14:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:37:15.606 01:14:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:15.606 01:14:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:15.606 Malloc0 00:37:15.606 01:14:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:15.606 01:14:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:37:15.606 01:14:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:15.607 01:14:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:15.607 01:14:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:37:15.607 01:14:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:37:15.607 01:14:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:15.607 01:14:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:15.607 01:14:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:15.607 01:14:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:37:15.607 01:14:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:15.607 01:14:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:15.607 [2024-12-06 01:14:48.175487] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:15.607 01:14:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:15.607 01:14:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:37:15.607 01:14:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:37:15.607 01:14:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:37:15.607 01:14:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:37:15.607 01:14:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:37:15.607 01:14:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:37:15.607 { 00:37:15.607 "params": { 00:37:15.607 "name": "Nvme$subsystem", 00:37:15.607 "trtype": "$TEST_TRANSPORT", 00:37:15.607 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:15.607 "adrfam": "ipv4", 00:37:15.607 "trsvcid": "$NVMF_PORT", 00:37:15.607 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:15.607 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:15.607 "hdgst": ${hdgst:-false}, 00:37:15.607 "ddgst": ${ddgst:-false} 00:37:15.607 }, 00:37:15.607 "method": "bdev_nvme_attach_controller" 00:37:15.607 } 00:37:15.607 EOF 00:37:15.607 )") 00:37:15.607 01:14:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:37:15.607 01:14:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 00:37:15.607 01:14:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:37:15.607 01:14:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:37:15.607 "params": { 00:37:15.607 "name": "Nvme1", 00:37:15.607 "trtype": "tcp", 00:37:15.607 "traddr": "10.0.0.2", 00:37:15.607 "adrfam": "ipv4", 00:37:15.607 "trsvcid": "4420", 00:37:15.607 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:37:15.607 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:37:15.607 "hdgst": false, 00:37:15.607 "ddgst": false 00:37:15.607 }, 00:37:15.607 "method": "bdev_nvme_attach_controller" 00:37:15.607 }' 00:37:15.607 [2024-12-06 01:14:48.259889] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 24.03.0 initialization... 
00:37:15.607 [2024-12-06 01:14:48.260011] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3137523 ] 00:37:15.865 [2024-12-06 01:14:48.397127] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:15.865 [2024-12-06 01:14:48.521698] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:16.432 Running I/O for 1 seconds... 00:37:17.366 6062.00 IOPS, 23.68 MiB/s 00:37:17.366 Latency(us) 00:37:17.366 [2024-12-06T00:14:50.075Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:17.366 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:37:17.366 Verification LBA range: start 0x0 length 0x4000 00:37:17.366 Nvme1n1 : 1.02 6077.76 23.74 0.00 0.00 20960.35 4563.25 17476.27 00:37:17.366 [2024-12-06T00:14:50.075Z] =================================================================================================================== 00:37:17.366 [2024-12-06T00:14:50.075Z] Total : 6077.76 23.74 0.00 0.00 20960.35 4563.25 17476.27 00:37:18.300 01:14:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=3137849 00:37:18.300 01:14:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:37:18.300 01:14:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:37:18.300 01:14:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:37:18.300 01:14:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:37:18.300 01:14:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:37:18.300 01:14:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:37:18.300 01:14:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:37:18.300 { 00:37:18.300 "params": { 00:37:18.300 "name": "Nvme$subsystem", 00:37:18.300 "trtype": "$TEST_TRANSPORT", 00:37:18.300 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:18.300 "adrfam": "ipv4", 00:37:18.300 "trsvcid": "$NVMF_PORT", 00:37:18.300 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:18.300 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:18.300 "hdgst": ${hdgst:-false}, 00:37:18.300 "ddgst": ${ddgst:-false} 00:37:18.300 }, 00:37:18.300 "method": "bdev_nvme_attach_controller" 00:37:18.300 } 00:37:18.300 EOF 00:37:18.300 )") 00:37:18.300 01:14:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:37:18.300 01:14:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 
00:37:18.300 01:14:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:37:18.300 01:14:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:37:18.300 "params": { 00:37:18.300 "name": "Nvme1", 00:37:18.300 "trtype": "tcp", 00:37:18.300 "traddr": "10.0.0.2", 00:37:18.300 "adrfam": "ipv4", 00:37:18.300 "trsvcid": "4420", 00:37:18.300 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:37:18.300 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:37:18.300 "hdgst": false, 00:37:18.300 "ddgst": false 00:37:18.300 }, 00:37:18.300 "method": "bdev_nvme_attach_controller" 00:37:18.300 }' 00:37:18.300 [2024-12-06 01:14:50.948694] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 24.03.0 initialization... 00:37:18.300 [2024-12-06 01:14:50.948842] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3137849 ] 00:37:18.558 [2024-12-06 01:14:51.084577] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:18.558 [2024-12-06 01:14:51.210691] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:19.126 Running I/O for 15 seconds... 00:37:21.434 6203.00 IOPS, 24.23 MiB/s [2024-12-06T00:14:54.143Z] 6213.50 IOPS, 24.27 MiB/s [2024-12-06T00:14:54.144Z] 01:14:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 3137361 00:37:21.435 01:14:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:37:21.435 [2024-12-06 01:14:53.888276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:107536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.435 [2024-12-06 01:14:53.888365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:21.435 [2024-12-06 01:14:53.888438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:107544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.435 [2024-12-06 01:14:53.888468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:21.435 [2024-12-06 01:14:53.888500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:107552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.435 [2024-12-06 01:14:53.888527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:21.435 [2024-12-06 01:14:53.888556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:107560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.435 [2024-12-06 01:14:53.888581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:21.435 [2024-12-06 01:14:53.888609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:107568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.435 [2024-12-06 01:14:53.888634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:21.435 [2024-12-06 01:14:53.888663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:107576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.435 [2024-12-06 
01:14:53.888700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:21.435 [2024-12-06 01:14:53.888728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:107584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.435 [2024-12-06 01:14:53.888753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:21.435 [2024-12-06 01:14:53.888779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:107592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.435 [2024-12-06 01:14:53.888803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:21.435 [2024-12-06 01:14:53.888841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:107600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.435 [2024-12-06 01:14:53.888881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:21.435 [2024-12-06 01:14:53.888928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:107608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.435 [2024-12-06 01:14:53.888950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:21.435 [2024-12-06 01:14:53.888976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:107616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.435 [2024-12-06 01:14:53.888998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:21.435 [2024-12-06 01:14:53.889022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:107624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.435 [2024-12-06 01:14:53.889044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:21.435 [2024-12-06 01:14:53.889068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:107632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.435 [2024-12-06 01:14:53.889090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:21.435 [2024-12-06 01:14:53.889115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:107640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.435 [2024-12-06 01:14:53.889138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:21.435 [2024-12-06 01:14:53.889161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:107648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.435 [2024-12-06 01:14:53.889183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:21.435 [2024-12-06 01:14:53.889207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:107656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.435 [2024-12-06 01:14:53.889247] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:21.435 [2024-12-06 01:14:53.889271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:107664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.435 [2024-12-06 01:14:53.889307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:21.435 [2024-12-06 01:14:53.889329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:107672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.435 [2024-12-06 01:14:53.889349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:21.435 [2024-12-06 01:14:53.889376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:107680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.435 [2024-12-06 01:14:53.889396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:21.435 [2024-12-06 01:14:53.889419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:107688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.435 [2024-12-06 01:14:53.889439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:21.435 [2024-12-06 01:14:53.889460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:107696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.435 [2024-12-06 01:14:53.889480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:21.435 [2024-12-06 01:14:53.889502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:107704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.435 [2024-12-06 01:14:53.889522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:21.435 [2024-12-06 01:14:53.889544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:107712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.435 [2024-12-06 01:14:53.889564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:21.435 [2024-12-06 01:14:53.889585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:107720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.435 [2024-12-06 01:14:53.889605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:21.435 [2024-12-06 01:14:53.889627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:107728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.435 [2024-12-06 01:14:53.889646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:21.435 [2024-12-06 01:14:53.889669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:107736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.435 [2024-12-06 01:14:53.889688] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:21.435 [2024-12-06 01:14:53.889711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:107744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.435 [2024-12-06 01:14:53.889730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:21.435 [2024-12-06 01:14:53.889752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:107752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.435 [2024-12-06 01:14:53.889772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:21.435 [2024-12-06 01:14:53.889809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:107760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.435 [2024-12-06 01:14:53.889839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:21.435 [2024-12-06 01:14:53.889880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:107768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.435 [2024-12-06 01:14:53.889903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:21.435 [2024-12-06 01:14:53.889927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:107776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.435 [2024-12-06 01:14:53.889948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:21.435 [2024-12-06 01:14:53.889978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:108552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:21.435 [2024-12-06 01:14:53.890001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:21.435 [2024-12-06 01:14:53.890025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:107784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.435 [2024-12-06 01:14:53.890047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:21.435 [2024-12-06 01:14:53.890071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:107792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.435 [2024-12-06 01:14:53.890107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:21.435 [2024-12-06 01:14:53.890131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:107800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.435 [2024-12-06 01:14:53.890152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:21.435 [2024-12-06 01:14:53.890189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:107808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.435 [2024-12-06 01:14:53.890209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:21.435 [2024-12-06 01:14:53.890230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:107816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.435 [2024-12-06 01:14:53.890250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:21.435 [2024-12-06 01:14:53.890271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:107824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.436 [2024-12-06 01:14:53.890291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:21.436 [2024-12-06 01:14:53.890312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:107832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.436 [2024-12-06 01:14:53.890332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:21.436 [2024-12-06 01:14:53.890355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:107840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.436 [2024-12-06 01:14:53.890375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:21.436 [2024-12-06 01:14:53.890396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:107848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.436 [2024-12-06 01:14:53.890415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:21.436 [2024-12-06 01:14:53.890438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:107856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.436 [2024-12-06 01:14:53.890458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:21.436 [2024-12-06 01:14:53.890479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:107864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.436 [2024-12-06 01:14:53.890499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:21.436 [2024-12-06 01:14:53.890520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:107872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.436 [2024-12-06 01:14:53.890543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:21.436 [2024-12-06 01:14:53.890566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:107880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.436 [2024-12-06 01:14:53.890585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:21.436 [2024-12-06 01:14:53.890607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:107888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.436 [2024-12-06 01:14:53.890626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:37:21.436 [2024-12-06 01:14:53.890647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:107896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.436 [2024-12-06 01:14:53.890667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:21.436 [2024-12-06 01:14:53.890689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:107904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.436 [2024-12-06 01:14:53.890708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:21.436 [2024-12-06 01:14:53.890730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:107912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.436 [2024-12-06 01:14:53.890748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:21.436 [2024-12-06 01:14:53.890770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:107920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.436 [2024-12-06 01:14:53.890790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:21.436 [2024-12-06 01:14:53.890837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:107928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.436 [2024-12-06 01:14:53.890876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:21.436 [2024-12-06 01:14:53.890902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:107936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.436 [2024-12-06 01:14:53.890924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:21.436 [2024-12-06 01:14:53.890948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:107944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.436 [2024-12-06 01:14:53.890970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:21.436 [2024-12-06 01:14:53.890993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:107952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.436 [2024-12-06 01:14:53.891015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:21.436 [2024-12-06 01:14:53.891039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:107960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.436 [2024-12-06 01:14:53.891061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:21.436 [2024-12-06 01:14:53.891086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:107968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.436 [2024-12-06 01:14:53.891123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:37:21.436 [2024-12-06 01:14:53.891150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:107976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.436 [2024-12-06 01:14:53.891185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:21.436 [2024-12-06 01:14:53.891207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:107984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.436 [2024-12-06 01:14:53.891226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:21.436 [2024-12-06 01:14:53.891248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:107992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.436 [2024-12-06 01:14:53.891267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:21.436 [2024-12-06 01:14:53.891288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:108000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.436 [2024-12-06 01:14:53.891306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:21.436 [2024-12-06 01:14:53.891328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:108008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.436 [2024-12-06 01:14:53.891347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:21.436 [2024-12-06 01:14:53.891368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:108016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.436 [2024-12-06 01:14:53.891387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:21.436 [2024-12-06 01:14:53.891408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:108024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.436 [2024-12-06 01:14:53.891427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:21.436 [2024-12-06 01:14:53.891449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:108032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.436 [2024-12-06 01:14:53.891467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:21.436 [2024-12-06 01:14:53.891488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:108040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.436 [2024-12-06 01:14:53.891507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:21.436 [2024-12-06 01:14:53.891528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:108048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.436 [2024-12-06 01:14:53.891547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:21.436 [2024-12-06 
01:14:53.891568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:108056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.436 [2024-12-06 01:14:53.891587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:21.436 [2024-12-06 01:14:53.891608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:108064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.436 [2024-12-06 01:14:53.891627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:21.436 [2024-12-06 01:14:53.891648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:108072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.436 [2024-12-06 01:14:53.891671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:21.436 [2024-12-06 01:14:53.891693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:108080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.436 [2024-12-06 01:14:53.891712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:21.436 [2024-12-06 01:14:53.891734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:108088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.436 [2024-12-06 01:14:53.891754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:21.436 [2024-12-06 01:14:53.891775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:108096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.436 [2024-12-06 01:14:53.891808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:21.436 [2024-12-06 01:14:53.891841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:108104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.436 [2024-12-06 01:14:53.891879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:21.436 [2024-12-06 01:14:53.891919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:108112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.436 [2024-12-06 01:14:53.891942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:21.436 [2024-12-06 01:14:53.891967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:108120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.436 [2024-12-06 01:14:53.891988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:21.436 [2024-12-06 01:14:53.892013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:108128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.436 [2024-12-06 01:14:53.892035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:21.436 [2024-12-06 01:14:53.892058] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:108136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.436 [2024-12-06 01:14:53.892080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:21.437 [2024-12-06 01:14:53.892117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:108144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.437 [2024-12-06 01:14:53.892138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:21.437 [2024-12-06 01:14:53.892160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:108152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.437 [2024-12-06 01:14:53.892194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:21.437 [2024-12-06 01:14:53.892216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:108160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.437 [2024-12-06 01:14:53.892236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:21.437 [2024-12-06 01:14:53.892257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:108168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.437 [2024-12-06 01:14:53.892276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:21.437 [2024-12-06 01:14:53.892302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:108176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.437 [2024-12-06 01:14:53.892322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:21.437 [2024-12-06 01:14:53.892343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:108184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.437 [2024-12-06 01:14:53.892362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:21.437 [2024-12-06 01:14:53.892383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:108192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.437 [2024-12-06 01:14:53.892402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:21.437 [2024-12-06 01:14:53.892424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:108200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.437 [2024-12-06 01:14:53.892442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:21.437 [2024-12-06 01:14:53.892464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:108208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.437 [2024-12-06 01:14:53.892482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:21.437 [2024-12-06 01:14:53.892504] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:13 nsid:1 lba:108216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.437 [2024-12-06 01:14:53.892531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:21.437 [2024-12-06 01:14:53.892554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:108224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.437 [2024-12-06 01:14:53.892572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:21.437 [2024-12-06 01:14:53.892594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:108232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.437 [2024-12-06 01:14:53.892613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:21.437 [2024-12-06 01:14:53.892634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:108240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.437 [2024-12-06 01:14:53.892653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:21.437 [2024-12-06 01:14:53.892674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:108248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.437 [2024-12-06 01:14:53.892693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:21.437 [2024-12-06 01:14:53.892715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:108256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.437 [2024-12-06 01:14:53.892735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:21.437 [2024-12-06 01:14:53.892756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:108264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.437 [2024-12-06 01:14:53.892776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:21.437 [2024-12-06 01:14:53.892811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:108272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.437 [2024-12-06 01:14:53.892865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:21.437 [2024-12-06 01:14:53.892892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:108280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.437 [2024-12-06 01:14:53.892913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:21.437 [2024-12-06 01:14:53.892937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:108288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.437 [2024-12-06 01:14:53.892959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:21.437 [2024-12-06 01:14:53.892987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 
lba:108296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.437 [2024-12-06 01:14:53.893008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:21.437 [2024-12-06 01:14:53.893032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:108304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.437 [2024-12-06 01:14:53.893053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:21.437 [2024-12-06 01:14:53.893077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:108312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.437 [2024-12-06 01:14:53.893098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:21.437 [2024-12-06 01:14:53.893138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:108320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.437 [2024-12-06 01:14:53.893159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:21.437 [2024-12-06 01:14:53.893197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:108328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.437 [2024-12-06 01:14:53.893217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:21.437 [2024-12-06 01:14:53.893238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:108336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.437 [2024-12-06 01:14:53.893257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:21.437 [2024-12-06 01:14:53.893279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:108344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.437 [2024-12-06 01:14:53.893299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:21.437 [2024-12-06 01:14:53.893320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:108352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.437 [2024-12-06 01:14:53.893339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:21.437 [2024-12-06 01:14:53.893360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:108360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.437 [2024-12-06 01:14:53.893379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:21.437 [2024-12-06 01:14:53.893401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:108368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.437 [2024-12-06 01:14:53.893420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:21.437 [2024-12-06 01:14:53.893446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:108376 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:37:21.437 [2024-12-06 01:14:53.893465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:21.437 [2024-12-06 01:14:53.893487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:108384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.437 [2024-12-06 01:14:53.893506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:21.437 [2024-12-06 01:14:53.893527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:108392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.437 [2024-12-06 01:14:53.893546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:21.437 [2024-12-06 01:14:53.893567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:108400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.437 [2024-12-06 01:14:53.893586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:21.437 [2024-12-06 01:14:53.893607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:108408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.437 [2024-12-06 01:14:53.893626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:21.437 [2024-12-06 01:14:53.893647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:108416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.437 [2024-12-06 01:14:53.893666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:21.437 [2024-12-06 01:14:53.893686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:108424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.437 [2024-12-06 01:14:53.893706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:21.437 [2024-12-06 01:14:53.893727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:108432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.437 [2024-12-06 01:14:53.893746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:21.437 [2024-12-06 01:14:53.893767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:108440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.437 [2024-12-06 01:14:53.893785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:21.437 [2024-12-06 01:14:53.893832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:108448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.437 [2024-12-06 01:14:53.893856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:21.437 [2024-12-06 01:14:53.893896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:108456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.438 
[2024-12-06 01:14:53.893918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:21.438 [2024-12-06 01:14:53.893941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:108464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.438 [2024-12-06 01:14:53.893963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:21.438 [2024-12-06 01:14:53.893987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:108472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.438 [2024-12-06 01:14:53.894014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:21.438 [2024-12-06 01:14:53.894039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:108480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.438 [2024-12-06 01:14:53.894061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:21.438 [2024-12-06 01:14:53.894085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:108488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.438 [2024-12-06 01:14:53.894122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:21.438 [2024-12-06 01:14:53.894146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:108496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.438 [2024-12-06 01:14:53.894165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:21.438 [2024-12-06 01:14:53.894203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:108504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.438 [2024-12-06 01:14:53.894222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:21.438 [2024-12-06 01:14:53.894243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:108512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.438 [2024-12-06 01:14:53.894262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:21.438 [2024-12-06 01:14:53.894283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:108520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.438 [2024-12-06 01:14:53.894302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:21.438 [2024-12-06 01:14:53.894323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:108528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.438 [2024-12-06 01:14:53.894342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:21.438 [2024-12-06 01:14:53.894364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:108536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.438 [2024-12-06 01:14:53.894383] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:21.438 [2024-12-06 01:14:53.894402] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2f00 is same with the state(6) to be set 00:37:21.438 [2024-12-06 01:14:53.894426] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:37:21.438 [2024-12-06 01:14:53.894444] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:37:21.438 [2024-12-06 01:14:53.894461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:108544 len:8 PRP1 0x0 PRP2 0x0 00:37:21.438 [2024-12-06 01:14:53.894480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:21.438 [2024-12-06 01:14:53.894849] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:37:21.438 [2024-12-06 01:14:53.894897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:21.438 [2024-12-06 01:14:53.894930] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:37:21.438 [2024-12-06 01:14:53.894952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:21.438 [2024-12-06 01:14:53.894979] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:37:21.438 [2024-12-06 01:14:53.895000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:21.438 [2024-12-06 01:14:53.895021] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:37:21.438 [2024-12-06 01:14:53.895041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:21.438 [2024-12-06 01:14:53.895061] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:21.438 [2024-12-06 01:14:53.899242] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:21.438 [2024-12-06 01:14:53.899333] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:21.438 [2024-12-06 01:14:53.900161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.438 [2024-12-06 01:14:53.900222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:21.438 [2024-12-06 01:14:53.900251] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:21.438 [2024-12-06 01:14:53.900546] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:21.438 [2024-12-06 01:14:53.900890] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:21.438 
[2024-12-06 01:14:53.900921] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:21.438 [2024-12-06 01:14:53.900960] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:21.438 [2024-12-06 01:14:53.900987] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:21.438 [2024-12-06 01:14:53.914140] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:21.438 [2024-12-06 01:14:53.914633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.438 [2024-12-06 01:14:53.914676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:21.438 [2024-12-06 01:14:53.914703] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:21.438 [2024-12-06 01:14:53.915014] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:21.438 [2024-12-06 01:14:53.915313] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:21.438 [2024-12-06 01:14:53.915345] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:21.438 [2024-12-06 01:14:53.915368] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:21.438 [2024-12-06 01:14:53.915390] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:21.438 [2024-12-06 01:14:53.928794] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:21.438 [2024-12-06 01:14:53.929244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.438 [2024-12-06 01:14:53.929287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:21.438 [2024-12-06 01:14:53.929314] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:21.438 [2024-12-06 01:14:53.929610] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:21.438 [2024-12-06 01:14:53.929933] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:21.438 [2024-12-06 01:14:53.929966] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:21.438 [2024-12-06 01:14:53.929989] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:21.438 [2024-12-06 01:14:53.930010] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
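The long run of paired nvme_io_qpair_print_command / spdk_nvme_print_completion notices above is the expected fallout of the I/O submission queue being deleted while the qpair still had reads and writes outstanding: every queued command is completed with status (00/08), i.e. status code type 0x0 (generic command status) / status code 0x08 (Command Aborted due to SQ Deletion), with dnr:0 so the initiator is permitted to retry them. A minimal decoder sketch for the "(SCT/SC)" pairs printed in this log (not part of the test; field layout per the NVMe completion queue entry status field):

/* Minimal sketch (not from this test run): decode the "(00/08)" style
 * status pairs printed by spdk_nvme_print_completion().  Layout follows
 * the NVMe completion-entry status field: SC in bits 7:0, SCT in bits
 * 10:8, DNR in bit 14 (phase bit excluded). */
#include <stdint.h>
#include <stdio.h>

static void decode_status(uint16_t sf)
{
    unsigned sc  = sf & 0xff;         /* status code */
    unsigned sct = (sf >> 8) & 0x7;   /* status code type */
    unsigned dnr = (sf >> 14) & 0x1;  /* do-not-retry bit */

    printf("(%02x/%02x) dnr:%u%s\n", sct, sc, dnr,
           (sct == 0x0 && sc == 0x08) ? " -> ABORTED - SQ DELETION" : "");
}

int main(void)
{
    decode_status(0x0008); /* the (00/08) completions seen above */
    return 0;
}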
00:37:21.438 [2024-12-06 01:14:53.943354] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:21.438 [2024-12-06 01:14:53.943833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.438 [2024-12-06 01:14:53.943875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:21.438 [2024-12-06 01:14:53.943900] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:21.438 [2024-12-06 01:14:53.944190] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:21.438 [2024-12-06 01:14:53.944483] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:21.438 [2024-12-06 01:14:53.944514] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:21.438 [2024-12-06 01:14:53.944536] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:21.438 [2024-12-06 01:14:53.944558] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:21.438 [2024-12-06 01:14:53.958019] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:21.438 [2024-12-06 01:14:53.958489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.438 [2024-12-06 01:14:53.958532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:21.438 [2024-12-06 01:14:53.958559] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:21.438 [2024-12-06 01:14:53.958861] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:21.438 [2024-12-06 01:14:53.959153] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:21.438 [2024-12-06 01:14:53.959183] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:21.438 [2024-12-06 01:14:53.959205] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:21.438 [2024-12-06 01:14:53.959227] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:21.438 [2024-12-06 01:14:53.972712] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:21.438 [2024-12-06 01:14:53.973193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.438 [2024-12-06 01:14:53.973234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:21.438 [2024-12-06 01:14:53.973260] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:21.439 [2024-12-06 01:14:53.973546] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:21.439 [2024-12-06 01:14:53.973850] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:21.439 [2024-12-06 01:14:53.973889] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:21.439 [2024-12-06 01:14:53.973912] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:21.439 [2024-12-06 01:14:53.973934] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:21.439 [2024-12-06 01:14:53.987406] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:21.439 [2024-12-06 01:14:53.987853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.439 [2024-12-06 01:14:53.987895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:21.439 [2024-12-06 01:14:53.987922] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:21.439 [2024-12-06 01:14:53.988209] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:21.439 [2024-12-06 01:14:53.988501] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:21.439 [2024-12-06 01:14:53.988531] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:21.439 [2024-12-06 01:14:53.988553] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:21.439 [2024-12-06 01:14:53.988575] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:21.439 [2024-12-06 01:14:54.002057] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:21.439 [2024-12-06 01:14:54.002535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.439 [2024-12-06 01:14:54.002575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:21.439 [2024-12-06 01:14:54.002602] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:21.439 [2024-12-06 01:14:54.002904] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:21.439 [2024-12-06 01:14:54.003196] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:21.439 [2024-12-06 01:14:54.003228] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:21.439 [2024-12-06 01:14:54.003252] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:21.439 [2024-12-06 01:14:54.003274] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:21.439 [2024-12-06 01:14:54.016562] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:21.439 [2024-12-06 01:14:54.017052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.439 [2024-12-06 01:14:54.017094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:21.439 [2024-12-06 01:14:54.017120] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:21.439 [2024-12-06 01:14:54.017407] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:21.439 [2024-12-06 01:14:54.017700] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:21.439 [2024-12-06 01:14:54.017730] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:21.439 [2024-12-06 01:14:54.017752] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:21.439 [2024-12-06 01:14:54.017781] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:21.439 [2024-12-06 01:14:54.031065] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:21.439 [2024-12-06 01:14:54.031539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.439 [2024-12-06 01:14:54.031581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:21.439 [2024-12-06 01:14:54.031607] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:21.439 [2024-12-06 01:14:54.031909] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:21.439 [2024-12-06 01:14:54.032199] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:21.439 [2024-12-06 01:14:54.032230] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:21.439 [2024-12-06 01:14:54.032253] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:21.439 [2024-12-06 01:14:54.032274] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:21.439 [2024-12-06 01:14:54.045756] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:21.439 [2024-12-06 01:14:54.046221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.439 [2024-12-06 01:14:54.046262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:21.439 [2024-12-06 01:14:54.046288] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:21.439 [2024-12-06 01:14:54.046574] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:21.439 [2024-12-06 01:14:54.046879] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:21.439 [2024-12-06 01:14:54.046911] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:21.439 [2024-12-06 01:14:54.046933] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:21.439 [2024-12-06 01:14:54.046954] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:21.439 [2024-12-06 01:14:54.060252] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:21.439 [2024-12-06 01:14:54.060730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.439 [2024-12-06 01:14:54.060771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:21.439 [2024-12-06 01:14:54.060797] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:21.439 [2024-12-06 01:14:54.061103] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:21.439 [2024-12-06 01:14:54.061395] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:21.439 [2024-12-06 01:14:54.061426] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:21.439 [2024-12-06 01:14:54.061448] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:21.439 [2024-12-06 01:14:54.061470] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:21.439 [2024-12-06 01:14:54.074774] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:21.439 [2024-12-06 01:14:54.075252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.439 [2024-12-06 01:14:54.075293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:21.439 [2024-12-06 01:14:54.075319] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:21.439 [2024-12-06 01:14:54.075607] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:21.439 [2024-12-06 01:14:54.075910] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:21.439 [2024-12-06 01:14:54.075942] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:21.439 [2024-12-06 01:14:54.075965] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:21.439 [2024-12-06 01:14:54.075987] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:21.439 [2024-12-06 01:14:54.089304] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:21.439 [2024-12-06 01:14:54.089763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.439 [2024-12-06 01:14:54.089804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:21.439 [2024-12-06 01:14:54.089840] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:21.439 [2024-12-06 01:14:54.090142] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:21.439 [2024-12-06 01:14:54.090436] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:21.439 [2024-12-06 01:14:54.090467] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:21.439 [2024-12-06 01:14:54.090489] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:21.440 [2024-12-06 01:14:54.090511] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:21.440 [2024-12-06 01:14:54.103827] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:21.440 [2024-12-06 01:14:54.104327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.440 [2024-12-06 01:14:54.104369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:21.440 [2024-12-06 01:14:54.104396] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:21.440 [2024-12-06 01:14:54.104682] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:21.440 [2024-12-06 01:14:54.104995] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:21.440 [2024-12-06 01:14:54.105027] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:21.440 [2024-12-06 01:14:54.105068] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:21.440 [2024-12-06 01:14:54.105090] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:21.440 [2024-12-06 01:14:54.118480] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:21.440 [2024-12-06 01:14:54.118961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.440 [2024-12-06 01:14:54.119002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:21.440 [2024-12-06 01:14:54.119034] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:21.440 [2024-12-06 01:14:54.119324] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:21.440 [2024-12-06 01:14:54.119616] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:21.440 [2024-12-06 01:14:54.119646] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:21.440 [2024-12-06 01:14:54.119668] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:21.440 [2024-12-06 01:14:54.119690] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:21.440 [2024-12-06 01:14:54.133027] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:21.440 [2024-12-06 01:14:54.133497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.440 [2024-12-06 01:14:54.133538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:21.440 [2024-12-06 01:14:54.133563] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:21.440 [2024-12-06 01:14:54.133862] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:21.440 [2024-12-06 01:14:54.134154] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:21.440 [2024-12-06 01:14:54.134184] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:21.440 [2024-12-06 01:14:54.134206] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:21.440 [2024-12-06 01:14:54.134228] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:21.698 [2024-12-06 01:14:54.147549] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:21.699 [2024-12-06 01:14:54.148011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.699 [2024-12-06 01:14:54.148053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:21.699 [2024-12-06 01:14:54.148087] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:21.699 [2024-12-06 01:14:54.148374] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:21.699 [2024-12-06 01:14:54.148665] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:21.699 [2024-12-06 01:14:54.148697] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:21.699 [2024-12-06 01:14:54.148719] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:21.699 [2024-12-06 01:14:54.148741] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:21.699 [2024-12-06 01:14:54.162077] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:21.699 [2024-12-06 01:14:54.162542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.699 [2024-12-06 01:14:54.162583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:21.699 [2024-12-06 01:14:54.162608] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:21.699 [2024-12-06 01:14:54.162907] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:21.699 [2024-12-06 01:14:54.163205] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:21.699 [2024-12-06 01:14:54.163236] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:21.699 [2024-12-06 01:14:54.163259] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:21.699 [2024-12-06 01:14:54.163281] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:21.699 [2024-12-06 01:14:54.175944] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:21.699 [2024-12-06 01:14:54.176414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.699 [2024-12-06 01:14:54.176450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:21.699 [2024-12-06 01:14:54.176474] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:21.699 [2024-12-06 01:14:54.176757] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:21.699 [2024-12-06 01:14:54.177037] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:21.699 [2024-12-06 01:14:54.177066] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:21.699 [2024-12-06 01:14:54.177085] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:21.699 [2024-12-06 01:14:54.177118] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:21.699 [2024-12-06 01:14:54.189691] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:21.699 [2024-12-06 01:14:54.190204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.699 [2024-12-06 01:14:54.190241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:21.699 [2024-12-06 01:14:54.190265] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:21.699 [2024-12-06 01:14:54.190550] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:21.699 [2024-12-06 01:14:54.190805] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:21.699 [2024-12-06 01:14:54.190841] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:21.699 [2024-12-06 01:14:54.190877] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:21.699 [2024-12-06 01:14:54.190896] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:21.699 [2024-12-06 01:14:54.203643] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:21.699 [2024-12-06 01:14:54.204068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.699 [2024-12-06 01:14:54.204120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:21.699 [2024-12-06 01:14:54.204143] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:21.699 [2024-12-06 01:14:54.204421] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:21.699 [2024-12-06 01:14:54.204658] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:21.699 [2024-12-06 01:14:54.204685] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:21.699 [2024-12-06 01:14:54.204710] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:21.699 [2024-12-06 01:14:54.204728] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:21.699 [2024-12-06 01:14:54.217476] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:21.699 [2024-12-06 01:14:54.218048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.699 [2024-12-06 01:14:54.218087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:21.699 [2024-12-06 01:14:54.218110] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:21.699 [2024-12-06 01:14:54.218414] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:21.699 [2024-12-06 01:14:54.218653] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:21.699 [2024-12-06 01:14:54.218679] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:21.699 [2024-12-06 01:14:54.218698] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:21.699 [2024-12-06 01:14:54.218715] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:21.699 [2024-12-06 01:14:54.231343] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:21.699 [2024-12-06 01:14:54.231739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.699 [2024-12-06 01:14:54.231776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:21.699 [2024-12-06 01:14:54.231799] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:21.699 [2024-12-06 01:14:54.232099] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:21.699 [2024-12-06 01:14:54.232355] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:21.699 [2024-12-06 01:14:54.232381] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:21.699 [2024-12-06 01:14:54.232399] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:21.699 [2024-12-06 01:14:54.232417] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:21.699 [2024-12-06 01:14:54.245432] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:21.699 [2024-12-06 01:14:54.245858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.699 [2024-12-06 01:14:54.245896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:21.699 [2024-12-06 01:14:54.245920] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:21.699 [2024-12-06 01:14:54.246203] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:21.699 [2024-12-06 01:14:54.246441] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:21.699 [2024-12-06 01:14:54.246466] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:21.699 [2024-12-06 01:14:54.246484] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:21.699 [2024-12-06 01:14:54.246501] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:21.699 [2024-12-06 01:14:54.259286] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:21.699 [2024-12-06 01:14:54.259856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.699 [2024-12-06 01:14:54.259894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:21.699 [2024-12-06 01:14:54.259917] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:21.699 [2024-12-06 01:14:54.260203] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:21.699 [2024-12-06 01:14:54.260441] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:21.699 [2024-12-06 01:14:54.260467] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:21.699 [2024-12-06 01:14:54.260485] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:21.699 [2024-12-06 01:14:54.260502] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:21.699 [2024-12-06 01:14:54.273108] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:21.699 [2024-12-06 01:14:54.273555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.699 [2024-12-06 01:14:54.273607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:21.699 [2024-12-06 01:14:54.273645] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:21.699 [2024-12-06 01:14:54.273959] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:21.700 [2024-12-06 01:14:54.274240] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:21.700 [2024-12-06 01:14:54.274266] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:21.700 [2024-12-06 01:14:54.274284] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:21.700 [2024-12-06 01:14:54.274301] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:21.700 [2024-12-06 01:14:54.286961] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:21.700 [2024-12-06 01:14:54.287415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.700 [2024-12-06 01:14:54.287466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:21.700 [2024-12-06 01:14:54.287490] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:21.700 [2024-12-06 01:14:54.287772] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:21.700 [2024-12-06 01:14:54.288042] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:21.700 [2024-12-06 01:14:54.288069] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:21.700 [2024-12-06 01:14:54.288088] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:21.700 [2024-12-06 01:14:54.288120] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:21.700 [2024-12-06 01:14:54.300856] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:21.700 [2024-12-06 01:14:54.301357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.700 [2024-12-06 01:14:54.301400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:21.700 [2024-12-06 01:14:54.301423] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:21.700 [2024-12-06 01:14:54.301706] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:21.700 [2024-12-06 01:14:54.301980] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:21.700 [2024-12-06 01:14:54.302008] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:21.700 [2024-12-06 01:14:54.302026] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:21.700 [2024-12-06 01:14:54.302044] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:21.700 [2024-12-06 01:14:54.314739] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:21.700 [2024-12-06 01:14:54.315207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.700 [2024-12-06 01:14:54.315257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:21.700 [2024-12-06 01:14:54.315280] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:21.700 [2024-12-06 01:14:54.315578] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:21.700 [2024-12-06 01:14:54.315864] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:21.700 [2024-12-06 01:14:54.315891] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:21.700 [2024-12-06 01:14:54.315910] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:21.700 [2024-12-06 01:14:54.315928] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:21.700 [2024-12-06 01:14:54.328606] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:21.700 [2024-12-06 01:14:54.329020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.700 [2024-12-06 01:14:54.329058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:21.700 [2024-12-06 01:14:54.329081] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:21.700 [2024-12-06 01:14:54.329379] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:21.700 [2024-12-06 01:14:54.329618] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:21.700 [2024-12-06 01:14:54.329644] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:21.700 [2024-12-06 01:14:54.329662] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:21.700 [2024-12-06 01:14:54.329679] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:21.700 [2024-12-06 01:14:54.342469] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:21.700 [2024-12-06 01:14:54.342892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.700 [2024-12-06 01:14:54.342929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:21.700 [2024-12-06 01:14:54.342953] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:21.700 [2024-12-06 01:14:54.343244] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:21.700 [2024-12-06 01:14:54.343483] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:21.700 [2024-12-06 01:14:54.343509] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:21.700 [2024-12-06 01:14:54.343527] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:21.700 [2024-12-06 01:14:54.343545] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:21.700 [2024-12-06 01:14:54.356305] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:21.700 [2024-12-06 01:14:54.356755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.700 [2024-12-06 01:14:54.356806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:21.700 [2024-12-06 01:14:54.356841] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:21.700 [2024-12-06 01:14:54.357130] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:21.700 [2024-12-06 01:14:54.357369] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:21.700 [2024-12-06 01:14:54.357394] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:21.700 [2024-12-06 01:14:54.357412] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:21.700 [2024-12-06 01:14:54.357430] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:21.700 [2024-12-06 01:14:54.370280] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:21.700 [2024-12-06 01:14:54.370668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.700 [2024-12-06 01:14:54.370718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:21.700 [2024-12-06 01:14:54.370741] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:21.700 [2024-12-06 01:14:54.371059] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:21.700 [2024-12-06 01:14:54.371316] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:21.700 [2024-12-06 01:14:54.371342] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:21.700 [2024-12-06 01:14:54.371359] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:21.700 [2024-12-06 01:14:54.371377] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:21.700 [2024-12-06 01:14:54.384225] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:21.700 [2024-12-06 01:14:54.384714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.700 [2024-12-06 01:14:54.384751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:21.700 [2024-12-06 01:14:54.384774] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:21.700 [2024-12-06 01:14:54.385055] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:21.700 [2024-12-06 01:14:54.385335] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:21.700 [2024-12-06 01:14:54.385369] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:21.700 [2024-12-06 01:14:54.385388] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:21.700 [2024-12-06 01:14:54.385406] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:21.700 [2024-12-06 01:14:54.398159] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:21.700 [2024-12-06 01:14:54.398589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.700 [2024-12-06 01:14:54.398625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:21.700 [2024-12-06 01:14:54.398648] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:21.700 [2024-12-06 01:14:54.398917] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:21.700 [2024-12-06 01:14:54.399200] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:21.700 [2024-12-06 01:14:54.399241] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:21.700 [2024-12-06 01:14:54.399260] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:21.700 [2024-12-06 01:14:54.399278] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:21.960 [2024-12-06 01:14:54.412409] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:21.960 [2024-12-06 01:14:54.412876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.960 [2024-12-06 01:14:54.412928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:21.960 [2024-12-06 01:14:54.412953] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:21.960 [2024-12-06 01:14:54.413242] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:21.960 [2024-12-06 01:14:54.413485] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:21.960 [2024-12-06 01:14:54.413511] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:21.960 [2024-12-06 01:14:54.413530] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:21.960 [2024-12-06 01:14:54.413547] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:21.960 [2024-12-06 01:14:54.426353] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:21.960 [2024-12-06 01:14:54.426869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.960 [2024-12-06 01:14:54.426908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:21.960 [2024-12-06 01:14:54.426932] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:21.960 [2024-12-06 01:14:54.427221] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:21.960 [2024-12-06 01:14:54.427459] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:21.960 [2024-12-06 01:14:54.427485] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:21.960 [2024-12-06 01:14:54.427509] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:21.960 [2024-12-06 01:14:54.427528] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:21.960 [2024-12-06 01:14:54.440279] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:21.960 [2024-12-06 01:14:54.440714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.960 [2024-12-06 01:14:54.440751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:21.960 [2024-12-06 01:14:54.440774] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:21.960 [2024-12-06 01:14:54.441070] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:21.960 [2024-12-06 01:14:54.441345] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:21.960 [2024-12-06 01:14:54.441370] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:21.960 [2024-12-06 01:14:54.441389] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:21.960 [2024-12-06 01:14:54.441407] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:21.960 [2024-12-06 01:14:54.454125] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:21.960 [2024-12-06 01:14:54.454593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.960 [2024-12-06 01:14:54.454628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:21.960 [2024-12-06 01:14:54.454666] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:21.960 [2024-12-06 01:14:54.454966] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:21.960 [2024-12-06 01:14:54.455247] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:21.960 [2024-12-06 01:14:54.455272] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:21.960 [2024-12-06 01:14:54.455290] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:21.960 [2024-12-06 01:14:54.455308] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:21.960 [2024-12-06 01:14:54.468044] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:21.960 [2024-12-06 01:14:54.468454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.960 [2024-12-06 01:14:54.468491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:21.960 [2024-12-06 01:14:54.468514] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:21.960 [2024-12-06 01:14:54.468792] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:21.960 [2024-12-06 01:14:54.469071] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:21.960 [2024-12-06 01:14:54.469118] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:21.960 [2024-12-06 01:14:54.469137] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:21.960 [2024-12-06 01:14:54.469156] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:21.960 [2024-12-06 01:14:54.481857] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:21.960 [2024-12-06 01:14:54.482345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.961 [2024-12-06 01:14:54.482395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:21.961 [2024-12-06 01:14:54.482419] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:21.961 [2024-12-06 01:14:54.482699] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:21.961 [2024-12-06 01:14:54.482987] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:21.961 [2024-12-06 01:14:54.483015] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:21.961 [2024-12-06 01:14:54.483035] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:21.961 [2024-12-06 01:14:54.483055] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:21.961 [2024-12-06 01:14:54.495723] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:21.961 [2024-12-06 01:14:54.496268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.961 [2024-12-06 01:14:54.496306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:21.961 [2024-12-06 01:14:54.496330] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:21.961 [2024-12-06 01:14:54.496620] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:21.961 [2024-12-06 01:14:54.496908] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:21.961 [2024-12-06 01:14:54.496937] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:21.961 [2024-12-06 01:14:54.496958] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:21.961 [2024-12-06 01:14:54.496977] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:21.961 [2024-12-06 01:14:54.509682] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:21.961 [2024-12-06 01:14:54.510136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.961 [2024-12-06 01:14:54.510199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:21.961 [2024-12-06 01:14:54.510238] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:21.961 [2024-12-06 01:14:54.510526] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:21.961 [2024-12-06 01:14:54.510765] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:21.961 [2024-12-06 01:14:54.510791] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:21.961 [2024-12-06 01:14:54.510835] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:21.961 [2024-12-06 01:14:54.510865] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:21.961 [2024-12-06 01:14:54.523532] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:21.961 [2024-12-06 01:14:54.523949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.961 [2024-12-06 01:14:54.524002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:21.961 [2024-12-06 01:14:54.524032] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:21.961 [2024-12-06 01:14:54.524323] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:21.961 [2024-12-06 01:14:54.524562] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:21.961 [2024-12-06 01:14:54.524589] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:21.961 [2024-12-06 01:14:54.524607] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:21.961 [2024-12-06 01:14:54.524624] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:21.961 [2024-12-06 01:14:54.537519] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:21.961 [2024-12-06 01:14:54.538010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.961 [2024-12-06 01:14:54.538046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:21.961 [2024-12-06 01:14:54.538070] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:21.961 [2024-12-06 01:14:54.538380] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:21.961 [2024-12-06 01:14:54.538628] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:21.961 [2024-12-06 01:14:54.538654] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:21.961 [2024-12-06 01:14:54.538688] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:21.961 [2024-12-06 01:14:54.538707] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:21.961 [2024-12-06 01:14:54.551319] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:21.961 [2024-12-06 01:14:54.551892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.961 [2024-12-06 01:14:54.551929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:21.961 [2024-12-06 01:14:54.551952] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:21.961 [2024-12-06 01:14:54.552242] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:21.961 [2024-12-06 01:14:54.552481] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:21.961 [2024-12-06 01:14:54.552506] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:21.961 [2024-12-06 01:14:54.552524] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:21.961 [2024-12-06 01:14:54.552542] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:21.961 [2024-12-06 01:14:54.565210] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:21.961 [2024-12-06 01:14:54.565697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.961 [2024-12-06 01:14:54.565733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:21.961 [2024-12-06 01:14:54.565757] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:21.961 [2024-12-06 01:14:54.566045] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:21.961 [2024-12-06 01:14:54.566320] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:21.961 [2024-12-06 01:14:54.566346] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:21.961 [2024-12-06 01:14:54.566364] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:21.961 [2024-12-06 01:14:54.566381] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:21.961 [2024-12-06 01:14:54.579003] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:21.961 [2024-12-06 01:14:54.579585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.961 [2024-12-06 01:14:54.579623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:21.961 [2024-12-06 01:14:54.579646] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:21.961 [2024-12-06 01:14:54.579929] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:21.961 [2024-12-06 01:14:54.580211] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:21.961 [2024-12-06 01:14:54.580237] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:21.961 [2024-12-06 01:14:54.580255] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:21.961 [2024-12-06 01:14:54.580272] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:21.961 [2024-12-06 01:14:54.592727] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:21.961 [2024-12-06 01:14:54.593280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.961 [2024-12-06 01:14:54.593318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:21.961 [2024-12-06 01:14:54.593341] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:21.961 [2024-12-06 01:14:54.593643] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:21.961 [2024-12-06 01:14:54.593936] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:21.961 [2024-12-06 01:14:54.593964] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:21.961 [2024-12-06 01:14:54.593999] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:21.961 [2024-12-06 01:14:54.594019] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:21.961 [2024-12-06 01:14:54.606546] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:21.961 [2024-12-06 01:14:54.607028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.961 [2024-12-06 01:14:54.607065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:21.961 [2024-12-06 01:14:54.607089] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:21.961 [2024-12-06 01:14:54.607373] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:21.961 [2024-12-06 01:14:54.607611] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:21.961 [2024-12-06 01:14:54.607642] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:21.961 [2024-12-06 01:14:54.607661] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:21.961 [2024-12-06 01:14:54.607679] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:21.962 [2024-12-06 01:14:54.620324] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:21.962 [2024-12-06 01:14:54.620791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.962 [2024-12-06 01:14:54.620848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:21.962 [2024-12-06 01:14:54.620873] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:21.962 [2024-12-06 01:14:54.621162] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:21.962 [2024-12-06 01:14:54.621417] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:21.962 [2024-12-06 01:14:54.621443] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:21.962 [2024-12-06 01:14:54.621461] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:21.962 [2024-12-06 01:14:54.621479] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:21.962 [2024-12-06 01:14:54.634294] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:21.962 [2024-12-06 01:14:54.634700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.962 [2024-12-06 01:14:54.634736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:21.962 [2024-12-06 01:14:54.634759] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:21.962 [2024-12-06 01:14:54.635071] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:21.962 [2024-12-06 01:14:54.635328] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:21.962 [2024-12-06 01:14:54.635354] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:21.962 [2024-12-06 01:14:54.635372] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:21.962 [2024-12-06 01:14:54.635389] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:21.962 [2024-12-06 01:14:54.648089] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:21.962 [2024-12-06 01:14:54.648518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.962 [2024-12-06 01:14:54.648555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:21.962 [2024-12-06 01:14:54.648579] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:21.962 [2024-12-06 01:14:54.648879] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:21.962 [2024-12-06 01:14:54.649154] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:21.962 [2024-12-06 01:14:54.649181] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:21.962 [2024-12-06 01:14:54.649200] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:21.962 [2024-12-06 01:14:54.649226] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:21.962 [2024-12-06 01:14:54.662016] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:21.962 [2024-12-06 01:14:54.662575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.962 [2024-12-06 01:14:54.662611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:21.962 [2024-12-06 01:14:54.662650] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:21.962 [2024-12-06 01:14:54.662949] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:21.962 [2024-12-06 01:14:54.663212] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:21.962 [2024-12-06 01:14:54.663240] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:21.962 [2024-12-06 01:14:54.663260] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:21.962 [2024-12-06 01:14:54.663279] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:22.222 [2024-12-06 01:14:54.676009] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:22.222 [2024-12-06 01:14:54.676433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.222 [2024-12-06 01:14:54.676469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:22.222 [2024-12-06 01:14:54.676492] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:22.222 [2024-12-06 01:14:54.676778] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:22.222 [2024-12-06 01:14:54.677075] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:22.222 [2024-12-06 01:14:54.677104] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:22.222 [2024-12-06 01:14:54.677124] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:22.222 [2024-12-06 01:14:54.677143] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:22.222 [2024-12-06 01:14:54.689848] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:22.222 [2024-12-06 01:14:54.690306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.222 [2024-12-06 01:14:54.690343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:22.222 [2024-12-06 01:14:54.690367] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:22.222 [2024-12-06 01:14:54.690667] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:22.222 [2024-12-06 01:14:54.690952] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:22.222 [2024-12-06 01:14:54.690980] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:22.222 [2024-12-06 01:14:54.691000] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:22.222 [2024-12-06 01:14:54.691019] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:22.222 [2024-12-06 01:14:54.703694] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:22.222 [2024-12-06 01:14:54.704128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.222 [2024-12-06 01:14:54.704164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:22.222 [2024-12-06 01:14:54.704187] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:22.222 [2024-12-06 01:14:54.704459] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:22.222 [2024-12-06 01:14:54.704699] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:22.222 [2024-12-06 01:14:54.704724] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:22.222 [2024-12-06 01:14:54.704743] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:22.222 [2024-12-06 01:14:54.704760] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:22.222 4480.67 IOPS, 17.50 MiB/s [2024-12-06T00:14:54.931Z] [2024-12-06 01:14:54.717640] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:22.222 [2024-12-06 01:14:54.718080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.222 [2024-12-06 01:14:54.718118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:22.222 [2024-12-06 01:14:54.718142] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:22.222 [2024-12-06 01:14:54.718424] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:22.222 [2024-12-06 01:14:54.718662] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:22.222 [2024-12-06 01:14:54.718688] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:22.222 [2024-12-06 01:14:54.718707] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:22.222 [2024-12-06 01:14:54.718725] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:22.222 [2024-12-06 01:14:54.731513] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:22.222 [2024-12-06 01:14:54.731984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.222 [2024-12-06 01:14:54.732036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:22.222 [2024-12-06 01:14:54.732060] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:22.222 [2024-12-06 01:14:54.732359] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:22.223 [2024-12-06 01:14:54.732602] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:22.223 [2024-12-06 01:14:54.732628] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:22.223 [2024-12-06 01:14:54.732646] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:22.223 [2024-12-06 01:14:54.732664] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:22.223 [2024-12-06 01:14:54.745348] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:22.223 [2024-12-06 01:14:54.745894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.223 [2024-12-06 01:14:54.745931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:22.223 [2024-12-06 01:14:54.745960] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:22.223 [2024-12-06 01:14:54.746259] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:22.223 [2024-12-06 01:14:54.746501] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:22.223 [2024-12-06 01:14:54.746527] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:22.223 [2024-12-06 01:14:54.746546] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:22.223 [2024-12-06 01:14:54.746564] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:22.223 [2024-12-06 01:14:54.759259] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:22.223 [2024-12-06 01:14:54.759756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.223 [2024-12-06 01:14:54.759807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:22.223 [2024-12-06 01:14:54.759842] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:22.223 [2024-12-06 01:14:54.760118] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:22.223 [2024-12-06 01:14:54.760372] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:22.223 [2024-12-06 01:14:54.760397] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:22.223 [2024-12-06 01:14:54.760415] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:22.223 [2024-12-06 01:14:54.760432] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:22.223 [2024-12-06 01:14:54.773175] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:22.223 [2024-12-06 01:14:54.773628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.223 [2024-12-06 01:14:54.773678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:22.223 [2024-12-06 01:14:54.773701] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:22.223 [2024-12-06 01:14:54.773987] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:22.223 [2024-12-06 01:14:54.774270] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:22.223 [2024-12-06 01:14:54.774296] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:22.223 [2024-12-06 01:14:54.774314] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:22.223 [2024-12-06 01:14:54.774332] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:22.223 [2024-12-06 01:14:54.786944] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:22.223 [2024-12-06 01:14:54.787351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.223 [2024-12-06 01:14:54.787401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:22.223 [2024-12-06 01:14:54.787425] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:22.223 [2024-12-06 01:14:54.787711] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:22.223 [2024-12-06 01:14:54.788013] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:22.223 [2024-12-06 01:14:54.788042] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:22.223 [2024-12-06 01:14:54.788062] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:22.223 [2024-12-06 01:14:54.788082] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:22.223 [2024-12-06 01:14:54.800731] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:22.223 [2024-12-06 01:14:54.801167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.223 [2024-12-06 01:14:54.801204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:22.223 [2024-12-06 01:14:54.801227] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:22.223 [2024-12-06 01:14:54.801513] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:22.223 [2024-12-06 01:14:54.801751] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:22.223 [2024-12-06 01:14:54.801776] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:22.223 [2024-12-06 01:14:54.801809] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:22.223 [2024-12-06 01:14:54.801839] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:22.223 [2024-12-06 01:14:54.814515] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:22.223 [2024-12-06 01:14:54.814982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.223 [2024-12-06 01:14:54.815018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:22.223 [2024-12-06 01:14:54.815042] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:22.223 [2024-12-06 01:14:54.815326] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:22.223 [2024-12-06 01:14:54.815564] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:22.223 [2024-12-06 01:14:54.815590] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:22.223 [2024-12-06 01:14:54.815608] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:22.223 [2024-12-06 01:14:54.815626] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:22.223 [2024-12-06 01:14:54.828470] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:22.223 [2024-12-06 01:14:54.828938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.223 [2024-12-06 01:14:54.828975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:22.224 [2024-12-06 01:14:54.828999] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:22.224 [2024-12-06 01:14:54.829286] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:22.224 [2024-12-06 01:14:54.829523] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:22.224 [2024-12-06 01:14:54.829549] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:22.224 [2024-12-06 01:14:54.829572] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:22.224 [2024-12-06 01:14:54.829591] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:22.224 [2024-12-06 01:14:54.842337] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:22.224 [2024-12-06 01:14:54.842733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.224 [2024-12-06 01:14:54.842783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:22.224 [2024-12-06 01:14:54.842829] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:22.224 [2024-12-06 01:14:54.843119] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:22.224 [2024-12-06 01:14:54.843393] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:22.224 [2024-12-06 01:14:54.843419] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:22.224 [2024-12-06 01:14:54.843437] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:22.224 [2024-12-06 01:14:54.843454] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:22.224 [2024-12-06 01:14:54.856246] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:22.224 [2024-12-06 01:14:54.856810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.224 [2024-12-06 01:14:54.856870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:22.224 [2024-12-06 01:14:54.856895] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:22.224 [2024-12-06 01:14:54.857181] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:22.224 [2024-12-06 01:14:54.857420] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:22.224 [2024-12-06 01:14:54.857446] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:22.224 [2024-12-06 01:14:54.857464] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:22.224 [2024-12-06 01:14:54.857482] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:22.224 [2024-12-06 01:14:54.870161] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:22.224 [2024-12-06 01:14:54.870564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.224 [2024-12-06 01:14:54.870601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:22.224 [2024-12-06 01:14:54.870624] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:22.224 [2024-12-06 01:14:54.870909] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:22.224 [2024-12-06 01:14:54.871190] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:22.224 [2024-12-06 01:14:54.871215] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:22.224 [2024-12-06 01:14:54.871234] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:22.224 [2024-12-06 01:14:54.871257] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:22.224 [2024-12-06 01:14:54.883877] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:22.224 [2024-12-06 01:14:54.884310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.224 [2024-12-06 01:14:54.884362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:22.224 [2024-12-06 01:14:54.884386] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:22.224 [2024-12-06 01:14:54.884668] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:22.224 [2024-12-06 01:14:54.884953] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:22.224 [2024-12-06 01:14:54.884982] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:22.224 [2024-12-06 01:14:54.885001] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:22.224 [2024-12-06 01:14:54.885021] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:22.224 [2024-12-06 01:14:54.897603] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:22.224 [2024-12-06 01:14:54.898080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.224 [2024-12-06 01:14:54.898117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:22.224 [2024-12-06 01:14:54.898140] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:22.224 [2024-12-06 01:14:54.898426] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:22.224 [2024-12-06 01:14:54.898664] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:22.224 [2024-12-06 01:14:54.898724] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:22.224 [2024-12-06 01:14:54.898743] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:22.224 [2024-12-06 01:14:54.898776] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:22.224 [2024-12-06 01:14:54.911485] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:22.224 [2024-12-06 01:14:54.911857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.224 [2024-12-06 01:14:54.911915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:22.224 [2024-12-06 01:14:54.911939] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:22.224 [2024-12-06 01:14:54.912225] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:22.224 [2024-12-06 01:14:54.912464] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:22.224 [2024-12-06 01:14:54.912490] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:22.224 [2024-12-06 01:14:54.912508] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:22.224 [2024-12-06 01:14:54.912525] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:22.224 [2024-12-06 01:14:54.925441] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:22.224 [2024-12-06 01:14:54.925894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.224 [2024-12-06 01:14:54.925933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:22.224 [2024-12-06 01:14:54.925956] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:22.224 [2024-12-06 01:14:54.926243] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:22.224 [2024-12-06 01:14:54.926481] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:22.225 [2024-12-06 01:14:54.926507] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:22.225 [2024-12-06 01:14:54.926525] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:22.225 [2024-12-06 01:14:54.926544] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:22.485 [2024-12-06 01:14:54.939568] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:22.485 [2024-12-06 01:14:54.940024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.485 [2024-12-06 01:14:54.940068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:22.485 [2024-12-06 01:14:54.940094] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:22.485 [2024-12-06 01:14:54.940376] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:22.485 [2024-12-06 01:14:54.940616] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:22.485 [2024-12-06 01:14:54.940641] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:22.485 [2024-12-06 01:14:54.940660] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:22.485 [2024-12-06 01:14:54.940678] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:22.485 [2024-12-06 01:14:54.953502] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:22.485 [2024-12-06 01:14:54.953954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.485 [2024-12-06 01:14:54.954005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:22.485 [2024-12-06 01:14:54.954030] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:22.485 [2024-12-06 01:14:54.954317] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:22.485 [2024-12-06 01:14:54.954560] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:22.485 [2024-12-06 01:14:54.954586] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:22.485 [2024-12-06 01:14:54.954604] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:22.485 [2024-12-06 01:14:54.954622] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:22.485 [2024-12-06 01:14:54.967404] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:22.485 [2024-12-06 01:14:54.967882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.485 [2024-12-06 01:14:54.967920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:22.485 [2024-12-06 01:14:54.967949] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:22.485 [2024-12-06 01:14:54.968237] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:22.485 [2024-12-06 01:14:54.968498] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:22.485 [2024-12-06 01:14:54.968523] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:22.485 [2024-12-06 01:14:54.968541] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:22.485 [2024-12-06 01:14:54.968558] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:22.485 [2024-12-06 01:14:54.981283] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:22.485 [2024-12-06 01:14:54.981683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.485 [2024-12-06 01:14:54.981720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:22.485 [2024-12-06 01:14:54.981743] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:22.485 [2024-12-06 01:14:54.982040] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:22.485 [2024-12-06 01:14:54.982300] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:22.485 [2024-12-06 01:14:54.982326] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:22.485 [2024-12-06 01:14:54.982344] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:22.485 [2024-12-06 01:14:54.982362] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:22.485 [2024-12-06 01:14:54.994980] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:22.485 [2024-12-06 01:14:54.995470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.485 [2024-12-06 01:14:54.995507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:22.485 [2024-12-06 01:14:54.995530] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:22.485 [2024-12-06 01:14:54.995837] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:22.485 [2024-12-06 01:14:54.996129] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:22.485 [2024-12-06 01:14:54.996158] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:22.485 [2024-12-06 01:14:54.996177] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:22.485 [2024-12-06 01:14:54.996210] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:22.485 [2024-12-06 01:14:55.008887] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:22.485 [2024-12-06 01:14:55.009312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.485 [2024-12-06 01:14:55.009348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:22.485 [2024-12-06 01:14:55.009372] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:22.485 [2024-12-06 01:14:55.009654] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:22.485 [2024-12-06 01:14:55.009931] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:22.485 [2024-12-06 01:14:55.009959] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:22.485 [2024-12-06 01:14:55.009979] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:22.485 [2024-12-06 01:14:55.009998] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:22.485 [2024-12-06 01:14:55.022669] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:22.485 [2024-12-06 01:14:55.023112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.485 [2024-12-06 01:14:55.023149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:22.485 [2024-12-06 01:14:55.023174] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:22.485 [2024-12-06 01:14:55.023457] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:22.485 [2024-12-06 01:14:55.023695] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:22.485 [2024-12-06 01:14:55.023720] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:22.485 [2024-12-06 01:14:55.023739] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:22.485 [2024-12-06 01:14:55.023756] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:22.485 [2024-12-06 01:14:55.036430] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:22.485 [2024-12-06 01:14:55.036902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.485 [2024-12-06 01:14:55.036940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:22.485 [2024-12-06 01:14:55.036963] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:22.485 [2024-12-06 01:14:55.037249] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:22.485 [2024-12-06 01:14:55.037487] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:22.485 [2024-12-06 01:14:55.037513] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:22.485 [2024-12-06 01:14:55.037531] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:22.485 [2024-12-06 01:14:55.037548] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:22.485 [2024-12-06 01:14:55.050342] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:22.485 [2024-12-06 01:14:55.050752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.486 [2024-12-06 01:14:55.050789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:22.486 [2024-12-06 01:14:55.050813] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:22.486 [2024-12-06 01:14:55.051097] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:22.486 [2024-12-06 01:14:55.051370] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:22.486 [2024-12-06 01:14:55.051396] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:22.486 [2024-12-06 01:14:55.051419] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:22.486 [2024-12-06 01:14:55.051437] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:22.486 [2024-12-06 01:14:55.064097] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:22.486 [2024-12-06 01:14:55.064538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.486 [2024-12-06 01:14:55.064574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:22.486 [2024-12-06 01:14:55.064598] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:22.486 [2024-12-06 01:14:55.064910] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:22.486 [2024-12-06 01:14:55.065193] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:22.486 [2024-12-06 01:14:55.065219] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:22.486 [2024-12-06 01:14:55.065238] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:22.486 [2024-12-06 01:14:55.065255] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:22.486 [2024-12-06 01:14:55.077926] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:22.486 [2024-12-06 01:14:55.078418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.486 [2024-12-06 01:14:55.078454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:22.486 [2024-12-06 01:14:55.078479] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:22.486 [2024-12-06 01:14:55.078763] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:22.486 [2024-12-06 01:14:55.079063] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:22.486 [2024-12-06 01:14:55.079093] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:22.486 [2024-12-06 01:14:55.079127] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:22.486 [2024-12-06 01:14:55.079147] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:22.486 [2024-12-06 01:14:55.091738] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:22.486 [2024-12-06 01:14:55.092175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.486 [2024-12-06 01:14:55.092210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:22.486 [2024-12-06 01:14:55.092249] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:22.486 [2024-12-06 01:14:55.092534] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:22.486 [2024-12-06 01:14:55.092773] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:22.486 [2024-12-06 01:14:55.092822] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:22.486 [2024-12-06 01:14:55.092844] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:22.486 [2024-12-06 01:14:55.092892] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:22.486 [2024-12-06 01:14:55.105522] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:22.486 [2024-12-06 01:14:55.105987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.486 [2024-12-06 01:14:55.106024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:22.486 [2024-12-06 01:14:55.106048] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:22.486 [2024-12-06 01:14:55.106331] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:22.486 [2024-12-06 01:14:55.106569] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:22.486 [2024-12-06 01:14:55.106595] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:22.486 [2024-12-06 01:14:55.106613] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:22.486 [2024-12-06 01:14:55.106630] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:22.486 [2024-12-06 01:14:55.119415] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:22.486 [2024-12-06 01:14:55.119847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.486 [2024-12-06 01:14:55.119889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:22.486 [2024-12-06 01:14:55.119913] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:22.486 [2024-12-06 01:14:55.120199] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:22.486 [2024-12-06 01:14:55.120438] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:22.486 [2024-12-06 01:14:55.120464] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:22.486 [2024-12-06 01:14:55.120483] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:22.486 [2024-12-06 01:14:55.120500] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:22.486 [2024-12-06 01:14:55.133285] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:22.486 [2024-12-06 01:14:55.133679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.486 [2024-12-06 01:14:55.133732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:22.486 [2024-12-06 01:14:55.133770] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:22.486 [2024-12-06 01:14:55.134056] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:22.486 [2024-12-06 01:14:55.134337] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:22.486 [2024-12-06 01:14:55.134380] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:22.486 [2024-12-06 01:14:55.134399] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:22.486 [2024-12-06 01:14:55.134417] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:22.486 [2024-12-06 01:14:55.147154] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:22.486 [2024-12-06 01:14:55.147687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.486 [2024-12-06 01:14:55.147745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:22.486 [2024-12-06 01:14:55.147770] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:22.486 [2024-12-06 01:14:55.148066] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:22.486 [2024-12-06 01:14:55.148322] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:22.486 [2024-12-06 01:14:55.148348] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:22.486 [2024-12-06 01:14:55.148367] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:22.486 [2024-12-06 01:14:55.148385] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:22.486 [2024-12-06 01:14:55.161336] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:22.486 [2024-12-06 01:14:55.161779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.486 [2024-12-06 01:14:55.161828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:22.486 [2024-12-06 01:14:55.161855] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:22.486 [2024-12-06 01:14:55.162146] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:22.486 [2024-12-06 01:14:55.162439] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:22.486 [2024-12-06 01:14:55.162467] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:22.486 [2024-12-06 01:14:55.162486] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:22.486 [2024-12-06 01:14:55.162505] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:22.486 [2024-12-06 01:14:55.175383] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:22.486 [2024-12-06 01:14:55.175860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.486 [2024-12-06 01:14:55.175897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:22.486 [2024-12-06 01:14:55.175921] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:22.486 [2024-12-06 01:14:55.176207] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:22.486 [2024-12-06 01:14:55.176447] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:22.486 [2024-12-06 01:14:55.176473] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:22.486 [2024-12-06 01:14:55.176491] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:22.486 [2024-12-06 01:14:55.176508] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:22.486 [2024-12-06 01:14:55.189582] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:22.487 [2024-12-06 01:14:55.190016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.487 [2024-12-06 01:14:55.190053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:22.487 [2024-12-06 01:14:55.190076] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:22.487 [2024-12-06 01:14:55.190342] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:22.487 [2024-12-06 01:14:55.190622] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:22.487 [2024-12-06 01:14:55.190650] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:22.487 [2024-12-06 01:14:55.190669] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:22.487 [2024-12-06 01:14:55.190688] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:22.746 [2024-12-06 01:14:55.203440] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:22.746 [2024-12-06 01:14:55.203877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.746 [2024-12-06 01:14:55.203914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:22.746 [2024-12-06 01:14:55.203937] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:22.746 [2024-12-06 01:14:55.204220] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:22.746 [2024-12-06 01:14:55.204459] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:22.746 [2024-12-06 01:14:55.204484] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:22.746 [2024-12-06 01:14:55.204503] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:22.746 [2024-12-06 01:14:55.204520] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:22.746 [2024-12-06 01:14:55.217191] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:22.746 [2024-12-06 01:14:55.217632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.746 [2024-12-06 01:14:55.217669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:22.746 [2024-12-06 01:14:55.217693] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:22.746 [2024-12-06 01:14:55.217978] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:22.746 [2024-12-06 01:14:55.218258] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:22.746 [2024-12-06 01:14:55.218284] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:22.746 [2024-12-06 01:14:55.218302] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:22.746 [2024-12-06 01:14:55.218321] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:22.746 [2024-12-06 01:14:55.231064] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:22.746 [2024-12-06 01:14:55.231572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.746 [2024-12-06 01:14:55.231609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:22.746 [2024-12-06 01:14:55.231632] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:22.746 [2024-12-06 01:14:55.231917] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:22.746 [2024-12-06 01:14:55.232184] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:22.746 [2024-12-06 01:14:55.232216] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:22.746 [2024-12-06 01:14:55.232235] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:22.746 [2024-12-06 01:14:55.232252] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:22.746 [2024-12-06 01:14:55.244759] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:22.746 [2024-12-06 01:14:55.245174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.746 [2024-12-06 01:14:55.245211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:22.746 [2024-12-06 01:14:55.245234] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:22.746 [2024-12-06 01:14:55.245513] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:22.746 [2024-12-06 01:14:55.245753] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:22.746 [2024-12-06 01:14:55.245778] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:22.746 [2024-12-06 01:14:55.245812] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:22.746 [2024-12-06 01:14:55.245844] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:22.746 [2024-12-06 01:14:55.258509] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:22.746 [2024-12-06 01:14:55.258984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.746 [2024-12-06 01:14:55.259021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:22.746 [2024-12-06 01:14:55.259044] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:22.746 [2024-12-06 01:14:55.259328] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:22.746 [2024-12-06 01:14:55.259567] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:22.746 [2024-12-06 01:14:55.259593] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:22.746 [2024-12-06 01:14:55.259611] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:22.746 [2024-12-06 01:14:55.259629] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:22.746 [2024-12-06 01:14:55.272389] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:22.746 [2024-12-06 01:14:55.272891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.746 [2024-12-06 01:14:55.272928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:22.746 [2024-12-06 01:14:55.272951] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:22.746 [2024-12-06 01:14:55.273241] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:22.746 [2024-12-06 01:14:55.273479] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:22.746 [2024-12-06 01:14:55.273505] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:22.746 [2024-12-06 01:14:55.273523] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:22.746 [2024-12-06 01:14:55.273546] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:22.746 [2024-12-06 01:14:55.286938] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:22.746 [2024-12-06 01:14:55.287375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.746 [2024-12-06 01:14:55.287416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:22.746 [2024-12-06 01:14:55.287442] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:22.746 [2024-12-06 01:14:55.287730] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:22.746 [2024-12-06 01:14:55.288035] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:22.746 [2024-12-06 01:14:55.288067] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:22.746 [2024-12-06 01:14:55.288089] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:22.746 [2024-12-06 01:14:55.288111] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:22.746 [2024-12-06 01:14:55.301605] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:22.746 [2024-12-06 01:14:55.302100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.746 [2024-12-06 01:14:55.302142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:22.746 [2024-12-06 01:14:55.302168] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:22.746 [2024-12-06 01:14:55.302456] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:22.746 [2024-12-06 01:14:55.302748] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:22.746 [2024-12-06 01:14:55.302779] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:22.746 [2024-12-06 01:14:55.302801] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:22.747 [2024-12-06 01:14:55.302834] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:22.747 [2024-12-06 01:14:55.316116] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:22.747 [2024-12-06 01:14:55.316577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.747 [2024-12-06 01:14:55.316618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:22.747 [2024-12-06 01:14:55.316644] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:22.747 [2024-12-06 01:14:55.316945] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:22.747 [2024-12-06 01:14:55.317236] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:22.747 [2024-12-06 01:14:55.317268] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:22.747 [2024-12-06 01:14:55.317290] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:22.747 [2024-12-06 01:14:55.317312] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:22.747 [2024-12-06 01:14:55.330805] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:22.747 [2024-12-06 01:14:55.331267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.747 [2024-12-06 01:14:55.331308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:22.747 [2024-12-06 01:14:55.331335] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:22.747 [2024-12-06 01:14:55.331622] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:22.747 [2024-12-06 01:14:55.331925] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:22.747 [2024-12-06 01:14:55.331958] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:22.747 [2024-12-06 01:14:55.331981] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:22.747 [2024-12-06 01:14:55.332003] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:22.747 [2024-12-06 01:14:55.345288] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:22.747 [2024-12-06 01:14:55.345723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.747 [2024-12-06 01:14:55.345763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:22.747 [2024-12-06 01:14:55.345790] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:22.747 [2024-12-06 01:14:55.346088] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:22.747 [2024-12-06 01:14:55.346379] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:22.747 [2024-12-06 01:14:55.346410] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:22.747 [2024-12-06 01:14:55.346432] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:22.747 [2024-12-06 01:14:55.346454] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:22.747 [2024-12-06 01:14:55.359956] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:22.747 [2024-12-06 01:14:55.360389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.747 [2024-12-06 01:14:55.360430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:22.747 [2024-12-06 01:14:55.360456] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:22.747 [2024-12-06 01:14:55.360744] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:22.747 [2024-12-06 01:14:55.361046] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:22.747 [2024-12-06 01:14:55.361079] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:22.747 [2024-12-06 01:14:55.361101] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:22.747 [2024-12-06 01:14:55.361123] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:22.747 [2024-12-06 01:14:55.374593] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:22.747 [2024-12-06 01:14:55.375081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.747 [2024-12-06 01:14:55.375123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:22.747 [2024-12-06 01:14:55.375156] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:22.747 [2024-12-06 01:14:55.375445] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:22.747 [2024-12-06 01:14:55.375736] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:22.747 [2024-12-06 01:14:55.375767] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:22.747 [2024-12-06 01:14:55.375789] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:22.747 [2024-12-06 01:14:55.375810] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:22.747 [2024-12-06 01:14:55.389294] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:22.747 [2024-12-06 01:14:55.389795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.747 [2024-12-06 01:14:55.389845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:22.747 [2024-12-06 01:14:55.389872] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:22.747 [2024-12-06 01:14:55.390159] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:22.747 [2024-12-06 01:14:55.390449] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:22.747 [2024-12-06 01:14:55.390481] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:22.747 [2024-12-06 01:14:55.390503] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:22.747 [2024-12-06 01:14:55.390524] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:22.747 [2024-12-06 01:14:55.404017] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:22.747 [2024-12-06 01:14:55.404474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.747 [2024-12-06 01:14:55.404515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:22.747 [2024-12-06 01:14:55.404541] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:22.747 [2024-12-06 01:14:55.404837] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:22.747 [2024-12-06 01:14:55.405127] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:22.747 [2024-12-06 01:14:55.405160] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:22.747 [2024-12-06 01:14:55.405183] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:22.747 [2024-12-06 01:14:55.405204] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:22.747 [2024-12-06 01:14:55.418701] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:22.747 [2024-12-06 01:14:55.419158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.747 [2024-12-06 01:14:55.419200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:22.747 [2024-12-06 01:14:55.419226] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:22.747 [2024-12-06 01:14:55.419517] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:22.747 [2024-12-06 01:14:55.419807] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:22.747 [2024-12-06 01:14:55.419850] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:22.747 [2024-12-06 01:14:55.419873] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:22.747 [2024-12-06 01:14:55.419895] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:22.747 [2024-12-06 01:14:55.433408] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:22.747 [2024-12-06 01:14:55.433898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.747 [2024-12-06 01:14:55.433940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:22.747 [2024-12-06 01:14:55.433967] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:22.747 [2024-12-06 01:14:55.434254] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:22.747 [2024-12-06 01:14:55.434545] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:22.747 [2024-12-06 01:14:55.434576] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:22.747 [2024-12-06 01:14:55.434598] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:22.747 [2024-12-06 01:14:55.434620] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:22.747 [2024-12-06 01:14:55.447898] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:22.747 [2024-12-06 01:14:55.448342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.747 [2024-12-06 01:14:55.448384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:22.747 [2024-12-06 01:14:55.448410] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:22.748 [2024-12-06 01:14:55.448697] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:22.748 [2024-12-06 01:14:55.449003] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:22.748 [2024-12-06 01:14:55.449036] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:22.748 [2024-12-06 01:14:55.449058] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:22.748 [2024-12-06 01:14:55.449079] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:23.006 [2024-12-06 01:14:55.462551] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:23.006 [2024-12-06 01:14:55.463022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:23.006 [2024-12-06 01:14:55.463063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:23.006 [2024-12-06 01:14:55.463088] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:23.006 [2024-12-06 01:14:55.463374] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:23.006 [2024-12-06 01:14:55.463663] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:23.006 [2024-12-06 01:14:55.463701] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:23.006 [2024-12-06 01:14:55.463724] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:23.006 [2024-12-06 01:14:55.463746] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:23.006 [2024-12-06 01:14:55.477213] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:23.006 [2024-12-06 01:14:55.477661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:23.006 [2024-12-06 01:14:55.477702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:23.006 [2024-12-06 01:14:55.477727] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:23.006 [2024-12-06 01:14:55.478026] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:23.006 [2024-12-06 01:14:55.478316] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:23.006 [2024-12-06 01:14:55.478348] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:23.006 [2024-12-06 01:14:55.478370] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:23.006 [2024-12-06 01:14:55.478392] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:23.006 [2024-12-06 01:14:55.491862] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:23.006 [2024-12-06 01:14:55.492343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:23.006 [2024-12-06 01:14:55.492384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:23.006 [2024-12-06 01:14:55.492410] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:23.006 [2024-12-06 01:14:55.492694] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:23.006 [2024-12-06 01:14:55.492997] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:23.006 [2024-12-06 01:14:55.493030] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:23.006 [2024-12-06 01:14:55.493053] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:23.006 [2024-12-06 01:14:55.493075] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:23.006 [2024-12-06 01:14:55.506516] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:23.006 [2024-12-06 01:14:55.506996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:23.006 [2024-12-06 01:14:55.507036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:23.006 [2024-12-06 01:14:55.507078] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:23.006 [2024-12-06 01:14:55.507366] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:23.006 [2024-12-06 01:14:55.507656] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:23.006 [2024-12-06 01:14:55.507687] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:23.006 [2024-12-06 01:14:55.507709] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:23.006 [2024-12-06 01:14:55.507738] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:23.006 [2024-12-06 01:14:55.521265] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:23.006 [2024-12-06 01:14:55.521732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:23.006 [2024-12-06 01:14:55.521773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:23.006 [2024-12-06 01:14:55.521799] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:23.006 [2024-12-06 01:14:55.522099] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:23.006 [2024-12-06 01:14:55.522389] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:23.006 [2024-12-06 01:14:55.522420] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:23.006 [2024-12-06 01:14:55.522443] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:23.006 [2024-12-06 01:14:55.522464] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:23.007 [2024-12-06 01:14:55.535865] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:23.007 [2024-12-06 01:14:55.536333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:23.007 [2024-12-06 01:14:55.536374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:23.007 [2024-12-06 01:14:55.536400] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:23.007 [2024-12-06 01:14:55.536685] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:23.007 [2024-12-06 01:14:55.536987] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:23.007 [2024-12-06 01:14:55.537019] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:23.007 [2024-12-06 01:14:55.537041] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:23.007 [2024-12-06 01:14:55.537063] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:23.007 [2024-12-06 01:14:55.550446] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:23.007 [2024-12-06 01:14:55.550944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:23.007 [2024-12-06 01:14:55.551001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:23.007 [2024-12-06 01:14:55.551027] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:23.007 [2024-12-06 01:14:55.551312] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:23.007 [2024-12-06 01:14:55.551601] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:23.007 [2024-12-06 01:14:55.551632] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:23.007 [2024-12-06 01:14:55.551654] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:23.007 [2024-12-06 01:14:55.551677] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:23.007 [2024-12-06 01:14:55.565094] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:23.007 [2024-12-06 01:14:55.565609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:23.007 [2024-12-06 01:14:55.565668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:23.007 [2024-12-06 01:14:55.565695] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:23.007 [2024-12-06 01:14:55.565994] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:23.007 [2024-12-06 01:14:55.566283] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:23.007 [2024-12-06 01:14:55.566314] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:23.007 [2024-12-06 01:14:55.566337] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:23.007 [2024-12-06 01:14:55.566358] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:23.007 [2024-12-06 01:14:55.579766] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:23.007 [2024-12-06 01:14:55.580300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:23.007 [2024-12-06 01:14:55.580359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:23.007 [2024-12-06 01:14:55.580385] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:23.007 [2024-12-06 01:14:55.580670] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:23.007 [2024-12-06 01:14:55.580973] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:23.007 [2024-12-06 01:14:55.581005] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:23.007 [2024-12-06 01:14:55.581027] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:23.007 [2024-12-06 01:14:55.581048] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:23.007 [2024-12-06 01:14:55.594221] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:23.007 [2024-12-06 01:14:55.594652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:23.007 [2024-12-06 01:14:55.594694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:23.007 [2024-12-06 01:14:55.594720] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:23.007 [2024-12-06 01:14:55.595018] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:23.007 [2024-12-06 01:14:55.595307] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:23.007 [2024-12-06 01:14:55.595339] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:23.007 [2024-12-06 01:14:55.595362] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:23.007 [2024-12-06 01:14:55.595383] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:23.007 [2024-12-06 01:14:55.608774] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:23.007 [2024-12-06 01:14:55.609327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:23.007 [2024-12-06 01:14:55.609368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:23.007 [2024-12-06 01:14:55.609400] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:23.007 [2024-12-06 01:14:55.609689] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:23.007 [2024-12-06 01:14:55.609992] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:23.007 [2024-12-06 01:14:55.610025] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:23.007 [2024-12-06 01:14:55.610047] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:23.007 [2024-12-06 01:14:55.610069] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:23.007 [2024-12-06 01:14:55.623276] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:23.007 [2024-12-06 01:14:55.623736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:23.007 [2024-12-06 01:14:55.623778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:23.007 [2024-12-06 01:14:55.623804] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:23.007 [2024-12-06 01:14:55.624103] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:23.007 [2024-12-06 01:14:55.624393] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:23.007 [2024-12-06 01:14:55.624424] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:23.007 [2024-12-06 01:14:55.624446] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:23.007 [2024-12-06 01:14:55.624468] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:23.007 [2024-12-06 01:14:55.637903] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:23.007 [2024-12-06 01:14:55.638368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:23.007 [2024-12-06 01:14:55.638409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:23.007 [2024-12-06 01:14:55.638435] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:23.007 [2024-12-06 01:14:55.638721] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:23.007 [2024-12-06 01:14:55.639022] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:23.007 [2024-12-06 01:14:55.639054] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:23.007 [2024-12-06 01:14:55.639076] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:23.007 [2024-12-06 01:14:55.639098] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:23.007 [2024-12-06 01:14:55.652528] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:23.007 [2024-12-06 01:14:55.653001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:23.007 [2024-12-06 01:14:55.653043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:23.007 [2024-12-06 01:14:55.653070] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:23.007 [2024-12-06 01:14:55.653355] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:23.007 [2024-12-06 01:14:55.653652] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:23.007 [2024-12-06 01:14:55.653684] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:23.007 [2024-12-06 01:14:55.653706] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:23.007 [2024-12-06 01:14:55.653727] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:23.007 [2024-12-06 01:14:55.667130] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:23.007 [2024-12-06 01:14:55.667600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:23.007 [2024-12-06 01:14:55.667641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:23.007 [2024-12-06 01:14:55.667667] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:23.007 [2024-12-06 01:14:55.667963] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:23.007 [2024-12-06 01:14:55.668253] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:23.007 [2024-12-06 01:14:55.668284] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:23.008 [2024-12-06 01:14:55.668306] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:23.008 [2024-12-06 01:14:55.668328] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:23.008 [2024-12-06 01:14:55.681760] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:23.008 [2024-12-06 01:14:55.682218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:23.008 [2024-12-06 01:14:55.682260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:23.008 [2024-12-06 01:14:55.682286] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:23.008 [2024-12-06 01:14:55.682574] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:23.008 [2024-12-06 01:14:55.682877] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:23.008 [2024-12-06 01:14:55.682909] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:23.008 [2024-12-06 01:14:55.682932] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:23.008 [2024-12-06 01:14:55.682954] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:23.008 [2024-12-06 01:14:55.696384] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:23.008 [2024-12-06 01:14:55.696903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:23.008 [2024-12-06 01:14:55.696945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:23.008 [2024-12-06 01:14:55.696971] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:23.008 [2024-12-06 01:14:55.697259] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:23.008 [2024-12-06 01:14:55.697549] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:23.008 [2024-12-06 01:14:55.697580] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:23.008 [2024-12-06 01:14:55.697610] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:23.008 [2024-12-06 01:14:55.697633] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:23.008 [2024-12-06 01:14:55.711068] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:23.008 [2024-12-06 01:14:55.711509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:23.008 [2024-12-06 01:14:55.711551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:23.008 [2024-12-06 01:14:55.711578] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:23.008 [2024-12-06 01:14:55.711877] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:23.266 3360.50 IOPS, 13.13 MiB/s [2024-12-06T00:14:55.975Z] [2024-12-06 01:14:55.714004] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:23.266 [2024-12-06 01:14:55.714033] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:23.266 [2024-12-06 01:14:55.714056] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:23.267 [2024-12-06 01:14:55.714077] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:23.267 [2024-12-06 01:14:55.725526] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:23.267 [2024-12-06 01:14:55.726001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:23.267 [2024-12-06 01:14:55.726043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:23.267 [2024-12-06 01:14:55.726068] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:23.267 [2024-12-06 01:14:55.726356] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:23.267 [2024-12-06 01:14:55.726645] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:23.267 [2024-12-06 01:14:55.726676] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:23.267 [2024-12-06 01:14:55.726698] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:23.267 [2024-12-06 01:14:55.726720] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:23.267 [2024-12-06 01:14:55.740113] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:23.267 [2024-12-06 01:14:55.740568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:23.267 [2024-12-06 01:14:55.740609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:23.267 [2024-12-06 01:14:55.740635] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:23.267 [2024-12-06 01:14:55.740931] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:23.267 [2024-12-06 01:14:55.741220] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:23.267 [2024-12-06 01:14:55.741251] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:23.267 [2024-12-06 01:14:55.741274] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:23.267 [2024-12-06 01:14:55.741295] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:23.267 [2024-12-06 01:14:55.754747] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:23.267 [2024-12-06 01:14:55.755226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:23.267 [2024-12-06 01:14:55.755267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:23.267 [2024-12-06 01:14:55.755292] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:23.267 [2024-12-06 01:14:55.755578] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:23.267 [2024-12-06 01:14:55.755884] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:23.267 [2024-12-06 01:14:55.755916] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:23.267 [2024-12-06 01:14:55.755939] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:23.267 [2024-12-06 01:14:55.755960] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:23.267 [2024-12-06 01:14:55.769386] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:23.267 [2024-12-06 01:14:55.769876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:23.267 [2024-12-06 01:14:55.769919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:23.267 [2024-12-06 01:14:55.769948] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:23.267 [2024-12-06 01:14:55.770234] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:23.267 [2024-12-06 01:14:55.770523] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:23.267 [2024-12-06 01:14:55.770554] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:23.267 [2024-12-06 01:14:55.770576] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:23.267 [2024-12-06 01:14:55.770598] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:23.267 [2024-12-06 01:14:55.784007] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:23.267 [2024-12-06 01:14:55.784431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:23.267 [2024-12-06 01:14:55.784472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:23.267 [2024-12-06 01:14:55.784498] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:23.267 [2024-12-06 01:14:55.784784] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:23.267 [2024-12-06 01:14:55.785086] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:23.267 [2024-12-06 01:14:55.785119] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:23.267 [2024-12-06 01:14:55.785141] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:23.267 [2024-12-06 01:14:55.785162] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:23.267 [2024-12-06 01:14:55.798582] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:23.267 [2024-12-06 01:14:55.799064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:23.267 [2024-12-06 01:14:55.799106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:23.267 [2024-12-06 01:14:55.799131] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:23.267 [2024-12-06 01:14:55.799418] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:23.267 [2024-12-06 01:14:55.799709] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:23.267 [2024-12-06 01:14:55.799740] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:23.267 [2024-12-06 01:14:55.799762] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:23.267 [2024-12-06 01:14:55.799782] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:23.267 [2024-12-06 01:14:55.813213] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:23.267 [2024-12-06 01:14:55.813641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:23.267 [2024-12-06 01:14:55.813682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:23.267 [2024-12-06 01:14:55.813708] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:23.267 [2024-12-06 01:14:55.814009] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:23.267 [2024-12-06 01:14:55.814299] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:23.267 [2024-12-06 01:14:55.814330] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:23.267 [2024-12-06 01:14:55.814352] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:23.267 [2024-12-06 01:14:55.814373] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:23.267 [2024-12-06 01:14:55.827810] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:23.267 [2024-12-06 01:14:55.828283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:23.267 [2024-12-06 01:14:55.828325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:23.267 [2024-12-06 01:14:55.828351] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:23.267 [2024-12-06 01:14:55.828639] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:23.267 [2024-12-06 01:14:55.828940] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:23.267 [2024-12-06 01:14:55.828972] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:23.267 [2024-12-06 01:14:55.828994] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:23.267 [2024-12-06 01:14:55.829017] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:23.267 [2024-12-06 01:14:55.842425] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:23.267 [2024-12-06 01:14:55.842890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:23.267 [2024-12-06 01:14:55.842931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:23.267 [2024-12-06 01:14:55.842957] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:23.267 [2024-12-06 01:14:55.843253] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:23.267 [2024-12-06 01:14:55.843551] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:23.267 [2024-12-06 01:14:55.843582] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:23.267 [2024-12-06 01:14:55.843604] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:23.267 [2024-12-06 01:14:55.843627] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:23.267 [2024-12-06 01:14:55.857045] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:23.267 [2024-12-06 01:14:55.857506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:23.268 [2024-12-06 01:14:55.857547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:23.268 [2024-12-06 01:14:55.857573] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:23.268 [2024-12-06 01:14:55.857872] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:23.268 [2024-12-06 01:14:55.858161] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:23.268 [2024-12-06 01:14:55.858192] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:23.268 [2024-12-06 01:14:55.858214] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:23.268 [2024-12-06 01:14:55.858234] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:23.268 [2024-12-06 01:14:55.871652] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:23.268 [2024-12-06 01:14:55.872115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:23.268 [2024-12-06 01:14:55.872156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:23.268 [2024-12-06 01:14:55.872182] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:23.268 [2024-12-06 01:14:55.872468] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:23.268 [2024-12-06 01:14:55.872757] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:23.268 [2024-12-06 01:14:55.872788] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:23.268 [2024-12-06 01:14:55.872810] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:23.268 [2024-12-06 01:14:55.872860] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:23.268 [2024-12-06 01:14:55.886283] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:23.268 [2024-12-06 01:14:55.886787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:23.268 [2024-12-06 01:14:55.886840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:23.268 [2024-12-06 01:14:55.886868] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:23.268 [2024-12-06 01:14:55.887157] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:23.268 [2024-12-06 01:14:55.887453] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:23.268 [2024-12-06 01:14:55.887485] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:23.268 [2024-12-06 01:14:55.887507] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:23.268 [2024-12-06 01:14:55.887528] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:23.268 [2024-12-06 01:14:55.900968] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:23.268 [2024-12-06 01:14:55.901420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:23.268 [2024-12-06 01:14:55.901462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:23.268 [2024-12-06 01:14:55.901488] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:23.268 [2024-12-06 01:14:55.901773] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:23.268 [2024-12-06 01:14:55.902074] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:23.268 [2024-12-06 01:14:55.902107] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:23.268 [2024-12-06 01:14:55.902130] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:23.268 [2024-12-06 01:14:55.902151] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:23.268 [2024-12-06 01:14:55.915577] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:23.268 [2024-12-06 01:14:55.916032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:23.268 [2024-12-06 01:14:55.916074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:23.268 [2024-12-06 01:14:55.916100] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:23.268 [2024-12-06 01:14:55.916388] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:23.268 [2024-12-06 01:14:55.916678] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:23.268 [2024-12-06 01:14:55.916709] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:23.268 [2024-12-06 01:14:55.916748] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:23.268 [2024-12-06 01:14:55.916771] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:23.268 [2024-12-06 01:14:55.930230] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:23.268 [2024-12-06 01:14:55.930699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:23.268 [2024-12-06 01:14:55.930741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:23.268 [2024-12-06 01:14:55.930768] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:23.268 [2024-12-06 01:14:55.931067] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:23.268 [2024-12-06 01:14:55.931358] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:23.268 [2024-12-06 01:14:55.931389] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:23.268 [2024-12-06 01:14:55.931417] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:23.268 [2024-12-06 01:14:55.931440] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:23.268 [2024-12-06 01:14:55.944695] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:23.268 [2024-12-06 01:14:55.945112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:23.268 [2024-12-06 01:14:55.945155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:23.268 [2024-12-06 01:14:55.945181] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:23.268 [2024-12-06 01:14:55.945469] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:23.268 [2024-12-06 01:14:55.945758] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:23.268 [2024-12-06 01:14:55.945790] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:23.268 [2024-12-06 01:14:55.945812] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:23.268 [2024-12-06 01:14:55.945849] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:23.268 [2024-12-06 01:14:55.959265] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:23.268 [2024-12-06 01:14:55.959744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:23.268 [2024-12-06 01:14:55.959785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:23.268 [2024-12-06 01:14:55.959810] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:23.268 [2024-12-06 01:14:55.960110] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:23.268 [2024-12-06 01:14:55.960399] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:23.268 [2024-12-06 01:14:55.960430] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:23.268 [2024-12-06 01:14:55.960452] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:23.268 [2024-12-06 01:14:55.960474] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:23.527 [2024-12-06 01:14:55.973916] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:23.527 [2024-12-06 01:14:55.974344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:23.527 [2024-12-06 01:14:55.974384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:23.527 [2024-12-06 01:14:55.974410] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:23.527 [2024-12-06 01:14:55.974695] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:23.527 [2024-12-06 01:14:55.974997] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:23.527 [2024-12-06 01:14:55.975028] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:23.527 [2024-12-06 01:14:55.975051] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:23.527 [2024-12-06 01:14:55.975073] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:23.527 [2024-12-06 01:14:55.988485] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:23.527 [2024-12-06 01:14:55.988938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:23.527 [2024-12-06 01:14:55.988980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:23.527 [2024-12-06 01:14:55.989006] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:23.527 [2024-12-06 01:14:55.989293] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:23.527 [2024-12-06 01:14:55.989582] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:23.527 [2024-12-06 01:14:55.989613] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:23.527 [2024-12-06 01:14:55.989634] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:23.527 [2024-12-06 01:14:55.989656] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:23.527 [2024-12-06 01:14:56.003077] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:23.527 [2024-12-06 01:14:56.003548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:23.527 [2024-12-06 01:14:56.003590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:23.527 [2024-12-06 01:14:56.003616] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:23.527 [2024-12-06 01:14:56.003918] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:23.527 [2024-12-06 01:14:56.004208] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:23.527 [2024-12-06 01:14:56.004239] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:23.527 [2024-12-06 01:14:56.004261] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:23.527 [2024-12-06 01:14:56.004281] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:23.527 [2024-12-06 01:14:56.017689] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:23.527 [2024-12-06 01:14:56.018155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:23.527 [2024-12-06 01:14:56.018197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:23.527 [2024-12-06 01:14:56.018222] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:23.528 [2024-12-06 01:14:56.018508] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:23.528 [2024-12-06 01:14:56.018797] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:23.528 [2024-12-06 01:14:56.018841] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:23.528 [2024-12-06 01:14:56.018865] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:23.528 [2024-12-06 01:14:56.018887] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:23.528 [2024-12-06 01:14:56.032298] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:23.528 [2024-12-06 01:14:56.032769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:23.528 [2024-12-06 01:14:56.032826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:23.528 [2024-12-06 01:14:56.032855] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:23.528 [2024-12-06 01:14:56.033141] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:23.528 [2024-12-06 01:14:56.033431] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:23.528 [2024-12-06 01:14:56.033462] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:23.528 [2024-12-06 01:14:56.033485] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:23.528 [2024-12-06 01:14:56.033506] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:23.528 [2024-12-06 01:14:56.046956] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:23.528 [2024-12-06 01:14:56.047424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:23.528 [2024-12-06 01:14:56.047465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:23.528 [2024-12-06 01:14:56.047491] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:23.528 [2024-12-06 01:14:56.047778] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:23.528 [2024-12-06 01:14:56.048080] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:23.528 [2024-12-06 01:14:56.048112] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:23.528 [2024-12-06 01:14:56.048134] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:23.528 [2024-12-06 01:14:56.048156] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:23.528 [2024-12-06 01:14:56.061590] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:23.528 [2024-12-06 01:14:56.062052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:23.528 [2024-12-06 01:14:56.062094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:23.528 [2024-12-06 01:14:56.062120] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:23.528 [2024-12-06 01:14:56.062407] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:23.528 [2024-12-06 01:14:56.062697] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:23.528 [2024-12-06 01:14:56.062728] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:23.528 [2024-12-06 01:14:56.062749] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:23.528 [2024-12-06 01:14:56.062770] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:23.528 [2024-12-06 01:14:56.076233] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:23.528 [2024-12-06 01:14:56.076778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:23.528 [2024-12-06 01:14:56.076828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:23.528 [2024-12-06 01:14:56.076857] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:23.528 [2024-12-06 01:14:56.077150] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:23.528 [2024-12-06 01:14:56.077439] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:23.528 [2024-12-06 01:14:56.077470] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:23.528 [2024-12-06 01:14:56.077491] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:23.528 [2024-12-06 01:14:56.077512] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:23.528 [2024-12-06 01:14:56.090920] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:23.528 [2024-12-06 01:14:56.091458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:23.528 [2024-12-06 01:14:56.091500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:23.528 [2024-12-06 01:14:56.091526] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:23.528 [2024-12-06 01:14:56.091812] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:23.528 [2024-12-06 01:14:56.092115] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:23.528 [2024-12-06 01:14:56.092147] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:23.528 [2024-12-06 01:14:56.092170] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:23.528 [2024-12-06 01:14:56.092191] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:23.528 [2024-12-06 01:14:56.105371] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:23.528 [2024-12-06 01:14:56.105893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:23.528 [2024-12-06 01:14:56.105935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:23.528 [2024-12-06 01:14:56.105961] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:23.528 [2024-12-06 01:14:56.106246] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:23.528 [2024-12-06 01:14:56.106534] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:23.528 [2024-12-06 01:14:56.106564] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:23.528 [2024-12-06 01:14:56.106587] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:23.528 [2024-12-06 01:14:56.106608] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:23.528 [2024-12-06 01:14:56.120017] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:23.528 [2024-12-06 01:14:56.120522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:23.528 [2024-12-06 01:14:56.120563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:23.528 [2024-12-06 01:14:56.120590] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:23.528 [2024-12-06 01:14:56.120891] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:23.528 [2024-12-06 01:14:56.121179] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:23.528 [2024-12-06 01:14:56.121217] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:23.528 [2024-12-06 01:14:56.121240] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:23.528 [2024-12-06 01:14:56.121261] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:23.528 [2024-12-06 01:14:56.134770] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:23.528 [2024-12-06 01:14:56.135252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:23.528 [2024-12-06 01:14:56.135312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:23.528 [2024-12-06 01:14:56.135358] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:23.528 [2024-12-06 01:14:56.135720] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:23.528 [2024-12-06 01:14:56.136029] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:23.528 [2024-12-06 01:14:56.136063] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:23.528 [2024-12-06 01:14:56.136086] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:23.528 [2024-12-06 01:14:56.136107] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:23.528 [2024-12-06 01:14:56.149346] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:23.528 [2024-12-06 01:14:56.149823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:23.528 [2024-12-06 01:14:56.149873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:23.528 [2024-12-06 01:14:56.149899] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:23.528 [2024-12-06 01:14:56.150187] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:23.528 [2024-12-06 01:14:56.150477] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:23.528 [2024-12-06 01:14:56.150509] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:23.528 [2024-12-06 01:14:56.150530] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:23.528 [2024-12-06 01:14:56.150552] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:23.528 [2024-12-06 01:14:56.164056] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:23.528 [2024-12-06 01:14:56.164573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:23.528 [2024-12-06 01:14:56.164615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:23.529 [2024-12-06 01:14:56.164641] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:23.529 [2024-12-06 01:14:56.164943] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:23.529 [2024-12-06 01:14:56.165234] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:23.529 [2024-12-06 01:14:56.165265] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:23.529 [2024-12-06 01:14:56.165287] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:23.529 [2024-12-06 01:14:56.165314] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:23.529 [2024-12-06 01:14:56.178630] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:23.529 [2024-12-06 01:14:56.179119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:23.529 [2024-12-06 01:14:56.179192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:23.529 [2024-12-06 01:14:56.179218] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:23.529 [2024-12-06 01:14:56.179503] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:23.529 [2024-12-06 01:14:56.179793] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:23.529 [2024-12-06 01:14:56.179838] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:23.529 [2024-12-06 01:14:56.179864] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:23.529 [2024-12-06 01:14:56.179886] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:23.529 [2024-12-06 01:14:56.193313] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:23.529 [2024-12-06 01:14:56.193776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:23.529 [2024-12-06 01:14:56.193829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:23.529 [2024-12-06 01:14:56.193859] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:23.529 [2024-12-06 01:14:56.194148] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:23.529 [2024-12-06 01:14:56.194438] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:23.529 [2024-12-06 01:14:56.194468] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:23.529 [2024-12-06 01:14:56.194490] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:23.529 [2024-12-06 01:14:56.194513] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:23.529 [2024-12-06 01:14:56.207949] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:23.529 [2024-12-06 01:14:56.208408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:23.529 [2024-12-06 01:14:56.208448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:23.529 [2024-12-06 01:14:56.208475] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:23.529 [2024-12-06 01:14:56.208760] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:23.529 [2024-12-06 01:14:56.209061] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:23.529 [2024-12-06 01:14:56.209093] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:23.529 [2024-12-06 01:14:56.209115] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:23.529 [2024-12-06 01:14:56.209136] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:23.529 [2024-12-06 01:14:56.222592] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:23.529 [2024-12-06 01:14:56.223043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:23.529 [2024-12-06 01:14:56.223086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:23.529 [2024-12-06 01:14:56.223112] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:23.529 [2024-12-06 01:14:56.223398] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:23.529 [2024-12-06 01:14:56.223688] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:23.529 [2024-12-06 01:14:56.223719] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:23.529 [2024-12-06 01:14:56.223741] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:23.529 [2024-12-06 01:14:56.223762] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:23.788 [2024-12-06 01:14:56.237227] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:23.788 [2024-12-06 01:14:56.237670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:23.788 [2024-12-06 01:14:56.237711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:23.788 [2024-12-06 01:14:56.237736] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:23.788 [2024-12-06 01:14:56.238032] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:23.788 [2024-12-06 01:14:56.238322] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:23.788 [2024-12-06 01:14:56.238353] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:23.788 [2024-12-06 01:14:56.238375] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:23.788 [2024-12-06 01:14:56.238396] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:23.788 [2024-12-06 01:14:56.251825] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:23.788 [2024-12-06 01:14:56.252275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:23.788 [2024-12-06 01:14:56.252316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:23.789 [2024-12-06 01:14:56.252342] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:23.789 [2024-12-06 01:14:56.252628] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:23.789 [2024-12-06 01:14:56.252930] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:23.789 [2024-12-06 01:14:56.252962] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:23.789 [2024-12-06 01:14:56.252985] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:23.789 [2024-12-06 01:14:56.253007] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:23.789 [2024-12-06 01:14:56.266406] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:23.789 [2024-12-06 01:14:56.266870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:23.789 [2024-12-06 01:14:56.266911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:23.789 [2024-12-06 01:14:56.266944] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:23.789 [2024-12-06 01:14:56.267232] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:23.789 [2024-12-06 01:14:56.267521] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:23.789 [2024-12-06 01:14:56.267552] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:23.789 [2024-12-06 01:14:56.267574] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:23.789 [2024-12-06 01:14:56.267595] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:23.789 [2024-12-06 01:14:56.281028] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:23.789 [2024-12-06 01:14:56.281490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:23.789 [2024-12-06 01:14:56.281531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:23.789 [2024-12-06 01:14:56.281557] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:23.789 [2024-12-06 01:14:56.281857] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:23.789 [2024-12-06 01:14:56.282148] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:23.789 [2024-12-06 01:14:56.282179] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:23.789 [2024-12-06 01:14:56.282201] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:23.789 [2024-12-06 01:14:56.282222] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:23.789 [2024-12-06 01:14:56.295646] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:23.789 [2024-12-06 01:14:56.296122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:23.789 [2024-12-06 01:14:56.296164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:23.789 [2024-12-06 01:14:56.296190] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:23.789 [2024-12-06 01:14:56.296476] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:23.789 [2024-12-06 01:14:56.296766] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:23.789 [2024-12-06 01:14:56.296797] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:23.789 [2024-12-06 01:14:56.296831] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:23.789 [2024-12-06 01:14:56.296856] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:23.789 [2024-12-06 01:14:56.310293] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:23.789 [2024-12-06 01:14:56.310736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:23.789 [2024-12-06 01:14:56.310784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:23.789 [2024-12-06 01:14:56.310810] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:23.789 [2024-12-06 01:14:56.311109] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:23.789 [2024-12-06 01:14:56.311404] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:23.789 [2024-12-06 01:14:56.311435] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:23.789 [2024-12-06 01:14:56.311457] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:23.789 [2024-12-06 01:14:56.311478] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:23.789 [2024-12-06 01:14:56.324911] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:23.789 [2024-12-06 01:14:56.325340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:23.789 [2024-12-06 01:14:56.325381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:23.789 [2024-12-06 01:14:56.325407] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:23.789 [2024-12-06 01:14:56.325693] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:23.789 [2024-12-06 01:14:56.325997] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:23.789 [2024-12-06 01:14:56.326029] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:23.789 [2024-12-06 01:14:56.326051] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:23.789 [2024-12-06 01:14:56.326073] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:23.789 [2024-12-06 01:14:56.339478] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:23.789 [2024-12-06 01:14:56.339945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:23.789 [2024-12-06 01:14:56.340000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:23.789 [2024-12-06 01:14:56.340026] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:23.789 [2024-12-06 01:14:56.340313] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:23.789 [2024-12-06 01:14:56.340601] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:23.789 [2024-12-06 01:14:56.340632] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:23.789 [2024-12-06 01:14:56.340654] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:23.789 [2024-12-06 01:14:56.340674] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:23.789 [2024-12-06 01:14:56.354093] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:23.789 [2024-12-06 01:14:56.354559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:23.789 [2024-12-06 01:14:56.354600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:23.789 [2024-12-06 01:14:56.354625] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:23.789 [2024-12-06 01:14:56.354925] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:23.789 [2024-12-06 01:14:56.355214] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:23.789 [2024-12-06 01:14:56.355251] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:23.789 [2024-12-06 01:14:56.355274] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:23.789 [2024-12-06 01:14:56.355295] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:23.789 [2024-12-06 01:14:56.368711] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:23.789 [2024-12-06 01:14:56.369154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:23.789 [2024-12-06 01:14:56.369195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:23.789 [2024-12-06 01:14:56.369221] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:23.789 [2024-12-06 01:14:56.369507] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:23.789 [2024-12-06 01:14:56.369796] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:23.789 [2024-12-06 01:14:56.369841] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:23.789 [2024-12-06 01:14:56.369865] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:23.789 [2024-12-06 01:14:56.369886] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:23.789 [2024-12-06 01:14:56.383306] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:23.789 [2024-12-06 01:14:56.383778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:23.789 [2024-12-06 01:14:56.383829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:23.789 [2024-12-06 01:14:56.383857] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:23.789 [2024-12-06 01:14:56.384141] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:23.789 [2024-12-06 01:14:56.384432] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:23.789 [2024-12-06 01:14:56.384463] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:23.789 [2024-12-06 01:14:56.384485] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:23.789 [2024-12-06 01:14:56.384507] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:23.789 [2024-12-06 01:14:56.397944] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:23.790 [2024-12-06 01:14:56.398415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:23.790 [2024-12-06 01:14:56.398456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:23.790 [2024-12-06 01:14:56.398483] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:23.790 [2024-12-06 01:14:56.398770] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:23.790 [2024-12-06 01:14:56.399071] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:23.790 [2024-12-06 01:14:56.399103] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:23.790 [2024-12-06 01:14:56.399124] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:23.790 [2024-12-06 01:14:56.399153] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:23.790 [2024-12-06 01:14:56.412569] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:23.790 [2024-12-06 01:14:56.413025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:23.790 [2024-12-06 01:14:56.413066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:23.790 [2024-12-06 01:14:56.413092] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:23.790 [2024-12-06 01:14:56.413379] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:23.790 [2024-12-06 01:14:56.413668] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:23.790 [2024-12-06 01:14:56.413699] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:23.790 [2024-12-06 01:14:56.413721] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:23.790 [2024-12-06 01:14:56.413742] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:23.790 [2024-12-06 01:14:56.427179] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:23.790 [2024-12-06 01:14:56.427723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:23.790 [2024-12-06 01:14:56.427764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:23.790 [2024-12-06 01:14:56.427790] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:23.790 [2024-12-06 01:14:56.428088] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:23.790 [2024-12-06 01:14:56.428380] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:23.790 [2024-12-06 01:14:56.428411] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:23.790 [2024-12-06 01:14:56.428433] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:23.790 [2024-12-06 01:14:56.428454] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:23.790 [2024-12-06 01:14:56.441644] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:23.790 [2024-12-06 01:14:56.442101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:23.790 [2024-12-06 01:14:56.442141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:23.790 [2024-12-06 01:14:56.442167] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:23.790 [2024-12-06 01:14:56.442452] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:23.790 [2024-12-06 01:14:56.442741] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:23.790 [2024-12-06 01:14:56.442772] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:23.790 [2024-12-06 01:14:56.442794] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:23.790 [2024-12-06 01:14:56.442827] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:23.790 [2024-12-06 01:14:56.456286] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:23.790 [2024-12-06 01:14:56.456728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:23.790 [2024-12-06 01:14:56.456770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:23.790 [2024-12-06 01:14:56.456797] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:23.790 [2024-12-06 01:14:56.457096] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:23.790 [2024-12-06 01:14:56.457387] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:23.790 [2024-12-06 01:14:56.457419] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:23.790 [2024-12-06 01:14:56.457441] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:23.790 [2024-12-06 01:14:56.457462] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:23.790 [2024-12-06 01:14:56.470891] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:23.790 [2024-12-06 01:14:56.471358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:23.790 [2024-12-06 01:14:56.471399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:23.790 [2024-12-06 01:14:56.471426] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:23.790 [2024-12-06 01:14:56.471712] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:23.790 [2024-12-06 01:14:56.472013] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:23.790 [2024-12-06 01:14:56.472045] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:23.790 [2024-12-06 01:14:56.472067] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:23.790 [2024-12-06 01:14:56.472089] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:23.790 [2024-12-06 01:14:56.485521] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:23.790 [2024-12-06 01:14:56.485970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:23.790 [2024-12-06 01:14:56.486011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:23.790 [2024-12-06 01:14:56.486038] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:23.790 [2024-12-06 01:14:56.486323] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:23.790 [2024-12-06 01:14:56.486612] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:23.790 [2024-12-06 01:14:56.486643] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:23.790 [2024-12-06 01:14:56.486665] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:23.790 [2024-12-06 01:14:56.486686] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:24.050 [2024-12-06 01:14:56.500176] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:24.050 [2024-12-06 01:14:56.500656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:24.050 [2024-12-06 01:14:56.500707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:24.050 [2024-12-06 01:14:56.500739] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:24.050 [2024-12-06 01:14:56.501041] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:24.050 [2024-12-06 01:14:56.501339] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:24.050 [2024-12-06 01:14:56.501370] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:24.050 [2024-12-06 01:14:56.501392] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:24.050 [2024-12-06 01:14:56.501413] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:24.050 [2024-12-06 01:14:56.514735] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:24.050 [2024-12-06 01:14:56.515214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:24.050 [2024-12-06 01:14:56.515257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:24.050 [2024-12-06 01:14:56.515283] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:24.050 [2024-12-06 01:14:56.515580] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:24.050 [2024-12-06 01:14:56.515886] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:24.050 [2024-12-06 01:14:56.515919] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:24.050 [2024-12-06 01:14:56.515940] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:24.050 [2024-12-06 01:14:56.515961] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:24.050 [2024-12-06 01:14:56.529361] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:24.050 [2024-12-06 01:14:56.529770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:24.050 [2024-12-06 01:14:56.529811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:24.050 [2024-12-06 01:14:56.529848] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:24.050 [2024-12-06 01:14:56.530137] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:24.050 [2024-12-06 01:14:56.530442] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:24.050 [2024-12-06 01:14:56.530474] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:24.050 [2024-12-06 01:14:56.530499] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:24.050 [2024-12-06 01:14:56.530523] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:24.050 [2024-12-06 01:14:56.544079] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:24.050 [2024-12-06 01:14:56.544638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:24.050 [2024-12-06 01:14:56.544697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:24.050 [2024-12-06 01:14:56.544723] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:24.050 [2024-12-06 01:14:56.545037] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:24.050 [2024-12-06 01:14:56.545337] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:24.050 [2024-12-06 01:14:56.545368] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:24.050 [2024-12-06 01:14:56.545390] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:24.050 [2024-12-06 01:14:56.545413] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:24.050 [2024-12-06 01:14:56.558699] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:24.050 [2024-12-06 01:14:56.559245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:24.050 [2024-12-06 01:14:56.559287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:24.050 [2024-12-06 01:14:56.559313] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:24.050 [2024-12-06 01:14:56.559602] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:24.050 [2024-12-06 01:14:56.559909] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:24.050 [2024-12-06 01:14:56.559941] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:24.050 [2024-12-06 01:14:56.559963] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:24.050 [2024-12-06 01:14:56.559984] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:24.051 [2024-12-06 01:14:56.573237] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:24.051 [2024-12-06 01:14:56.573670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:24.051 [2024-12-06 01:14:56.573711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:24.051 [2024-12-06 01:14:56.573736] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:24.051 [2024-12-06 01:14:56.574036] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:24.051 [2024-12-06 01:14:56.574329] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:24.051 [2024-12-06 01:14:56.574360] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:24.051 [2024-12-06 01:14:56.574382] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:24.051 [2024-12-06 01:14:56.574403] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:24.051 [2024-12-06 01:14:56.587937] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:24.051 [2024-12-06 01:14:56.588381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:24.051 [2024-12-06 01:14:56.588440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:24.051 [2024-12-06 01:14:56.588467] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:24.051 [2024-12-06 01:14:56.588755] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:24.051 [2024-12-06 01:14:56.589058] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:24.051 [2024-12-06 01:14:56.589091] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:24.051 [2024-12-06 01:14:56.589119] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:24.051 [2024-12-06 01:14:56.589143] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:24.051 [2024-12-06 01:14:56.602665] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:24.051 [2024-12-06 01:14:56.603104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:24.051 [2024-12-06 01:14:56.603167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:24.051 [2024-12-06 01:14:56.603193] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:24.051 [2024-12-06 01:14:56.603483] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:24.051 [2024-12-06 01:14:56.603773] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:24.051 [2024-12-06 01:14:56.603804] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:24.051 [2024-12-06 01:14:56.603840] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:24.051 [2024-12-06 01:14:56.603863] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:24.051 [2024-12-06 01:14:56.617188] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:24.051 [2024-12-06 01:14:56.617662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:24.051 [2024-12-06 01:14:56.617703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:24.051 [2024-12-06 01:14:56.617730] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:24.051 [2024-12-06 01:14:56.618031] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:24.051 [2024-12-06 01:14:56.618323] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:24.051 [2024-12-06 01:14:56.618355] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:24.051 [2024-12-06 01:14:56.618377] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:24.051 [2024-12-06 01:14:56.618398] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:24.051 [2024-12-06 01:14:56.631730] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:24.051 [2024-12-06 01:14:56.632221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:24.051 [2024-12-06 01:14:56.632280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:24.051 [2024-12-06 01:14:56.632307] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:24.051 [2024-12-06 01:14:56.632595] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:24.051 [2024-12-06 01:14:56.632899] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:24.051 [2024-12-06 01:14:56.632932] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:24.051 [2024-12-06 01:14:56.632954] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:24.051 [2024-12-06 01:14:56.632975] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:24.051 [2024-12-06 01:14:56.646274] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:24.051 [2024-12-06 01:14:56.646748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:24.051 [2024-12-06 01:14:56.646789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:24.051 [2024-12-06 01:14:56.646825] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:24.051 [2024-12-06 01:14:56.647116] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:24.051 [2024-12-06 01:14:56.647408] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:24.051 [2024-12-06 01:14:56.647438] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:24.051 [2024-12-06 01:14:56.647461] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:24.051 [2024-12-06 01:14:56.647481] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:24.051 [2024-12-06 01:14:56.660759] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:24.051 [2024-12-06 01:14:56.661307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:24.051 [2024-12-06 01:14:56.661349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:24.051 [2024-12-06 01:14:56.661375] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:24.051 [2024-12-06 01:14:56.661662] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:24.051 [2024-12-06 01:14:56.661965] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:24.051 [2024-12-06 01:14:56.661997] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:24.051 [2024-12-06 01:14:56.662019] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:24.051 [2024-12-06 01:14:56.662041] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:24.051 [2024-12-06 01:14:56.675360] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:24.051 [2024-12-06 01:14:56.675835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:24.051 [2024-12-06 01:14:56.675877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:24.051 [2024-12-06 01:14:56.675903] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:24.051 [2024-12-06 01:14:56.676192] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:24.051 [2024-12-06 01:14:56.676484] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:24.051 [2024-12-06 01:14:56.676515] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:24.051 [2024-12-06 01:14:56.676537] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:24.051 [2024-12-06 01:14:56.676558] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:24.051 [2024-12-06 01:14:56.689868] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:24.051 [2024-12-06 01:14:56.690338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:24.051 [2024-12-06 01:14:56.690385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:24.051 [2024-12-06 01:14:56.690412] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:24.051 [2024-12-06 01:14:56.690701] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:24.051 [2024-12-06 01:14:56.691008] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:24.051 [2024-12-06 01:14:56.691041] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:24.051 [2024-12-06 01:14:56.691064] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:24.051 [2024-12-06 01:14:56.691085] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:24.051 [2024-12-06 01:14:56.704369] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:24.051 [2024-12-06 01:14:56.704844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:24.051 [2024-12-06 01:14:56.704896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:24.051 [2024-12-06 01:14:56.704923] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:24.051 [2024-12-06 01:14:56.705214] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:24.051 [2024-12-06 01:14:56.705506] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:24.051 [2024-12-06 01:14:56.705537] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:24.051 [2024-12-06 01:14:56.705558] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:24.052 [2024-12-06 01:14:56.705579] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
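Note: every failed attempt in this flood ends with the same bdev_nvme_reset_ctrlr_complete "Resetting controller failed." message, so counting those lines gives the number of retries in this window. A sketch, assuming the console output has been saved to a local file (the filename is illustrative, not produced by the job):

grep -c 'Resetting controller failed.' console.log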
00:37:24.052 2688.40 IOPS, 10.50 MiB/s [2024-12-06T00:14:56.761Z] [2024-12-06 01:14:56.720689] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:24.052 [2024-12-06 01:14:56.721156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:24.052 [2024-12-06 01:14:56.721198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:24.052 [2024-12-06 01:14:56.721224] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:24.052 [2024-12-06 01:14:56.721512] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:24.052 [2024-12-06 01:14:56.721805] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:24.052 [2024-12-06 01:14:56.721848] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:24.052 [2024-12-06 01:14:56.721871] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:24.052 [2024-12-06 01:14:56.721892] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:24.052 [2024-12-06 01:14:56.735387] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:24.052 [2024-12-06 01:14:56.735833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:24.052 [2024-12-06 01:14:56.735874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:24.052 [2024-12-06 01:14:56.735900] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:24.052 [2024-12-06 01:14:56.736193] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:24.052 [2024-12-06 01:14:56.736484] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:24.052 [2024-12-06 01:14:56.736515] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:24.052 [2024-12-06 01:14:56.736538] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:24.052 [2024-12-06 01:14:56.736559] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
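Note: the bdevperf interval stats above (2688.40 IOPS at 10.50 MiB/s) work out to roughly 4 KiB per I/O. A minimal shell check of that arithmetic, using only the two numbers from the log line (the ~4 KiB figure is an inference; the log does not state the I/O size here):

awk 'BEGIN { printf "avg I/O size = %.2f KiB\n", (10.50 * 1024) / 2688.40 }'
# prints: avg I/O size = 4.00 KiB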
00:37:24.052 [2024-12-06 01:14:56.750026] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:24.052 [2024-12-06 01:14:56.750433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:24.052 [2024-12-06 01:14:56.750475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:24.052 [2024-12-06 01:14:56.750501] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:24.052 [2024-12-06 01:14:56.750791] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:24.052 [2024-12-06 01:14:56.751093] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:24.052 [2024-12-06 01:14:56.751138] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:24.052 [2024-12-06 01:14:56.751161] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:24.052 [2024-12-06 01:14:56.751183] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:24.312 [2024-12-06 01:14:56.764700] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:24.312 [2024-12-06 01:14:56.765173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:24.312 [2024-12-06 01:14:56.765214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:24.312 [2024-12-06 01:14:56.765240] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:24.312 [2024-12-06 01:14:56.765527] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:24.312 [2024-12-06 01:14:56.765828] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:24.312 [2024-12-06 01:14:56.765860] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:24.312 [2024-12-06 01:14:56.765882] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:24.312 [2024-12-06 01:14:56.765912] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:24.312 [2024-12-06 01:14:56.779208] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:24.312 [2024-12-06 01:14:56.779656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:24.312 [2024-12-06 01:14:56.779699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:24.312 [2024-12-06 01:14:56.779725] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:24.312 [2024-12-06 01:14:56.780029] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:24.312 [2024-12-06 01:14:56.780327] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:24.312 [2024-12-06 01:14:56.780358] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:24.312 [2024-12-06 01:14:56.780381] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:24.312 [2024-12-06 01:14:56.780402] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:24.312 [2024-12-06 01:14:56.793868] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:24.312 [2024-12-06 01:14:56.794321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:24.312 [2024-12-06 01:14:56.794362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:24.312 [2024-12-06 01:14:56.794388] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:24.312 [2024-12-06 01:14:56.794674] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:24.312 [2024-12-06 01:14:56.794977] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:24.312 [2024-12-06 01:14:56.795010] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:24.312 [2024-12-06 01:14:56.795033] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:24.312 [2024-12-06 01:14:56.795055] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:24.312 [2024-12-06 01:14:56.808488] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:24.312 [2024-12-06 01:14:56.808991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:24.312 [2024-12-06 01:14:56.809033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:24.312 [2024-12-06 01:14:56.809060] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:24.312 [2024-12-06 01:14:56.809347] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:24.312 [2024-12-06 01:14:56.809637] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:24.312 [2024-12-06 01:14:56.809668] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:24.312 [2024-12-06 01:14:56.809690] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:24.312 [2024-12-06 01:14:56.809712] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:24.312 [2024-12-06 01:14:56.823219] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:24.312 [2024-12-06 01:14:56.823705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:24.312 [2024-12-06 01:14:56.823747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:24.312 [2024-12-06 01:14:56.823774] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:24.312 [2024-12-06 01:14:56.824083] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:24.312 [2024-12-06 01:14:56.824383] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:24.312 [2024-12-06 01:14:56.824414] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:24.312 [2024-12-06 01:14:56.824443] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:24.312 [2024-12-06 01:14:56.824465] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:24.312 [2024-12-06 01:14:56.837699] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:24.312 [2024-12-06 01:14:56.838176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:24.312 [2024-12-06 01:14:56.838220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:24.312 [2024-12-06 01:14:56.838246] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:24.312 [2024-12-06 01:14:56.838532] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:24.312 [2024-12-06 01:14:56.838834] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:24.312 [2024-12-06 01:14:56.838865] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:24.312 [2024-12-06 01:14:56.838887] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:24.312 [2024-12-06 01:14:56.838908] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:24.312 [2024-12-06 01:14:56.852375] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:24.312 [2024-12-06 01:14:56.852830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:24.312 [2024-12-06 01:14:56.852872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:24.312 [2024-12-06 01:14:56.852898] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:24.312 [2024-12-06 01:14:56.853184] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:24.312 [2024-12-06 01:14:56.853472] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:24.312 [2024-12-06 01:14:56.853504] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:24.312 [2024-12-06 01:14:56.853527] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:24.312 [2024-12-06 01:14:56.853548] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:24.312 [2024-12-06 01:14:56.866973] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:24.312 [2024-12-06 01:14:56.867478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:24.312 [2024-12-06 01:14:56.867519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:24.312 [2024-12-06 01:14:56.867545] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:24.312 [2024-12-06 01:14:56.867839] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:24.312 [2024-12-06 01:14:56.868131] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:24.312 [2024-12-06 01:14:56.868162] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:24.312 [2024-12-06 01:14:56.868185] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:24.312 [2024-12-06 01:14:56.868206] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:24.312 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 3137361 Killed "${NVMF_APP[@]}" "$@" 00:37:24.312 01:14:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:37:24.312 01:14:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:37:24.312 01:14:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:37:24.312 01:14:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:24.312 01:14:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:24.312 01:14:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=3138574 00:37:24.312 01:14:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:37:24.313 01:14:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 3138574 00:37:24.313 01:14:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 3138574 ']' 00:37:24.313 01:14:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:24.313 01:14:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:24.313 01:14:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:24.313 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:37:24.313 01:14:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:24.313 01:14:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:24.313 [2024-12-06 01:14:56.881460] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:24.313 [2024-12-06 01:14:56.881903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:24.313 [2024-12-06 01:14:56.881945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:24.313 [2024-12-06 01:14:56.881972] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:24.313 [2024-12-06 01:14:56.882259] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:24.313 [2024-12-06 01:14:56.882549] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:24.313 [2024-12-06 01:14:56.882580] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:24.313 [2024-12-06 01:14:56.882602] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:24.313 [2024-12-06 01:14:56.882624] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:24.313 [2024-12-06 01:14:56.896142] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:24.313 [2024-12-06 01:14:56.896601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:24.313 [2024-12-06 01:14:56.896643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:24.313 [2024-12-06 01:14:56.896669] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:24.313 [2024-12-06 01:14:56.896979] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:24.313 [2024-12-06 01:14:56.897269] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:24.313 [2024-12-06 01:14:56.897300] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:24.313 [2024-12-06 01:14:56.897322] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:24.313 [2024-12-06 01:14:56.897350] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
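Note: the "Killed" line and the tgt_init/nvmfappstart calls above show bdevperf.sh tearing down the old nvmf_tgt and launching a fresh one inside the cvl_0_0_ns_spdk namespace. A minimal sketch of that relaunch, reusing the binary and flags from the log but with a simplified wait loop standing in for the autotest waitforlisten helper:

ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
# crude stand-in for waitforlisten: poll for the default RPC unix socket
while [ ! -S /var/tmp/spdk.sock ]; do sleep 0.5; done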
00:37:24.313 [2024-12-06 01:14:56.910281] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:24.313 [2024-12-06 01:14:56.910830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:24.313 [2024-12-06 01:14:56.910869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:24.313 [2024-12-06 01:14:56.910893] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:24.313 [2024-12-06 01:14:56.911176] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:24.313 [2024-12-06 01:14:56.911437] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:24.313 [2024-12-06 01:14:56.911463] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:24.313 [2024-12-06 01:14:56.911482] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:24.313 [2024-12-06 01:14:56.911501] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:24.313 [2024-12-06 01:14:56.924251] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:24.313 [2024-12-06 01:14:56.924868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:24.313 [2024-12-06 01:14:56.924914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:24.313 [2024-12-06 01:14:56.924941] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:24.313 [2024-12-06 01:14:56.925240] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:24.313 [2024-12-06 01:14:56.925486] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:24.313 [2024-12-06 01:14:56.925514] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:24.313 [2024-12-06 01:14:56.925535] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:24.313 [2024-12-06 01:14:56.925555] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:24.313 [2024-12-06 01:14:56.938198] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:24.313 [2024-12-06 01:14:56.938631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:24.313 [2024-12-06 01:14:56.938681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:24.313 [2024-12-06 01:14:56.938705] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:24.313 [2024-12-06 01:14:56.939006] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:24.313 [2024-12-06 01:14:56.939274] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:24.313 [2024-12-06 01:14:56.939310] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:24.313 [2024-12-06 01:14:56.939329] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:24.313 [2024-12-06 01:14:56.939347] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:24.313 [2024-12-06 01:14:56.952191] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:24.313 [2024-12-06 01:14:56.952599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:24.313 [2024-12-06 01:14:56.952636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:24.313 [2024-12-06 01:14:56.952660] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:24.313 [2024-12-06 01:14:56.952962] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:24.313 [2024-12-06 01:14:56.953231] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:24.313 [2024-12-06 01:14:56.953257] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:24.313 [2024-12-06 01:14:56.953275] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:24.313 [2024-12-06 01:14:56.953308] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:24.313 [2024-12-06 01:14:56.966171] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 24.03.0 initialization... 
00:37:24.313 [2024-12-06 01:14:56.966203] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:24.313 [2024-12-06 01:14:56.966292] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:24.313 [2024-12-06 01:14:56.966659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:24.313 [2024-12-06 01:14:56.966697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:24.313 [2024-12-06 01:14:56.966720] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:24.313 [2024-12-06 01:14:56.967023] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:24.313 [2024-12-06 01:14:56.967286] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:24.313 [2024-12-06 01:14:56.967311] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:24.313 [2024-12-06 01:14:56.967330] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:24.313 [2024-12-06 01:14:56.967349] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:24.313 [2024-12-06 01:14:56.980108] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:24.313 [2024-12-06 01:14:56.980487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:24.313 [2024-12-06 01:14:56.980522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:24.313 [2024-12-06 01:14:56.980545] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:24.313 [2024-12-06 01:14:56.980789] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:24.313 [2024-12-06 01:14:56.981095] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:24.313 [2024-12-06 01:14:56.981138] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:24.313 [2024-12-06 01:14:56.981158] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:24.313 [2024-12-06 01:14:56.981177] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:24.313 [2024-12-06 01:14:56.994118] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:24.313 [2024-12-06 01:14:56.994567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:24.313 [2024-12-06 01:14:56.994604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:24.313 [2024-12-06 01:14:56.994627] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:24.313 [2024-12-06 01:14:56.994946] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:24.313 [2024-12-06 01:14:56.995243] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:24.313 [2024-12-06 01:14:56.995270] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:24.313 [2024-12-06 01:14:56.995289] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:24.314 [2024-12-06 01:14:56.995307] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:24.314 [2024-12-06 01:14:57.008226] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:24.314 [2024-12-06 01:14:57.008719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:24.314 [2024-12-06 01:14:57.008770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:24.314 [2024-12-06 01:14:57.008795] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:24.314 [2024-12-06 01:14:57.009068] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:24.314 [2024-12-06 01:14:57.009349] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:24.314 [2024-12-06 01:14:57.009375] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:24.314 [2024-12-06 01:14:57.009394] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:24.314 [2024-12-06 01:14:57.009412] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:24.574 [2024-12-06 01:14:57.022449] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:24.574 [2024-12-06 01:14:57.022958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:24.574 [2024-12-06 01:14:57.022995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:24.574 [2024-12-06 01:14:57.023018] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:24.574 [2024-12-06 01:14:57.023323] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:24.574 [2024-12-06 01:14:57.023648] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:24.574 [2024-12-06 01:14:57.023678] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:24.574 [2024-12-06 01:14:57.023713] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:24.574 [2024-12-06 01:14:57.023733] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:24.574 [2024-12-06 01:14:57.036443] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:24.574 [2024-12-06 01:14:57.036885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:24.574 [2024-12-06 01:14:57.036941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:24.574 [2024-12-06 01:14:57.036966] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:24.574 [2024-12-06 01:14:57.037271] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:24.574 [2024-12-06 01:14:57.037514] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:24.574 [2024-12-06 01:14:57.037541] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:24.574 [2024-12-06 01:14:57.037560] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:24.574 [2024-12-06 01:14:57.037577] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:24.574 [2024-12-06 01:14:57.050358] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:24.574 [2024-12-06 01:14:57.050805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:24.574 [2024-12-06 01:14:57.050851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:24.574 [2024-12-06 01:14:57.050876] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:24.574 [2024-12-06 01:14:57.051167] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:24.574 [2024-12-06 01:14:57.051409] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:24.574 [2024-12-06 01:14:57.051435] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:24.574 [2024-12-06 01:14:57.051454] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:24.574 [2024-12-06 01:14:57.051472] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:24.574 [2024-12-06 01:14:57.064558] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:24.574 [2024-12-06 01:14:57.065018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:24.574 [2024-12-06 01:14:57.065055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:24.574 [2024-12-06 01:14:57.065079] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:24.574 [2024-12-06 01:14:57.065366] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:24.574 [2024-12-06 01:14:57.065609] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:24.574 [2024-12-06 01:14:57.065634] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:24.574 [2024-12-06 01:14:57.065653] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:24.574 [2024-12-06 01:14:57.065671] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:24.574 [2024-12-06 01:14:57.079138] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:24.574 [2024-12-06 01:14:57.079626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:24.574 [2024-12-06 01:14:57.079667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:24.574 [2024-12-06 01:14:57.079692] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:24.574 [2024-12-06 01:14:57.080019] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:24.574 [2024-12-06 01:14:57.080282] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:24.574 [2024-12-06 01:14:57.080308] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:24.574 [2024-12-06 01:14:57.080326] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:24.574 [2024-12-06 01:14:57.080344] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:24.574 [2024-12-06 01:14:57.093775] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:24.574 [2024-12-06 01:14:57.094366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:24.574 [2024-12-06 01:14:57.094407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:24.574 [2024-12-06 01:14:57.094434] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:24.574 [2024-12-06 01:14:57.094726] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:24.574 [2024-12-06 01:14:57.095016] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:24.574 [2024-12-06 01:14:57.095042] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:24.574 [2024-12-06 01:14:57.095061] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:24.574 [2024-12-06 01:14:57.095079] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:24.574 [2024-12-06 01:14:57.108382] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:24.574 [2024-12-06 01:14:57.108830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:24.574 [2024-12-06 01:14:57.108885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:24.574 [2024-12-06 01:14:57.108909] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:24.574 [2024-12-06 01:14:57.109194] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:24.574 [2024-12-06 01:14:57.109436] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:24.574 [2024-12-06 01:14:57.109462] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:24.575 [2024-12-06 01:14:57.109481] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:24.575 [2024-12-06 01:14:57.109499] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:24.575 [2024-12-06 01:14:57.123090] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:24.575 [2024-12-06 01:14:57.123556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:24.575 [2024-12-06 01:14:57.123599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:24.575 [2024-12-06 01:14:57.123625] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:24.575 [2024-12-06 01:14:57.123931] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:24.575 [2024-12-06 01:14:57.124197] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:24.575 [2024-12-06 01:14:57.124229] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:24.575 [2024-12-06 01:14:57.124248] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:24.575 [2024-12-06 01:14:57.124266] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:24.575 [2024-12-06 01:14:57.128199] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:37:24.575 [2024-12-06 01:14:57.137692] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:24.575 [2024-12-06 01:14:57.138158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:24.575 [2024-12-06 01:14:57.138199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:24.575 [2024-12-06 01:14:57.138225] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:24.575 [2024-12-06 01:14:57.138517] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:24.575 [2024-12-06 01:14:57.138811] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:24.575 [2024-12-06 01:14:57.138867] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:24.575 [2024-12-06 01:14:57.138888] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:24.575 [2024-12-06 01:14:57.138906] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:24.575 [2024-12-06 01:14:57.152476] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:24.575 [2024-12-06 01:14:57.153112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:24.575 [2024-12-06 01:14:57.153160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:24.575 [2024-12-06 01:14:57.153188] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:24.575 [2024-12-06 01:14:57.153502] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:24.575 [2024-12-06 01:14:57.153750] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:24.575 [2024-12-06 01:14:57.153777] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:24.575 [2024-12-06 01:14:57.153821] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:24.575 [2024-12-06 01:14:57.153849] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:24.575 [2024-12-06 01:14:57.167311] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:24.575 [2024-12-06 01:14:57.167837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:24.575 [2024-12-06 01:14:57.167893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:24.575 [2024-12-06 01:14:57.167918] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:24.575 [2024-12-06 01:14:57.168208] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:24.575 [2024-12-06 01:14:57.168452] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:24.575 [2024-12-06 01:14:57.168478] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:24.575 [2024-12-06 01:14:57.168501] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:24.575 [2024-12-06 01:14:57.168520] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:24.575 [2024-12-06 01:14:57.182221] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:24.575 [2024-12-06 01:14:57.182697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:24.575 [2024-12-06 01:14:57.182738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:24.575 [2024-12-06 01:14:57.182764] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:24.575 [2024-12-06 01:14:57.183093] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:24.575 [2024-12-06 01:14:57.183368] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:24.575 [2024-12-06 01:14:57.183394] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:24.575 [2024-12-06 01:14:57.183412] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:24.575 [2024-12-06 01:14:57.183430] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:24.575 [2024-12-06 01:14:57.196898] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:24.575 [2024-12-06 01:14:57.197333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:24.575 [2024-12-06 01:14:57.197389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:24.575 [2024-12-06 01:14:57.197416] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:24.575 [2024-12-06 01:14:57.197709] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:24.575 [2024-12-06 01:14:57.198012] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:24.575 [2024-12-06 01:14:57.198040] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:24.575 [2024-12-06 01:14:57.198060] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:24.575 [2024-12-06 01:14:57.198078] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:24.575 [2024-12-06 01:14:57.211527] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:24.575 [2024-12-06 01:14:57.212007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:24.575 [2024-12-06 01:14:57.212044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:24.575 [2024-12-06 01:14:57.212067] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:24.575 [2024-12-06 01:14:57.212356] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:24.575 [2024-12-06 01:14:57.212599] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:24.575 [2024-12-06 01:14:57.212624] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:24.575 [2024-12-06 01:14:57.212643] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:24.575 [2024-12-06 01:14:57.212661] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:24.575 [2024-12-06 01:14:57.226097] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:24.575 [2024-12-06 01:14:57.226574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:24.575 [2024-12-06 01:14:57.226615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:24.575 [2024-12-06 01:14:57.226651] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:24.575 [2024-12-06 01:14:57.226958] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:24.575 [2024-12-06 01:14:57.227222] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:24.575 [2024-12-06 01:14:57.227248] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:24.575 [2024-12-06 01:14:57.227267] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:24.575 [2024-12-06 01:14:57.227285] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:24.575 [2024-12-06 01:14:57.240681] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:24.575 [2024-12-06 01:14:57.241235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:24.575 [2024-12-06 01:14:57.241277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:24.575 [2024-12-06 01:14:57.241303] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:24.575 [2024-12-06 01:14:57.241596] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:24.575 [2024-12-06 01:14:57.241905] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:24.575 [2024-12-06 01:14:57.241934] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:24.575 [2024-12-06 01:14:57.241953] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:24.575 [2024-12-06 01:14:57.241973] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:24.575 [2024-12-06 01:14:57.255548] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:24.575 [2024-12-06 01:14:57.256005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:24.575 [2024-12-06 01:14:57.256044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:24.575 [2024-12-06 01:14:57.256067] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:24.575 [2024-12-06 01:14:57.256368] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:24.576 [2024-12-06 01:14:57.256613] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:24.576 [2024-12-06 01:14:57.256638] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:24.576 [2024-12-06 01:14:57.256657] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:24.576 [2024-12-06 01:14:57.256674] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:24.576 [2024-12-06 01:14:57.267758] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:24.576 [2024-12-06 01:14:57.267807] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:24.576 [2024-12-06 01:14:57.267863] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:24.576 [2024-12-06 01:14:57.267886] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:24.576 [2024-12-06 01:14:57.267904] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
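
The app_setup_trace notices above describe how this run's tracepoints (group mask 0xFFFF) can be inspected: attach spdk_trace to shared-memory instance 0 of the running nvmf app, or keep a copy of /dev/shm/nvmf_trace.0 for offline decoding. A small sketch of both paths, assuming the trace file still exists when this runs and that /tmp is an acceptable destination; the only command and source path used are the ones named in the notices:

    import shutil
    import subprocess

    # Preserve the trace shared-memory file named in the log for offline analysis.
    # The destination is only an example.
    shutil.copy("/dev/shm/nvmf_trace.0", "/tmp/nvmf_trace.0")

    # Or capture a live snapshot with the command quoted in the notice
    # ('spdk_trace -s nvmf -i 0'); requires spdk_trace in PATH.
    subprocess.run(["spdk_trace", "-s", "nvmf", "-i", "0"], check=False)
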
00:37:24.576 [2024-12-06 01:14:57.270204] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:24.576 [2024-12-06 01:14:57.270444] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:37:24.576 [2024-12-06 01:14:57.270527] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:37:24.576 [2024-12-06 01:14:57.270533] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:37:24.576 [2024-12-06 01:14:57.270659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:24.576 [2024-12-06 01:14:57.270694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:24.576 [2024-12-06 01:14:57.270718] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:24.576 [2024-12-06 01:14:57.271009] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:24.576 [2024-12-06 01:14:57.271269] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:24.576 [2024-12-06 01:14:57.271297] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:24.576 [2024-12-06 01:14:57.271332] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:24.576 [2024-12-06 01:14:57.271351] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:24.836 [2024-12-06 01:14:57.284514] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:24.836 [2024-12-06 01:14:57.285113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:24.836 [2024-12-06 01:14:57.285160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:24.836 [2024-12-06 01:14:57.285188] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:24.836 [2024-12-06 01:14:57.285492] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:24.836 [2024-12-06 01:14:57.285766] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:24.836 [2024-12-06 01:14:57.285796] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:24.836 [2024-12-06 01:14:57.285834] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:24.836 [2024-12-06 01:14:57.285865] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:24.836 [2024-12-06 01:14:57.298728] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:24.836 [2024-12-06 01:14:57.299206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:24.836 [2024-12-06 01:14:57.299244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:24.836 [2024-12-06 01:14:57.299269] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:24.837 [2024-12-06 01:14:57.299565] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:24.837 [2024-12-06 01:14:57.299844] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:24.837 [2024-12-06 01:14:57.299873] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:24.837 [2024-12-06 01:14:57.299899] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:24.837 [2024-12-06 01:14:57.299920] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:24.837 [2024-12-06 01:14:57.312913] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:24.837 [2024-12-06 01:14:57.313381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:24.837 [2024-12-06 01:14:57.313419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:24.837 [2024-12-06 01:14:57.313442] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:24.837 [2024-12-06 01:14:57.313730] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:24.837 [2024-12-06 01:14:57.314035] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:24.837 [2024-12-06 01:14:57.314064] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:24.837 [2024-12-06 01:14:57.314084] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:24.837 [2024-12-06 01:14:57.314119] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:24.837 [2024-12-06 01:14:57.327027] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:24.837 [2024-12-06 01:14:57.327474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:24.837 [2024-12-06 01:14:57.327512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:24.837 [2024-12-06 01:14:57.327536] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:24.837 [2024-12-06 01:14:57.327850] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:24.837 [2024-12-06 01:14:57.328146] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:24.837 [2024-12-06 01:14:57.328174] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:24.837 [2024-12-06 01:14:57.328208] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:24.837 [2024-12-06 01:14:57.328227] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:24.837 [2024-12-06 01:14:57.341300] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:24.837 [2024-12-06 01:14:57.341743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:24.837 [2024-12-06 01:14:57.341780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:24.837 [2024-12-06 01:14:57.341803] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:24.837 [2024-12-06 01:14:57.342095] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:24.837 [2024-12-06 01:14:57.342367] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:24.837 [2024-12-06 01:14:57.342393] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:24.837 [2024-12-06 01:14:57.342413] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:24.837 [2024-12-06 01:14:57.342432] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:24.837 [2024-12-06 01:14:57.355561] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:24.837 [2024-12-06 01:14:57.356285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:24.837 [2024-12-06 01:14:57.356335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:24.837 [2024-12-06 01:14:57.356363] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:24.837 [2024-12-06 01:14:57.356663] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:24.837 [2024-12-06 01:14:57.356951] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:24.837 [2024-12-06 01:14:57.356981] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:24.837 [2024-12-06 01:14:57.357003] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:24.837 [2024-12-06 01:14:57.357028] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:24.837 [2024-12-06 01:14:57.369856] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:24.837 [2024-12-06 01:14:57.370496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:24.837 [2024-12-06 01:14:57.370547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:24.837 [2024-12-06 01:14:57.370594] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:24.837 [2024-12-06 01:14:57.370931] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:24.837 [2024-12-06 01:14:57.371224] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:24.837 [2024-12-06 01:14:57.371252] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:24.837 [2024-12-06 01:14:57.371275] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:24.837 [2024-12-06 01:14:57.371300] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:24.837 [2024-12-06 01:14:57.384145] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:24.837 [2024-12-06 01:14:57.384563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:24.837 [2024-12-06 01:14:57.384601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:24.837 [2024-12-06 01:14:57.384624] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:24.837 [2024-12-06 01:14:57.384931] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:24.837 [2024-12-06 01:14:57.385229] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:24.837 [2024-12-06 01:14:57.385256] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:24.837 [2024-12-06 01:14:57.385275] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:24.837 [2024-12-06 01:14:57.385293] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:24.837 [2024-12-06 01:14:57.398262] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:24.837 [2024-12-06 01:14:57.398703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:24.837 [2024-12-06 01:14:57.398745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:24.837 [2024-12-06 01:14:57.398769] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:24.837 [2024-12-06 01:14:57.399044] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:24.837 [2024-12-06 01:14:57.399315] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:24.837 [2024-12-06 01:14:57.399342] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:24.837 [2024-12-06 01:14:57.399361] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:24.837 [2024-12-06 01:14:57.399379] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:24.837 [2024-12-06 01:14:57.412476] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:24.837 [2024-12-06 01:14:57.412886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:24.837 [2024-12-06 01:14:57.412923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:24.837 [2024-12-06 01:14:57.412948] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:24.837 [2024-12-06 01:14:57.413241] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:24.837 [2024-12-06 01:14:57.413488] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:24.837 [2024-12-06 01:14:57.413515] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:24.837 [2024-12-06 01:14:57.413534] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:24.837 [2024-12-06 01:14:57.413552] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:24.837 [2024-12-06 01:14:57.426608] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:24.837 [2024-12-06 01:14:57.427036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:24.837 [2024-12-06 01:14:57.427073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:24.837 [2024-12-06 01:14:57.427097] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:24.837 [2024-12-06 01:14:57.427370] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:24.837 [2024-12-06 01:14:57.427618] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:24.837 [2024-12-06 01:14:57.427645] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:24.838 [2024-12-06 01:14:57.427664] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:24.838 [2024-12-06 01:14:57.427682] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:24.838 [2024-12-06 01:14:57.440658] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:24.838 [2024-12-06 01:14:57.441092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:24.838 [2024-12-06 01:14:57.441129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:24.838 [2024-12-06 01:14:57.441152] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:24.838 [2024-12-06 01:14:57.441448] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:24.838 [2024-12-06 01:14:57.441694] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:24.838 [2024-12-06 01:14:57.441721] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:24.838 [2024-12-06 01:14:57.441739] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:24.838 [2024-12-06 01:14:57.441757] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:24.838 [2024-12-06 01:14:57.454609] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:24.838 [2024-12-06 01:14:57.455037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:24.838 [2024-12-06 01:14:57.455075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:24.838 [2024-12-06 01:14:57.455098] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:24.838 [2024-12-06 01:14:57.455381] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:24.838 [2024-12-06 01:14:57.455626] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:24.838 [2024-12-06 01:14:57.455652] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:24.838 [2024-12-06 01:14:57.455672] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:24.838 [2024-12-06 01:14:57.455690] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:24.838 [2024-12-06 01:14:57.468707] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:24.838 [2024-12-06 01:14:57.469114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:24.838 [2024-12-06 01:14:57.469152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:24.838 [2024-12-06 01:14:57.469176] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:24.838 [2024-12-06 01:14:57.469463] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:24.838 [2024-12-06 01:14:57.469709] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:24.838 [2024-12-06 01:14:57.469735] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:24.838 [2024-12-06 01:14:57.469754] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:24.838 [2024-12-06 01:14:57.469772] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:24.838 [2024-12-06 01:14:57.482904] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:24.838 [2024-12-06 01:14:57.483304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:24.838 [2024-12-06 01:14:57.483341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:24.838 [2024-12-06 01:14:57.483364] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:24.838 [2024-12-06 01:14:57.483653] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:24.838 [2024-12-06 01:14:57.483937] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:24.838 [2024-12-06 01:14:57.483965] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:24.838 [2024-12-06 01:14:57.483985] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:24.838 [2024-12-06 01:14:57.484003] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:24.838 [2024-12-06 01:14:57.497231] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:24.838 [2024-12-06 01:14:57.497903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:24.838 [2024-12-06 01:14:57.497955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:24.838 [2024-12-06 01:14:57.497983] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:24.838 [2024-12-06 01:14:57.498262] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:24.838 [2024-12-06 01:14:57.498540] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:24.838 [2024-12-06 01:14:57.498569] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:24.838 [2024-12-06 01:14:57.498593] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:24.838 [2024-12-06 01:14:57.498618] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:24.838 [2024-12-06 01:14:57.511530] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:24.838 [2024-12-06 01:14:57.512146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:24.838 [2024-12-06 01:14:57.512195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:24.838 [2024-12-06 01:14:57.512222] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:24.838 [2024-12-06 01:14:57.512521] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:24.838 [2024-12-06 01:14:57.512776] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:24.838 [2024-12-06 01:14:57.512828] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:24.838 [2024-12-06 01:14:57.512853] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:24.838 [2024-12-06 01:14:57.512877] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:24.838 [2024-12-06 01:14:57.525770] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:24.838 [2024-12-06 01:14:57.526190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:24.838 [2024-12-06 01:14:57.526227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:24.838 [2024-12-06 01:14:57.526250] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:24.838 [2024-12-06 01:14:57.526520] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:24.838 [2024-12-06 01:14:57.526771] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:24.838 [2024-12-06 01:14:57.526813] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:24.838 [2024-12-06 01:14:57.526852] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:24.838 [2024-12-06 01:14:57.526888] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:24.838 [2024-12-06 01:14:57.540084] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:24.838 [2024-12-06 01:14:57.540556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:24.838 [2024-12-06 01:14:57.540593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:24.838 [2024-12-06 01:14:57.540616] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:24.838 [2024-12-06 01:14:57.540895] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:24.838 [2024-12-06 01:14:57.541164] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:24.838 [2024-12-06 01:14:57.541193] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:24.838 [2024-12-06 01:14:57.541214] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:24.838 [2024-12-06 01:14:57.541233] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:25.098 [2024-12-06 01:14:57.554251] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:25.098 [2024-12-06 01:14:57.554627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:25.098 [2024-12-06 01:14:57.554680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:25.098 [2024-12-06 01:14:57.554704] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:25.098 [2024-12-06 01:14:57.554979] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:25.098 [2024-12-06 01:14:57.555270] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:25.098 [2024-12-06 01:14:57.555297] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:25.098 [2024-12-06 01:14:57.555316] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:25.098 [2024-12-06 01:14:57.555334] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:25.098 [2024-12-06 01:14:57.568273] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:25.098 [2024-12-06 01:14:57.568697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:25.098 [2024-12-06 01:14:57.568733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:25.098 [2024-12-06 01:14:57.568756] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:25.098 [2024-12-06 01:14:57.569026] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:25.098 [2024-12-06 01:14:57.569346] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:25.098 [2024-12-06 01:14:57.569374] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:25.098 [2024-12-06 01:14:57.569393] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:25.098 [2024-12-06 01:14:57.569411] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:25.098 [2024-12-06 01:14:57.582378] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:25.098 [2024-12-06 01:14:57.582861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:25.098 [2024-12-06 01:14:57.582899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:25.098 [2024-12-06 01:14:57.582923] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:25.098 [2024-12-06 01:14:57.583211] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:25.098 [2024-12-06 01:14:57.583458] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:25.098 [2024-12-06 01:14:57.583484] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:25.098 [2024-12-06 01:14:57.583503] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:25.099 [2024-12-06 01:14:57.583521] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:25.099 [2024-12-06 01:14:57.596454] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:25.099 [2024-12-06 01:14:57.596851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:25.099 [2024-12-06 01:14:57.596890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:25.099 [2024-12-06 01:14:57.596913] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:25.099 [2024-12-06 01:14:57.597204] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:25.099 [2024-12-06 01:14:57.597454] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:25.099 [2024-12-06 01:14:57.597481] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:25.099 [2024-12-06 01:14:57.597500] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:25.099 [2024-12-06 01:14:57.597518] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:25.099 [2024-12-06 01:14:57.610735] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:25.099 [2024-12-06 01:14:57.611204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:25.099 [2024-12-06 01:14:57.611245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:25.099 [2024-12-06 01:14:57.611269] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:25.099 [2024-12-06 01:14:57.611567] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:25.099 [2024-12-06 01:14:57.611844] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:25.099 [2024-12-06 01:14:57.611872] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:25.099 [2024-12-06 01:14:57.611893] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:25.099 [2024-12-06 01:14:57.611914] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:25.099 [2024-12-06 01:14:57.624859] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:25.099 [2024-12-06 01:14:57.625327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:25.099 [2024-12-06 01:14:57.625370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:25.099 [2024-12-06 01:14:57.625395] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:25.099 [2024-12-06 01:14:57.625688] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:25.099 [2024-12-06 01:14:57.625999] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:25.099 [2024-12-06 01:14:57.626028] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:25.099 [2024-12-06 01:14:57.626048] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:25.099 [2024-12-06 01:14:57.626067] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:25.099 [2024-12-06 01:14:57.639200] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:25.099 [2024-12-06 01:14:57.639631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:25.099 [2024-12-06 01:14:57.639669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:25.099 [2024-12-06 01:14:57.639693] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:25.099 [2024-12-06 01:14:57.639975] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:25.099 [2024-12-06 01:14:57.640265] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:25.099 [2024-12-06 01:14:57.640292] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:25.099 [2024-12-06 01:14:57.640312] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:25.099 [2024-12-06 01:14:57.640331] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:25.099 [2024-12-06 01:14:57.653389] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:25.099 [2024-12-06 01:14:57.653871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:25.099 [2024-12-06 01:14:57.653909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:25.099 [2024-12-06 01:14:57.653933] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:25.099 [2024-12-06 01:14:57.654218] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:25.099 [2024-12-06 01:14:57.654481] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:25.099 [2024-12-06 01:14:57.654507] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:25.099 [2024-12-06 01:14:57.654525] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:25.099 [2024-12-06 01:14:57.654544] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:25.099 [2024-12-06 01:14:57.667648] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:25.099 [2024-12-06 01:14:57.668047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:25.099 [2024-12-06 01:14:57.668086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:25.099 [2024-12-06 01:14:57.668110] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:25.099 [2024-12-06 01:14:57.668389] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:25.099 [2024-12-06 01:14:57.668635] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:25.099 [2024-12-06 01:14:57.668663] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:25.099 [2024-12-06 01:14:57.668681] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:25.099 [2024-12-06 01:14:57.668700] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:25.099 [2024-12-06 01:14:57.681759] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:25.099 [2024-12-06 01:14:57.682191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:25.099 [2024-12-06 01:14:57.682228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:25.099 [2024-12-06 01:14:57.682255] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:25.099 [2024-12-06 01:14:57.682527] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:25.099 [2024-12-06 01:14:57.682773] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:25.099 [2024-12-06 01:14:57.682827] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:25.099 [2024-12-06 01:14:57.682850] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:25.099 [2024-12-06 01:14:57.682886] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:25.099 [2024-12-06 01:14:57.695884] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:25.099 [2024-12-06 01:14:57.696328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:25.099 [2024-12-06 01:14:57.696365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:25.099 [2024-12-06 01:14:57.696388] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:25.099 [2024-12-06 01:14:57.696674] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:25.099 [2024-12-06 01:14:57.696951] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:25.099 [2024-12-06 01:14:57.696979] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:25.099 [2024-12-06 01:14:57.696999] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:25.099 [2024-12-06 01:14:57.697018] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:25.099 [2024-12-06 01:14:57.709945] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:25.099 [2024-12-06 01:14:57.710392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:25.099 [2024-12-06 01:14:57.710430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:25.099 [2024-12-06 01:14:57.710454] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:25.099 [2024-12-06 01:14:57.710758] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:25.099 [2024-12-06 01:14:57.711065] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:25.099 [2024-12-06 01:14:57.711102] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:25.099 [2024-12-06 01:14:57.711143] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:25.099 [2024-12-06 01:14:57.711162] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:25.099 2240.33 IOPS, 8.75 MiB/s [2024-12-06T00:14:57.808Z] [2024-12-06 01:14:57.724035] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:25.099 [2024-12-06 01:14:57.724538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:25.099 [2024-12-06 01:14:57.724576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:25.099 [2024-12-06 01:14:57.724599] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:25.099 [2024-12-06 01:14:57.724889] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:25.099 [2024-12-06 01:14:57.725159] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:25.100 [2024-12-06 01:14:57.725186] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:25.100 [2024-12-06 01:14:57.725204] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:25.100 [2024-12-06 01:14:57.725222] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:25.100 [2024-12-06 01:14:57.738345] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:25.100 [2024-12-06 01:14:57.738847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:25.100 [2024-12-06 01:14:57.738885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:25.100 [2024-12-06 01:14:57.738909] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:25.100 [2024-12-06 01:14:57.739181] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:25.100 [2024-12-06 01:14:57.739433] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:25.100 [2024-12-06 01:14:57.739460] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:25.100 [2024-12-06 01:14:57.739478] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:25.100 [2024-12-06 01:14:57.739496] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:25.100 [2024-12-06 01:14:57.752472] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:25.100 [2024-12-06 01:14:57.752882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:25.100 [2024-12-06 01:14:57.752919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:25.100 [2024-12-06 01:14:57.752942] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:25.100 [2024-12-06 01:14:57.753215] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:25.100 [2024-12-06 01:14:57.753493] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:25.100 [2024-12-06 01:14:57.753522] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:25.100 [2024-12-06 01:14:57.753548] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:25.100 [2024-12-06 01:14:57.753568] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:25.100 [2024-12-06 01:14:57.766514] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:25.100 [2024-12-06 01:14:57.766918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:25.100 [2024-12-06 01:14:57.766955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:25.100 [2024-12-06 01:14:57.766979] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:25.100 [2024-12-06 01:14:57.767250] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:25.100 [2024-12-06 01:14:57.767496] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:25.100 [2024-12-06 01:14:57.767522] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:25.100 [2024-12-06 01:14:57.767555] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:25.100 [2024-12-06 01:14:57.767574] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:25.100 [2024-12-06 01:14:57.780610] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:25.100 [2024-12-06 01:14:57.781021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:25.100 [2024-12-06 01:14:57.781058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:25.100 [2024-12-06 01:14:57.781082] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:25.100 [2024-12-06 01:14:57.781370] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:25.100 [2024-12-06 01:14:57.781617] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:25.100 [2024-12-06 01:14:57.781643] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:25.100 [2024-12-06 01:14:57.781662] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:25.100 [2024-12-06 01:14:57.781679] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:25.100 [2024-12-06 01:14:57.794776] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:25.100 [2024-12-06 01:14:57.795259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:25.100 [2024-12-06 01:14:57.795298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:25.100 [2024-12-06 01:14:57.795322] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:25.100 [2024-12-06 01:14:57.795609] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:25.100 [2024-12-06 01:14:57.795884] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:25.100 [2024-12-06 01:14:57.795913] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:25.100 [2024-12-06 01:14:57.795932] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:25.100 [2024-12-06 01:14:57.795951] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:25.360 [2024-12-06 01:14:57.808922] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:25.360 [2024-12-06 01:14:57.809365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:25.360 [2024-12-06 01:14:57.809402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:25.360 [2024-12-06 01:14:57.809426] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:25.360 [2024-12-06 01:14:57.809724] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:25.360 [2024-12-06 01:14:57.810014] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:25.360 [2024-12-06 01:14:57.810043] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:25.360 [2024-12-06 01:14:57.810063] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:25.360 [2024-12-06 01:14:57.810083] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:25.360 [2024-12-06 01:14:57.823126] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:25.360 [2024-12-06 01:14:57.823518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:25.360 [2024-12-06 01:14:57.823555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:25.360 [2024-12-06 01:14:57.823578] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:25.360 [2024-12-06 01:14:57.823861] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:25.360 [2024-12-06 01:14:57.824117] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:25.360 [2024-12-06 01:14:57.824158] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:25.360 [2024-12-06 01:14:57.824177] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:25.360 [2024-12-06 01:14:57.824196] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:25.360 [2024-12-06 01:14:57.837293] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:25.360 [2024-12-06 01:14:57.837694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:25.360 [2024-12-06 01:14:57.837732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:25.360 [2024-12-06 01:14:57.837755] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:25.360 [2024-12-06 01:14:57.838038] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:25.360 [2024-12-06 01:14:57.838301] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:25.360 [2024-12-06 01:14:57.838328] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:25.360 [2024-12-06 01:14:57.838347] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:25.360 [2024-12-06 01:14:57.838365] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:25.360 [2024-12-06 01:14:57.851428] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:25.360 [2024-12-06 01:14:57.851930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:25.360 [2024-12-06 01:14:57.851968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:25.360 [2024-12-06 01:14:57.851997] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:25.360 [2024-12-06 01:14:57.852285] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:25.360 [2024-12-06 01:14:57.852530] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:25.360 [2024-12-06 01:14:57.852556] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:25.360 [2024-12-06 01:14:57.852575] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:25.360 [2024-12-06 01:14:57.852594] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:25.360 [2024-12-06 01:14:57.865581] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:25.360 [2024-12-06 01:14:57.865993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:25.360 [2024-12-06 01:14:57.866031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:25.360 [2024-12-06 01:14:57.866054] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:25.360 [2024-12-06 01:14:57.866339] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:25.360 [2024-12-06 01:14:57.866585] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:25.361 [2024-12-06 01:14:57.866612] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:25.361 [2024-12-06 01:14:57.866631] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:25.361 [2024-12-06 01:14:57.866648] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:25.361 [2024-12-06 01:14:57.879748] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:25.361 [2024-12-06 01:14:57.880262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:25.361 [2024-12-06 01:14:57.880299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:25.361 [2024-12-06 01:14:57.880323] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:25.361 [2024-12-06 01:14:57.880608] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:25.361 [2024-12-06 01:14:57.880885] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:25.361 [2024-12-06 01:14:57.880914] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:25.361 [2024-12-06 01:14:57.880934] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:25.361 [2024-12-06 01:14:57.880953] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:25.361 [2024-12-06 01:14:57.893925] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:25.361 [2024-12-06 01:14:57.894365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:25.361 [2024-12-06 01:14:57.894402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:25.361 [2024-12-06 01:14:57.894426] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:25.361 [2024-12-06 01:14:57.894720] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:25.361 [2024-12-06 01:14:57.894996] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:25.361 [2024-12-06 01:14:57.895024] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:25.361 [2024-12-06 01:14:57.895044] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:25.361 [2024-12-06 01:14:57.895063] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:25.361 [2024-12-06 01:14:57.908268] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:25.361 [2024-12-06 01:14:57.908707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:25.361 [2024-12-06 01:14:57.908744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:25.361 [2024-12-06 01:14:57.908769] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:25.361 [2024-12-06 01:14:57.909041] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:25.361 [2024-12-06 01:14:57.909305] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:25.361 [2024-12-06 01:14:57.909334] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:25.361 [2024-12-06 01:14:57.909354] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:25.361 [2024-12-06 01:14:57.909375] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:25.361 01:14:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:25.361 01:14:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:37:25.361 01:14:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:37:25.361 01:14:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable 00:37:25.361 01:14:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:25.361 [2024-12-06 01:14:57.922524] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:25.361 [2024-12-06 01:14:57.922921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:25.361 [2024-12-06 01:14:57.922958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:25.361 [2024-12-06 01:14:57.922982] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:25.361 [2024-12-06 01:14:57.923254] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:25.361 [2024-12-06 01:14:57.923502] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:25.361 [2024-12-06 01:14:57.923529] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:25.361 [2024-12-06 01:14:57.923549] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:25.361 [2024-12-06 01:14:57.923567] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:25.361 01:14:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:25.361 01:14:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:37:25.361 01:14:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:25.361 01:14:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:25.361 [2024-12-06 01:14:57.936740] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:25.361 [2024-12-06 01:14:57.937050] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:25.361 [2024-12-06 01:14:57.937161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:25.361 [2024-12-06 01:14:57.937199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:25.361 [2024-12-06 01:14:57.937223] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:25.361 [2024-12-06 01:14:57.937513] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:25.361 [2024-12-06 01:14:57.937760] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:25.361 [2024-12-06 01:14:57.937787] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:25.361 [2024-12-06 01:14:57.937841] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:25.361 [2024-12-06 01:14:57.937877] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:25.361 01:14:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:25.361 01:14:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:37:25.361 01:14:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:25.361 01:14:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:25.361 [2024-12-06 01:14:57.951291] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:25.361 [2024-12-06 01:14:57.951703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:25.361 [2024-12-06 01:14:57.951752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:25.361 [2024-12-06 01:14:57.951776] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:25.361 [2024-12-06 01:14:57.952052] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:25.361 [2024-12-06 01:14:57.952339] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:25.361 [2024-12-06 01:14:57.952365] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:25.361 [2024-12-06 01:14:57.952384] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:25.361 [2024-12-06 01:14:57.952402] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:25.361 [2024-12-06 01:14:57.965532] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:25.361 [2024-12-06 01:14:57.966122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:25.361 [2024-12-06 01:14:57.966168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:25.361 [2024-12-06 01:14:57.966195] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:25.361 [2024-12-06 01:14:57.966495] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:25.361 [2024-12-06 01:14:57.966752] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:25.361 [2024-12-06 01:14:57.966779] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:25.361 [2024-12-06 01:14:57.966835] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:25.361 [2024-12-06 01:14:57.966877] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:25.361 [2024-12-06 01:14:57.979750] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:25.361 [2024-12-06 01:14:57.980395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:25.361 [2024-12-06 01:14:57.980444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:25.361 [2024-12-06 01:14:57.980472] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:25.361 [2024-12-06 01:14:57.980772] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:25.361 [2024-12-06 01:14:57.981057] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:25.361 [2024-12-06 01:14:57.981087] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:25.361 [2024-12-06 01:14:57.981134] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:25.361 [2024-12-06 01:14:57.981156] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:25.361 [2024-12-06 01:14:57.993931] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:25.361 [2024-12-06 01:14:57.994411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:25.362 [2024-12-06 01:14:57.994449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:25.362 [2024-12-06 01:14:57.994472] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:25.362 [2024-12-06 01:14:57.994765] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:25.362 [2024-12-06 01:14:57.995073] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:25.362 [2024-12-06 01:14:57.995103] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:25.362 [2024-12-06 01:14:57.995123] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:25.362 [2024-12-06 01:14:57.995143] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:25.362 [2024-12-06 01:14:58.008001] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:25.362 [2024-12-06 01:14:58.008426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:25.362 [2024-12-06 01:14:58.008463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:25.362 [2024-12-06 01:14:58.008487] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:25.362 [2024-12-06 01:14:58.008765] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:25.362 [2024-12-06 01:14:58.009056] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:25.362 [2024-12-06 01:14:58.009086] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:25.362 [2024-12-06 01:14:58.009107] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:25.362 [2024-12-06 01:14:58.009126] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:25.362 [2024-12-06 01:14:58.022420] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:25.362 [2024-12-06 01:14:58.022856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:25.362 [2024-12-06 01:14:58.022894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:25.362 [2024-12-06 01:14:58.022918] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:25.362 [2024-12-06 01:14:58.023214] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:25.362 [2024-12-06 01:14:58.023466] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:25.362 [2024-12-06 01:14:58.023492] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:25.362 [2024-12-06 01:14:58.023512] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:25.362 [2024-12-06 01:14:58.023530] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:25.362 Malloc0 00:37:25.362 01:14:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:25.362 01:14:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:37:25.362 01:14:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:25.362 01:14:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:25.362 [2024-12-06 01:14:58.036638] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:25.362 [2024-12-06 01:14:58.037080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:25.362 [2024-12-06 01:14:58.037118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:25.362 [2024-12-06 01:14:58.037142] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:25.362 [2024-12-06 01:14:58.037431] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:25.362 [2024-12-06 01:14:58.037698] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:25.362 [2024-12-06 01:14:58.037725] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:25.362 [2024-12-06 01:14:58.037745] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:25.362 [2024-12-06 01:14:58.037765] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:25.362 01:14:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:25.362 01:14:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:37:25.362 01:14:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:25.362 01:14:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:25.362 01:14:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:25.362 01:14:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:37:25.362 01:14:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:25.362 01:14:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:25.362 [2024-12-06 01:14:58.050194] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:25.362 [2024-12-06 01:14:58.050963] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:25.362 01:14:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:25.362 01:14:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 3137849 00:37:25.620 [2024-12-06 01:14:58.092105] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller successful. 
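The long run of connect() failures above (errno = 111, i.e. ECONNREFUSED) is the bdevperf host repeatedly retrying nqn.2016-06.io.spdk:cnode1 at 10.0.0.2:4420 while the target side is still being configured; once nvmf_subsystem_add_listener completes, the next reset attempt succeeds ("Resetting controller successful"). For reference, a minimal sketch of the target-side setup performed by the rpc_cmd calls above — assuming a running nvmf_tgt and the stock scripts/rpc.py client; the arguments simply mirror the log:

# Hypothetical standalone equivalent of the rpc_cmd sequence from host/bdevperf.sh
scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192                                   # TCP transport with the same options the test passes
scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0                                      # 64 MiB RAM-backed bdev, 512-byte blocks
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 # allow any host, set serial number
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0                  # expose Malloc0 as a namespace
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420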
00:37:27.124 2441.00 IOPS, 9.54 MiB/s [2024-12-06T00:15:00.768Z] 2893.62 IOPS, 11.30 MiB/s [2024-12-06T00:15:02.142Z] 3219.56 IOPS, 12.58 MiB/s [2024-12-06T00:15:03.077Z] 3494.80 IOPS, 13.65 MiB/s [2024-12-06T00:15:04.011Z] 3717.73 IOPS, 14.52 MiB/s [2024-12-06T00:15:04.946Z] 3909.58 IOPS, 15.27 MiB/s [2024-12-06T00:15:05.879Z] 4069.62 IOPS, 15.90 MiB/s [2024-12-06T00:15:06.810Z] 4209.36 IOPS, 16.44 MiB/s [2024-12-06T00:15:06.811Z] 4326.93 IOPS, 16.90 MiB/s 00:37:34.102 Latency(us) 00:37:34.102 [2024-12-06T00:15:06.811Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:34.102 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:37:34.102 Verification LBA range: start 0x0 length 0x4000 00:37:34.102 Nvme1n1 : 15.01 4332.10 16.92 9395.96 0.00 9295.76 1195.43 43690.67 00:37:34.102 [2024-12-06T00:15:06.811Z] =================================================================================================================== 00:37:34.102 [2024-12-06T00:15:06.811Z] Total : 4332.10 16.92 9395.96 0.00 9295.76 1195.43 43690.67 00:37:35.034 01:15:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:37:35.034 01:15:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:37:35.034 01:15:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:35.034 01:15:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:35.034 01:15:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:35.034 01:15:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:37:35.034 01:15:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:37:35.034 01:15:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@516 -- # nvmfcleanup 00:37:35.034 01:15:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@121 -- # sync 00:37:35.034 01:15:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:37:35.034 01:15:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@124 -- # set +e 00:37:35.034 01:15:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@125 -- # for i in {1..20} 00:37:35.034 01:15:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:37:35.034 rmmod nvme_tcp 00:37:35.034 rmmod nvme_fabrics 00:37:35.034 rmmod nvme_keyring 00:37:35.034 01:15:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:37:35.034 01:15:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@128 -- # set -e 00:37:35.034 01:15:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@129 -- # return 0 00:37:35.034 01:15:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@517 -- # '[' -n 3138574 ']' 00:37:35.034 01:15:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@518 -- # killprocess 3138574 00:37:35.034 01:15:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@954 -- # '[' -z 3138574 ']' 00:37:35.034 01:15:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@958 -- # kill -0 3138574 00:37:35.034 01:15:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # uname 00:37:35.034 01:15:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:35.034 01:15:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3138574 
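As a sanity check on the summary table above, the MiB/s column is just the IOPS column scaled by the 4096-byte IO size from the job line: 4332.10 IOPS x 4096 B ≈ 17.7 MB/s ≈ 16.92 MiB/s, matching the reported Nvme1n1 and Total rows.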
00:37:35.034 01:15:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:37:35.034 01:15:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:37:35.034 01:15:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3138574' 00:37:35.034 killing process with pid 3138574 00:37:35.034 01:15:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@973 -- # kill 3138574 00:37:35.034 01:15:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@978 -- # wait 3138574 00:37:36.404 01:15:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:37:36.405 01:15:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:37:36.405 01:15:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:37:36.405 01:15:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # iptr 00:37:36.405 01:15:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-save 00:37:36.405 01:15:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:37:36.405 01:15:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-restore 00:37:36.405 01:15:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:37:36.405 01:15:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:37:36.405 01:15:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:36.405 01:15:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:36.405 01:15:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:38.303 01:15:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:37:38.303 00:37:38.303 real 0m26.524s 00:37:38.303 user 1m12.501s 00:37:38.303 sys 0m4.708s 00:37:38.303 01:15:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:38.303 01:15:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:38.303 ************************************ 00:37:38.303 END TEST nvmf_bdevperf 00:37:38.303 ************************************ 00:37:38.561 01:15:11 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@48 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:37:38.561 01:15:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:37:38.561 01:15:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:38.561 01:15:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:37:38.561 ************************************ 00:37:38.561 START TEST nvmf_target_disconnect 00:37:38.561 ************************************ 00:37:38.561 01:15:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:37:38.561 * Looking for test storage... 
00:37:38.561 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:37:38.561 01:15:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:37:38.561 01:15:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1711 -- # lcov --version 00:37:38.561 01:15:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:37:38.561 01:15:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:37:38.561 01:15:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:38.561 01:15:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:38.561 01:15:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:38.561 01:15:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:37:38.561 01:15:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:37:38.561 01:15:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:37:38.561 01:15:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:37:38.561 01:15:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:37:38.561 01:15:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:37:38.561 01:15:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:37:38.561 01:15:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:38.561 01:15:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:37:38.561 01:15:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@345 -- # : 1 00:37:38.561 01:15:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:38.561 01:15:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:37:38.561 01:15:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # decimal 1 00:37:38.561 01:15:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=1 00:37:38.561 01:15:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:38.561 01:15:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 1 00:37:38.561 01:15:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:37:38.561 01:15:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # decimal 2 00:37:38.561 01:15:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=2 00:37:38.561 01:15:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:38.561 01:15:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 2 00:37:38.561 01:15:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:37:38.561 01:15:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:38.561 01:15:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:38.561 01:15:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # return 0 00:37:38.561 01:15:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:38.561 01:15:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:37:38.561 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:38.561 --rc genhtml_branch_coverage=1 00:37:38.561 --rc genhtml_function_coverage=1 00:37:38.561 --rc genhtml_legend=1 00:37:38.561 --rc geninfo_all_blocks=1 00:37:38.561 --rc geninfo_unexecuted_blocks=1 00:37:38.561 00:37:38.561 ' 00:37:38.561 01:15:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:37:38.561 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:38.561 --rc genhtml_branch_coverage=1 00:37:38.561 --rc genhtml_function_coverage=1 00:37:38.561 --rc genhtml_legend=1 00:37:38.561 --rc geninfo_all_blocks=1 00:37:38.561 --rc geninfo_unexecuted_blocks=1 00:37:38.561 00:37:38.561 ' 00:37:38.561 01:15:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:37:38.561 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:38.561 --rc genhtml_branch_coverage=1 00:37:38.561 --rc genhtml_function_coverage=1 00:37:38.561 --rc genhtml_legend=1 00:37:38.561 --rc geninfo_all_blocks=1 00:37:38.561 --rc geninfo_unexecuted_blocks=1 00:37:38.561 00:37:38.561 ' 00:37:38.561 01:15:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:37:38.561 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:38.561 --rc genhtml_branch_coverage=1 00:37:38.561 --rc genhtml_function_coverage=1 00:37:38.561 --rc genhtml_legend=1 00:37:38.561 --rc geninfo_all_blocks=1 00:37:38.561 --rc geninfo_unexecuted_blocks=1 00:37:38.561 00:37:38.561 ' 00:37:38.561 01:15:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:38.561 01:15:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@7 -- # uname -s 00:37:38.561 01:15:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:38.561 01:15:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:38.561 01:15:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:38.561 01:15:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:38.561 01:15:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:38.561 01:15:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:38.561 01:15:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:38.561 01:15:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:38.561 01:15:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:38.561 01:15:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:38.561 01:15:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:37:38.561 01:15:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:37:38.561 01:15:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:38.561 01:15:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:38.561 01:15:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:38.561 01:15:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:38.561 01:15:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:38.561 01:15:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:37:38.561 01:15:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:38.561 01:15:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:38.562 01:15:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:38.562 01:15:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:38.562 01:15:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:38.562 01:15:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:38.562 01:15:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:37:38.562 01:15:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:38.562 01:15:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # : 0 00:37:38.562 01:15:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:38.562 01:15:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:38.562 01:15:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:38.562 01:15:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:38.562 01:15:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:38.562 01:15:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:37:38.562 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:37:38.562 01:15:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:38.562 01:15:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:38.562 01:15:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:38.562 01:15:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:37:38.562 01:15:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:37:38.562 01:15:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:37:38.562 01:15:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:37:38.562 01:15:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:37:38.562 01:15:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:38.562 01:15:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:37:38.562 01:15:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:37:38.562 01:15:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:37:38.562 01:15:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:38.562 01:15:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:38.562 01:15:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:38.562 01:15:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:37:38.562 01:15:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:37:38.562 01:15:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:37:38.562 01:15:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:37:41.095 01:15:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:41.095 01:15:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:37:41.095 01:15:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:37:41.095 01:15:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:37:41.095 01:15:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:37:41.095 01:15:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:37:41.095 01:15:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:37:41.095 01:15:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:37:41.095 01:15:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:37:41.095 01:15:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # e810=() 00:37:41.095 01:15:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:37:41.095 01:15:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # x722=() 00:37:41.095 01:15:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:37:41.095 01:15:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:37:41.095 01:15:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:37:41.095 01:15:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:41.095 01:15:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:41.095 01:15:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:41.095 01:15:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:41.095 01:15:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:41.095 01:15:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:41.095 01:15:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:41.095 01:15:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:37:41.095 01:15:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:41.095 01:15:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:41.095 01:15:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:41.095 01:15:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:41.095 01:15:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:37:41.095 01:15:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:37:41.095 01:15:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:37:41.095 01:15:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:37:41.095 01:15:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:37:41.095 01:15:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:37:41.095 01:15:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:41.095 01:15:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:37:41.095 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:37:41.095 01:15:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:41.095 01:15:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:41.095 01:15:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:41.095 01:15:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:41.095 01:15:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:41.095 01:15:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:41.095 01:15:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:37:41.095 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:37:41.095 01:15:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:41.095 01:15:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:41.095 01:15:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:37:41.095 01:15:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:41.095 01:15:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:41.095 01:15:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:37:41.095 01:15:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:37:41.095 01:15:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:37:41.095 01:15:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:41.095 01:15:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:41.095 01:15:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:41.095 01:15:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:41.095 01:15:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:41.095 01:15:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:41.095 01:15:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:41.095 01:15:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:37:41.095 Found net devices under 0000:0a:00.0: cvl_0_0 00:37:41.095 01:15:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:41.095 01:15:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:41.095 01:15:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:41.095 01:15:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:41.095 01:15:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:41.095 01:15:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:41.095 01:15:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:41.095 01:15:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:41.095 01:15:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:37:41.095 Found net devices under 0000:0a:00.1: cvl_0_1 00:37:41.095 01:15:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:41.095 01:15:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:37:41.095 01:15:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:37:41.095 01:15:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:37:41.095 01:15:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:37:41.095 01:15:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:37:41.095 01:15:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 
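The device-discovery trace above matches the supported PCI IDs (E810 0x1592/0x159b, X722 0x37d2, and the Mellanox ConnectX IDs) against the host's PCI bus cache, then resolves each matched address to its kernel net device through sysfs. A minimal standalone sketch of that sysfs lookup (illustration only, not the gather_supported_nvmf_pci_devs helper itself; the PCI address is taken from the "Found 0000:0a:00.0" line above):

    # Resolve the net device(s) bound to a PCI function, the same way the
    # trace above expands /sys/bus/pci/devices/$pci/net/*.
    pci=0000:0a:00.0                      # first E810 port reported above
    for path in /sys/bus/pci/devices/"$pci"/net/*; do
        [ -e "$path" ] || continue        # skip if no netdev is bound (driver unloaded)
        echo "Found net devices under $pci: ${path##*/}"
    done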
00:37:41.095 01:15:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:41.096 01:15:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:41.096 01:15:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:37:41.096 01:15:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:37:41.096 01:15:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:37:41.096 01:15:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:37:41.096 01:15:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:37:41.096 01:15:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:37:41.096 01:15:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:37:41.096 01:15:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:41.096 01:15:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:37:41.096 01:15:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:37:41.096 01:15:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:37:41.096 01:15:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:37:41.096 01:15:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:37:41.096 01:15:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:37:41.096 01:15:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:37:41.096 01:15:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:37:41.096 01:15:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:37:41.096 01:15:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:37:41.096 01:15:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:37:41.096 01:15:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:37:41.096 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:37:41.096 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.389 ms 00:37:41.096 00:37:41.096 --- 10.0.0.2 ping statistics --- 00:37:41.096 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:41.096 rtt min/avg/max/mdev = 0.389/0.389/0.389/0.000 ms 00:37:41.096 01:15:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:37:41.096 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
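Condensed, the nvmf_tcp_init steps traced above build a two-port loop on one host: the first E810 port (cvl_0_0) is moved into a private namespace and becomes the target side at 10.0.0.2, the second port (cvl_0_1) stays in the root namespace as the initiator side at 10.0.0.1, and an iptables rule admits NVMe/TCP traffic on port 4420; the ping checks around this point verify reachability in both directions. A condensed sketch of that plumbing (paraphrasing the commands above, not a drop-in replacement for nvmf_tcp_init):

    ip netns add cvl_0_0_ns_spdk                                        # target-side namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # move target port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side (root ns)
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # allow NVMe/TCP on 4420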
00:37:41.096 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.101 ms 00:37:41.096 00:37:41.096 --- 10.0.0.1 ping statistics --- 00:37:41.096 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:41.096 rtt min/avg/max/mdev = 0.101/0.101/0.101/0.000 ms 00:37:41.096 01:15:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:41.096 01:15:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@450 -- # return 0 00:37:41.096 01:15:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:37:41.096 01:15:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:41.096 01:15:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:37:41.096 01:15:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:37:41.096 01:15:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:41.096 01:15:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:37:41.096 01:15:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:37:41.096 01:15:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:37:41.096 01:15:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:37:41.096 01:15:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:41.096 01:15:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:37:41.096 ************************************ 00:37:41.096 START TEST nvmf_target_disconnect_tc1 00:37:41.096 ************************************ 00:37:41.096 01:15:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc1 00:37:41.096 01:15:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:37:41.096 01:15:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # local es=0 00:37:41.096 01:15:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:37:41.096 01:15:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:37:41.096 01:15:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:37:41.096 01:15:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:37:41.096 01:15:13 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:37:41.096 01:15:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:37:41.096 01:15:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:37:41.096 01:15:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:37:41.096 01:15:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:37:41.096 01:15:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:37:41.096 [2024-12-06 01:15:13.547545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.096 [2024-12-06 01:15:13.547675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2280 with addr=10.0.0.2, port=4420 00:37:41.096 [2024-12-06 01:15:13.547767] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:37:41.096 [2024-12-06 01:15:13.547805] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:37:41.096 [2024-12-06 01:15:13.547857] nvme.c: 951:spdk_nvme_probe_ext: *ERROR*: Create probe context failed 00:37:41.096 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:37:41.096 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:37:41.096 Initializing NVMe Controllers 00:37:41.096 01:15:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # es=1 00:37:41.096 01:15:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:37:41.096 01:15:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:37:41.096 01:15:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:37:41.096 00:37:41.096 real 0m0.229s 00:37:41.096 user 0m0.100s 00:37:41.096 sys 0m0.129s 00:37:41.096 01:15:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:41.096 01:15:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:37:41.096 ************************************ 00:37:41.096 END TEST nvmf_target_disconnect_tc1 00:37:41.096 ************************************ 00:37:41.096 01:15:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:37:41.096 01:15:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:37:41.096 01:15:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # 
xtrace_disable 00:37:41.096 01:15:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:37:41.096 ************************************ 00:37:41.096 START TEST nvmf_target_disconnect_tc2 00:37:41.096 ************************************ 00:37:41.096 01:15:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc2 00:37:41.096 01:15:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:37:41.096 01:15:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:37:41.096 01:15:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:37:41.096 01:15:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:41.096 01:15:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:41.096 01:15:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=3142551 00:37:41.096 01:15:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:37:41.096 01:15:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 3142551 00:37:41.096 01:15:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 3142551 ']' 00:37:41.096 01:15:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:41.096 01:15:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:41.096 01:15:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:41.096 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:41.096 01:15:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:41.097 01:15:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:41.097 [2024-12-06 01:15:13.722149] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 24.03.0 initialization... 00:37:41.097 [2024-12-06 01:15:13.722277] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:41.355 [2024-12-06 01:15:13.862211] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:37:41.355 [2024-12-06 01:15:13.985241] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:41.355 [2024-12-06 01:15:13.985341] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
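The target application is started inside the namespace with core mask 0xF0, i.e. bits 4-7 set, which is why the following lines report exactly four reactors on cores 4, 5, 6 and 7. A quick way to decode such a mask (illustration only, not part of the test scripts):

    mask=0xF0                              # -m/-c core mask passed to nvmf_tgt
    for core in $(seq 0 31); do
        (( (mask >> core) & 1 )) && printf 'core %d\n' "$core"
    done                                   # prints cores 4..7 for 0xF0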
00:37:41.355 [2024-12-06 01:15:13.985375] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:41.355 [2024-12-06 01:15:13.985408] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:41.355 [2024-12-06 01:15:13.985434] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:37:41.355 [2024-12-06 01:15:13.988058] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:37:41.355 [2024-12-06 01:15:13.988125] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:37:41.355 [2024-12-06 01:15:13.988238] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:37:41.355 [2024-12-06 01:15:13.988242] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:37:42.289 01:15:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:42.289 01:15:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0 00:37:42.289 01:15:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:37:42.289 01:15:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:37:42.289 01:15:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:42.289 01:15:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:42.289 01:15:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:37:42.289 01:15:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:42.289 01:15:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:42.289 Malloc0 00:37:42.289 01:15:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:42.289 01:15:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:37:42.289 01:15:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:42.289 01:15:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:42.289 [2024-12-06 01:15:14.851149] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:42.289 01:15:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:42.289 01:15:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:37:42.289 01:15:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:42.289 01:15:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:42.289 01:15:14 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:42.289 01:15:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:37:42.289 01:15:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:42.289 01:15:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:42.289 01:15:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:42.289 01:15:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:37:42.289 01:15:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:42.289 01:15:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:42.289 [2024-12-06 01:15:14.881292] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:42.289 01:15:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:42.289 01:15:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:37:42.289 01:15:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:42.289 01:15:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:42.289 01:15:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:42.289 01:15:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=3142764 00:37:42.289 01:15:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:37:42.290 01:15:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:37:44.244 01:15:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 3142551 00:37:44.244 01:15:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:37:44.244 Read completed with error (sct=0, sc=8) 00:37:44.244 starting I/O failed 00:37:44.244 Read completed with error (sct=0, sc=8) 00:37:44.244 starting I/O failed 00:37:44.244 Read completed with error (sct=0, sc=8) 00:37:44.244 starting I/O failed 00:37:44.244 Read completed with error (sct=0, sc=8) 00:37:44.244 starting I/O failed 00:37:44.244 Read completed with error (sct=0, sc=8) 00:37:44.244 starting I/O failed 00:37:44.244 Read completed with error (sct=0, sc=8) 00:37:44.244 starting I/O failed 00:37:44.244 Read completed with error 
(sct=0, sc=8) 00:37:44.244 starting I/O failed 00:37:44.244 Read completed with error (sct=0, sc=8) 00:37:44.244 starting I/O failed 00:37:44.244 Read completed with error (sct=0, sc=8) 00:37:44.244 starting I/O failed 00:37:44.244 Read completed with error (sct=0, sc=8) 00:37:44.244 starting I/O failed 00:37:44.244 Read completed with error (sct=0, sc=8) 00:37:44.244 starting I/O failed 00:37:44.244 Read completed with error (sct=0, sc=8) 00:37:44.244 starting I/O failed 00:37:44.244 Read completed with error (sct=0, sc=8) 00:37:44.244 starting I/O failed 00:37:44.244 Write completed with error (sct=0, sc=8) 00:37:44.244 starting I/O failed 00:37:44.244 Read completed with error (sct=0, sc=8) 00:37:44.244 starting I/O failed 00:37:44.244 Write completed with error (sct=0, sc=8) 00:37:44.244 starting I/O failed 00:37:44.244 Read completed with error (sct=0, sc=8) 00:37:44.244 starting I/O failed 00:37:44.244 Read completed with error (sct=0, sc=8) 00:37:44.244 starting I/O failed 00:37:44.244 Read completed with error (sct=0, sc=8) 00:37:44.244 starting I/O failed 00:37:44.244 Read completed with error (sct=0, sc=8) 00:37:44.244 starting I/O failed 00:37:44.245 Read completed with error (sct=0, sc=8) 00:37:44.245 starting I/O failed 00:37:44.245 Write completed with error (sct=0, sc=8) 00:37:44.245 starting I/O failed 00:37:44.245 Read completed with error (sct=0, sc=8) 00:37:44.245 starting I/O failed 00:37:44.245 Read completed with error (sct=0, sc=8) 00:37:44.245 starting I/O failed 00:37:44.245 Read completed with error (sct=0, sc=8) 00:37:44.245 starting I/O failed 00:37:44.245 Read completed with error (sct=0, sc=8) 00:37:44.245 starting I/O failed 00:37:44.245 Write completed with error (sct=0, sc=8) 00:37:44.245 starting I/O failed 00:37:44.245 Write completed with error (sct=0, sc=8) 00:37:44.245 starting I/O failed 00:37:44.245 Read completed with error (sct=0, sc=8) 00:37:44.245 starting I/O failed 00:37:44.245 Write completed with error (sct=0, sc=8) 00:37:44.245 starting I/O failed 00:37:44.245 Write completed with error (sct=0, sc=8) 00:37:44.245 starting I/O failed 00:37:44.245 Read completed with error (sct=0, sc=8) 00:37:44.245 starting I/O failed 00:37:44.245 Read completed with error (sct=0, sc=8) 00:37:44.245 starting I/O failed 00:37:44.245 Read completed with error (sct=0, sc=8) 00:37:44.245 starting I/O failed 00:37:44.245 Read completed with error (sct=0, sc=8) 00:37:44.245 starting I/O failed 00:37:44.245 Read completed with error (sct=0, sc=8) 00:37:44.245 starting I/O failed 00:37:44.245 Read completed with error (sct=0, sc=8) 00:37:44.245 starting I/O failed 00:37:44.245 Read completed with error (sct=0, sc=8) 00:37:44.245 starting I/O failed 00:37:44.245 Read completed with error (sct=0, sc=8) 00:37:44.245 starting I/O failed 00:37:44.245 Read completed with error (sct=0, sc=8) 00:37:44.245 [2024-12-06 01:15:16.924461] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:37:44.245 starting I/O failed 00:37:44.245 Read completed with error (sct=0, sc=8) 00:37:44.245 starting I/O failed 00:37:44.245 Read completed with error (sct=0, sc=8) 00:37:44.245 starting I/O failed 00:37:44.245 Read completed with error (sct=0, sc=8) 00:37:44.245 starting I/O failed 00:37:44.245 Read completed with error (sct=0, sc=8) 00:37:44.245 starting I/O failed 00:37:44.245 Read completed with error (sct=0, sc=8) 00:37:44.245 starting I/O failed 00:37:44.245 Read completed 
with error (sct=0, sc=8) 00:37:44.245 starting I/O failed 00:37:44.245 Write completed with error (sct=0, sc=8) 00:37:44.245 starting I/O failed 00:37:44.245 Read completed with error (sct=0, sc=8) 00:37:44.245 starting I/O failed 00:37:44.245 Read completed with error (sct=0, sc=8) 00:37:44.245 starting I/O failed 00:37:44.245 Read completed with error (sct=0, sc=8) 00:37:44.245 starting I/O failed 00:37:44.245 Read completed with error (sct=0, sc=8) 00:37:44.245 starting I/O failed 00:37:44.245 Write completed with error (sct=0, sc=8) 00:37:44.245 starting I/O failed 00:37:44.245 Write completed with error (sct=0, sc=8) 00:37:44.245 starting I/O failed 00:37:44.245 Read completed with error (sct=0, sc=8) 00:37:44.245 starting I/O failed 00:37:44.245 Write completed with error (sct=0, sc=8) 00:37:44.245 starting I/O failed 00:37:44.245 Write completed with error (sct=0, sc=8) 00:37:44.245 starting I/O failed 00:37:44.245 Read completed with error (sct=0, sc=8) 00:37:44.245 starting I/O failed 00:37:44.245 Write completed with error (sct=0, sc=8) 00:37:44.245 starting I/O failed 00:37:44.245 Read completed with error (sct=0, sc=8) 00:37:44.245 starting I/O failed 00:37:44.245 Read completed with error (sct=0, sc=8) 00:37:44.245 starting I/O failed 00:37:44.245 Write completed with error (sct=0, sc=8) 00:37:44.245 starting I/O failed 00:37:44.245 Read completed with error (sct=0, sc=8) 00:37:44.245 starting I/O failed 00:37:44.245 Write completed with error (sct=0, sc=8) 00:37:44.245 starting I/O failed 00:37:44.245 Read completed with error (sct=0, sc=8) 00:37:44.245 starting I/O failed 00:37:44.245 [2024-12-06 01:15:16.925136] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:37:44.245 [2024-12-06 01:15:16.925441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.245 [2024-12-06 01:15:16.925482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.245 qpair failed and we were unable to recover it. 00:37:44.245 [2024-12-06 01:15:16.925751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.245 [2024-12-06 01:15:16.925791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.245 qpair failed and we were unable to recover it. 00:37:44.245 [2024-12-06 01:15:16.925957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.245 [2024-12-06 01:15:16.925995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.245 qpair failed and we were unable to recover it. 00:37:44.245 [2024-12-06 01:15:16.926195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.245 [2024-12-06 01:15:16.926228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.245 qpair failed and we were unable to recover it. 00:37:44.245 [2024-12-06 01:15:16.926371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.245 [2024-12-06 01:15:16.926422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.245 qpair failed and we were unable to recover it. 
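From here on the pattern repeats: with the target process killed (kill -9 3142551 above), nothing listens on 10.0.0.2:4420 any more, so every reconnection attempt from the reconnect example fails with errno 111 (ECONNREFUSED on Linux) and the qpair cannot be recovered. A quick check that would confirm the listener is gone inside the target namespace (illustration only; not part of the test script):

    # Expect no LISTEN entry for port 4420 after the target was killed.
    ip netns exec cvl_0_0_ns_spdk ss -ltn | grep -w 4420 \
        || echo "no NVMe/TCP listener on port 4420"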
00:37:44.245 [2024-12-06 01:15:16.926662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.245 [2024-12-06 01:15:16.926736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.245 qpair failed and we were unable to recover it. 00:37:44.245 [2024-12-06 01:15:16.926948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.245 [2024-12-06 01:15:16.926982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.245 qpair failed and we were unable to recover it. 00:37:44.245 [2024-12-06 01:15:16.927162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.245 [2024-12-06 01:15:16.927200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.245 qpair failed and we were unable to recover it. 00:37:44.245 [2024-12-06 01:15:16.927386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.245 [2024-12-06 01:15:16.927419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.245 qpair failed and we were unable to recover it. 00:37:44.245 [2024-12-06 01:15:16.927564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.245 [2024-12-06 01:15:16.927598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.245 qpair failed and we were unable to recover it. 00:37:44.245 [2024-12-06 01:15:16.927771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.245 [2024-12-06 01:15:16.927810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.245 qpair failed and we were unable to recover it. 00:37:44.245 [2024-12-06 01:15:16.928036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.245 [2024-12-06 01:15:16.928072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.245 qpair failed and we were unable to recover it. 00:37:44.245 [2024-12-06 01:15:16.928242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.245 [2024-12-06 01:15:16.928277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.245 qpair failed and we were unable to recover it. 00:37:44.245 [2024-12-06 01:15:16.928443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.245 [2024-12-06 01:15:16.928477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.245 qpair failed and we were unable to recover it. 00:37:44.245 [2024-12-06 01:15:16.928628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.245 [2024-12-06 01:15:16.928661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.245 qpair failed and we were unable to recover it. 
00:37:44.245 [2024-12-06 01:15:16.928808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.245 [2024-12-06 01:15:16.928937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.245 qpair failed and we were unable to recover it. 00:37:44.245 [2024-12-06 01:15:16.929059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.245 [2024-12-06 01:15:16.929094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.245 qpair failed and we were unable to recover it. 00:37:44.245 [2024-12-06 01:15:16.929227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.245 [2024-12-06 01:15:16.929263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.245 qpair failed and we were unable to recover it. 00:37:44.245 [2024-12-06 01:15:16.929428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.245 [2024-12-06 01:15:16.929462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.245 qpair failed and we were unable to recover it. 00:37:44.245 [2024-12-06 01:15:16.929598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.245 [2024-12-06 01:15:16.929632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.245 qpair failed and we were unable to recover it. 00:37:44.245 [2024-12-06 01:15:16.929794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.245 [2024-12-06 01:15:16.929841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.245 qpair failed and we were unable to recover it. 00:37:44.245 [2024-12-06 01:15:16.930001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.245 [2024-12-06 01:15:16.930036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.245 qpair failed and we were unable to recover it. 00:37:44.245 [2024-12-06 01:15:16.930157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.245 [2024-12-06 01:15:16.930213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.245 qpair failed and we were unable to recover it. 00:37:44.245 [2024-12-06 01:15:16.930408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.245 [2024-12-06 01:15:16.930446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.245 qpair failed and we were unable to recover it. 00:37:44.246 [2024-12-06 01:15:16.930583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.246 [2024-12-06 01:15:16.930616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.246 qpair failed and we were unable to recover it. 
00:37:44.246 [2024-12-06 01:15:16.930778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.246 [2024-12-06 01:15:16.930812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.246 qpair failed and we were unable to recover it. 00:37:44.246 [2024-12-06 01:15:16.930929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.246 [2024-12-06 01:15:16.930962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.246 qpair failed and we were unable to recover it. 00:37:44.246 [2024-12-06 01:15:16.931110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.246 [2024-12-06 01:15:16.931144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.246 qpair failed and we were unable to recover it. 00:37:44.246 [2024-12-06 01:15:16.931292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.246 [2024-12-06 01:15:16.931330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.246 qpair failed and we were unable to recover it. 00:37:44.246 [2024-12-06 01:15:16.931480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.246 [2024-12-06 01:15:16.931517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.246 qpair failed and we were unable to recover it. 00:37:44.246 [2024-12-06 01:15:16.931702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.246 [2024-12-06 01:15:16.931736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.246 qpair failed and we were unable to recover it. 00:37:44.246 [2024-12-06 01:15:16.931877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.246 [2024-12-06 01:15:16.931911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.246 qpair failed and we were unable to recover it. 00:37:44.246 [2024-12-06 01:15:16.932075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.246 [2024-12-06 01:15:16.932113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.246 qpair failed and we were unable to recover it. 00:37:44.246 [2024-12-06 01:15:16.932271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.246 [2024-12-06 01:15:16.932304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.246 qpair failed and we were unable to recover it. 00:37:44.246 [2024-12-06 01:15:16.932451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.246 [2024-12-06 01:15:16.932485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.246 qpair failed and we were unable to recover it. 
00:37:44.246 [2024-12-06 01:15:16.932621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.246 [2024-12-06 01:15:16.932654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.246 qpair failed and we were unable to recover it. 00:37:44.246 [2024-12-06 01:15:16.932785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.246 [2024-12-06 01:15:16.932834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.246 qpair failed and we were unable to recover it. 00:37:44.246 [2024-12-06 01:15:16.932959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.246 [2024-12-06 01:15:16.932993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.246 qpair failed and we were unable to recover it. 00:37:44.246 [2024-12-06 01:15:16.933192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.246 [2024-12-06 01:15:16.933229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.246 qpair failed and we were unable to recover it. 00:37:44.246 [2024-12-06 01:15:16.933384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.246 [2024-12-06 01:15:16.933417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.246 qpair failed and we were unable to recover it. 00:37:44.246 [2024-12-06 01:15:16.933520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.246 [2024-12-06 01:15:16.933553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.246 qpair failed and we were unable to recover it. 00:37:44.246 [2024-12-06 01:15:16.933687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.246 [2024-12-06 01:15:16.933725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.246 qpair failed and we were unable to recover it. 00:37:44.246 [2024-12-06 01:15:16.933924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.246 [2024-12-06 01:15:16.933958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.246 qpair failed and we were unable to recover it. 00:37:44.246 [2024-12-06 01:15:16.934062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.246 [2024-12-06 01:15:16.934114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.246 qpair failed and we were unable to recover it. 00:37:44.246 [2024-12-06 01:15:16.934284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.246 [2024-12-06 01:15:16.934320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.246 qpair failed and we were unable to recover it. 
00:37:44.246 [2024-12-06 01:15:16.934501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.246 [2024-12-06 01:15:16.934535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.246 qpair failed and we were unable to recover it. 00:37:44.246 [2024-12-06 01:15:16.934712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.246 [2024-12-06 01:15:16.934749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.246 qpair failed and we were unable to recover it. 00:37:44.246 [2024-12-06 01:15:16.934908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.246 [2024-12-06 01:15:16.934947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.246 qpair failed and we were unable to recover it. 00:37:44.246 [2024-12-06 01:15:16.935079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.246 [2024-12-06 01:15:16.935113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.246 qpair failed and we were unable to recover it. 00:37:44.246 [2024-12-06 01:15:16.935216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.246 [2024-12-06 01:15:16.935249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.246 qpair failed and we were unable to recover it. 00:37:44.246 [2024-12-06 01:15:16.935446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.246 [2024-12-06 01:15:16.935484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.246 qpair failed and we were unable to recover it. 00:37:44.246 [2024-12-06 01:15:16.935620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.246 [2024-12-06 01:15:16.935672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.246 qpair failed and we were unable to recover it. 00:37:44.246 [2024-12-06 01:15:16.935838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.246 [2024-12-06 01:15:16.935872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.246 qpair failed and we were unable to recover it. 00:37:44.246 [2024-12-06 01:15:16.936034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.246 [2024-12-06 01:15:16.936086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.246 qpair failed and we were unable to recover it. 00:37:44.246 [2024-12-06 01:15:16.936316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.246 [2024-12-06 01:15:16.936354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.246 qpair failed and we were unable to recover it. 
00:37:44.246 [2024-12-06 01:15:16.936495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.246 [2024-12-06 01:15:16.936534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.246 qpair failed and we were unable to recover it. 00:37:44.246 [2024-12-06 01:15:16.936698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.246 [2024-12-06 01:15:16.936732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.246 qpair failed and we were unable to recover it. 00:37:44.246 [2024-12-06 01:15:16.936904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.246 [2024-12-06 01:15:16.936938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.246 qpair failed and we were unable to recover it. 00:37:44.246 [2024-12-06 01:15:16.937053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.246 [2024-12-06 01:15:16.937086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.246 qpair failed and we were unable to recover it. 00:37:44.246 [2024-12-06 01:15:16.937293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.246 [2024-12-06 01:15:16.937343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.246 qpair failed and we were unable to recover it. 00:37:44.246 [2024-12-06 01:15:16.937508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.246 [2024-12-06 01:15:16.937542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.246 qpair failed and we were unable to recover it. 00:37:44.246 [2024-12-06 01:15:16.937681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.246 [2024-12-06 01:15:16.937714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.246 qpair failed and we were unable to recover it. 00:37:44.246 [2024-12-06 01:15:16.937833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.246 [2024-12-06 01:15:16.937871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.246 qpair failed and we were unable to recover it. 00:37:44.246 [2024-12-06 01:15:16.937971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.246 [2024-12-06 01:15:16.938012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.246 qpair failed and we were unable to recover it. 00:37:44.247 [2024-12-06 01:15:16.938148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.247 [2024-12-06 01:15:16.938182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.247 qpair failed and we were unable to recover it. 
00:37:44.247 [2024-12-06 01:15:16.938341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.247 [2024-12-06 01:15:16.938380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.247 qpair failed and we were unable to recover it. 00:37:44.247 [2024-12-06 01:15:16.938546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.247 [2024-12-06 01:15:16.938580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.247 qpair failed and we were unable to recover it. 00:37:44.247 [2024-12-06 01:15:16.938718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.247 [2024-12-06 01:15:16.938752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.247 qpair failed and we were unable to recover it. 00:37:44.247 [2024-12-06 01:15:16.938910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.247 [2024-12-06 01:15:16.938945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.247 qpair failed and we were unable to recover it. 00:37:44.247 [2024-12-06 01:15:16.939084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.247 [2024-12-06 01:15:16.939118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.247 qpair failed and we were unable to recover it. 00:37:44.247 [2024-12-06 01:15:16.939279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.247 [2024-12-06 01:15:16.939312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.247 qpair failed and we were unable to recover it. 00:37:44.247 [2024-12-06 01:15:16.939453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.247 [2024-12-06 01:15:16.939486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.247 qpair failed and we were unable to recover it. 00:37:44.247 [2024-12-06 01:15:16.939594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.247 [2024-12-06 01:15:16.939627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.247 qpair failed and we were unable to recover it. 00:37:44.247 [2024-12-06 01:15:16.939785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.247 [2024-12-06 01:15:16.939826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.247 qpair failed and we were unable to recover it. 00:37:44.247 [2024-12-06 01:15:16.939959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.247 [2024-12-06 01:15:16.939995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.247 qpair failed and we were unable to recover it. 
00:37:44.247 [2024-12-06 01:15:16.940140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.247 [2024-12-06 01:15:16.940182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.247 qpair failed and we were unable to recover it. 00:37:44.247 [2024-12-06 01:15:16.940318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.247 [2024-12-06 01:15:16.940352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.247 qpair failed and we were unable to recover it. 00:37:44.247 [2024-12-06 01:15:16.940493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.247 [2024-12-06 01:15:16.940528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.247 qpair failed and we were unable to recover it. 00:37:44.247 [2024-12-06 01:15:16.940697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.247 [2024-12-06 01:15:16.940732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.247 qpair failed and we were unable to recover it. 00:37:44.247 Read completed with error (sct=0, sc=8) 00:37:44.247 starting I/O failed 00:37:44.247 Write completed with error (sct=0, sc=8) 00:37:44.247 starting I/O failed 00:37:44.247 Write completed with error (sct=0, sc=8) 00:37:44.247 starting I/O failed 00:37:44.247 Write completed with error (sct=0, sc=8) 00:37:44.247 starting I/O failed 00:37:44.247 Write completed with error (sct=0, sc=8) 00:37:44.247 starting I/O failed 00:37:44.247 Read completed with error (sct=0, sc=8) 00:37:44.247 starting I/O failed 00:37:44.247 Read completed with error (sct=0, sc=8) 00:37:44.247 starting I/O failed 00:37:44.247 Read completed with error (sct=0, sc=8) 00:37:44.247 starting I/O failed 00:37:44.247 Read completed with error (sct=0, sc=8) 00:37:44.247 starting I/O failed 00:37:44.247 Write completed with error (sct=0, sc=8) 00:37:44.247 starting I/O failed 00:37:44.247 Read completed with error (sct=0, sc=8) 00:37:44.247 starting I/O failed 00:37:44.247 Read completed with error (sct=0, sc=8) 00:37:44.247 starting I/O failed 00:37:44.247 Read completed with error (sct=0, sc=8) 00:37:44.247 starting I/O failed 00:37:44.247 Write completed with error (sct=0, sc=8) 00:37:44.247 starting I/O failed 00:37:44.247 Read completed with error (sct=0, sc=8) 00:37:44.247 starting I/O failed 00:37:44.247 Read completed with error (sct=0, sc=8) 00:37:44.247 starting I/O failed 00:37:44.247 Read completed with error (sct=0, sc=8) 00:37:44.247 starting I/O failed 00:37:44.247 Read completed with error (sct=0, sc=8) 00:37:44.247 starting I/O failed 00:37:44.247 Read completed with error (sct=0, sc=8) 00:37:44.247 starting I/O failed 00:37:44.247 Write completed with error (sct=0, sc=8) 00:37:44.247 starting I/O failed 00:37:44.247 Read completed with error (sct=0, sc=8) 00:37:44.247 starting I/O failed 00:37:44.247 Read completed with error (sct=0, sc=8) 00:37:44.247 starting I/O failed 00:37:44.247 Read completed with error (sct=0, sc=8) 00:37:44.247 starting I/O failed 00:37:44.247 Write completed with error (sct=0, sc=8) 00:37:44.247 starting I/O failed 00:37:44.247 Write completed with error (sct=0, sc=8) 00:37:44.247 starting I/O failed 
00:37:44.247 Write completed with error (sct=0, sc=8) 00:37:44.247 starting I/O failed 00:37:44.247 Write completed with error (sct=0, sc=8) 00:37:44.247 starting I/O failed 00:37:44.247 Read completed with error (sct=0, sc=8) 00:37:44.247 starting I/O failed 00:37:44.247 Write completed with error (sct=0, sc=8) 00:37:44.247 starting I/O failed 00:37:44.247 Write completed with error (sct=0, sc=8) 00:37:44.247 starting I/O failed 00:37:44.247 Read completed with error (sct=0, sc=8) 00:37:44.247 starting I/O failed 00:37:44.247 Write completed with error (sct=0, sc=8) 00:37:44.247 starting I/O failed 00:37:44.247 [2024-12-06 01:15:16.941368] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:37:44.247 [2024-12-06 01:15:16.941561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.247 [2024-12-06 01:15:16.941614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.247 qpair failed and we were unable to recover it. 00:37:44.247 [2024-12-06 01:15:16.941766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.247 [2024-12-06 01:15:16.941812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.247 qpair failed and we were unable to recover it. 00:37:44.247 [2024-12-06 01:15:16.941962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.247 [2024-12-06 01:15:16.941996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.247 qpair failed and we were unable to recover it. 00:37:44.247 [2024-12-06 01:15:16.942132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.247 [2024-12-06 01:15:16.942166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.247 qpair failed and we were unable to recover it. 00:37:44.247 [2024-12-06 01:15:16.942303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.247 [2024-12-06 01:15:16.942344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.247 qpair failed and we were unable to recover it. 00:37:44.247 [2024-12-06 01:15:16.942479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.247 [2024-12-06 01:15:16.942513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.247 qpair failed and we were unable to recover it. 00:37:44.247 [2024-12-06 01:15:16.942721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.247 [2024-12-06 01:15:16.942754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.247 qpair failed and we were unable to recover it. 
00:37:44.247 [2024-12-06 01:15:16.942915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.247 [2024-12-06 01:15:16.942950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.247 qpair failed and we were unable to recover it. 00:37:44.247 [2024-12-06 01:15:16.943082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.247 [2024-12-06 01:15:16.943117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.247 qpair failed and we were unable to recover it. 00:37:44.247 [2024-12-06 01:15:16.943238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.247 [2024-12-06 01:15:16.943271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.247 qpair failed and we were unable to recover it. 00:37:44.247 [2024-12-06 01:15:16.943434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.247 [2024-12-06 01:15:16.943468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.247 qpair failed and we were unable to recover it. 00:37:44.247 [2024-12-06 01:15:16.943614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.247 [2024-12-06 01:15:16.943646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.247 qpair failed and we were unable to recover it. 00:37:44.248 [2024-12-06 01:15:16.943845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.248 [2024-12-06 01:15:16.943878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.248 qpair failed and we were unable to recover it. 00:37:44.248 [2024-12-06 01:15:16.943989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.248 [2024-12-06 01:15:16.944022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.248 qpair failed and we were unable to recover it. 00:37:44.248 [2024-12-06 01:15:16.944188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.248 [2024-12-06 01:15:16.944221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.248 qpair failed and we were unable to recover it. 00:37:44.248 [2024-12-06 01:15:16.944435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.248 [2024-12-06 01:15:16.944469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.248 qpair failed and we were unable to recover it. 00:37:44.248 [2024-12-06 01:15:16.944624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.248 [2024-12-06 01:15:16.944657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.248 qpair failed and we were unable to recover it. 
00:37:44.248 [2024-12-06 01:15:16.944831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.248 [2024-12-06 01:15:16.944866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.248 qpair failed and we were unable to recover it. 00:37:44.248 [2024-12-06 01:15:16.944986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.248 [2024-12-06 01:15:16.945020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.248 qpair failed and we were unable to recover it. 00:37:44.248 [2024-12-06 01:15:16.945185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.248 [2024-12-06 01:15:16.945218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.248 qpair failed and we were unable to recover it. 00:37:44.248 [2024-12-06 01:15:16.945351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.248 [2024-12-06 01:15:16.945385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.248 qpair failed and we were unable to recover it. 00:37:44.248 [2024-12-06 01:15:16.945525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.248 [2024-12-06 01:15:16.945558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.248 qpair failed and we were unable to recover it. 00:37:44.248 [2024-12-06 01:15:16.945716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.248 [2024-12-06 01:15:16.945750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.248 qpair failed and we were unable to recover it. 00:37:44.248 [2024-12-06 01:15:16.945970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.248 [2024-12-06 01:15:16.946005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.248 qpair failed and we were unable to recover it. 00:37:44.248 [2024-12-06 01:15:16.946122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.248 [2024-12-06 01:15:16.946156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.248 qpair failed and we were unable to recover it. 00:37:44.248 [2024-12-06 01:15:16.946298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.248 [2024-12-06 01:15:16.946332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.248 qpair failed and we were unable to recover it. 00:37:44.248 [2024-12-06 01:15:16.946477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.248 [2024-12-06 01:15:16.946510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.248 qpair failed and we were unable to recover it. 
00:37:44.248 [2024-12-06 01:15:16.946678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.248 [2024-12-06 01:15:16.946714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.248 qpair failed and we were unable to recover it. 00:37:44.248 [2024-12-06 01:15:16.946853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.248 [2024-12-06 01:15:16.946888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.248 qpair failed and we were unable to recover it. 00:37:44.248 [2024-12-06 01:15:16.947005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.248 [2024-12-06 01:15:16.947039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.248 qpair failed and we were unable to recover it. 00:37:44.248 [2024-12-06 01:15:16.947202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.248 [2024-12-06 01:15:16.947235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.248 qpair failed and we were unable to recover it. 00:37:44.248 [2024-12-06 01:15:16.947368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.248 [2024-12-06 01:15:16.947402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.248 qpair failed and we were unable to recover it. 00:37:44.248 [2024-12-06 01:15:16.947546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.248 [2024-12-06 01:15:16.947579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.248 qpair failed and we were unable to recover it. 00:37:44.248 [2024-12-06 01:15:16.947756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.248 [2024-12-06 01:15:16.947789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.248 qpair failed and we were unable to recover it. 00:37:44.248 [2024-12-06 01:15:16.947951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.248 [2024-12-06 01:15:16.947985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.248 qpair failed and we were unable to recover it. 00:37:44.248 [2024-12-06 01:15:16.948095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.248 [2024-12-06 01:15:16.948156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.248 qpair failed and we were unable to recover it. 00:37:44.248 [2024-12-06 01:15:16.948314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.248 [2024-12-06 01:15:16.948364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.248 qpair failed and we were unable to recover it. 
00:37:44.248 [2024-12-06 01:15:16.948486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.248 [2024-12-06 01:15:16.948520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.248 qpair failed and we were unable to recover it. 00:37:44.248 [2024-12-06 01:15:16.948659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.248 [2024-12-06 01:15:16.948709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.248 qpair failed and we were unable to recover it. 00:37:44.248 [2024-12-06 01:15:16.948928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.248 [2024-12-06 01:15:16.948962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.248 qpair failed and we were unable to recover it. 00:37:44.248 [2024-12-06 01:15:16.949079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.248 [2024-12-06 01:15:16.949119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.248 qpair failed and we were unable to recover it. 00:37:44.248 [2024-12-06 01:15:16.949232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.248 [2024-12-06 01:15:16.949265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.248 qpair failed and we were unable to recover it. 00:37:44.248 [2024-12-06 01:15:16.949392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.248 [2024-12-06 01:15:16.949428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.248 qpair failed and we were unable to recover it. 00:37:44.248 [2024-12-06 01:15:16.949553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.248 [2024-12-06 01:15:16.949586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.248 qpair failed and we were unable to recover it. 00:37:44.248 [2024-12-06 01:15:16.949696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.248 [2024-12-06 01:15:16.949734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.248 qpair failed and we were unable to recover it. 00:37:44.248 [2024-12-06 01:15:16.949886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.248 [2024-12-06 01:15:16.949938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.248 qpair failed and we were unable to recover it. 00:37:44.248 [2024-12-06 01:15:16.950049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.248 [2024-12-06 01:15:16.950083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.248 qpair failed and we were unable to recover it. 
00:37:44.248 [2024-12-06 01:15:16.950183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.248 [2024-12-06 01:15:16.950217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.248 qpair failed and we were unable to recover it. 00:37:44.248 [2024-12-06 01:15:16.950349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.248 [2024-12-06 01:15:16.950383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.248 qpair failed and we were unable to recover it. 00:37:44.248 [2024-12-06 01:15:16.950498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.248 [2024-12-06 01:15:16.950532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.249 qpair failed and we were unable to recover it. 00:37:44.528 [2024-12-06 01:15:16.950674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.528 [2024-12-06 01:15:16.950707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.528 qpair failed and we were unable to recover it. 00:37:44.528 [2024-12-06 01:15:16.950846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.528 [2024-12-06 01:15:16.950896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.528 qpair failed and we were unable to recover it. 00:37:44.528 [2024-12-06 01:15:16.951022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.528 [2024-12-06 01:15:16.951060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.528 qpair failed and we were unable to recover it. 00:37:44.528 [2024-12-06 01:15:16.951187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.528 [2024-12-06 01:15:16.951221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.528 qpair failed and we were unable to recover it. 00:37:44.528 [2024-12-06 01:15:16.951336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.528 [2024-12-06 01:15:16.951370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.528 qpair failed and we were unable to recover it. 00:37:44.528 [2024-12-06 01:15:16.951498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.528 [2024-12-06 01:15:16.951531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.528 qpair failed and we were unable to recover it. 00:37:44.528 [2024-12-06 01:15:16.951668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.528 [2024-12-06 01:15:16.951701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.528 qpair failed and we were unable to recover it. 
00:37:44.528 [2024-12-06 01:15:16.951847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.528 [2024-12-06 01:15:16.951881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.528 qpair failed and we were unable to recover it. 00:37:44.528 [2024-12-06 01:15:16.952023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.528 [2024-12-06 01:15:16.952057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.528 qpair failed and we were unable to recover it. 00:37:44.528 [2024-12-06 01:15:16.952249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.528 [2024-12-06 01:15:16.952287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.528 qpair failed and we were unable to recover it. 00:37:44.528 [2024-12-06 01:15:16.952446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.528 [2024-12-06 01:15:16.952483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.528 qpair failed and we were unable to recover it. 00:37:44.528 [2024-12-06 01:15:16.952644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.528 [2024-12-06 01:15:16.952678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.528 qpair failed and we were unable to recover it. 00:37:44.528 [2024-12-06 01:15:16.952836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.528 [2024-12-06 01:15:16.952871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.528 qpair failed and we were unable to recover it. 00:37:44.528 [2024-12-06 01:15:16.952978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.529 [2024-12-06 01:15:16.953012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.529 qpair failed and we were unable to recover it. 00:37:44.529 [2024-12-06 01:15:16.953177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.529 [2024-12-06 01:15:16.953210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.529 qpair failed and we were unable to recover it. 00:37:44.529 [2024-12-06 01:15:16.953341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.529 [2024-12-06 01:15:16.953374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.529 qpair failed and we were unable to recover it. 00:37:44.529 [2024-12-06 01:15:16.953477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.529 [2024-12-06 01:15:16.953510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.529 qpair failed and we were unable to recover it. 
00:37:44.529 [2024-12-06 01:15:16.953668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.529 [2024-12-06 01:15:16.953714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.529 qpair failed and we were unable to recover it. 00:37:44.529 [2024-12-06 01:15:16.953874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.529 [2024-12-06 01:15:16.953910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.529 qpair failed and we were unable to recover it. 00:37:44.529 [2024-12-06 01:15:16.954035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.529 [2024-12-06 01:15:16.954069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.529 qpair failed and we were unable to recover it. 00:37:44.529 [2024-12-06 01:15:16.954171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.529 [2024-12-06 01:15:16.954205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.529 qpair failed and we were unable to recover it. 00:37:44.529 [2024-12-06 01:15:16.954348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.529 [2024-12-06 01:15:16.954400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.529 qpair failed and we were unable to recover it. 00:37:44.529 [2024-12-06 01:15:16.954616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.529 [2024-12-06 01:15:16.954650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.529 qpair failed and we were unable to recover it. 00:37:44.529 [2024-12-06 01:15:16.954811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.529 [2024-12-06 01:15:16.954853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.529 qpair failed and we were unable to recover it. 00:37:44.529 [2024-12-06 01:15:16.955059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.529 [2024-12-06 01:15:16.955093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.529 qpair failed and we were unable to recover it. 00:37:44.529 [2024-12-06 01:15:16.955253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.529 [2024-12-06 01:15:16.955294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.529 qpair failed and we were unable to recover it. 00:37:44.529 [2024-12-06 01:15:16.955387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.529 [2024-12-06 01:15:16.955421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.529 qpair failed and we were unable to recover it. 
00:37:44.529 [2024-12-06 01:15:16.955527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.529 [2024-12-06 01:15:16.955560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.529 qpair failed and we were unable to recover it. 00:37:44.529 [2024-12-06 01:15:16.955737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.529 [2024-12-06 01:15:16.955775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.529 qpair failed and we were unable to recover it. 00:37:44.529 [2024-12-06 01:15:16.956046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.529 [2024-12-06 01:15:16.956081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.529 qpair failed and we were unable to recover it. 00:37:44.529 [2024-12-06 01:15:16.956230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.529 [2024-12-06 01:15:16.956263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.529 qpair failed and we were unable to recover it. 00:37:44.529 [2024-12-06 01:15:16.956376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.529 [2024-12-06 01:15:16.956421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.529 qpair failed and we were unable to recover it. 00:37:44.529 [2024-12-06 01:15:16.956580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.529 [2024-12-06 01:15:16.956614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.529 qpair failed and we were unable to recover it. 00:37:44.529 [2024-12-06 01:15:16.956780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.529 [2024-12-06 01:15:16.956828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.529 qpair failed and we were unable to recover it. 00:37:44.529 [2024-12-06 01:15:16.956981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.529 [2024-12-06 01:15:16.957019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.529 qpair failed and we were unable to recover it. 00:37:44.529 [2024-12-06 01:15:16.957160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.529 [2024-12-06 01:15:16.957195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.529 qpair failed and we were unable to recover it. 00:37:44.529 [2024-12-06 01:15:16.957324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.529 [2024-12-06 01:15:16.957383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.529 qpair failed and we were unable to recover it. 
00:37:44.529 [2024-12-06 01:15:16.957528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.529 [2024-12-06 01:15:16.957565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.529 qpair failed and we were unable to recover it. 00:37:44.529 [2024-12-06 01:15:16.957729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.529 [2024-12-06 01:15:16.957763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.529 qpair failed and we were unable to recover it. 00:37:44.529 [2024-12-06 01:15:16.957919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.529 [2024-12-06 01:15:16.957953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.529 qpair failed and we were unable to recover it. 00:37:44.529 [2024-12-06 01:15:16.958088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.529 [2024-12-06 01:15:16.958123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.529 qpair failed and we were unable to recover it. 00:37:44.529 [2024-12-06 01:15:16.958327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.529 [2024-12-06 01:15:16.958360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.529 qpair failed and we were unable to recover it. 00:37:44.529 [2024-12-06 01:15:16.958475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.529 [2024-12-06 01:15:16.958508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.529 qpair failed and we were unable to recover it. 00:37:44.529 [2024-12-06 01:15:16.958664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.529 [2024-12-06 01:15:16.958711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.529 qpair failed and we were unable to recover it. 00:37:44.529 [2024-12-06 01:15:16.958843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.529 [2024-12-06 01:15:16.958886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.529 qpair failed and we were unable to recover it. 00:37:44.529 [2024-12-06 01:15:16.959002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.529 [2024-12-06 01:15:16.959037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.529 qpair failed and we were unable to recover it. 00:37:44.530 [2024-12-06 01:15:16.959150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.530 [2024-12-06 01:15:16.959191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.530 qpair failed and we were unable to recover it. 
00:37:44.530 [2024-12-06 01:15:16.959303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.530 [2024-12-06 01:15:16.959337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.530 qpair failed and we were unable to recover it. 00:37:44.530 [2024-12-06 01:15:16.959481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.530 [2024-12-06 01:15:16.959515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.530 qpair failed and we were unable to recover it. 00:37:44.530 [2024-12-06 01:15:16.959650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.530 [2024-12-06 01:15:16.959685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.530 qpair failed and we were unable to recover it. 00:37:44.530 [2024-12-06 01:15:16.959805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.530 [2024-12-06 01:15:16.959846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.530 qpair failed and we were unable to recover it. 00:37:44.530 [2024-12-06 01:15:16.959959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.530 [2024-12-06 01:15:16.959993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.530 qpair failed and we were unable to recover it. 00:37:44.530 [2024-12-06 01:15:16.960125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.530 [2024-12-06 01:15:16.960158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.530 qpair failed and we were unable to recover it. 00:37:44.530 [2024-12-06 01:15:16.960285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.530 [2024-12-06 01:15:16.960319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.530 qpair failed and we were unable to recover it. 00:37:44.530 [2024-12-06 01:15:16.960437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.530 [2024-12-06 01:15:16.960489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.530 qpair failed and we were unable to recover it. 00:37:44.530 [2024-12-06 01:15:16.960614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.530 [2024-12-06 01:15:16.960668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.530 qpair failed and we were unable to recover it. 00:37:44.530 [2024-12-06 01:15:16.960820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.530 [2024-12-06 01:15:16.960855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.530 qpair failed and we were unable to recover it. 
00:37:44.530 [2024-12-06 01:15:16.960974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.530 [2024-12-06 01:15:16.961008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.530 qpair failed and we were unable to recover it. 00:37:44.530 [2024-12-06 01:15:16.961189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.530 [2024-12-06 01:15:16.961226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.530 qpair failed and we were unable to recover it. 00:37:44.530 [2024-12-06 01:15:16.961379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.530 [2024-12-06 01:15:16.961421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.530 qpair failed and we were unable to recover it. 00:37:44.530 [2024-12-06 01:15:16.961558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.530 [2024-12-06 01:15:16.961593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.530 qpair failed and we were unable to recover it. 00:37:44.530 [2024-12-06 01:15:16.961718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.530 [2024-12-06 01:15:16.961753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.530 qpair failed and we were unable to recover it. 00:37:44.530 [2024-12-06 01:15:16.961913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.530 [2024-12-06 01:15:16.961947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.530 qpair failed and we were unable to recover it. 00:37:44.530 [2024-12-06 01:15:16.962079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.530 [2024-12-06 01:15:16.962113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.530 qpair failed and we were unable to recover it. 00:37:44.530 [2024-12-06 01:15:16.962209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.530 [2024-12-06 01:15:16.962243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.530 qpair failed and we were unable to recover it. 00:37:44.530 [2024-12-06 01:15:16.962352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.530 [2024-12-06 01:15:16.962385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.530 qpair failed and we were unable to recover it. 00:37:44.530 [2024-12-06 01:15:16.962514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.530 [2024-12-06 01:15:16.962548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.530 qpair failed and we were unable to recover it. 
00:37:44.530 [2024-12-06 01:15:16.962691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.530 [2024-12-06 01:15:16.962729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.530 qpair failed and we were unable to recover it. 00:37:44.530 [2024-12-06 01:15:16.962910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.530 [2024-12-06 01:15:16.962945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.530 qpair failed and we were unable to recover it. 00:37:44.530 [2024-12-06 01:15:16.963080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.530 [2024-12-06 01:15:16.963131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.530 qpair failed and we were unable to recover it. 00:37:44.530 [2024-12-06 01:15:16.963290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.530 [2024-12-06 01:15:16.963331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.530 qpair failed and we were unable to recover it. 00:37:44.530 [2024-12-06 01:15:16.963513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.530 [2024-12-06 01:15:16.963547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.530 qpair failed and we were unable to recover it. 00:37:44.530 [2024-12-06 01:15:16.963676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.530 [2024-12-06 01:15:16.963710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.530 qpair failed and we were unable to recover it. 00:37:44.530 [2024-12-06 01:15:16.963848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.530 [2024-12-06 01:15:16.963883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.530 qpair failed and we were unable to recover it. 00:37:44.530 [2024-12-06 01:15:16.964021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.530 [2024-12-06 01:15:16.964059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.530 qpair failed and we were unable to recover it. 00:37:44.530 [2024-12-06 01:15:16.964228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.530 [2024-12-06 01:15:16.964266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.530 qpair failed and we were unable to recover it. 00:37:44.530 [2024-12-06 01:15:16.964423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.530 [2024-12-06 01:15:16.964457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.530 qpair failed and we were unable to recover it. 
00:37:44.530 [2024-12-06 01:15:16.964591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.530 [2024-12-06 01:15:16.964624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.530 qpair failed and we were unable to recover it. 00:37:44.530 [2024-12-06 01:15:16.964838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.531 [2024-12-06 01:15:16.964892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.531 qpair failed and we were unable to recover it. 00:37:44.531 [2024-12-06 01:15:16.965031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.531 [2024-12-06 01:15:16.965065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.531 qpair failed and we were unable to recover it. 00:37:44.531 [2024-12-06 01:15:16.965176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.531 [2024-12-06 01:15:16.965211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.531 qpair failed and we were unable to recover it. 00:37:44.531 [2024-12-06 01:15:16.965341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.531 [2024-12-06 01:15:16.965390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.531 qpair failed and we were unable to recover it. 00:37:44.531 [2024-12-06 01:15:16.965544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.531 [2024-12-06 01:15:16.965583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.531 qpair failed and we were unable to recover it. 00:37:44.531 [2024-12-06 01:15:16.965714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.531 [2024-12-06 01:15:16.965767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.531 qpair failed and we were unable to recover it. 00:37:44.531 [2024-12-06 01:15:16.965959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.531 [2024-12-06 01:15:16.965993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.531 qpair failed and we were unable to recover it. 00:37:44.531 [2024-12-06 01:15:16.966131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.531 [2024-12-06 01:15:16.966168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.531 qpair failed and we were unable to recover it. 00:37:44.531 [2024-12-06 01:15:16.966364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.531 [2024-12-06 01:15:16.966397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.531 qpair failed and we were unable to recover it. 
00:37:44.531 [2024-12-06 01:15:16.966602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.531 [2024-12-06 01:15:16.966663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.531 qpair failed and we were unable to recover it. 00:37:44.531 [2024-12-06 01:15:16.966825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.531 [2024-12-06 01:15:16.966879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.531 qpair failed and we were unable to recover it. 00:37:44.531 [2024-12-06 01:15:16.967030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.531 [2024-12-06 01:15:16.967065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.531 qpair failed and we were unable to recover it. 00:37:44.531 [2024-12-06 01:15:16.967181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.531 [2024-12-06 01:15:16.967232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.531 qpair failed and we were unable to recover it. 00:37:44.531 [2024-12-06 01:15:16.967353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.531 [2024-12-06 01:15:16.967391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.531 qpair failed and we were unable to recover it. 00:37:44.531 [2024-12-06 01:15:16.967522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.531 [2024-12-06 01:15:16.967556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.531 qpair failed and we were unable to recover it. 00:37:44.531 [2024-12-06 01:15:16.967668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.531 [2024-12-06 01:15:16.967701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.531 qpair failed and we were unable to recover it. 00:37:44.531 [2024-12-06 01:15:16.967811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.531 [2024-12-06 01:15:16.967882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.531 qpair failed and we were unable to recover it. 00:37:44.531 [2024-12-06 01:15:16.968046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.531 [2024-12-06 01:15:16.968080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.531 qpair failed and we were unable to recover it. 00:37:44.531 [2024-12-06 01:15:16.968236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.531 [2024-12-06 01:15:16.968273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.531 qpair failed and we were unable to recover it. 
00:37:44.531 [2024-12-06 01:15:16.968453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.531 [2024-12-06 01:15:16.968487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.531 qpair failed and we were unable to recover it. 00:37:44.531 [2024-12-06 01:15:16.968584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.531 [2024-12-06 01:15:16.968618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.531 qpair failed and we were unable to recover it. 00:37:44.531 [2024-12-06 01:15:16.968833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.531 [2024-12-06 01:15:16.968886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.531 qpair failed and we were unable to recover it. 00:37:44.531 [2024-12-06 01:15:16.969023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.531 [2024-12-06 01:15:16.969057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.531 qpair failed and we were unable to recover it. 00:37:44.531 [2024-12-06 01:15:16.969214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.531 [2024-12-06 01:15:16.969247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.531 qpair failed and we were unable to recover it. 00:37:44.531 [2024-12-06 01:15:16.969392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.531 [2024-12-06 01:15:16.969426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.531 qpair failed and we were unable to recover it. 00:37:44.531 [2024-12-06 01:15:16.969629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.531 [2024-12-06 01:15:16.969663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.531 qpair failed and we were unable to recover it. 00:37:44.531 [2024-12-06 01:15:16.969794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.531 [2024-12-06 01:15:16.969834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.531 qpair failed and we were unable to recover it. 00:37:44.531 [2024-12-06 01:15:16.970049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.531 [2024-12-06 01:15:16.970082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.531 qpair failed and we were unable to recover it. 00:37:44.531 [2024-12-06 01:15:16.970301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.531 [2024-12-06 01:15:16.970362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.531 qpair failed and we were unable to recover it. 
00:37:44.531 [2024-12-06 01:15:16.970490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.531 [2024-12-06 01:15:16.970523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.531 qpair failed and we were unable to recover it. 00:37:44.531 [2024-12-06 01:15:16.970626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.531 [2024-12-06 01:15:16.970659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.531 qpair failed and we were unable to recover it. 00:37:44.531 [2024-12-06 01:15:16.970796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.531 [2024-12-06 01:15:16.970860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.531 qpair failed and we were unable to recover it. 00:37:44.532 [2024-12-06 01:15:16.971007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.532 [2024-12-06 01:15:16.971043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.532 qpair failed and we were unable to recover it. 00:37:44.532 [2024-12-06 01:15:16.971170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.532 [2024-12-06 01:15:16.971205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.532 qpair failed and we were unable to recover it. 00:37:44.532 [2024-12-06 01:15:16.971348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.532 [2024-12-06 01:15:16.971382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.532 qpair failed and we were unable to recover it. 00:37:44.532 [2024-12-06 01:15:16.971561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.532 [2024-12-06 01:15:16.971594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.532 qpair failed and we were unable to recover it. 00:37:44.532 [2024-12-06 01:15:16.971746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.532 [2024-12-06 01:15:16.971792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.532 qpair failed and we were unable to recover it. 00:37:44.532 [2024-12-06 01:15:16.971969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.532 [2024-12-06 01:15:16.972003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.532 qpair failed and we were unable to recover it. 00:37:44.532 [2024-12-06 01:15:16.972111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.532 [2024-12-06 01:15:16.972144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.532 qpair failed and we were unable to recover it. 
00:37:44.532 [2024-12-06 01:15:16.972280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.532 [2024-12-06 01:15:16.972338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.532 qpair failed and we were unable to recover it. 00:37:44.532 [2024-12-06 01:15:16.972463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.532 [2024-12-06 01:15:16.972517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.532 qpair failed and we were unable to recover it. 00:37:44.532 [2024-12-06 01:15:16.972651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.532 [2024-12-06 01:15:16.972685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.532 qpair failed and we were unable to recover it. 00:37:44.532 [2024-12-06 01:15:16.972862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.532 [2024-12-06 01:15:16.972915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.532 qpair failed and we were unable to recover it. 00:37:44.532 [2024-12-06 01:15:16.973048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.532 [2024-12-06 01:15:16.973081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.532 qpair failed and we were unable to recover it. 00:37:44.532 [2024-12-06 01:15:16.973204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.532 [2024-12-06 01:15:16.973238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.532 qpair failed and we were unable to recover it. 00:37:44.532 [2024-12-06 01:15:16.973343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.532 [2024-12-06 01:15:16.973377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.532 qpair failed and we were unable to recover it. 00:37:44.532 [2024-12-06 01:15:16.973530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.532 [2024-12-06 01:15:16.973567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.532 qpair failed and we were unable to recover it. 00:37:44.532 [2024-12-06 01:15:16.973693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.532 [2024-12-06 01:15:16.973726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.532 qpair failed and we were unable to recover it. 00:37:44.532 [2024-12-06 01:15:16.973878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.532 [2024-12-06 01:15:16.973913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.532 qpair failed and we were unable to recover it. 
00:37:44.532 [2024-12-06 01:15:16.974020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.532 [2024-12-06 01:15:16.974056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.532 qpair failed and we were unable to recover it. 00:37:44.532 [2024-12-06 01:15:16.974245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.532 [2024-12-06 01:15:16.974279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.532 qpair failed and we were unable to recover it. 00:37:44.532 [2024-12-06 01:15:16.974456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.532 [2024-12-06 01:15:16.974494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.532 qpair failed and we were unable to recover it. 00:37:44.532 [2024-12-06 01:15:16.974651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.532 [2024-12-06 01:15:16.974688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.532 qpair failed and we were unable to recover it. 00:37:44.532 [2024-12-06 01:15:16.974826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.532 [2024-12-06 01:15:16.974860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.532 qpair failed and we were unable to recover it. 00:37:44.532 [2024-12-06 01:15:16.974972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.532 [2024-12-06 01:15:16.975005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.532 qpair failed and we were unable to recover it. 00:37:44.532 [2024-12-06 01:15:16.975116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.532 [2024-12-06 01:15:16.975152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.532 qpair failed and we were unable to recover it. 00:37:44.532 [2024-12-06 01:15:16.975291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.532 [2024-12-06 01:15:16.975324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.532 qpair failed and we were unable to recover it. 00:37:44.532 [2024-12-06 01:15:16.975481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.532 [2024-12-06 01:15:16.975519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.532 qpair failed and we were unable to recover it. 00:37:44.532 [2024-12-06 01:15:16.975672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.532 [2024-12-06 01:15:16.975710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.532 qpair failed and we were unable to recover it. 
00:37:44.532 [2024-12-06 01:15:16.975857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.532 [2024-12-06 01:15:16.975892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.532 qpair failed and we were unable to recover it. 00:37:44.532 [2024-12-06 01:15:16.976006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.532 [2024-12-06 01:15:16.976040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.532 qpair failed and we were unable to recover it. 00:37:44.532 [2024-12-06 01:15:16.976182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.532 [2024-12-06 01:15:16.976215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.532 qpair failed and we were unable to recover it. 00:37:44.532 [2024-12-06 01:15:16.976329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.532 [2024-12-06 01:15:16.976362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.532 qpair failed and we were unable to recover it. 00:37:44.532 [2024-12-06 01:15:16.976478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.532 [2024-12-06 01:15:16.976511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.532 qpair failed and we were unable to recover it. 00:37:44.532 [2024-12-06 01:15:16.976623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.532 [2024-12-06 01:15:16.976657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.532 qpair failed and we were unable to recover it. 00:37:44.532 [2024-12-06 01:15:16.976794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.533 [2024-12-06 01:15:16.976865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.533 qpair failed and we were unable to recover it. 00:37:44.533 [2024-12-06 01:15:16.977009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.533 [2024-12-06 01:15:16.977042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.533 qpair failed and we were unable to recover it. 00:37:44.533 [2024-12-06 01:15:16.977203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.533 [2024-12-06 01:15:16.977237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.533 qpair failed and we were unable to recover it. 00:37:44.533 [2024-12-06 01:15:16.977369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.533 [2024-12-06 01:15:16.977402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.533 qpair failed and we were unable to recover it. 
00:37:44.533 [2024-12-06 01:15:16.977537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.533 [2024-12-06 01:15:16.977570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.533 qpair failed and we were unable to recover it. 00:37:44.533 [2024-12-06 01:15:16.977735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.533 [2024-12-06 01:15:16.977768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.533 qpair failed and we were unable to recover it. 00:37:44.533 [2024-12-06 01:15:16.977891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.533 [2024-12-06 01:15:16.977925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.533 qpair failed and we were unable to recover it. 00:37:44.533 [2024-12-06 01:15:16.978025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.533 [2024-12-06 01:15:16.978058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.533 qpair failed and we were unable to recover it. 00:37:44.533 [2024-12-06 01:15:16.978170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.533 [2024-12-06 01:15:16.978203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.533 qpair failed and we were unable to recover it. 00:37:44.533 [2024-12-06 01:15:16.978321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.533 [2024-12-06 01:15:16.978356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.533 qpair failed and we were unable to recover it. 00:37:44.533 [2024-12-06 01:15:16.978489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.533 [2024-12-06 01:15:16.978523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.533 qpair failed and we were unable to recover it. 00:37:44.533 [2024-12-06 01:15:16.978684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.533 [2024-12-06 01:15:16.978722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.533 qpair failed and we were unable to recover it. 00:37:44.533 [2024-12-06 01:15:16.978844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.533 [2024-12-06 01:15:16.978885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.533 qpair failed and we were unable to recover it. 00:37:44.533 [2024-12-06 01:15:16.978998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.533 [2024-12-06 01:15:16.979032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.533 qpair failed and we were unable to recover it. 
00:37:44.533 [2024-12-06 01:15:16.979189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.533 [2024-12-06 01:15:16.979248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.533 qpair failed and we were unable to recover it. 00:37:44.533 [2024-12-06 01:15:16.979402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.533 [2024-12-06 01:15:16.979438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.533 qpair failed and we were unable to recover it. 00:37:44.533 [2024-12-06 01:15:16.979572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.533 [2024-12-06 01:15:16.979607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.533 qpair failed and we were unable to recover it. 00:37:44.533 [2024-12-06 01:15:16.979746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.533 [2024-12-06 01:15:16.979781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.533 qpair failed and we were unable to recover it. 00:37:44.533 Read completed with error (sct=0, sc=8) 00:37:44.533 starting I/O failed 00:37:44.533 Read completed with error (sct=0, sc=8) 00:37:44.533 starting I/O failed 00:37:44.533 Read completed with error (sct=0, sc=8) 00:37:44.533 starting I/O failed 00:37:44.533 Read completed with error (sct=0, sc=8) 00:37:44.533 starting I/O failed 00:37:44.533 Write completed with error (sct=0, sc=8) 00:37:44.533 starting I/O failed 00:37:44.533 Write completed with error (sct=0, sc=8) 00:37:44.533 starting I/O failed 00:37:44.533 Write completed with error (sct=0, sc=8) 00:37:44.533 starting I/O failed 00:37:44.533 Write completed with error (sct=0, sc=8) 00:37:44.533 starting I/O failed 00:37:44.533 Read completed with error (sct=0, sc=8) 00:37:44.533 starting I/O failed 00:37:44.533 Write completed with error (sct=0, sc=8) 00:37:44.533 starting I/O failed 00:37:44.533 Write completed with error (sct=0, sc=8) 00:37:44.533 starting I/O failed 00:37:44.533 Write completed with error (sct=0, sc=8) 00:37:44.533 starting I/O failed 00:37:44.533 Write completed with error (sct=0, sc=8) 00:37:44.533 starting I/O failed 00:37:44.533 Write completed with error (sct=0, sc=8) 00:37:44.533 starting I/O failed 00:37:44.533 Write completed with error (sct=0, sc=8) 00:37:44.533 starting I/O failed 00:37:44.533 Write completed with error (sct=0, sc=8) 00:37:44.533 starting I/O failed 00:37:44.533 Write completed with error (sct=0, sc=8) 00:37:44.533 starting I/O failed 00:37:44.533 Write completed with error (sct=0, sc=8) 00:37:44.533 starting I/O failed 00:37:44.533 Read completed with error (sct=0, sc=8) 00:37:44.533 starting I/O failed 00:37:44.533 Read completed with error (sct=0, sc=8) 00:37:44.533 starting I/O failed 00:37:44.533 Read completed with error (sct=0, sc=8) 00:37:44.533 starting I/O failed 00:37:44.533 Read completed with error (sct=0, sc=8) 00:37:44.533 starting I/O failed 00:37:44.533 Write completed with error (sct=0, sc=8) 00:37:44.533 starting I/O failed 00:37:44.533 Read completed with error (sct=0, sc=8) 00:37:44.533 starting I/O failed 00:37:44.533 Read completed with error (sct=0, sc=8) 00:37:44.533 starting I/O failed 
00:37:44.533 Read completed with error (sct=0, sc=8) 00:37:44.533 starting I/O failed 00:37:44.533 Write completed with error (sct=0, sc=8) 00:37:44.533 starting I/O failed 00:37:44.533 Read completed with error (sct=0, sc=8) 00:37:44.533 starting I/O failed 00:37:44.533 Read completed with error (sct=0, sc=8) 00:37:44.533 starting I/O failed 00:37:44.533 Read completed with error (sct=0, sc=8) 00:37:44.533 starting I/O failed 00:37:44.533 Read completed with error (sct=0, sc=8) 00:37:44.533 starting I/O failed 00:37:44.533 Read completed with error (sct=0, sc=8) 00:37:44.533 starting I/O failed 00:37:44.534 [2024-12-06 01:15:16.980414] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:37:44.534 [2024-12-06 01:15:16.980620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.534 [2024-12-06 01:15:16.980693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.534 qpair failed and we were unable to recover it. 00:37:44.534 [2024-12-06 01:15:16.980902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.534 [2024-12-06 01:15:16.980940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.534 qpair failed and we were unable to recover it. 00:37:44.534 [2024-12-06 01:15:16.981050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.534 [2024-12-06 01:15:16.981085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.534 qpair failed and we were unable to recover it. 00:37:44.534 [2024-12-06 01:15:16.981235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.534 [2024-12-06 01:15:16.981270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.534 qpair failed and we were unable to recover it. 00:37:44.534 [2024-12-06 01:15:16.981434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.534 [2024-12-06 01:15:16.981468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.534 qpair failed and we were unable to recover it. 00:37:44.534 [2024-12-06 01:15:16.981608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.534 [2024-12-06 01:15:16.981643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.534 qpair failed and we were unable to recover it. 00:37:44.534 [2024-12-06 01:15:16.981822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.534 [2024-12-06 01:15:16.981876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.534 qpair failed and we were unable to recover it. 
00:37:44.534 [2024-12-06 01:15:16.981997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.534 [2024-12-06 01:15:16.982031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.534 qpair failed and we were unable to recover it. 00:37:44.534 [2024-12-06 01:15:16.982181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.534 [2024-12-06 01:15:16.982217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.534 qpair failed and we were unable to recover it. 00:37:44.534 [2024-12-06 01:15:16.982334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.534 [2024-12-06 01:15:16.982368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.534 qpair failed and we were unable to recover it. 00:37:44.534 [2024-12-06 01:15:16.982516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.534 [2024-12-06 01:15:16.982551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.534 qpair failed and we were unable to recover it. 00:37:44.534 [2024-12-06 01:15:16.982670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.534 [2024-12-06 01:15:16.982718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.534 qpair failed and we were unable to recover it. 00:37:44.534 [2024-12-06 01:15:16.982866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.534 [2024-12-06 01:15:16.982903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.534 qpair failed and we were unable to recover it. 00:37:44.534 [2024-12-06 01:15:16.983028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.534 [2024-12-06 01:15:16.983076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.534 qpair failed and we were unable to recover it. 00:37:44.534 [2024-12-06 01:15:16.983204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.534 [2024-12-06 01:15:16.983241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.534 qpair failed and we were unable to recover it. 00:37:44.534 [2024-12-06 01:15:16.983413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.534 [2024-12-06 01:15:16.983453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.534 qpair failed and we were unable to recover it. 00:37:44.534 [2024-12-06 01:15:16.983584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.534 [2024-12-06 01:15:16.983622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.534 qpair failed and we were unable to recover it. 
00:37:44.534 [2024-12-06 01:15:16.983754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.534 [2024-12-06 01:15:16.983791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.534 qpair failed and we were unable to recover it. 00:37:44.534 [2024-12-06 01:15:16.983931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.534 [2024-12-06 01:15:16.983965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.534 qpair failed and we were unable to recover it. 00:37:44.534 [2024-12-06 01:15:16.984097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.534 [2024-12-06 01:15:16.984131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.534 qpair failed and we were unable to recover it. 00:37:44.534 [2024-12-06 01:15:16.984264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.534 [2024-12-06 01:15:16.984298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.534 qpair failed and we were unable to recover it. 00:37:44.534 [2024-12-06 01:15:16.984422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.534 [2024-12-06 01:15:16.984459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.534 qpair failed and we were unable to recover it. 00:37:44.534 [2024-12-06 01:15:16.984622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.534 [2024-12-06 01:15:16.984674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.534 qpair failed and we were unable to recover it. 00:37:44.534 [2024-12-06 01:15:16.984792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.534 [2024-12-06 01:15:16.984842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.534 qpair failed and we were unable to recover it. 00:37:44.534 [2024-12-06 01:15:16.984984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.534 [2024-12-06 01:15:16.985019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.534 qpair failed and we were unable to recover it. 00:37:44.534 [2024-12-06 01:15:16.985156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.534 [2024-12-06 01:15:16.985200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.534 qpair failed and we were unable to recover it. 00:37:44.534 [2024-12-06 01:15:16.985311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.534 [2024-12-06 01:15:16.985349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.534 qpair failed and we were unable to recover it. 
00:37:44.534 [2024-12-06 01:15:16.985529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.534 [2024-12-06 01:15:16.985567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.534 qpair failed and we were unable to recover it. 00:37:44.534 [2024-12-06 01:15:16.985717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.534 [2024-12-06 01:15:16.985755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.534 qpair failed and we were unable to recover it. 00:37:44.534 [2024-12-06 01:15:16.985931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.534 [2024-12-06 01:15:16.985980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.534 qpair failed and we were unable to recover it. 00:37:44.534 [2024-12-06 01:15:16.986150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.534 [2024-12-06 01:15:16.986199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.534 qpair failed and we were unable to recover it. 00:37:44.534 [2024-12-06 01:15:16.986338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.534 [2024-12-06 01:15:16.986373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.534 qpair failed and we were unable to recover it. 00:37:44.535 [2024-12-06 01:15:16.986584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.535 [2024-12-06 01:15:16.986619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.535 qpair failed and we were unable to recover it. 00:37:44.535 [2024-12-06 01:15:16.986766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.535 [2024-12-06 01:15:16.986804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.535 qpair failed and we were unable to recover it. 00:37:44.535 [2024-12-06 01:15:16.986957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.535 [2024-12-06 01:15:16.987004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.535 qpair failed and we were unable to recover it. 00:37:44.535 [2024-12-06 01:15:16.987125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.535 [2024-12-06 01:15:16.987161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.535 qpair failed and we were unable to recover it. 00:37:44.535 [2024-12-06 01:15:16.987407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.535 [2024-12-06 01:15:16.987469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.535 qpair failed and we were unable to recover it. 
00:37:44.535 [2024-12-06 01:15:16.987618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.535 [2024-12-06 01:15:16.987655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.535 qpair failed and we were unable to recover it. 00:37:44.535 [2024-12-06 01:15:16.987888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.535 [2024-12-06 01:15:16.987922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.535 qpair failed and we were unable to recover it. 00:37:44.535 [2024-12-06 01:15:16.988057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.535 [2024-12-06 01:15:16.988090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.535 qpair failed and we were unable to recover it. 00:37:44.535 [2024-12-06 01:15:16.988239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.535 [2024-12-06 01:15:16.988274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.535 qpair failed and we were unable to recover it. 00:37:44.535 [2024-12-06 01:15:16.988402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.535 [2024-12-06 01:15:16.988435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.535 qpair failed and we were unable to recover it. 00:37:44.535 [2024-12-06 01:15:16.988575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.535 [2024-12-06 01:15:16.988609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.535 qpair failed and we were unable to recover it. 00:37:44.535 [2024-12-06 01:15:16.988761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.535 [2024-12-06 01:15:16.988806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.535 qpair failed and we were unable to recover it. 00:37:44.535 [2024-12-06 01:15:16.988993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.535 [2024-12-06 01:15:16.989026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.535 qpair failed and we were unable to recover it. 00:37:44.535 [2024-12-06 01:15:16.989158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.535 [2024-12-06 01:15:16.989191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.535 qpair failed and we were unable to recover it. 00:37:44.535 [2024-12-06 01:15:16.989334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.535 [2024-12-06 01:15:16.989367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.535 qpair failed and we were unable to recover it. 
00:37:44.535 [2024-12-06 01:15:16.989501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.535 [2024-12-06 01:15:16.989542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.535 qpair failed and we were unable to recover it. 00:37:44.535 [2024-12-06 01:15:16.989686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.535 [2024-12-06 01:15:16.989720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.535 qpair failed and we were unable to recover it. 00:37:44.535 [2024-12-06 01:15:16.989858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.535 [2024-12-06 01:15:16.989893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.535 qpair failed and we were unable to recover it. 00:37:44.535 [2024-12-06 01:15:16.990010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.535 [2024-12-06 01:15:16.990044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.535 qpair failed and we were unable to recover it. 00:37:44.535 [2024-12-06 01:15:16.990181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.535 [2024-12-06 01:15:16.990221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.535 qpair failed and we were unable to recover it. 00:37:44.535 [2024-12-06 01:15:16.990392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.535 [2024-12-06 01:15:16.990425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.535 qpair failed and we were unable to recover it. 00:37:44.535 [2024-12-06 01:15:16.990566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.535 [2024-12-06 01:15:16.990600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.535 qpair failed and we were unable to recover it. 00:37:44.535 [2024-12-06 01:15:16.990762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.535 [2024-12-06 01:15:16.990808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.535 qpair failed and we were unable to recover it. 00:37:44.535 [2024-12-06 01:15:16.991000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.535 [2024-12-06 01:15:16.991049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.535 qpair failed and we were unable to recover it. 00:37:44.535 [2024-12-06 01:15:16.991201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.535 [2024-12-06 01:15:16.991248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.535 qpair failed and we were unable to recover it. 
00:37:44.535 [2024-12-06 01:15:16.991407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.535 [2024-12-06 01:15:16.991444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.535 qpair failed and we were unable to recover it. 00:37:44.535 [2024-12-06 01:15:16.991580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.535 [2024-12-06 01:15:16.991615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.535 qpair failed and we were unable to recover it. 00:37:44.535 [2024-12-06 01:15:16.991764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.535 [2024-12-06 01:15:16.991802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.535 qpair failed and we were unable to recover it. 00:37:44.535 [2024-12-06 01:15:16.991975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.535 [2024-12-06 01:15:16.992010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.535 qpair failed and we were unable to recover it. 00:37:44.535 [2024-12-06 01:15:16.992183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.535 [2024-12-06 01:15:16.992223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.535 qpair failed and we were unable to recover it. 00:37:44.535 [2024-12-06 01:15:16.992366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.535 [2024-12-06 01:15:16.992402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.535 qpair failed and we were unable to recover it. 00:37:44.535 [2024-12-06 01:15:16.992579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.535 [2024-12-06 01:15:16.992614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.535 qpair failed and we were unable to recover it. 00:37:44.535 [2024-12-06 01:15:16.992724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.535 [2024-12-06 01:15:16.992759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.535 qpair failed and we were unable to recover it. 00:37:44.536 [2024-12-06 01:15:16.992923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.536 [2024-12-06 01:15:16.992972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.536 qpair failed and we were unable to recover it. 00:37:44.536 [2024-12-06 01:15:16.993090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.536 [2024-12-06 01:15:16.993141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.536 qpair failed and we were unable to recover it. 
00:37:44.536 [2024-12-06 01:15:16.993305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.536 [2024-12-06 01:15:16.993348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.536 qpair failed and we were unable to recover it. 00:37:44.536 [2024-12-06 01:15:16.993519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.536 [2024-12-06 01:15:16.993553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.536 qpair failed and we were unable to recover it. 00:37:44.536 [2024-12-06 01:15:16.993714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.536 [2024-12-06 01:15:16.993747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.536 qpair failed and we were unable to recover it. 00:37:44.536 [2024-12-06 01:15:16.993875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.536 [2024-12-06 01:15:16.993910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.536 qpair failed and we were unable to recover it. 00:37:44.536 [2024-12-06 01:15:16.994017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.536 [2024-12-06 01:15:16.994052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.536 qpair failed and we were unable to recover it. 00:37:44.536 [2024-12-06 01:15:16.994167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.536 [2024-12-06 01:15:16.994202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.536 qpair failed and we were unable to recover it. 00:37:44.536 [2024-12-06 01:15:16.994338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.536 [2024-12-06 01:15:16.994371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.536 qpair failed and we were unable to recover it. 00:37:44.536 [2024-12-06 01:15:16.994537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.536 [2024-12-06 01:15:16.994571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.536 qpair failed and we were unable to recover it. 00:37:44.536 [2024-12-06 01:15:16.994742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.536 [2024-12-06 01:15:16.994777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.536 qpair failed and we were unable to recover it. 00:37:44.536 [2024-12-06 01:15:16.994899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.536 [2024-12-06 01:15:16.994964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.536 qpair failed and we were unable to recover it. 
00:37:44.536 [2024-12-06 01:15:16.995135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.536 [2024-12-06 01:15:16.995177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.536 qpair failed and we were unable to recover it. 00:37:44.536 [2024-12-06 01:15:16.995331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.536 [2024-12-06 01:15:16.995364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.536 qpair failed and we were unable to recover it. 00:37:44.536 [2024-12-06 01:15:16.995531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.536 [2024-12-06 01:15:16.995565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.536 qpair failed and we were unable to recover it. 00:37:44.536 [2024-12-06 01:15:16.995701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.536 [2024-12-06 01:15:16.995735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.536 qpair failed and we were unable to recover it. 00:37:44.536 [2024-12-06 01:15:16.995858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.536 [2024-12-06 01:15:16.995894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.536 qpair failed and we were unable to recover it. 00:37:44.536 [2024-12-06 01:15:16.996007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.536 [2024-12-06 01:15:16.996041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.536 qpair failed and we were unable to recover it. 00:37:44.536 [2024-12-06 01:15:16.996175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.536 [2024-12-06 01:15:16.996210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.536 qpair failed and we were unable to recover it. 00:37:44.536 [2024-12-06 01:15:16.996349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.536 [2024-12-06 01:15:16.996382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.536 qpair failed and we were unable to recover it. 00:37:44.536 [2024-12-06 01:15:16.996555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.536 [2024-12-06 01:15:16.996589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.536 qpair failed and we were unable to recover it. 00:37:44.536 [2024-12-06 01:15:16.996728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.536 [2024-12-06 01:15:16.996762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.536 qpair failed and we were unable to recover it. 
00:37:44.536 [2024-12-06 01:15:16.996885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.536 [2024-12-06 01:15:16.996920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.536 qpair failed and we were unable to recover it. 00:37:44.536 [2024-12-06 01:15:16.997068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.536 [2024-12-06 01:15:16.997126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.536 qpair failed and we were unable to recover it. 00:37:44.536 [2024-12-06 01:15:16.997294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.536 [2024-12-06 01:15:16.997331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.536 qpair failed and we were unable to recover it. 00:37:44.536 [2024-12-06 01:15:16.997463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.536 [2024-12-06 01:15:16.997498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.536 qpair failed and we were unable to recover it. 00:37:44.536 [2024-12-06 01:15:16.997640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.536 [2024-12-06 01:15:16.997675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.536 qpair failed and we were unable to recover it. 00:37:44.536 [2024-12-06 01:15:16.997844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.536 [2024-12-06 01:15:16.997879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.536 qpair failed and we were unable to recover it. 00:37:44.536 [2024-12-06 01:15:16.998097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.536 [2024-12-06 01:15:16.998132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.536 qpair failed and we were unable to recover it. 00:37:44.536 [2024-12-06 01:15:16.998302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.536 [2024-12-06 01:15:16.998337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.536 qpair failed and we were unable to recover it. 00:37:44.536 [2024-12-06 01:15:16.998481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.536 [2024-12-06 01:15:16.998516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.536 qpair failed and we were unable to recover it. 00:37:44.536 [2024-12-06 01:15:16.998627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.536 [2024-12-06 01:15:16.998662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.536 qpair failed and we were unable to recover it. 
00:37:44.536 [2024-12-06 01:15:16.998779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.536 [2024-12-06 01:15:16.998830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.536 qpair failed and we were unable to recover it. 00:37:44.536 [2024-12-06 01:15:16.998939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.536 [2024-12-06 01:15:16.998974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.536 qpair failed and we were unable to recover it. 00:37:44.537 [2024-12-06 01:15:16.999103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.537 [2024-12-06 01:15:16.999139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.537 qpair failed and we were unable to recover it. 00:37:44.537 [2024-12-06 01:15:16.999252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.537 [2024-12-06 01:15:16.999286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.537 qpair failed and we were unable to recover it. 00:37:44.537 [2024-12-06 01:15:16.999421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.537 [2024-12-06 01:15:16.999454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.537 qpair failed and we were unable to recover it. 00:37:44.537 [2024-12-06 01:15:16.999568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.537 [2024-12-06 01:15:16.999609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.537 qpair failed and we were unable to recover it. 00:37:44.537 [2024-12-06 01:15:16.999756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.537 [2024-12-06 01:15:16.999790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.537 qpair failed and we were unable to recover it. 00:37:44.537 [2024-12-06 01:15:16.999909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.537 [2024-12-06 01:15:16.999944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.537 qpair failed and we were unable to recover it. 00:37:44.537 [2024-12-06 01:15:17.000070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.537 [2024-12-06 01:15:17.000115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.537 qpair failed and we were unable to recover it. 00:37:44.537 [2024-12-06 01:15:17.000286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.537 [2024-12-06 01:15:17.000325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.537 qpair failed and we were unable to recover it. 
00:37:44.537 [2024-12-06 01:15:17.000432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.537 [2024-12-06 01:15:17.000465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.537 qpair failed and we were unable to recover it. 00:37:44.537 [2024-12-06 01:15:17.000616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.537 [2024-12-06 01:15:17.000649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.537 qpair failed and we were unable to recover it. 00:37:44.537 [2024-12-06 01:15:17.000770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.537 [2024-12-06 01:15:17.000808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.537 qpair failed and we were unable to recover it. 00:37:44.537 [2024-12-06 01:15:17.001071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.537 [2024-12-06 01:15:17.001126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.537 qpair failed and we were unable to recover it. 00:37:44.537 [2024-12-06 01:15:17.001293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.537 [2024-12-06 01:15:17.001331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.537 qpair failed and we were unable to recover it. 00:37:44.537 [2024-12-06 01:15:17.001549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.537 [2024-12-06 01:15:17.001584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.537 qpair failed and we were unable to recover it. 00:37:44.537 [2024-12-06 01:15:17.001721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.537 [2024-12-06 01:15:17.001756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.537 qpair failed and we were unable to recover it. 00:37:44.537 [2024-12-06 01:15:17.001909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.537 [2024-12-06 01:15:17.001944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.537 qpair failed and we were unable to recover it. 00:37:44.537 [2024-12-06 01:15:17.002082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.537 [2024-12-06 01:15:17.002126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.537 qpair failed and we were unable to recover it. 00:37:44.537 [2024-12-06 01:15:17.002283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.537 [2024-12-06 01:15:17.002318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.537 qpair failed and we were unable to recover it. 
00:37:44.537 [2024-12-06 01:15:17.002452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.537 [2024-12-06 01:15:17.002486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.537 qpair failed and we were unable to recover it. 00:37:44.537 [2024-12-06 01:15:17.002656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.537 [2024-12-06 01:15:17.002690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.537 qpair failed and we were unable to recover it. 00:37:44.537 [2024-12-06 01:15:17.002799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.537 [2024-12-06 01:15:17.002849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.537 qpair failed and we were unable to recover it. 00:37:44.537 [2024-12-06 01:15:17.002975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.537 [2024-12-06 01:15:17.003010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.537 qpair failed and we were unable to recover it. 00:37:44.537 [2024-12-06 01:15:17.003147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.537 [2024-12-06 01:15:17.003180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.537 qpair failed and we were unable to recover it. 00:37:44.537 [2024-12-06 01:15:17.003317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.537 [2024-12-06 01:15:17.003351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.537 qpair failed and we were unable to recover it. 00:37:44.537 [2024-12-06 01:15:17.003488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.537 [2024-12-06 01:15:17.003528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.537 qpair failed and we were unable to recover it. 00:37:44.537 [2024-12-06 01:15:17.003676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.537 [2024-12-06 01:15:17.003710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.537 qpair failed and we were unable to recover it. 00:37:44.537 [2024-12-06 01:15:17.003846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.537 [2024-12-06 01:15:17.003881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.537 qpair failed and we were unable to recover it. 00:37:44.537 [2024-12-06 01:15:17.004017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.537 [2024-12-06 01:15:17.004050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.537 qpair failed and we were unable to recover it. 
00:37:44.537 [2024-12-06 01:15:17.004191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.537 [2024-12-06 01:15:17.004224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.537 qpair failed and we were unable to recover it. 00:37:44.537 [2024-12-06 01:15:17.004360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.537 [2024-12-06 01:15:17.004397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.537 qpair failed and we were unable to recover it. 00:37:44.537 [2024-12-06 01:15:17.004562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.537 [2024-12-06 01:15:17.004596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.537 qpair failed and we were unable to recover it. 00:37:44.537 [2024-12-06 01:15:17.004731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.537 [2024-12-06 01:15:17.004764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.537 qpair failed and we were unable to recover it. 00:37:44.537 [2024-12-06 01:15:17.004936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.537 [2024-12-06 01:15:17.004985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.537 qpair failed and we were unable to recover it. 00:37:44.537 [2024-12-06 01:15:17.005119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.537 [2024-12-06 01:15:17.005157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.537 qpair failed and we were unable to recover it. 00:37:44.537 [2024-12-06 01:15:17.005325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.538 [2024-12-06 01:15:17.005360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.538 qpair failed and we were unable to recover it. 00:37:44.538 [2024-12-06 01:15:17.005471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.538 [2024-12-06 01:15:17.005505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.538 qpair failed and we were unable to recover it. 00:37:44.538 [2024-12-06 01:15:17.005637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.538 [2024-12-06 01:15:17.005672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.538 qpair failed and we were unable to recover it. 00:37:44.538 [2024-12-06 01:15:17.005824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.538 [2024-12-06 01:15:17.005859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.538 qpair failed and we were unable to recover it. 
00:37:44.538 [2024-12-06 01:15:17.006000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.538 [2024-12-06 01:15:17.006035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.538 qpair failed and we were unable to recover it. 00:37:44.538 [2024-12-06 01:15:17.006153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.538 [2024-12-06 01:15:17.006186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.538 qpair failed and we were unable to recover it. 00:37:44.538 [2024-12-06 01:15:17.006300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.538 [2024-12-06 01:15:17.006343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.538 qpair failed and we were unable to recover it. 00:37:44.538 [2024-12-06 01:15:17.006469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.538 [2024-12-06 01:15:17.006502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.538 qpair failed and we were unable to recover it. 00:37:44.538 [2024-12-06 01:15:17.006621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.538 [2024-12-06 01:15:17.006660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.538 qpair failed and we were unable to recover it. 00:37:44.538 [2024-12-06 01:15:17.006859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.538 [2024-12-06 01:15:17.006894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.538 qpair failed and we were unable to recover it. 00:37:44.538 [2024-12-06 01:15:17.007032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.538 [2024-12-06 01:15:17.007067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.538 qpair failed and we were unable to recover it. 00:37:44.538 [2024-12-06 01:15:17.007204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.538 [2024-12-06 01:15:17.007238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.538 qpair failed and we were unable to recover it. 00:37:44.538 [2024-12-06 01:15:17.007393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.538 [2024-12-06 01:15:17.007427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.538 qpair failed and we were unable to recover it. 00:37:44.538 [2024-12-06 01:15:17.007581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.538 [2024-12-06 01:15:17.007624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.538 qpair failed and we were unable to recover it. 
00:37:44.538 [2024-12-06 01:15:17.007735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.538 [2024-12-06 01:15:17.007769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.538 qpair failed and we were unable to recover it. 00:37:44.538 [2024-12-06 01:15:17.007886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.538 [2024-12-06 01:15:17.007921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.538 qpair failed and we were unable to recover it. 00:37:44.538 [2024-12-06 01:15:17.008050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.538 [2024-12-06 01:15:17.008098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.538 qpair failed and we were unable to recover it. 00:37:44.538 [2024-12-06 01:15:17.008237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.538 [2024-12-06 01:15:17.008273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.538 qpair failed and we were unable to recover it. 00:37:44.538 [2024-12-06 01:15:17.008385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.538 [2024-12-06 01:15:17.008431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.538 qpair failed and we were unable to recover it. 00:37:44.538 [2024-12-06 01:15:17.008572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.538 [2024-12-06 01:15:17.008605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.538 qpair failed and we were unable to recover it. 00:37:44.538 [2024-12-06 01:15:17.008775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.538 [2024-12-06 01:15:17.008810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.538 qpair failed and we were unable to recover it. 00:37:44.538 [2024-12-06 01:15:17.008979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.538 [2024-12-06 01:15:17.009029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.538 qpair failed and we were unable to recover it. 00:37:44.538 [2024-12-06 01:15:17.009213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.538 [2024-12-06 01:15:17.009249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.538 qpair failed and we were unable to recover it. 00:37:44.538 [2024-12-06 01:15:17.009368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.538 [2024-12-06 01:15:17.009401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.538 qpair failed and we were unable to recover it. 
00:37:44.538 [2024-12-06 01:15:17.009536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.538 [2024-12-06 01:15:17.009570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.538 qpair failed and we were unable to recover it. 00:37:44.538 [2024-12-06 01:15:17.009731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.538 [2024-12-06 01:15:17.009765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.538 qpair failed and we were unable to recover it. 00:37:44.538 [2024-12-06 01:15:17.009923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.538 [2024-12-06 01:15:17.009958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.538 qpair failed and we were unable to recover it. 00:37:44.538 [2024-12-06 01:15:17.010073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.538 [2024-12-06 01:15:17.010108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.538 qpair failed and we were unable to recover it. 00:37:44.538 [2024-12-06 01:15:17.010242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.538 [2024-12-06 01:15:17.010275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.538 qpair failed and we were unable to recover it. 00:37:44.538 [2024-12-06 01:15:17.010440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.538 [2024-12-06 01:15:17.010473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.538 qpair failed and we were unable to recover it. 00:37:44.538 [2024-12-06 01:15:17.010644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.538 [2024-12-06 01:15:17.010682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.538 qpair failed and we were unable to recover it. 00:37:44.538 [2024-12-06 01:15:17.010822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.538 [2024-12-06 01:15:17.010855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.538 qpair failed and we were unable to recover it. 00:37:44.538 [2024-12-06 01:15:17.010964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.538 [2024-12-06 01:15:17.010997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.538 qpair failed and we were unable to recover it. 00:37:44.538 [2024-12-06 01:15:17.011145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.538 [2024-12-06 01:15:17.011177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.539 qpair failed and we were unable to recover it. 
00:37:44.539 [2024-12-06 01:15:17.011354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.539 [2024-12-06 01:15:17.011401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.539 qpair failed and we were unable to recover it. 00:37:44.539 [2024-12-06 01:15:17.011554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.539 [2024-12-06 01:15:17.011591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.539 qpair failed and we were unable to recover it. 00:37:44.539 [2024-12-06 01:15:17.011940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.539 [2024-12-06 01:15:17.011979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.539 qpair failed and we were unable to recover it. 00:37:44.539 [2024-12-06 01:15:17.012129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.539 [2024-12-06 01:15:17.012163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.539 qpair failed and we were unable to recover it. 00:37:44.539 [2024-12-06 01:15:17.012304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.539 [2024-12-06 01:15:17.012338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.539 qpair failed and we were unable to recover it. 00:37:44.539 [2024-12-06 01:15:17.012490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.539 [2024-12-06 01:15:17.012524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.539 qpair failed and we were unable to recover it. 00:37:44.539 [2024-12-06 01:15:17.012654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.539 [2024-12-06 01:15:17.012692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.539 qpair failed and we were unable to recover it. 00:37:44.539 [2024-12-06 01:15:17.012846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.539 [2024-12-06 01:15:17.012881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.539 qpair failed and we were unable to recover it. 00:37:44.539 [2024-12-06 01:15:17.012994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.539 [2024-12-06 01:15:17.013027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.539 qpair failed and we were unable to recover it. 00:37:44.539 [2024-12-06 01:15:17.013136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.539 [2024-12-06 01:15:17.013181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.539 qpair failed and we were unable to recover it. 
00:37:44.539 [2024-12-06 01:15:17.013295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.539 [2024-12-06 01:15:17.013330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.539 qpair failed and we were unable to recover it. 00:37:44.539 [2024-12-06 01:15:17.013468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.539 [2024-12-06 01:15:17.013502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.539 qpair failed and we were unable to recover it. 00:37:44.539 [2024-12-06 01:15:17.013620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.539 [2024-12-06 01:15:17.013657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.539 qpair failed and we were unable to recover it. 00:37:44.539 [2024-12-06 01:15:17.013793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.539 [2024-12-06 01:15:17.013837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.539 qpair failed and we were unable to recover it. 00:37:44.539 [2024-12-06 01:15:17.013975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.539 [2024-12-06 01:15:17.014010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.539 qpair failed and we were unable to recover it. 00:37:44.539 [2024-12-06 01:15:17.014183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.539 [2024-12-06 01:15:17.014218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.539 qpair failed and we were unable to recover it. 00:37:44.539 [2024-12-06 01:15:17.014339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.539 [2024-12-06 01:15:17.014375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.539 qpair failed and we were unable to recover it. 00:37:44.539 [2024-12-06 01:15:17.014484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.539 [2024-12-06 01:15:17.014518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.539 qpair failed and we were unable to recover it. 00:37:44.539 [2024-12-06 01:15:17.014654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.539 [2024-12-06 01:15:17.014690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.539 qpair failed and we were unable to recover it. 00:37:44.539 [2024-12-06 01:15:17.014827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.539 [2024-12-06 01:15:17.014861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.539 qpair failed and we were unable to recover it. 
00:37:44.539 [2024-12-06 01:15:17.015018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.539 [2024-12-06 01:15:17.015052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.539 qpair failed and we were unable to recover it. 00:37:44.539 [2024-12-06 01:15:17.015198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.539 [2024-12-06 01:15:17.015243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.539 qpair failed and we were unable to recover it. 00:37:44.539 [2024-12-06 01:15:17.015354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.539 [2024-12-06 01:15:17.015388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.539 qpair failed and we were unable to recover it. 00:37:44.539 [2024-12-06 01:15:17.015495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.539 [2024-12-06 01:15:17.015529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.539 qpair failed and we were unable to recover it. 00:37:44.539 [2024-12-06 01:15:17.015643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.539 [2024-12-06 01:15:17.015679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.539 qpair failed and we were unable to recover it. 00:37:44.539 [2024-12-06 01:15:17.015881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.539 [2024-12-06 01:15:17.015917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.539 qpair failed and we were unable to recover it. 00:37:44.539 [2024-12-06 01:15:17.016047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.539 [2024-12-06 01:15:17.016094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.539 qpair failed and we were unable to recover it. 00:37:44.539 [2024-12-06 01:15:17.016240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.539 [2024-12-06 01:15:17.016276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.539 qpair failed and we were unable to recover it. 00:37:44.539 [2024-12-06 01:15:17.016501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.539 [2024-12-06 01:15:17.016535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.539 qpair failed and we were unable to recover it. 00:37:44.539 [2024-12-06 01:15:17.016681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.539 [2024-12-06 01:15:17.016715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.539 qpair failed and we were unable to recover it. 
00:37:44.539 [2024-12-06 01:15:17.016940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.539 [2024-12-06 01:15:17.016976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.539 qpair failed and we were unable to recover it. 00:37:44.540 [2024-12-06 01:15:17.017086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.540 [2024-12-06 01:15:17.017120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.540 qpair failed and we were unable to recover it. 00:37:44.540 [2024-12-06 01:15:17.017255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.540 [2024-12-06 01:15:17.017289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.540 qpair failed and we were unable to recover it. 00:37:44.540 [2024-12-06 01:15:17.017457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.540 [2024-12-06 01:15:17.017518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.540 qpair failed and we were unable to recover it. 00:37:44.540 [2024-12-06 01:15:17.017705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.540 [2024-12-06 01:15:17.017738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.540 qpair failed and we were unable to recover it. 00:37:44.540 [2024-12-06 01:15:17.017908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.540 [2024-12-06 01:15:17.017969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.540 qpair failed and we were unable to recover it. 00:37:44.540 [2024-12-06 01:15:17.018160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.540 [2024-12-06 01:15:17.018217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.540 qpair failed and we were unable to recover it. 00:37:44.540 [2024-12-06 01:15:17.018358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.540 [2024-12-06 01:15:17.018402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.540 qpair failed and we were unable to recover it. 00:37:44.540 [2024-12-06 01:15:17.018573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.540 [2024-12-06 01:15:17.018611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.540 qpair failed and we were unable to recover it. 00:37:44.540 [2024-12-06 01:15:17.018765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.540 [2024-12-06 01:15:17.018803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.540 qpair failed and we were unable to recover it. 
00:37:44.540 [2024-12-06 01:15:17.019009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.540 [2024-12-06 01:15:17.019044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.540 qpair failed and we were unable to recover it. 00:37:44.540 [2024-12-06 01:15:17.019149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.540 [2024-12-06 01:15:17.019182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.540 qpair failed and we were unable to recover it. 00:37:44.540 [2024-12-06 01:15:17.019294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.540 [2024-12-06 01:15:17.019327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.540 qpair failed and we were unable to recover it. 00:37:44.540 [2024-12-06 01:15:17.019464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.540 [2024-12-06 01:15:17.019498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.540 qpair failed and we were unable to recover it. 00:37:44.540 [2024-12-06 01:15:17.019661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.540 [2024-12-06 01:15:17.019699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.540 qpair failed and we were unable to recover it. 00:37:44.540 [2024-12-06 01:15:17.019881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.540 [2024-12-06 01:15:17.019931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.540 qpair failed and we were unable to recover it. 00:37:44.540 [2024-12-06 01:15:17.020078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.540 [2024-12-06 01:15:17.020129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.540 qpair failed and we were unable to recover it. 00:37:44.540 [2024-12-06 01:15:17.020277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.540 [2024-12-06 01:15:17.020314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.540 qpair failed and we were unable to recover it. 00:37:44.540 [2024-12-06 01:15:17.020446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.540 [2024-12-06 01:15:17.020482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.540 qpair failed and we were unable to recover it. 00:37:44.540 [2024-12-06 01:15:17.020657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.540 [2024-12-06 01:15:17.020720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.540 qpair failed and we were unable to recover it. 
00:37:44.540 [2024-12-06 01:15:17.020907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.540 [2024-12-06 01:15:17.020942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.540 qpair failed and we were unable to recover it. 00:37:44.540 [2024-12-06 01:15:17.021083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.540 [2024-12-06 01:15:17.021118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.540 qpair failed and we were unable to recover it. 00:37:44.540 [2024-12-06 01:15:17.021253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.540 [2024-12-06 01:15:17.021287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.540 qpair failed and we were unable to recover it. 00:37:44.540 [2024-12-06 01:15:17.021398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.540 [2024-12-06 01:15:17.021431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.540 qpair failed and we were unable to recover it. 00:37:44.540 [2024-12-06 01:15:17.021573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.540 [2024-12-06 01:15:17.021607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.540 qpair failed and we were unable to recover it. 00:37:44.540 [2024-12-06 01:15:17.021744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.540 [2024-12-06 01:15:17.021782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.540 qpair failed and we were unable to recover it. 00:37:44.540 [2024-12-06 01:15:17.021943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.540 [2024-12-06 01:15:17.021977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.540 qpair failed and we were unable to recover it. 00:37:44.540 [2024-12-06 01:15:17.022087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.540 [2024-12-06 01:15:17.022122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.540 qpair failed and we were unable to recover it. 00:37:44.540 [2024-12-06 01:15:17.022276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.540 [2024-12-06 01:15:17.022310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.540 qpair failed and we were unable to recover it. 00:37:44.540 [2024-12-06 01:15:17.022439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.540 [2024-12-06 01:15:17.022472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.540 qpair failed and we were unable to recover it. 
00:37:44.540 [2024-12-06 01:15:17.022587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.540 [2024-12-06 01:15:17.022622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.540 qpair failed and we were unable to recover it. 00:37:44.540 [2024-12-06 01:15:17.022777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.540 [2024-12-06 01:15:17.022826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.540 qpair failed and we were unable to recover it. 00:37:44.540 [2024-12-06 01:15:17.022985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.540 [2024-12-06 01:15:17.023019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.540 qpair failed and we were unable to recover it. 00:37:44.540 [2024-12-06 01:15:17.023137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.540 [2024-12-06 01:15:17.023174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.540 qpair failed and we were unable to recover it. 00:37:44.540 [2024-12-06 01:15:17.023285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.540 [2024-12-06 01:15:17.023320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.540 qpair failed and we were unable to recover it. 00:37:44.540 [2024-12-06 01:15:17.023477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.541 [2024-12-06 01:15:17.023516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.541 qpair failed and we were unable to recover it. 00:37:44.541 [2024-12-06 01:15:17.023643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.541 [2024-12-06 01:15:17.023683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.541 qpair failed and we were unable to recover it. 00:37:44.541 [2024-12-06 01:15:17.023824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.541 [2024-12-06 01:15:17.023860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.541 qpair failed and we were unable to recover it. 00:37:44.541 [2024-12-06 01:15:17.023991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.541 [2024-12-06 01:15:17.024040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.541 qpair failed and we were unable to recover it. 00:37:44.541 [2024-12-06 01:15:17.024193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.541 [2024-12-06 01:15:17.024228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.541 qpair failed and we were unable to recover it. 
00:37:44.541 [2024-12-06 01:15:17.024453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.541 [2024-12-06 01:15:17.024511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.541 qpair failed and we were unable to recover it. 00:37:44.541 [2024-12-06 01:15:17.024672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.541 [2024-12-06 01:15:17.024706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.541 qpair failed and we were unable to recover it. 00:37:44.541 [2024-12-06 01:15:17.024872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.541 [2024-12-06 01:15:17.024907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.541 qpair failed and we were unable to recover it. 00:37:44.541 [2024-12-06 01:15:17.025049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.541 [2024-12-06 01:15:17.025083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.541 qpair failed and we were unable to recover it. 00:37:44.541 [2024-12-06 01:15:17.025306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.541 [2024-12-06 01:15:17.025340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.541 qpair failed and we were unable to recover it. 00:37:44.541 [2024-12-06 01:15:17.025461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.541 [2024-12-06 01:15:17.025495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.541 qpair failed and we were unable to recover it. 00:37:44.541 [2024-12-06 01:15:17.025666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.541 [2024-12-06 01:15:17.025703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.541 qpair failed and we were unable to recover it. 00:37:44.541 [2024-12-06 01:15:17.025844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.541 [2024-12-06 01:15:17.025896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.541 qpair failed and we were unable to recover it. 00:37:44.541 [2024-12-06 01:15:17.026001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.541 [2024-12-06 01:15:17.026036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.541 qpair failed and we were unable to recover it. 00:37:44.541 [2024-12-06 01:15:17.026176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.541 [2024-12-06 01:15:17.026210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.541 qpair failed and we were unable to recover it. 
00:37:44.541 [2024-12-06 01:15:17.026373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.541 [2024-12-06 01:15:17.026406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.541 qpair failed and we were unable to recover it. 00:37:44.541 [2024-12-06 01:15:17.026539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.541 [2024-12-06 01:15:17.026580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.541 qpair failed and we were unable to recover it. 00:37:44.541 [2024-12-06 01:15:17.026744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.541 [2024-12-06 01:15:17.026781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.541 qpair failed and we were unable to recover it. 00:37:44.541 [2024-12-06 01:15:17.026965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.541 [2024-12-06 01:15:17.027014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.541 qpair failed and we were unable to recover it. 00:37:44.541 [2024-12-06 01:15:17.027185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.541 [2024-12-06 01:15:17.027233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.541 qpair failed and we were unable to recover it. 00:37:44.541 [2024-12-06 01:15:17.027351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.541 [2024-12-06 01:15:17.027389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.541 qpair failed and we were unable to recover it. 00:37:44.541 [2024-12-06 01:15:17.027487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.541 [2024-12-06 01:15:17.027526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.541 qpair failed and we were unable to recover it. 00:37:44.541 [2024-12-06 01:15:17.027670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.541 [2024-12-06 01:15:17.027707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.541 qpair failed and we were unable to recover it. 00:37:44.541 [2024-12-06 01:15:17.027878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.541 [2024-12-06 01:15:17.027912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.541 qpair failed and we were unable to recover it. 00:37:44.541 [2024-12-06 01:15:17.028132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.541 [2024-12-06 01:15:17.028169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.541 qpair failed and we were unable to recover it. 
00:37:44.541 [2024-12-06 01:15:17.028368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.541 [2024-12-06 01:15:17.028410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.541 qpair failed and we were unable to recover it. 
00:37:44.541-00:37:44.548 [... the same three-line failure (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error; "qpair failed and we were unable to recover it") repeats continuously from 01:15:17.028618 through 01:15:17.067592 for tqpairs 0x6150001f2f00, 0x6150001ffe80, 0x61500021ff00 and 0x615000210000, all targeting addr=10.0.0.2, port=4420 ...]
00:37:44.548 [2024-12-06 01:15:17.067736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.548 [2024-12-06 01:15:17.067770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.548 qpair failed and we were unable to recover it. 00:37:44.548 [2024-12-06 01:15:17.067926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.548 [2024-12-06 01:15:17.067961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.548 qpair failed and we were unable to recover it. 00:37:44.548 [2024-12-06 01:15:17.068080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.548 [2024-12-06 01:15:17.068123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.548 qpair failed and we were unable to recover it. 00:37:44.548 [2024-12-06 01:15:17.068262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.548 [2024-12-06 01:15:17.068295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.548 qpair failed and we were unable to recover it. 00:37:44.548 [2024-12-06 01:15:17.068450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.548 [2024-12-06 01:15:17.068487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.548 qpair failed and we were unable to recover it. 00:37:44.548 [2024-12-06 01:15:17.068631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.548 [2024-12-06 01:15:17.068668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.548 qpair failed and we were unable to recover it. 00:37:44.548 [2024-12-06 01:15:17.068788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.548 [2024-12-06 01:15:17.068841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.548 qpair failed and we were unable to recover it. 00:37:44.548 [2024-12-06 01:15:17.069024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.548 [2024-12-06 01:15:17.069063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.548 qpair failed and we were unable to recover it. 00:37:44.548 [2024-12-06 01:15:17.069246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.548 [2024-12-06 01:15:17.069283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.548 qpair failed and we were unable to recover it. 00:37:44.548 [2024-12-06 01:15:17.069429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.548 [2024-12-06 01:15:17.069465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.548 qpair failed and we were unable to recover it. 
00:37:44.548 [2024-12-06 01:15:17.069587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.548 [2024-12-06 01:15:17.069624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.548 qpair failed and we were unable to recover it. 00:37:44.548 [2024-12-06 01:15:17.069784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.548 [2024-12-06 01:15:17.069827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.548 qpair failed and we were unable to recover it. 00:37:44.548 [2024-12-06 01:15:17.069968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.548 [2024-12-06 01:15:17.070002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.548 qpair failed and we were unable to recover it. 00:37:44.548 [2024-12-06 01:15:17.070159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.548 [2024-12-06 01:15:17.070206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.548 qpair failed and we were unable to recover it. 00:37:44.548 [2024-12-06 01:15:17.070327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.548 [2024-12-06 01:15:17.070363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.548 qpair failed and we were unable to recover it. 00:37:44.548 [2024-12-06 01:15:17.070501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.548 [2024-12-06 01:15:17.070536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.548 qpair failed and we were unable to recover it. 00:37:44.548 [2024-12-06 01:15:17.070690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.548 [2024-12-06 01:15:17.070728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.548 qpair failed and we were unable to recover it. 00:37:44.548 [2024-12-06 01:15:17.070876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.548 [2024-12-06 01:15:17.070910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.548 qpair failed and we were unable to recover it. 00:37:44.548 [2024-12-06 01:15:17.071052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.548 [2024-12-06 01:15:17.071086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.548 qpair failed and we were unable to recover it. 00:37:44.548 [2024-12-06 01:15:17.071209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.548 [2024-12-06 01:15:17.071244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.548 qpair failed and we were unable to recover it. 
00:37:44.548 [2024-12-06 01:15:17.071353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.548 [2024-12-06 01:15:17.071387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.548 qpair failed and we were unable to recover it. 00:37:44.548 [2024-12-06 01:15:17.071529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.548 [2024-12-06 01:15:17.071581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.548 qpair failed and we were unable to recover it. 00:37:44.548 [2024-12-06 01:15:17.071715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.548 [2024-12-06 01:15:17.071754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.548 qpair failed and we were unable to recover it. 00:37:44.548 [2024-12-06 01:15:17.071911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.548 [2024-12-06 01:15:17.071945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.548 qpair failed and we were unable to recover it. 00:37:44.548 [2024-12-06 01:15:17.072080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.548 [2024-12-06 01:15:17.072120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.548 qpair failed and we were unable to recover it. 00:37:44.548 [2024-12-06 01:15:17.072259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.548 [2024-12-06 01:15:17.072292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.548 qpair failed and we were unable to recover it. 00:37:44.548 [2024-12-06 01:15:17.072450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.549 [2024-12-06 01:15:17.072493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.549 qpair failed and we were unable to recover it. 00:37:44.549 [2024-12-06 01:15:17.072675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.549 [2024-12-06 01:15:17.072713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.549 qpair failed and we were unable to recover it. 00:37:44.549 [2024-12-06 01:15:17.072873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.549 [2024-12-06 01:15:17.072908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.549 qpair failed and we were unable to recover it. 00:37:44.549 [2024-12-06 01:15:17.073068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.549 [2024-12-06 01:15:17.073105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.549 qpair failed and we were unable to recover it. 
00:37:44.549 [2024-12-06 01:15:17.073245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.549 [2024-12-06 01:15:17.073283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.549 qpair failed and we were unable to recover it. 00:37:44.549 [2024-12-06 01:15:17.073452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.549 [2024-12-06 01:15:17.073489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.549 qpair failed and we were unable to recover it. 00:37:44.549 [2024-12-06 01:15:17.073643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.549 [2024-12-06 01:15:17.073676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.549 qpair failed and we were unable to recover it. 00:37:44.549 [2024-12-06 01:15:17.073813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.549 [2024-12-06 01:15:17.073853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.549 qpair failed and we were unable to recover it. 00:37:44.549 [2024-12-06 01:15:17.073959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.549 [2024-12-06 01:15:17.073994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.549 qpair failed and we were unable to recover it. 00:37:44.549 [2024-12-06 01:15:17.074129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.549 [2024-12-06 01:15:17.074162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.549 qpair failed and we were unable to recover it. 00:37:44.549 [2024-12-06 01:15:17.074298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.549 [2024-12-06 01:15:17.074332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.549 qpair failed and we were unable to recover it. 00:37:44.549 [2024-12-06 01:15:17.074434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.549 [2024-12-06 01:15:17.074468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.549 qpair failed and we were unable to recover it. 00:37:44.549 [2024-12-06 01:15:17.074578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.549 [2024-12-06 01:15:17.074611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.549 qpair failed and we were unable to recover it. 00:37:44.549 [2024-12-06 01:15:17.074768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.549 [2024-12-06 01:15:17.074824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.549 qpair failed and we were unable to recover it. 
00:37:44.549 [2024-12-06 01:15:17.074980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.549 [2024-12-06 01:15:17.075015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.549 qpair failed and we were unable to recover it. 00:37:44.549 [2024-12-06 01:15:17.075129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.549 [2024-12-06 01:15:17.075163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.549 qpair failed and we were unable to recover it. 00:37:44.549 [2024-12-06 01:15:17.075316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.549 [2024-12-06 01:15:17.075349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.549 qpair failed and we were unable to recover it. 00:37:44.549 [2024-12-06 01:15:17.075473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.549 [2024-12-06 01:15:17.075511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.549 qpair failed and we were unable to recover it. 00:37:44.549 [2024-12-06 01:15:17.075656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.549 [2024-12-06 01:15:17.075693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.549 qpair failed and we were unable to recover it. 00:37:44.549 [2024-12-06 01:15:17.075865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.549 [2024-12-06 01:15:17.075901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.549 qpair failed and we were unable to recover it. 00:37:44.549 [2024-12-06 01:15:17.076055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.549 [2024-12-06 01:15:17.076089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.549 qpair failed and we were unable to recover it. 00:37:44.549 [2024-12-06 01:15:17.076216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.549 [2024-12-06 01:15:17.076250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.549 qpair failed and we were unable to recover it. 00:37:44.549 [2024-12-06 01:15:17.076426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.549 [2024-12-06 01:15:17.076479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.549 qpair failed and we were unable to recover it. 00:37:44.549 [2024-12-06 01:15:17.076626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.549 [2024-12-06 01:15:17.076664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.549 qpair failed and we were unable to recover it. 
00:37:44.549 [2024-12-06 01:15:17.076848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.549 [2024-12-06 01:15:17.076898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.549 qpair failed and we were unable to recover it. 00:37:44.549 [2024-12-06 01:15:17.077033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.549 [2024-12-06 01:15:17.077072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.549 qpair failed and we were unable to recover it. 00:37:44.549 [2024-12-06 01:15:17.077200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.549 [2024-12-06 01:15:17.077238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.549 qpair failed and we were unable to recover it. 00:37:44.549 [2024-12-06 01:15:17.077436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.549 [2024-12-06 01:15:17.077501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.549 qpair failed and we were unable to recover it. 00:37:44.550 [2024-12-06 01:15:17.077684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.550 [2024-12-06 01:15:17.077719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.550 qpair failed and we were unable to recover it. 00:37:44.550 [2024-12-06 01:15:17.077862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.550 [2024-12-06 01:15:17.077897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.550 qpair failed and we were unable to recover it. 00:37:44.550 [2024-12-06 01:15:17.078026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.550 [2024-12-06 01:15:17.078064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.550 qpair failed and we were unable to recover it. 00:37:44.550 [2024-12-06 01:15:17.078220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.550 [2024-12-06 01:15:17.078277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.550 qpair failed and we were unable to recover it. 00:37:44.550 [2024-12-06 01:15:17.078390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.550 [2024-12-06 01:15:17.078423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.550 qpair failed and we were unable to recover it. 00:37:44.550 [2024-12-06 01:15:17.078528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.550 [2024-12-06 01:15:17.078562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.550 qpair failed and we were unable to recover it. 
00:37:44.550 [2024-12-06 01:15:17.078677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.550 [2024-12-06 01:15:17.078710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.550 qpair failed and we were unable to recover it. 00:37:44.550 [2024-12-06 01:15:17.078839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.550 [2024-12-06 01:15:17.078893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.550 qpair failed and we were unable to recover it. 00:37:44.550 [2024-12-06 01:15:17.079012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.550 [2024-12-06 01:15:17.079049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.550 qpair failed and we were unable to recover it. 00:37:44.550 [2024-12-06 01:15:17.079271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.550 [2024-12-06 01:15:17.079308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.550 qpair failed and we were unable to recover it. 00:37:44.550 [2024-12-06 01:15:17.079453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.550 [2024-12-06 01:15:17.079490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.550 qpair failed and we were unable to recover it. 00:37:44.550 [2024-12-06 01:15:17.079651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.550 [2024-12-06 01:15:17.079685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.550 qpair failed and we were unable to recover it. 00:37:44.550 [2024-12-06 01:15:17.079810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.550 [2024-12-06 01:15:17.079881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.550 qpair failed and we were unable to recover it. 00:37:44.550 [2024-12-06 01:15:17.080007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.550 [2024-12-06 01:15:17.080051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.550 qpair failed and we were unable to recover it. 00:37:44.550 [2024-12-06 01:15:17.080225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.550 [2024-12-06 01:15:17.080282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.550 qpair failed and we were unable to recover it. 00:37:44.550 [2024-12-06 01:15:17.080462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.550 [2024-12-06 01:15:17.080521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.550 qpair failed and we were unable to recover it. 
00:37:44.550 [2024-12-06 01:15:17.080644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.550 [2024-12-06 01:15:17.080678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.550 qpair failed and we were unable to recover it. 00:37:44.550 [2024-12-06 01:15:17.080826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.550 [2024-12-06 01:15:17.080881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.550 qpair failed and we were unable to recover it. 00:37:44.550 [2024-12-06 01:15:17.081035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.550 [2024-12-06 01:15:17.081073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.550 qpair failed and we were unable to recover it. 00:37:44.550 [2024-12-06 01:15:17.081232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.550 [2024-12-06 01:15:17.081270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.550 qpair failed and we were unable to recover it. 00:37:44.550 [2024-12-06 01:15:17.081465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.550 [2024-12-06 01:15:17.081528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.550 qpair failed and we were unable to recover it. 00:37:44.550 [2024-12-06 01:15:17.081715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.550 [2024-12-06 01:15:17.081748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.550 qpair failed and we were unable to recover it. 00:37:44.550 [2024-12-06 01:15:17.081893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.550 [2024-12-06 01:15:17.081931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.550 qpair failed and we were unable to recover it. 00:37:44.550 [2024-12-06 01:15:17.082073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.550 [2024-12-06 01:15:17.082117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.550 qpair failed and we were unable to recover it. 00:37:44.550 [2024-12-06 01:15:17.082315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.550 [2024-12-06 01:15:17.082352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.550 qpair failed and we were unable to recover it. 00:37:44.550 [2024-12-06 01:15:17.082533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.550 [2024-12-06 01:15:17.082592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.550 qpair failed and we were unable to recover it. 
00:37:44.550 [2024-12-06 01:15:17.082723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.550 [2024-12-06 01:15:17.082757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.550 qpair failed and we were unable to recover it. 00:37:44.550 [2024-12-06 01:15:17.082876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.550 [2024-12-06 01:15:17.082910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.550 qpair failed and we were unable to recover it. 00:37:44.550 [2024-12-06 01:15:17.083046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.550 [2024-12-06 01:15:17.083083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.550 qpair failed and we were unable to recover it. 00:37:44.550 [2024-12-06 01:15:17.083234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.550 [2024-12-06 01:15:17.083272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.550 qpair failed and we were unable to recover it. 00:37:44.550 [2024-12-06 01:15:17.083424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.550 [2024-12-06 01:15:17.083461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.550 qpair failed and we were unable to recover it. 00:37:44.550 [2024-12-06 01:15:17.083586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.550 [2024-12-06 01:15:17.083638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.550 qpair failed and we were unable to recover it. 00:37:44.550 [2024-12-06 01:15:17.083771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.550 [2024-12-06 01:15:17.083808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.550 qpair failed and we were unable to recover it. 00:37:44.550 [2024-12-06 01:15:17.083965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.551 [2024-12-06 01:15:17.083999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.551 qpair failed and we were unable to recover it. 00:37:44.551 [2024-12-06 01:15:17.084120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.551 [2024-12-06 01:15:17.084154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.551 qpair failed and we were unable to recover it. 00:37:44.551 [2024-12-06 01:15:17.084296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.551 [2024-12-06 01:15:17.084330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.551 qpair failed and we were unable to recover it. 
00:37:44.551 [2024-12-06 01:15:17.084460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.551 [2024-12-06 01:15:17.084511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.551 qpair failed and we were unable to recover it. 00:37:44.551 [2024-12-06 01:15:17.084670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.551 [2024-12-06 01:15:17.084704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.551 qpair failed and we were unable to recover it. 00:37:44.551 [2024-12-06 01:15:17.084849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.551 [2024-12-06 01:15:17.084904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.551 qpair failed and we were unable to recover it. 00:37:44.551 [2024-12-06 01:15:17.085113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.551 [2024-12-06 01:15:17.085179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.551 qpair failed and we were unable to recover it. 00:37:44.551 [2024-12-06 01:15:17.085372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.551 [2024-12-06 01:15:17.085436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.551 qpair failed and we were unable to recover it. 00:37:44.551 [2024-12-06 01:15:17.085605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.551 [2024-12-06 01:15:17.085646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.551 qpair failed and we were unable to recover it. 00:37:44.551 [2024-12-06 01:15:17.085781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.551 [2024-12-06 01:15:17.085823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.551 qpair failed and we were unable to recover it. 00:37:44.551 [2024-12-06 01:15:17.085997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.551 [2024-12-06 01:15:17.086034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.551 qpair failed and we were unable to recover it. 00:37:44.551 [2024-12-06 01:15:17.086201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.551 [2024-12-06 01:15:17.086243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.551 qpair failed and we were unable to recover it. 00:37:44.551 [2024-12-06 01:15:17.086372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.551 [2024-12-06 01:15:17.086407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.551 qpair failed and we were unable to recover it. 
00:37:44.551 [2024-12-06 01:15:17.086519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.551 [2024-12-06 01:15:17.086552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.551 qpair failed and we were unable to recover it. 00:37:44.551 [2024-12-06 01:15:17.086660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.551 [2024-12-06 01:15:17.086694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.551 qpair failed and we were unable to recover it. 00:37:44.551 [2024-12-06 01:15:17.086798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.551 [2024-12-06 01:15:17.086863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.551 qpair failed and we were unable to recover it. 00:37:44.551 [2024-12-06 01:15:17.086983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.551 [2024-12-06 01:15:17.087020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.551 qpair failed and we were unable to recover it. 00:37:44.551 [2024-12-06 01:15:17.087142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.551 [2024-12-06 01:15:17.087193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.551 qpair failed and we were unable to recover it. 00:37:44.551 [2024-12-06 01:15:17.087417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.551 [2024-12-06 01:15:17.087455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.551 qpair failed and we were unable to recover it. 00:37:44.551 [2024-12-06 01:15:17.087600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.551 [2024-12-06 01:15:17.087675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.551 qpair failed and we were unable to recover it. 00:37:44.551 [2024-12-06 01:15:17.087795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.551 [2024-12-06 01:15:17.087843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.551 qpair failed and we were unable to recover it. 00:37:44.551 [2024-12-06 01:15:17.087975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.551 [2024-12-06 01:15:17.088028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.551 qpair failed and we were unable to recover it. 00:37:44.551 [2024-12-06 01:15:17.088188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.551 [2024-12-06 01:15:17.088226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.551 qpair failed and we were unable to recover it. 
00:37:44.551 [2024-12-06 01:15:17.088422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.551 [2024-12-06 01:15:17.088456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.551 qpair failed and we were unable to recover it. 00:37:44.551 [2024-12-06 01:15:17.088603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.551 [2024-12-06 01:15:17.088638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.551 qpair failed and we were unable to recover it. 00:37:44.551 [2024-12-06 01:15:17.088775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.551 [2024-12-06 01:15:17.088809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.551 qpair failed and we were unable to recover it. 00:37:44.551 [2024-12-06 01:15:17.088945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.551 [2024-12-06 01:15:17.088983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.551 qpair failed and we were unable to recover it. 00:37:44.551 [2024-12-06 01:15:17.089176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.551 [2024-12-06 01:15:17.089220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.551 qpair failed and we were unable to recover it. 00:37:44.551 [2024-12-06 01:15:17.089340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.551 [2024-12-06 01:15:17.089378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.551 qpair failed and we were unable to recover it. 00:37:44.551 [2024-12-06 01:15:17.089506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.551 [2024-12-06 01:15:17.089544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.551 qpair failed and we were unable to recover it. 00:37:44.551 [2024-12-06 01:15:17.089673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.551 [2024-12-06 01:15:17.089706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.551 qpair failed and we were unable to recover it. 00:37:44.551 [2024-12-06 01:15:17.089829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.551 [2024-12-06 01:15:17.089875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.551 qpair failed and we were unable to recover it. 00:37:44.551 [2024-12-06 01:15:17.090002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.551 [2024-12-06 01:15:17.090041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.551 qpair failed and we were unable to recover it. 
00:37:44.551 [2024-12-06 01:15:17.090253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.551 [2024-12-06 01:15:17.090292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.551 qpair failed and we were unable to recover it. 00:37:44.552 [2024-12-06 01:15:17.090445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.552 [2024-12-06 01:15:17.090483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.552 qpair failed and we were unable to recover it. 00:37:44.552 [2024-12-06 01:15:17.090607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.552 [2024-12-06 01:15:17.090646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.552 qpair failed and we were unable to recover it. 00:37:44.552 [2024-12-06 01:15:17.090806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.552 [2024-12-06 01:15:17.090847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.552 qpair failed and we were unable to recover it. 00:37:44.552 [2024-12-06 01:15:17.090998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.552 [2024-12-06 01:15:17.091037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.552 qpair failed and we were unable to recover it. 00:37:44.552 [2024-12-06 01:15:17.091168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.552 [2024-12-06 01:15:17.091205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.552 qpair failed and we were unable to recover it. 00:37:44.552 [2024-12-06 01:15:17.091408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.552 [2024-12-06 01:15:17.091444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.552 qpair failed and we were unable to recover it. 00:37:44.552 [2024-12-06 01:15:17.091561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.552 [2024-12-06 01:15:17.091610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.552 qpair failed and we were unable to recover it. 00:37:44.552 [2024-12-06 01:15:17.091719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.552 [2024-12-06 01:15:17.091753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.552 qpair failed and we were unable to recover it. 00:37:44.552 [2024-12-06 01:15:17.091861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.552 [2024-12-06 01:15:17.091895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.552 qpair failed and we were unable to recover it. 
00:37:44.552 [2024-12-06 01:15:17.092044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.552 [2024-12-06 01:15:17.092100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.552 qpair failed and we were unable to recover it. 00:37:44.552 [2024-12-06 01:15:17.092235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.552 [2024-12-06 01:15:17.092284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.552 qpair failed and we were unable to recover it. 00:37:44.552 [2024-12-06 01:15:17.092469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.552 [2024-12-06 01:15:17.092506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.552 qpair failed and we were unable to recover it. 00:37:44.552 [2024-12-06 01:15:17.092694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.552 [2024-12-06 01:15:17.092734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.552 qpair failed and we were unable to recover it. 00:37:44.552 [2024-12-06 01:15:17.092886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.552 [2024-12-06 01:15:17.092940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.552 qpair failed and we were unable to recover it. 00:37:44.552 [2024-12-06 01:15:17.093090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.552 [2024-12-06 01:15:17.093128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.552 qpair failed and we were unable to recover it. 00:37:44.552 [2024-12-06 01:15:17.093275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.552 [2024-12-06 01:15:17.093313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.552 qpair failed and we were unable to recover it. 00:37:44.552 [2024-12-06 01:15:17.093512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.552 [2024-12-06 01:15:17.093571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.552 qpair failed and we were unable to recover it. 00:37:44.552 [2024-12-06 01:15:17.093696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.552 [2024-12-06 01:15:17.093731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.552 qpair failed and we were unable to recover it. 00:37:44.552 [2024-12-06 01:15:17.093870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.552 [2024-12-06 01:15:17.093910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.552 qpair failed and we were unable to recover it. 
00:37:44.552 [2024-12-06 01:15:17.094081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.552 [2024-12-06 01:15:17.094134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.552 qpair failed and we were unable to recover it. 00:37:44.552 [2024-12-06 01:15:17.094323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.552 [2024-12-06 01:15:17.094363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.552 qpair failed and we were unable to recover it. 00:37:44.552 [2024-12-06 01:15:17.094533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.552 [2024-12-06 01:15:17.094591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.552 qpair failed and we were unable to recover it. 00:37:44.552 [2024-12-06 01:15:17.094737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.552 [2024-12-06 01:15:17.094775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.552 qpair failed and we were unable to recover it. 00:37:44.552 [2024-12-06 01:15:17.094973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.552 [2024-12-06 01:15:17.095019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.552 qpair failed and we were unable to recover it. 00:37:44.552 [2024-12-06 01:15:17.095260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.552 [2024-12-06 01:15:17.095318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.552 qpair failed and we were unable to recover it. 00:37:44.552 [2024-12-06 01:15:17.095486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.552 [2024-12-06 01:15:17.095559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.552 qpair failed and we were unable to recover it. 00:37:44.552 [2024-12-06 01:15:17.095691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.552 [2024-12-06 01:15:17.095742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.552 qpair failed and we were unable to recover it. 00:37:44.552 [2024-12-06 01:15:17.095884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.552 [2024-12-06 01:15:17.095920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.552 qpair failed and we were unable to recover it. 00:37:44.552 [2024-12-06 01:15:17.096058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.552 [2024-12-06 01:15:17.096115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.552 qpair failed and we were unable to recover it. 
00:37:44.552 [2024-12-06 01:15:17.096279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.552 [2024-12-06 01:15:17.096340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.552 qpair failed and we were unable to recover it. 00:37:44.552 [2024-12-06 01:15:17.096606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.552 [2024-12-06 01:15:17.096664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.552 qpair failed and we were unable to recover it. 00:37:44.552 [2024-12-06 01:15:17.096812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.552 [2024-12-06 01:15:17.096875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.552 qpair failed and we were unable to recover it. 00:37:44.552 [2024-12-06 01:15:17.097015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.552 [2024-12-06 01:15:17.097050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.552 qpair failed and we were unable to recover it. 00:37:44.552 [2024-12-06 01:15:17.097209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.553 [2024-12-06 01:15:17.097244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.553 qpair failed and we were unable to recover it. 00:37:44.553 [2024-12-06 01:15:17.097387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.553 [2024-12-06 01:15:17.097422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.553 qpair failed and we were unable to recover it. 00:37:44.553 [2024-12-06 01:15:17.097609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.553 [2024-12-06 01:15:17.097656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.553 qpair failed and we were unable to recover it. 00:37:44.553 [2024-12-06 01:15:17.097812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.553 [2024-12-06 01:15:17.097857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.553 qpair failed and we were unable to recover it. 00:37:44.553 [2024-12-06 01:15:17.097994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.553 [2024-12-06 01:15:17.098041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.553 qpair failed and we were unable to recover it. 00:37:44.553 [2024-12-06 01:15:17.098192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.553 [2024-12-06 01:15:17.098228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.553 qpair failed and we were unable to recover it. 
00:37:44.553 [2024-12-06 01:15:17.098393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.553 [2024-12-06 01:15:17.098431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.553 qpair failed and we were unable to recover it. 00:37:44.553 [2024-12-06 01:15:17.098603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.553 [2024-12-06 01:15:17.098641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.553 qpair failed and we were unable to recover it. 00:37:44.553 [2024-12-06 01:15:17.098885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.553 [2024-12-06 01:15:17.098920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.553 qpair failed and we were unable to recover it. 00:37:44.553 [2024-12-06 01:15:17.099051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.553 [2024-12-06 01:15:17.099085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.553 qpair failed and we were unable to recover it. 00:37:44.553 [2024-12-06 01:15:17.099259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.553 [2024-12-06 01:15:17.099294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.553 qpair failed and we were unable to recover it. 00:37:44.553 [2024-12-06 01:15:17.099420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.553 [2024-12-06 01:15:17.099455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.553 qpair failed and we were unable to recover it. 00:37:44.553 [2024-12-06 01:15:17.099584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.553 [2024-12-06 01:15:17.099623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.553 qpair failed and we were unable to recover it. 00:37:44.553 [2024-12-06 01:15:17.099736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.553 [2024-12-06 01:15:17.099776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.553 qpair failed and we were unable to recover it. 00:37:44.553 [2024-12-06 01:15:17.099972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.553 [2024-12-06 01:15:17.100021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.553 qpair failed and we were unable to recover it. 00:37:44.553 [2024-12-06 01:15:17.100200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.553 [2024-12-06 01:15:17.100237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.553 qpair failed and we were unable to recover it. 
00:37:44.553 [2024-12-06 01:15:17.100397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.553 [2024-12-06 01:15:17.100431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.553 qpair failed and we were unable to recover it. 00:37:44.553 [2024-12-06 01:15:17.100542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.553 [2024-12-06 01:15:17.100576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.553 qpair failed and we were unable to recover it. 00:37:44.553 [2024-12-06 01:15:17.100759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.553 [2024-12-06 01:15:17.100803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.553 qpair failed and we were unable to recover it. 00:37:44.553 [2024-12-06 01:15:17.100950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.553 [2024-12-06 01:15:17.100998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.553 qpair failed and we were unable to recover it. 00:37:44.553 [2024-12-06 01:15:17.101143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.553 [2024-12-06 01:15:17.101179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.553 qpair failed and we were unable to recover it. 00:37:44.553 [2024-12-06 01:15:17.101304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.553 [2024-12-06 01:15:17.101338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.553 qpair failed and we were unable to recover it. 00:37:44.553 [2024-12-06 01:15:17.101471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.553 [2024-12-06 01:15:17.101515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.553 qpair failed and we were unable to recover it. 00:37:44.553 [2024-12-06 01:15:17.101654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.553 [2024-12-06 01:15:17.101688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.553 qpair failed and we were unable to recover it. 00:37:44.553 [2024-12-06 01:15:17.101784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.553 [2024-12-06 01:15:17.101828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.553 qpair failed and we were unable to recover it. 00:37:44.553 [2024-12-06 01:15:17.101949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.553 [2024-12-06 01:15:17.101983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.553 qpair failed and we were unable to recover it. 
00:37:44.553 [2024-12-06 01:15:17.102129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.553 [2024-12-06 01:15:17.102164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.553 qpair failed and we were unable to recover it. 00:37:44.553 [2024-12-06 01:15:17.102269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.553 [2024-12-06 01:15:17.102302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.553 qpair failed and we were unable to recover it. 00:37:44.553 [2024-12-06 01:15:17.102463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.553 [2024-12-06 01:15:17.102496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.553 qpair failed and we were unable to recover it. 00:37:44.553 [2024-12-06 01:15:17.102626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.553 [2024-12-06 01:15:17.102663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.553 qpair failed and we were unable to recover it. 00:37:44.553 [2024-12-06 01:15:17.102809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.553 [2024-12-06 01:15:17.102850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.553 qpair failed and we were unable to recover it. 00:37:44.553 [2024-12-06 01:15:17.102969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.553 [2024-12-06 01:15:17.103004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.553 qpair failed and we were unable to recover it. 00:37:44.553 [2024-12-06 01:15:17.103109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.553 [2024-12-06 01:15:17.103143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.553 qpair failed and we were unable to recover it. 00:37:44.553 [2024-12-06 01:15:17.103373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.553 [2024-12-06 01:15:17.103428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.553 qpair failed and we were unable to recover it. 00:37:44.553 [2024-12-06 01:15:17.103557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.554 [2024-12-06 01:15:17.103592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.554 qpair failed and we were unable to recover it. 00:37:44.554 [2024-12-06 01:15:17.103737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.554 [2024-12-06 01:15:17.103787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.554 qpair failed and we were unable to recover it. 
00:37:44.554 [2024-12-06 01:15:17.103952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.554 [2024-12-06 01:15:17.103987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.554 qpair failed and we were unable to recover it. 00:37:44.554 [2024-12-06 01:15:17.104124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.554 [2024-12-06 01:15:17.104159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.554 qpair failed and we were unable to recover it. 00:37:44.554 [2024-12-06 01:15:17.104317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.554 [2024-12-06 01:15:17.104352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.554 qpair failed and we were unable to recover it. 00:37:44.554 [2024-12-06 01:15:17.104487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.554 [2024-12-06 01:15:17.104526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.554 qpair failed and we were unable to recover it. 00:37:44.554 [2024-12-06 01:15:17.104696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.554 [2024-12-06 01:15:17.104734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.554 qpair failed and we were unable to recover it. 00:37:44.554 [2024-12-06 01:15:17.104915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.554 [2024-12-06 01:15:17.104951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.554 qpair failed and we were unable to recover it. 00:37:44.554 [2024-12-06 01:15:17.105090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.554 [2024-12-06 01:15:17.105123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.554 qpair failed and we were unable to recover it. 00:37:44.554 [2024-12-06 01:15:17.105253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.554 [2024-12-06 01:15:17.105286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.554 qpair failed and we were unable to recover it. 00:37:44.554 [2024-12-06 01:15:17.105448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.554 [2024-12-06 01:15:17.105481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.554 qpair failed and we were unable to recover it. 00:37:44.554 [2024-12-06 01:15:17.105664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.554 [2024-12-06 01:15:17.105701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.554 qpair failed and we were unable to recover it. 
00:37:44.554 [2024-12-06 01:15:17.105878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.554 [2024-12-06 01:15:17.105912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.554 qpair failed and we were unable to recover it. 00:37:44.554 [2024-12-06 01:15:17.106042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.554 [2024-12-06 01:15:17.106077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.554 qpair failed and we were unable to recover it. 00:37:44.554 [2024-12-06 01:15:17.106249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.554 [2024-12-06 01:15:17.106284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.554 qpair failed and we were unable to recover it. 00:37:44.554 [2024-12-06 01:15:17.106403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.554 [2024-12-06 01:15:17.106439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.554 qpair failed and we were unable to recover it. 00:37:44.554 [2024-12-06 01:15:17.106571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.554 [2024-12-06 01:15:17.106605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.554 qpair failed and we were unable to recover it. 00:37:44.554 [2024-12-06 01:15:17.106750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.554 [2024-12-06 01:15:17.106788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.554 qpair failed and we were unable to recover it. 00:37:44.554 [2024-12-06 01:15:17.106960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.554 [2024-12-06 01:15:17.106995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.554 qpair failed and we were unable to recover it. 00:37:44.554 [2024-12-06 01:15:17.107138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.554 [2024-12-06 01:15:17.107172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.554 qpair failed and we were unable to recover it. 00:37:44.554 [2024-12-06 01:15:17.107281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.554 [2024-12-06 01:15:17.107327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.554 qpair failed and we were unable to recover it. 00:37:44.554 [2024-12-06 01:15:17.107471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.554 [2024-12-06 01:15:17.107506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.554 qpair failed and we were unable to recover it. 
00:37:44.554 [2024-12-06 01:15:17.107670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.554 [2024-12-06 01:15:17.107705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.554 qpair failed and we were unable to recover it. 00:37:44.554 [2024-12-06 01:15:17.107810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.554 [2024-12-06 01:15:17.107851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.554 qpair failed and we were unable to recover it. 00:37:44.554 [2024-12-06 01:15:17.107982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.554 [2024-12-06 01:15:17.108030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.554 qpair failed and we were unable to recover it. 00:37:44.554 [2024-12-06 01:15:17.108176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.554 [2024-12-06 01:15:17.108215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.554 qpair failed and we were unable to recover it. 00:37:44.554 [2024-12-06 01:15:17.108358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.554 [2024-12-06 01:15:17.108393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.554 qpair failed and we were unable to recover it. 00:37:44.554 [2024-12-06 01:15:17.108526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.554 [2024-12-06 01:15:17.108561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.554 qpair failed and we were unable to recover it. 00:37:44.555 [2024-12-06 01:15:17.108671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.555 [2024-12-06 01:15:17.108705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.555 qpair failed and we were unable to recover it. 00:37:44.555 [2024-12-06 01:15:17.108841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.555 [2024-12-06 01:15:17.108877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.555 qpair failed and we were unable to recover it. 00:37:44.555 [2024-12-06 01:15:17.109018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.555 [2024-12-06 01:15:17.109053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.555 qpair failed and we were unable to recover it. 00:37:44.555 [2024-12-06 01:15:17.109189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.555 [2024-12-06 01:15:17.109223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.555 qpair failed and we were unable to recover it. 
00:37:44.555 [2024-12-06 01:15:17.109377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.555 [2024-12-06 01:15:17.109411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.555 qpair failed and we were unable to recover it. 00:37:44.555 [2024-12-06 01:15:17.109505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.555 [2024-12-06 01:15:17.109538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.555 qpair failed and we were unable to recover it. 00:37:44.555 [2024-12-06 01:15:17.109676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.555 [2024-12-06 01:15:17.109710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.555 qpair failed and we were unable to recover it. 00:37:44.555 [2024-12-06 01:15:17.109853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.555 [2024-12-06 01:15:17.109887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.555 qpair failed and we were unable to recover it. 00:37:44.555 [2024-12-06 01:15:17.110026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.555 [2024-12-06 01:15:17.110062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.555 qpair failed and we were unable to recover it. 00:37:44.555 [2024-12-06 01:15:17.110197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.555 [2024-12-06 01:15:17.110231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.555 qpair failed and we were unable to recover it. 00:37:44.555 [2024-12-06 01:15:17.110387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.555 [2024-12-06 01:15:17.110425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.555 qpair failed and we were unable to recover it. 00:37:44.555 [2024-12-06 01:15:17.110599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.555 [2024-12-06 01:15:17.110637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.555 qpair failed and we were unable to recover it. 00:37:44.555 [2024-12-06 01:15:17.110789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.555 [2024-12-06 01:15:17.110834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.555 qpair failed and we were unable to recover it. 00:37:44.555 [2024-12-06 01:15:17.110989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.555 [2024-12-06 01:15:17.111023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.555 qpair failed and we were unable to recover it. 
00:37:44.555 [2024-12-06 01:15:17.111158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.555 [2024-12-06 01:15:17.111193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.555 qpair failed and we were unable to recover it. 00:37:44.555 [2024-12-06 01:15:17.111305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.555 [2024-12-06 01:15:17.111356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.555 qpair failed and we were unable to recover it. 00:37:44.555 [2024-12-06 01:15:17.111578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.555 [2024-12-06 01:15:17.111615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.555 qpair failed and we were unable to recover it. 00:37:44.555 [2024-12-06 01:15:17.111757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.555 [2024-12-06 01:15:17.111807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.555 qpair failed and we were unable to recover it. 00:37:44.555 [2024-12-06 01:15:17.111969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.555 [2024-12-06 01:15:17.112002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.555 qpair failed and we were unable to recover it. 00:37:44.555 [2024-12-06 01:15:17.112142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.555 [2024-12-06 01:15:17.112175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.555 qpair failed and we were unable to recover it. 00:37:44.555 [2024-12-06 01:15:17.112338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.555 [2024-12-06 01:15:17.112372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.555 qpair failed and we were unable to recover it. 00:37:44.555 [2024-12-06 01:15:17.112503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.555 [2024-12-06 01:15:17.112535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.555 qpair failed and we were unable to recover it. 00:37:44.555 [2024-12-06 01:15:17.112662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.555 [2024-12-06 01:15:17.112700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.555 qpair failed and we were unable to recover it. 00:37:44.555 [2024-12-06 01:15:17.112822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.555 [2024-12-06 01:15:17.112873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.555 qpair failed and we were unable to recover it. 
00:37:44.555 [2024-12-06 01:15:17.113029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.555 [2024-12-06 01:15:17.113077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.555 qpair failed and we were unable to recover it. 00:37:44.555 [2024-12-06 01:15:17.113219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.555 [2024-12-06 01:15:17.113255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.555 qpair failed and we were unable to recover it. 00:37:44.555 [2024-12-06 01:15:17.113396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.555 [2024-12-06 01:15:17.113431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.555 qpair failed and we were unable to recover it. 00:37:44.555 [2024-12-06 01:15:17.113567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.555 [2024-12-06 01:15:17.113600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.555 qpair failed and we were unable to recover it. 00:37:44.555 [2024-12-06 01:15:17.113760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.555 [2024-12-06 01:15:17.113798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.555 qpair failed and we were unable to recover it. 00:37:44.555 [2024-12-06 01:15:17.113989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.555 [2024-12-06 01:15:17.114023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.555 qpair failed and we were unable to recover it. 00:37:44.555 [2024-12-06 01:15:17.114193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.555 [2024-12-06 01:15:17.114227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.555 qpair failed and we were unable to recover it. 00:37:44.555 [2024-12-06 01:15:17.114353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.555 [2024-12-06 01:15:17.114387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.555 qpair failed and we were unable to recover it. 00:37:44.555 [2024-12-06 01:15:17.114492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.555 [2024-12-06 01:15:17.114527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.555 qpair failed and we were unable to recover it. 00:37:44.555 [2024-12-06 01:15:17.114665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.555 [2024-12-06 01:15:17.114698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.555 qpair failed and we were unable to recover it. 
00:37:44.555 [2024-12-06 01:15:17.114833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.556 [2024-12-06 01:15:17.114868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.556 qpair failed and we were unable to recover it. 00:37:44.556 [2024-12-06 01:15:17.114969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.556 [2024-12-06 01:15:17.115003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.556 qpair failed and we were unable to recover it. 00:37:44.556 [2024-12-06 01:15:17.115145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.556 [2024-12-06 01:15:17.115180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.556 qpair failed and we were unable to recover it. 00:37:44.556 [2024-12-06 01:15:17.115331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.556 [2024-12-06 01:15:17.115401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.556 qpair failed and we were unable to recover it. 00:37:44.556 [2024-12-06 01:15:17.115551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.556 [2024-12-06 01:15:17.115610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.556 qpair failed and we were unable to recover it. 00:37:44.556 [2024-12-06 01:15:17.115780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.556 [2024-12-06 01:15:17.115824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.556 qpair failed and we were unable to recover it. 00:37:44.556 [2024-12-06 01:15:17.115973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.556 [2024-12-06 01:15:17.116007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.556 qpair failed and we were unable to recover it. 00:37:44.556 [2024-12-06 01:15:17.116130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.556 [2024-12-06 01:15:17.116179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.556 qpair failed and we were unable to recover it. 00:37:44.556 [2024-12-06 01:15:17.116299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.556 [2024-12-06 01:15:17.116337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.556 qpair failed and we were unable to recover it. 00:37:44.556 [2024-12-06 01:15:17.116467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.556 [2024-12-06 01:15:17.116503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.556 qpair failed and we were unable to recover it. 
00:37:44.556 [2024-12-06 01:15:17.116660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.556 [2024-12-06 01:15:17.116695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.556 qpair failed and we were unable to recover it. 00:37:44.556 [2024-12-06 01:15:17.116846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.556 [2024-12-06 01:15:17.116882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.556 qpair failed and we were unable to recover it. 00:37:44.556 [2024-12-06 01:15:17.117042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.556 [2024-12-06 01:15:17.117076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.556 qpair failed and we were unable to recover it. 00:37:44.556 [2024-12-06 01:15:17.117186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.556 [2024-12-06 01:15:17.117221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.556 qpair failed and we were unable to recover it. 00:37:44.556 [2024-12-06 01:15:17.117387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.556 [2024-12-06 01:15:17.117421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.556 qpair failed and we were unable to recover it. 00:37:44.556 [2024-12-06 01:15:17.117582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.556 [2024-12-06 01:15:17.117616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.556 qpair failed and we were unable to recover it. 00:37:44.556 [2024-12-06 01:15:17.117755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.556 [2024-12-06 01:15:17.117790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.556 qpair failed and we were unable to recover it. 00:37:44.556 [2024-12-06 01:15:17.117938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.556 [2024-12-06 01:15:17.117986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.556 qpair failed and we were unable to recover it. 00:37:44.556 [2024-12-06 01:15:17.118148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.556 [2024-12-06 01:15:17.118184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.556 qpair failed and we were unable to recover it. 00:37:44.556 [2024-12-06 01:15:17.118317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.556 [2024-12-06 01:15:17.118351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.556 qpair failed and we were unable to recover it. 
00:37:44.556 [2024-12-06 01:15:17.118487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.556 [2024-12-06 01:15:17.118521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.556 qpair failed and we were unable to recover it. 00:37:44.556 [2024-12-06 01:15:17.118674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.556 [2024-12-06 01:15:17.118711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.556 qpair failed and we were unable to recover it. 00:37:44.556 [2024-12-06 01:15:17.118872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.556 [2024-12-06 01:15:17.118906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.556 qpair failed and we were unable to recover it. 00:37:44.556 [2024-12-06 01:15:17.119037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.556 [2024-12-06 01:15:17.119070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.556 qpair failed and we were unable to recover it. 00:37:44.556 [2024-12-06 01:15:17.119246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.556 [2024-12-06 01:15:17.119284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.556 qpair failed and we were unable to recover it. 00:37:44.556 [2024-12-06 01:15:17.119402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.556 [2024-12-06 01:15:17.119439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.556 qpair failed and we were unable to recover it. 00:37:44.556 [2024-12-06 01:15:17.119595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.556 [2024-12-06 01:15:17.119633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.556 qpair failed and we were unable to recover it. 00:37:44.556 [2024-12-06 01:15:17.119755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.556 [2024-12-06 01:15:17.119789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.556 qpair failed and we were unable to recover it. 00:37:44.556 [2024-12-06 01:15:17.119940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.556 [2024-12-06 01:15:17.119973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.556 qpair failed and we were unable to recover it. 00:37:44.556 [2024-12-06 01:15:17.120105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.556 [2024-12-06 01:15:17.120138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.556 qpair failed and we were unable to recover it. 
00:37:44.556 [2024-12-06 01:15:17.120289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.556 [2024-12-06 01:15:17.120323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.556 qpair failed and we were unable to recover it. 00:37:44.556 [2024-12-06 01:15:17.120451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.556 [2024-12-06 01:15:17.120484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.556 qpair failed and we were unable to recover it. 00:37:44.556 [2024-12-06 01:15:17.120660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.556 [2024-12-06 01:15:17.120696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.556 qpair failed and we were unable to recover it. 00:37:44.556 [2024-12-06 01:15:17.120840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.556 [2024-12-06 01:15:17.120874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.556 qpair failed and we were unable to recover it. 00:37:44.556 [2024-12-06 01:15:17.120993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.556 [2024-12-06 01:15:17.121027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.556 qpair failed and we were unable to recover it. 00:37:44.556 [2024-12-06 01:15:17.121161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.556 [2024-12-06 01:15:17.121195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.556 qpair failed and we were unable to recover it. 00:37:44.556 [2024-12-06 01:15:17.121303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.557 [2024-12-06 01:15:17.121337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.557 qpair failed and we were unable to recover it. 00:37:44.557 [2024-12-06 01:15:17.121476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.557 [2024-12-06 01:15:17.121509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.557 qpair failed and we were unable to recover it. 00:37:44.557 [2024-12-06 01:15:17.121673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.557 [2024-12-06 01:15:17.121707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.557 qpair failed and we were unable to recover it. 00:37:44.557 [2024-12-06 01:15:17.121872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.557 [2024-12-06 01:15:17.121906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.557 qpair failed and we were unable to recover it. 
00:37:44.557 [2024-12-06 01:15:17.122040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.557 [2024-12-06 01:15:17.122073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.557 qpair failed and we were unable to recover it. 00:37:44.557 [2024-12-06 01:15:17.122172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.557 [2024-12-06 01:15:17.122206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.557 qpair failed and we were unable to recover it. 00:37:44.557 [2024-12-06 01:15:17.122346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.557 [2024-12-06 01:15:17.122380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.557 qpair failed and we were unable to recover it. 00:37:44.557 [2024-12-06 01:15:17.122524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.557 [2024-12-06 01:15:17.122563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.557 qpair failed and we were unable to recover it. 00:37:44.557 [2024-12-06 01:15:17.122693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.557 [2024-12-06 01:15:17.122727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.557 qpair failed and we were unable to recover it. 00:37:44.557 [2024-12-06 01:15:17.122888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.557 [2024-12-06 01:15:17.122922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.557 qpair failed and we were unable to recover it. 00:37:44.557 [2024-12-06 01:15:17.123061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.557 [2024-12-06 01:15:17.123096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.557 qpair failed and we were unable to recover it. 00:37:44.557 [2024-12-06 01:15:17.123256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.557 [2024-12-06 01:15:17.123290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.557 qpair failed and we were unable to recover it. 00:37:44.557 [2024-12-06 01:15:17.123423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.557 [2024-12-06 01:15:17.123456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.557 qpair failed and we were unable to recover it. 00:37:44.557 [2024-12-06 01:15:17.123615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.557 [2024-12-06 01:15:17.123648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.557 qpair failed and we were unable to recover it. 
00:37:44.557 [2024-12-06 01:15:17.123792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.557 [2024-12-06 01:15:17.123833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.557 qpair failed and we were unable to recover it. 00:37:44.557 [2024-12-06 01:15:17.123997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.557 [2024-12-06 01:15:17.124030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.557 qpair failed and we were unable to recover it. 00:37:44.557 [2024-12-06 01:15:17.124134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.557 [2024-12-06 01:15:17.124168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.557 qpair failed and we were unable to recover it. 00:37:44.557 [2024-12-06 01:15:17.124276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.557 [2024-12-06 01:15:17.124309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.557 qpair failed and we were unable to recover it. 00:37:44.557 [2024-12-06 01:15:17.124458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.557 [2024-12-06 01:15:17.124491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.557 qpair failed and we were unable to recover it. 00:37:44.557 [2024-12-06 01:15:17.124628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.557 [2024-12-06 01:15:17.124661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.557 qpair failed and we were unable to recover it. 00:37:44.557 [2024-12-06 01:15:17.124822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.557 [2024-12-06 01:15:17.124870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.557 qpair failed and we were unable to recover it. 00:37:44.557 [2024-12-06 01:15:17.125026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.557 [2024-12-06 01:15:17.125063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.557 qpair failed and we were unable to recover it. 00:37:44.557 [2024-12-06 01:15:17.125207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.557 [2024-12-06 01:15:17.125242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.557 qpair failed and we were unable to recover it. 00:37:44.557 [2024-12-06 01:15:17.125375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.557 [2024-12-06 01:15:17.125410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.557 qpair failed and we were unable to recover it. 
00:37:44.557 [2024-12-06 01:15:17.125553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.557 [2024-12-06 01:15:17.125587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.557 qpair failed and we were unable to recover it. 00:37:44.557 [2024-12-06 01:15:17.125721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.557 [2024-12-06 01:15:17.125754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.557 qpair failed and we were unable to recover it. 00:37:44.557 [2024-12-06 01:15:17.125901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.557 [2024-12-06 01:15:17.125936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.557 qpair failed and we were unable to recover it. 00:37:44.557 [2024-12-06 01:15:17.126063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.557 [2024-12-06 01:15:17.126097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.557 qpair failed and we were unable to recover it. 00:37:44.557 [2024-12-06 01:15:17.126263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.557 [2024-12-06 01:15:17.126296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.557 qpair failed and we were unable to recover it. 00:37:44.557 [2024-12-06 01:15:17.126433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.557 [2024-12-06 01:15:17.126467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.557 qpair failed and we were unable to recover it. 00:37:44.557 [2024-12-06 01:15:17.126599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.557 [2024-12-06 01:15:17.126633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.557 qpair failed and we were unable to recover it. 00:37:44.557 [2024-12-06 01:15:17.126776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.557 [2024-12-06 01:15:17.126809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.557 qpair failed and we were unable to recover it. 00:37:44.558 [2024-12-06 01:15:17.126987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.558 [2024-12-06 01:15:17.127021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.558 qpair failed and we were unable to recover it. 00:37:44.558 [2024-12-06 01:15:17.127182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.558 [2024-12-06 01:15:17.127215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.558 qpair failed and we were unable to recover it. 
00:37:44.558 [2024-12-06 01:15:17.127345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.558 [2024-12-06 01:15:17.127379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.558 qpair failed and we were unable to recover it. 00:37:44.558 [2024-12-06 01:15:17.127488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.558 [2024-12-06 01:15:17.127523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.558 qpair failed and we were unable to recover it. 00:37:44.558 [2024-12-06 01:15:17.127630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.558 [2024-12-06 01:15:17.127663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.558 qpair failed and we were unable to recover it. 00:37:44.558 [2024-12-06 01:15:17.127839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.558 [2024-12-06 01:15:17.127888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.558 qpair failed and we were unable to recover it. 00:37:44.558 [2024-12-06 01:15:17.128032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.558 [2024-12-06 01:15:17.128066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.558 qpair failed and we were unable to recover it. 00:37:44.558 [2024-12-06 01:15:17.128184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.558 [2024-12-06 01:15:17.128218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.558 qpair failed and we were unable to recover it. 00:37:44.558 [2024-12-06 01:15:17.128326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.558 [2024-12-06 01:15:17.128359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.558 qpair failed and we were unable to recover it. 00:37:44.558 [2024-12-06 01:15:17.128461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.558 [2024-12-06 01:15:17.128494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.558 qpair failed and we were unable to recover it. 00:37:44.558 [2024-12-06 01:15:17.128608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.558 [2024-12-06 01:15:17.128641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.558 qpair failed and we were unable to recover it. 00:37:44.558 [2024-12-06 01:15:17.128787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.558 [2024-12-06 01:15:17.128832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.558 qpair failed and we were unable to recover it. 
00:37:44.558 [2024-12-06 01:15:17.128970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.558 [2024-12-06 01:15:17.129004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.558 qpair failed and we were unable to recover it. 00:37:44.558 [2024-12-06 01:15:17.129164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.558 [2024-12-06 01:15:17.129198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.558 qpair failed and we were unable to recover it. 00:37:44.558 [2024-12-06 01:15:17.129360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.558 [2024-12-06 01:15:17.129393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.558 qpair failed and we were unable to recover it. 00:37:44.558 [2024-12-06 01:15:17.129558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.558 [2024-12-06 01:15:17.129597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.558 qpair failed and we were unable to recover it. 00:37:44.558 [2024-12-06 01:15:17.129740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.558 [2024-12-06 01:15:17.129774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.558 qpair failed and we were unable to recover it. 00:37:44.558 [2024-12-06 01:15:17.129918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.558 [2024-12-06 01:15:17.129952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.558 qpair failed and we were unable to recover it. 00:37:44.558 [2024-12-06 01:15:17.130055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.558 [2024-12-06 01:15:17.130088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.558 qpair failed and we were unable to recover it. 00:37:44.558 [2024-12-06 01:15:17.130225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.558 [2024-12-06 01:15:17.130259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.558 qpair failed and we were unable to recover it. 00:37:44.558 [2024-12-06 01:15:17.130372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.558 [2024-12-06 01:15:17.130407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.558 qpair failed and we were unable to recover it. 00:37:44.558 [2024-12-06 01:15:17.130550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.558 [2024-12-06 01:15:17.130584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.558 qpair failed and we were unable to recover it. 
00:37:44.558 [2024-12-06 01:15:17.130734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.558 [2024-12-06 01:15:17.130771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.558 qpair failed and we were unable to recover it. 00:37:44.558 [2024-12-06 01:15:17.130965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.558 [2024-12-06 01:15:17.130999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.558 qpair failed and we were unable to recover it. 00:37:44.558 [2024-12-06 01:15:17.131103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.558 [2024-12-06 01:15:17.131137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.558 qpair failed and we were unable to recover it. 00:37:44.558 [2024-12-06 01:15:17.131264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.558 [2024-12-06 01:15:17.131298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.558 qpair failed and we were unable to recover it. 00:37:44.558 [2024-12-06 01:15:17.131427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.558 [2024-12-06 01:15:17.131460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.558 qpair failed and we were unable to recover it. 00:37:44.558 [2024-12-06 01:15:17.131623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.558 [2024-12-06 01:15:17.131657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.558 qpair failed and we were unable to recover it. 00:37:44.558 [2024-12-06 01:15:17.131755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.558 [2024-12-06 01:15:17.131789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.558 qpair failed and we were unable to recover it. 00:37:44.558 [2024-12-06 01:15:17.131958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.558 [2024-12-06 01:15:17.132006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.558 qpair failed and we were unable to recover it. 00:37:44.558 [2024-12-06 01:15:17.132170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.558 [2024-12-06 01:15:17.132218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.558 qpair failed and we were unable to recover it. 00:37:44.558 [2024-12-06 01:15:17.132388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.558 [2024-12-06 01:15:17.132424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.558 qpair failed and we were unable to recover it. 
00:37:44.558 [2024-12-06 01:15:17.132556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.558 [2024-12-06 01:15:17.132591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.558 qpair failed and we were unable to recover it. 00:37:44.558 [2024-12-06 01:15:17.132751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.558 [2024-12-06 01:15:17.132787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.558 qpair failed and we were unable to recover it. 00:37:44.558 [2024-12-06 01:15:17.132929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.559 [2024-12-06 01:15:17.132976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.559 qpair failed and we were unable to recover it. 00:37:44.559 [2024-12-06 01:15:17.133164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.559 [2024-12-06 01:15:17.133198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.559 qpair failed and we were unable to recover it. 00:37:44.559 [2024-12-06 01:15:17.133330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.559 [2024-12-06 01:15:17.133363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.559 qpair failed and we were unable to recover it. 00:37:44.559 [2024-12-06 01:15:17.133503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.559 [2024-12-06 01:15:17.133536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.559 qpair failed and we were unable to recover it. 00:37:44.559 [2024-12-06 01:15:17.133654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.559 [2024-12-06 01:15:17.133690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.559 qpair failed and we were unable to recover it. 00:37:44.559 [2024-12-06 01:15:17.133831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.559 [2024-12-06 01:15:17.133882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.559 qpair failed and we were unable to recover it. 00:37:44.559 [2024-12-06 01:15:17.134014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.559 [2024-12-06 01:15:17.134062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.559 qpair failed and we were unable to recover it. 00:37:44.559 [2024-12-06 01:15:17.134255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.559 [2024-12-06 01:15:17.134297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.559 qpair failed and we were unable to recover it. 
00:37:44.559 [2024-12-06 01:15:17.134489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.559 [2024-12-06 01:15:17.134588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.559 qpair failed and we were unable to recover it. 00:37:44.559 [2024-12-06 01:15:17.134739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.559 [2024-12-06 01:15:17.134777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.559 qpair failed and we were unable to recover it. 00:37:44.559 [2024-12-06 01:15:17.134943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.559 [2024-12-06 01:15:17.134977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.559 qpair failed and we were unable to recover it. 00:37:44.559 [2024-12-06 01:15:17.135096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.559 [2024-12-06 01:15:17.135130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.559 qpair failed and we were unable to recover it. 00:37:44.559 [2024-12-06 01:15:17.135250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.559 [2024-12-06 01:15:17.135284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.559 qpair failed and we were unable to recover it. 00:37:44.559 [2024-12-06 01:15:17.135420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.559 [2024-12-06 01:15:17.135458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.559 qpair failed and we were unable to recover it. 00:37:44.559 [2024-12-06 01:15:17.135609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.559 [2024-12-06 01:15:17.135646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.559 qpair failed and we were unable to recover it. 00:37:44.559 [2024-12-06 01:15:17.135807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.559 [2024-12-06 01:15:17.135857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.559 qpair failed and we were unable to recover it. 00:37:44.559 [2024-12-06 01:15:17.135991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.559 [2024-12-06 01:15:17.136026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.559 qpair failed and we were unable to recover it. 00:37:44.559 [2024-12-06 01:15:17.136139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.559 [2024-12-06 01:15:17.136174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.559 qpair failed and we were unable to recover it. 
00:37:44.559 [2024-12-06 01:15:17.136282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.559 [2024-12-06 01:15:17.136317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.559 qpair failed and we were unable to recover it. 00:37:44.559 [2024-12-06 01:15:17.136487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.559 [2024-12-06 01:15:17.136525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.559 qpair failed and we were unable to recover it. 00:37:44.559 [2024-12-06 01:15:17.136675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.559 [2024-12-06 01:15:17.136718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.559 qpair failed and we were unable to recover it. 00:37:44.559 [2024-12-06 01:15:17.136886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.559 [2024-12-06 01:15:17.136925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.559 qpair failed and we were unable to recover it. 00:37:44.559 [2024-12-06 01:15:17.137036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.559 [2024-12-06 01:15:17.137069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.559 qpair failed and we were unable to recover it. 00:37:44.559 [2024-12-06 01:15:17.137203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.559 [2024-12-06 01:15:17.137238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.559 qpair failed and we were unable to recover it. 00:37:44.559 [2024-12-06 01:15:17.137405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.559 [2024-12-06 01:15:17.137438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.559 qpair failed and we were unable to recover it. 00:37:44.559 [2024-12-06 01:15:17.137549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.559 [2024-12-06 01:15:17.137587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.559 qpair failed and we were unable to recover it. 00:37:44.559 [2024-12-06 01:15:17.137719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.559 [2024-12-06 01:15:17.137773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.559 qpair failed and we were unable to recover it. 00:37:44.559 [2024-12-06 01:15:17.137967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.559 [2024-12-06 01:15:17.138016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.559 qpair failed and we were unable to recover it. 
00:37:44.559 [2024-12-06 01:15:17.138141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.559 [2024-12-06 01:15:17.138178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.559 qpair failed and we were unable to recover it. 00:37:44.559 [2024-12-06 01:15:17.138315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.560 [2024-12-06 01:15:17.138350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.560 qpair failed and we were unable to recover it. 00:37:44.560 [2024-12-06 01:15:17.138480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.560 [2024-12-06 01:15:17.138514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.560 qpair failed and we were unable to recover it. 00:37:44.560 [2024-12-06 01:15:17.138694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.560 [2024-12-06 01:15:17.138733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.560 qpair failed and we were unable to recover it. 00:37:44.560 [2024-12-06 01:15:17.138872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.560 [2024-12-06 01:15:17.138935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.560 qpair failed and we were unable to recover it. 00:37:44.560 [2024-12-06 01:15:17.139111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.560 [2024-12-06 01:15:17.139149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.560 qpair failed and we were unable to recover it. 00:37:44.560 [2024-12-06 01:15:17.139274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.560 [2024-12-06 01:15:17.139312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.560 qpair failed and we were unable to recover it. 00:37:44.560 [2024-12-06 01:15:17.139479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.560 [2024-12-06 01:15:17.139536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.560 qpair failed and we were unable to recover it. 00:37:44.560 [2024-12-06 01:15:17.139678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.560 [2024-12-06 01:15:17.139715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.560 qpair failed and we were unable to recover it. 00:37:44.560 [2024-12-06 01:15:17.139874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.560 [2024-12-06 01:15:17.139908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.560 qpair failed and we were unable to recover it. 
00:37:44.560 [2024-12-06 01:15:17.140006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.560 [2024-12-06 01:15:17.140039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.560 qpair failed and we were unable to recover it. 00:37:44.560 [2024-12-06 01:15:17.140153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.560 [2024-12-06 01:15:17.140187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.560 qpair failed and we were unable to recover it. 00:37:44.560 [2024-12-06 01:15:17.140340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.560 [2024-12-06 01:15:17.140377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.560 qpair failed and we were unable to recover it. 00:37:44.560 [2024-12-06 01:15:17.140546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.560 [2024-12-06 01:15:17.140583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.560 qpair failed and we were unable to recover it. 00:37:44.560 [2024-12-06 01:15:17.140730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.560 [2024-12-06 01:15:17.140767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.560 qpair failed and we were unable to recover it. 00:37:44.560 [2024-12-06 01:15:17.140920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.560 [2024-12-06 01:15:17.140956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.560 qpair failed and we were unable to recover it. 00:37:44.560 [2024-12-06 01:15:17.141082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.560 [2024-12-06 01:15:17.141131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.560 qpair failed and we were unable to recover it. 00:37:44.560 [2024-12-06 01:15:17.141292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.560 [2024-12-06 01:15:17.141333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.560 qpair failed and we were unable to recover it. 00:37:44.560 [2024-12-06 01:15:17.141561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.560 [2024-12-06 01:15:17.141601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.560 qpair failed and we were unable to recover it. 00:37:44.560 [2024-12-06 01:15:17.141760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.560 [2024-12-06 01:15:17.141794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.560 qpair failed and we were unable to recover it. 
00:37:44.560 [2024-12-06 01:15:17.141957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.560 [2024-12-06 01:15:17.141992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.560 qpair failed and we were unable to recover it. 00:37:44.560 [2024-12-06 01:15:17.142105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.560 [2024-12-06 01:15:17.142140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.560 qpair failed and we were unable to recover it. 00:37:44.560 [2024-12-06 01:15:17.142276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.560 [2024-12-06 01:15:17.142312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.560 qpair failed and we were unable to recover it. 00:37:44.560 [2024-12-06 01:15:17.142471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.560 [2024-12-06 01:15:17.142506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.560 qpair failed and we were unable to recover it. 00:37:44.560 [2024-12-06 01:15:17.142664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.560 [2024-12-06 01:15:17.142714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.560 qpair failed and we were unable to recover it. 00:37:44.560 [2024-12-06 01:15:17.142855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.560 [2024-12-06 01:15:17.142901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.560 qpair failed and we were unable to recover it. 00:37:44.560 [2024-12-06 01:15:17.143067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.560 [2024-12-06 01:15:17.143101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.560 qpair failed and we were unable to recover it. 00:37:44.560 [2024-12-06 01:15:17.143235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.560 [2024-12-06 01:15:17.143268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.560 qpair failed and we were unable to recover it. 00:37:44.560 [2024-12-06 01:15:17.143406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.560 [2024-12-06 01:15:17.143459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.560 qpair failed and we were unable to recover it. 00:37:44.560 [2024-12-06 01:15:17.143631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.560 [2024-12-06 01:15:17.143668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.560 qpair failed and we were unable to recover it. 
00:37:44.560 [2024-12-06 01:15:17.143797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.560 [2024-12-06 01:15:17.143837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.560 qpair failed and we were unable to recover it. 00:37:44.560 [2024-12-06 01:15:17.143978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.560 [2024-12-06 01:15:17.144015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.560 qpair failed and we were unable to recover it. 00:37:44.560 [2024-12-06 01:15:17.144134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.560 [2024-12-06 01:15:17.144182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.560 qpair failed and we were unable to recover it. 00:37:44.560 [2024-12-06 01:15:17.144390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.560 [2024-12-06 01:15:17.144436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.560 qpair failed and we were unable to recover it. 00:37:44.560 [2024-12-06 01:15:17.144605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.560 [2024-12-06 01:15:17.144674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.560 qpair failed and we were unable to recover it. 00:37:44.560 [2024-12-06 01:15:17.144850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.561 [2024-12-06 01:15:17.144901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.561 qpair failed and we were unable to recover it. 00:37:44.561 [2024-12-06 01:15:17.145039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.561 [2024-12-06 01:15:17.145073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.561 qpair failed and we were unable to recover it. 00:37:44.561 [2024-12-06 01:15:17.145210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.561 [2024-12-06 01:15:17.145262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.561 qpair failed and we were unable to recover it. 00:37:44.561 [2024-12-06 01:15:17.145431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.561 [2024-12-06 01:15:17.145494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.561 qpair failed and we were unable to recover it. 00:37:44.561 [2024-12-06 01:15:17.145608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.561 [2024-12-06 01:15:17.145668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.561 qpair failed and we were unable to recover it. 
00:37:44.561 [2024-12-06 01:15:17.145862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.561 [2024-12-06 01:15:17.145899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.561 qpair failed and we were unable to recover it. 00:37:44.561 [2024-12-06 01:15:17.146046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.561 [2024-12-06 01:15:17.146080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.561 qpair failed and we were unable to recover it. 00:37:44.561 [2024-12-06 01:15:17.146183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.561 [2024-12-06 01:15:17.146217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.561 qpair failed and we were unable to recover it. 00:37:44.561 [2024-12-06 01:15:17.146382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.561 [2024-12-06 01:15:17.146416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.561 qpair failed and we were unable to recover it. 00:37:44.561 [2024-12-06 01:15:17.146529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.561 [2024-12-06 01:15:17.146562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.561 qpair failed and we were unable to recover it. 00:37:44.561 [2024-12-06 01:15:17.146719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.561 [2024-12-06 01:15:17.146757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.561 qpair failed and we were unable to recover it. 00:37:44.561 [2024-12-06 01:15:17.146943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.561 [2024-12-06 01:15:17.146991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.561 qpair failed and we were unable to recover it. 00:37:44.561 [2024-12-06 01:15:17.147147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.561 [2024-12-06 01:15:17.147183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.561 qpair failed and we were unable to recover it. 00:37:44.561 [2024-12-06 01:15:17.147288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.561 [2024-12-06 01:15:17.147323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.561 qpair failed and we were unable to recover it. 00:37:44.561 [2024-12-06 01:15:17.147435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.561 [2024-12-06 01:15:17.147468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.561 qpair failed and we were unable to recover it. 
00:37:44.561 [2024-12-06 01:15:17.147583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.561 [2024-12-06 01:15:17.147618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.561 qpair failed and we were unable to recover it. 00:37:44.561 [2024-12-06 01:15:17.147759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.561 [2024-12-06 01:15:17.147797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.561 qpair failed and we were unable to recover it. 00:37:44.561 [2024-12-06 01:15:17.147992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.561 [2024-12-06 01:15:17.148026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.561 qpair failed and we were unable to recover it. 00:37:44.561 [2024-12-06 01:15:17.148161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.561 [2024-12-06 01:15:17.148194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.561 qpair failed and we were unable to recover it. 00:37:44.561 [2024-12-06 01:15:17.148349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.561 [2024-12-06 01:15:17.148386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.561 qpair failed and we were unable to recover it. 00:37:44.561 [2024-12-06 01:15:17.148562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.561 [2024-12-06 01:15:17.148601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.561 qpair failed and we were unable to recover it. 00:37:44.561 [2024-12-06 01:15:17.148740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.561 [2024-12-06 01:15:17.148774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.561 qpair failed and we were unable to recover it. 00:37:44.561 [2024-12-06 01:15:17.148912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.561 [2024-12-06 01:15:17.148946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.561 qpair failed and we were unable to recover it. 00:37:44.561 [2024-12-06 01:15:17.149060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.561 [2024-12-06 01:15:17.149094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.561 qpair failed and we were unable to recover it. 00:37:44.561 [2024-12-06 01:15:17.149227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.561 [2024-12-06 01:15:17.149261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.561 qpair failed and we were unable to recover it. 
00:37:44.561 [2024-12-06 01:15:17.149447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.561 [2024-12-06 01:15:17.149505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.561 qpair failed and we were unable to recover it. 00:37:44.561 [2024-12-06 01:15:17.149666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.561 [2024-12-06 01:15:17.149716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.561 qpair failed and we were unable to recover it. 00:37:44.561 [2024-12-06 01:15:17.149838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.561 [2024-12-06 01:15:17.149874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.561 qpair failed and we were unable to recover it. 00:37:44.561 [2024-12-06 01:15:17.149980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.561 [2024-12-06 01:15:17.150014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.561 qpair failed and we were unable to recover it. 00:37:44.561 [2024-12-06 01:15:17.150126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.561 [2024-12-06 01:15:17.150159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.561 qpair failed and we were unable to recover it. 00:37:44.561 [2024-12-06 01:15:17.150296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.561 [2024-12-06 01:15:17.150330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.561 qpair failed and we were unable to recover it. 00:37:44.561 [2024-12-06 01:15:17.150510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.561 [2024-12-06 01:15:17.150575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.561 qpair failed and we were unable to recover it. 00:37:44.561 [2024-12-06 01:15:17.150721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.561 [2024-12-06 01:15:17.150759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.561 qpair failed and we were unable to recover it. 00:37:44.561 [2024-12-06 01:15:17.150926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.562 [2024-12-06 01:15:17.150960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.562 qpair failed and we were unable to recover it. 00:37:44.562 [2024-12-06 01:15:17.151090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.562 [2024-12-06 01:15:17.151123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.562 qpair failed and we were unable to recover it. 
00:37:44.562 [2024-12-06 01:15:17.151254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.562 [2024-12-06 01:15:17.151288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.562 qpair failed and we were unable to recover it. 00:37:44.562 [2024-12-06 01:15:17.151430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.562 [2024-12-06 01:15:17.151463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.562 qpair failed and we were unable to recover it. 00:37:44.562 [2024-12-06 01:15:17.151582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.562 [2024-12-06 01:15:17.151616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.562 qpair failed and we were unable to recover it. 00:37:44.562 [2024-12-06 01:15:17.151748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.562 [2024-12-06 01:15:17.151786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.562 qpair failed and we were unable to recover it. 00:37:44.562 [2024-12-06 01:15:17.151943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.562 [2024-12-06 01:15:17.151992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.562 qpair failed and we were unable to recover it. 00:37:44.562 [2024-12-06 01:15:17.152133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.562 [2024-12-06 01:15:17.152170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.562 qpair failed and we were unable to recover it. 00:37:44.562 [2024-12-06 01:15:17.152316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.562 [2024-12-06 01:15:17.152352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.562 qpair failed and we were unable to recover it. 00:37:44.562 [2024-12-06 01:15:17.152498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.562 [2024-12-06 01:15:17.152533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.562 qpair failed and we were unable to recover it. 00:37:44.562 [2024-12-06 01:15:17.152747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.562 [2024-12-06 01:15:17.152782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.562 qpair failed and we were unable to recover it. 00:37:44.562 [2024-12-06 01:15:17.152935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.562 [2024-12-06 01:15:17.152983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.562 qpair failed and we were unable to recover it. 
00:37:44.562 [2024-12-06 01:15:17.153093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.562 [2024-12-06 01:15:17.153130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.562 qpair failed and we were unable to recover it. 00:37:44.562 [2024-12-06 01:15:17.153275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.562 [2024-12-06 01:15:17.153309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.562 qpair failed and we were unable to recover it. 00:37:44.562 [2024-12-06 01:15:17.153451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.562 [2024-12-06 01:15:17.153484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.562 qpair failed and we were unable to recover it. 00:37:44.562 [2024-12-06 01:15:17.153619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.562 [2024-12-06 01:15:17.153655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.562 qpair failed and we were unable to recover it. 00:37:44.562 [2024-12-06 01:15:17.153789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.562 [2024-12-06 01:15:17.153838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.562 qpair failed and we were unable to recover it. 00:37:44.562 [2024-12-06 01:15:17.153985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.562 [2024-12-06 01:15:17.154020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.562 qpair failed and we were unable to recover it. 00:37:44.562 [2024-12-06 01:15:17.154152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.562 [2024-12-06 01:15:17.154185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.562 qpair failed and we were unable to recover it. 00:37:44.562 [2024-12-06 01:15:17.154320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.562 [2024-12-06 01:15:17.154354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.562 qpair failed and we were unable to recover it. 00:37:44.562 [2024-12-06 01:15:17.154455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.562 [2024-12-06 01:15:17.154489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.562 qpair failed and we were unable to recover it. 00:37:44.562 [2024-12-06 01:15:17.154662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.562 [2024-12-06 01:15:17.154697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.562 qpair failed and we were unable to recover it. 
00:37:44.562 [2024-12-06 01:15:17.154804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.562 [2024-12-06 01:15:17.154846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.562 qpair failed and we were unable to recover it. 00:37:44.562 [2024-12-06 01:15:17.155030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.562 [2024-12-06 01:15:17.155065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.562 qpair failed and we were unable to recover it. 00:37:44.562 [2024-12-06 01:15:17.155200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.562 [2024-12-06 01:15:17.155233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.562 qpair failed and we were unable to recover it. 00:37:44.562 [2024-12-06 01:15:17.155365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.562 [2024-12-06 01:15:17.155398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.562 qpair failed and we were unable to recover it. 00:37:44.562 [2024-12-06 01:15:17.155534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.562 [2024-12-06 01:15:17.155567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.562 qpair failed and we were unable to recover it. 00:37:44.562 [2024-12-06 01:15:17.155717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.562 [2024-12-06 01:15:17.155751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.562 qpair failed and we were unable to recover it. 00:37:44.562 [2024-12-06 01:15:17.155890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.562 [2024-12-06 01:15:17.155924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.562 qpair failed and we were unable to recover it. 00:37:44.562 [2024-12-06 01:15:17.156050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.562 [2024-12-06 01:15:17.156099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.562 qpair failed and we were unable to recover it. 00:37:44.562 [2024-12-06 01:15:17.156279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.562 [2024-12-06 01:15:17.156316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.562 qpair failed and we were unable to recover it. 00:37:44.562 [2024-12-06 01:15:17.156455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.562 [2024-12-06 01:15:17.156491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.562 qpair failed and we were unable to recover it. 
00:37:44.562 [2024-12-06 01:15:17.156653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.562 [2024-12-06 01:15:17.156688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.562 qpair failed and we were unable to recover it. 00:37:44.562 [2024-12-06 01:15:17.156830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.562 [2024-12-06 01:15:17.156866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.562 qpair failed and we were unable to recover it. 00:37:44.563 [2024-12-06 01:15:17.156975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.563 [2024-12-06 01:15:17.157008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.563 qpair failed and we were unable to recover it. 00:37:44.563 [2024-12-06 01:15:17.157137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.563 [2024-12-06 01:15:17.157170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.563 qpair failed and we were unable to recover it. 00:37:44.563 [2024-12-06 01:15:17.157280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.563 [2024-12-06 01:15:17.157315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.563 qpair failed and we were unable to recover it. 00:37:44.563 [2024-12-06 01:15:17.157461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.563 [2024-12-06 01:15:17.157494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.563 qpair failed and we were unable to recover it. 00:37:44.563 [2024-12-06 01:15:17.157655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.563 [2024-12-06 01:15:17.157691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.563 qpair failed and we were unable to recover it. 00:37:44.563 [2024-12-06 01:15:17.157794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.563 [2024-12-06 01:15:17.157835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.563 qpair failed and we were unable to recover it. 00:37:44.563 [2024-12-06 01:15:17.157977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.563 [2024-12-06 01:15:17.158011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.563 qpair failed and we were unable to recover it. 00:37:44.563 [2024-12-06 01:15:17.158110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.563 [2024-12-06 01:15:17.158143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.563 qpair failed and we were unable to recover it. 
00:37:44.563 [2024-12-06 01:15:17.158288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.563 [2024-12-06 01:15:17.158321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.563 qpair failed and we were unable to recover it. 00:37:44.563 [2024-12-06 01:15:17.158453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.563 [2024-12-06 01:15:17.158488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.563 qpair failed and we were unable to recover it. 00:37:44.563 [2024-12-06 01:15:17.158606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.563 [2024-12-06 01:15:17.158641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.563 qpair failed and we were unable to recover it. 00:37:44.563 [2024-12-06 01:15:17.158794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.563 [2024-12-06 01:15:17.158859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.563 qpair failed and we were unable to recover it. 00:37:44.563 [2024-12-06 01:15:17.159038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.563 [2024-12-06 01:15:17.159075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.563 qpair failed and we were unable to recover it. 00:37:44.563 [2024-12-06 01:15:17.159219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.563 [2024-12-06 01:15:17.159256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.563 qpair failed and we were unable to recover it. 00:37:44.563 [2024-12-06 01:15:17.159398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.563 [2024-12-06 01:15:17.159432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.563 qpair failed and we were unable to recover it. 00:37:44.563 [2024-12-06 01:15:17.159570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.563 [2024-12-06 01:15:17.159604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.563 qpair failed and we were unable to recover it. 00:37:44.563 [2024-12-06 01:15:17.159771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.563 [2024-12-06 01:15:17.159806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.563 qpair failed and we were unable to recover it. 00:37:44.563 [2024-12-06 01:15:17.159951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.563 [2024-12-06 01:15:17.159984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.563 qpair failed and we were unable to recover it. 
00:37:44.563 [2024-12-06 01:15:17.160116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.563 [2024-12-06 01:15:17.160149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.563 qpair failed and we were unable to recover it. 00:37:44.563 [2024-12-06 01:15:17.160331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.563 [2024-12-06 01:15:17.160366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.563 qpair failed and we were unable to recover it. 00:37:44.563 [2024-12-06 01:15:17.160477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.563 [2024-12-06 01:15:17.160511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.563 qpair failed and we were unable to recover it. 00:37:44.563 [2024-12-06 01:15:17.160662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.563 [2024-12-06 01:15:17.160699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.563 qpair failed and we were unable to recover it. 00:37:44.563 [2024-12-06 01:15:17.160912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.563 [2024-12-06 01:15:17.160961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.563 qpair failed and we were unable to recover it. 00:37:44.563 [2024-12-06 01:15:17.161145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.563 [2024-12-06 01:15:17.161194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.563 qpair failed and we were unable to recover it. 00:37:44.563 [2024-12-06 01:15:17.161332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.563 [2024-12-06 01:15:17.161368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.563 qpair failed and we were unable to recover it. 00:37:44.563 [2024-12-06 01:15:17.161517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.563 [2024-12-06 01:15:17.161551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.563 qpair failed and we were unable to recover it. 00:37:44.563 [2024-12-06 01:15:17.161685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.563 [2024-12-06 01:15:17.161720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.563 qpair failed and we were unable to recover it. 00:37:44.563 [2024-12-06 01:15:17.161858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.563 [2024-12-06 01:15:17.161893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.563 qpair failed and we were unable to recover it. 
00:37:44.563 [2024-12-06 01:15:17.162055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.563 [2024-12-06 01:15:17.162090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.563 qpair failed and we were unable to recover it. 00:37:44.563 [2024-12-06 01:15:17.162228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.563 [2024-12-06 01:15:17.162262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.563 qpair failed and we were unable to recover it. 00:37:44.563 [2024-12-06 01:15:17.162400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.563 [2024-12-06 01:15:17.162434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.563 qpair failed and we were unable to recover it. 00:37:44.563 [2024-12-06 01:15:17.162541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.563 [2024-12-06 01:15:17.162574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.563 qpair failed and we were unable to recover it. 00:37:44.563 [2024-12-06 01:15:17.162707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.563 [2024-12-06 01:15:17.162741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.564 qpair failed and we were unable to recover it. 00:37:44.564 [2024-12-06 01:15:17.162850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.564 [2024-12-06 01:15:17.162890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.564 qpair failed and we were unable to recover it. 00:37:44.564 [2024-12-06 01:15:17.163025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.564 [2024-12-06 01:15:17.163059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.564 qpair failed and we were unable to recover it. 00:37:44.564 [2024-12-06 01:15:17.163226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.564 [2024-12-06 01:15:17.163260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.564 qpair failed and we were unable to recover it. 00:37:44.564 [2024-12-06 01:15:17.163409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.564 [2024-12-06 01:15:17.163442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.564 qpair failed and we were unable to recover it. 00:37:44.564 [2024-12-06 01:15:17.163614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.564 [2024-12-06 01:15:17.163647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.564 qpair failed and we were unable to recover it. 
00:37:44.564 [2024-12-06 01:15:17.163785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.564 [2024-12-06 01:15:17.163825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.564 qpair failed and we were unable to recover it. 00:37:44.564 [2024-12-06 01:15:17.163976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.564 [2024-12-06 01:15:17.164010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.564 qpair failed and we were unable to recover it. 00:37:44.564 [2024-12-06 01:15:17.164158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.564 [2024-12-06 01:15:17.164193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.564 qpair failed and we were unable to recover it. 00:37:44.564 [2024-12-06 01:15:17.164325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.564 [2024-12-06 01:15:17.164359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.564 qpair failed and we were unable to recover it. 00:37:44.564 [2024-12-06 01:15:17.164518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.564 [2024-12-06 01:15:17.164552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.564 qpair failed and we were unable to recover it. 00:37:44.564 [2024-12-06 01:15:17.164706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.564 [2024-12-06 01:15:17.164738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.564 qpair failed and we were unable to recover it. 00:37:44.564 [2024-12-06 01:15:17.164905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.564 [2024-12-06 01:15:17.164939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.564 qpair failed and we were unable to recover it. 00:37:44.564 [2024-12-06 01:15:17.165041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.564 [2024-12-06 01:15:17.165075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.564 qpair failed and we were unable to recover it. 00:37:44.564 [2024-12-06 01:15:17.165206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.564 [2024-12-06 01:15:17.165240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.564 qpair failed and we were unable to recover it. 00:37:44.564 [2024-12-06 01:15:17.165348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.564 [2024-12-06 01:15:17.165381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.564 qpair failed and we were unable to recover it. 
00:37:44.564 [2024-12-06 01:15:17.165541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.564 [2024-12-06 01:15:17.165574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.564 qpair failed and we were unable to recover it. 00:37:44.564 [2024-12-06 01:15:17.165713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.564 [2024-12-06 01:15:17.165747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.564 qpair failed and we were unable to recover it. 00:37:44.564 [2024-12-06 01:15:17.165887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.564 [2024-12-06 01:15:17.165922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.564 qpair failed and we were unable to recover it. 00:37:44.564 [2024-12-06 01:15:17.166086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.564 [2024-12-06 01:15:17.166123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.564 qpair failed and we were unable to recover it. 00:37:44.564 [2024-12-06 01:15:17.166283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.564 [2024-12-06 01:15:17.166317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.564 qpair failed and we were unable to recover it. 00:37:44.564 [2024-12-06 01:15:17.166455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.564 [2024-12-06 01:15:17.166489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.564 qpair failed and we were unable to recover it. 00:37:44.564 [2024-12-06 01:15:17.166668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.564 [2024-12-06 01:15:17.166719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.564 qpair failed and we were unable to recover it. 00:37:44.564 [2024-12-06 01:15:17.166895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.564 [2024-12-06 01:15:17.166929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.564 qpair failed and we were unable to recover it. 00:37:44.564 [2024-12-06 01:15:17.167087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.564 [2024-12-06 01:15:17.167121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.564 qpair failed and we were unable to recover it. 00:37:44.564 [2024-12-06 01:15:17.167287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.564 [2024-12-06 01:15:17.167321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.564 qpair failed and we were unable to recover it. 
00:37:44.564 [2024-12-06 01:15:17.167478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.564 [2024-12-06 01:15:17.167511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.564 qpair failed and we were unable to recover it. 00:37:44.564 [2024-12-06 01:15:17.167643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.564 [2024-12-06 01:15:17.167676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.564 qpair failed and we were unable to recover it. 00:37:44.564 [2024-12-06 01:15:17.167825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.564 [2024-12-06 01:15:17.167859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.564 qpair failed and we were unable to recover it. 00:37:44.565 [2024-12-06 01:15:17.167969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.565 [2024-12-06 01:15:17.168002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.565 qpair failed and we were unable to recover it. 00:37:44.565 [2024-12-06 01:15:17.168110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.565 [2024-12-06 01:15:17.168144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.565 qpair failed and we were unable to recover it. 00:37:44.565 [2024-12-06 01:15:17.168301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.565 [2024-12-06 01:15:17.168334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.565 qpair failed and we were unable to recover it. 00:37:44.565 [2024-12-06 01:15:17.168465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.565 [2024-12-06 01:15:17.168499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.565 qpair failed and we were unable to recover it. 00:37:44.565 [2024-12-06 01:15:17.168669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.565 [2024-12-06 01:15:17.168702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.565 qpair failed and we were unable to recover it. 00:37:44.565 [2024-12-06 01:15:17.168806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.565 [2024-12-06 01:15:17.168850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.565 qpair failed and we were unable to recover it. 00:37:44.565 [2024-12-06 01:15:17.168979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.565 [2024-12-06 01:15:17.169013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.565 qpair failed and we were unable to recover it. 
00:37:44.565 [2024-12-06 01:15:17.169183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.565 [2024-12-06 01:15:17.169216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.565 qpair failed and we were unable to recover it. 00:37:44.565 [2024-12-06 01:15:17.169362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.565 [2024-12-06 01:15:17.169395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.565 qpair failed and we were unable to recover it. 00:37:44.565 [2024-12-06 01:15:17.169537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.565 [2024-12-06 01:15:17.169570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.565 qpair failed and we were unable to recover it. 00:37:44.565 [2024-12-06 01:15:17.169705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.565 [2024-12-06 01:15:17.169738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.565 qpair failed and we were unable to recover it. 00:37:44.565 [2024-12-06 01:15:17.169902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.565 [2024-12-06 01:15:17.169936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.565 qpair failed and we were unable to recover it. 00:37:44.565 [2024-12-06 01:15:17.170080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.565 [2024-12-06 01:15:17.170115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.565 qpair failed and we were unable to recover it. 00:37:44.565 [2024-12-06 01:15:17.170240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.565 [2024-12-06 01:15:17.170273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.565 qpair failed and we were unable to recover it. 00:37:44.565 [2024-12-06 01:15:17.170377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.565 [2024-12-06 01:15:17.170411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.565 qpair failed and we were unable to recover it. 00:37:44.565 [2024-12-06 01:15:17.170564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.565 [2024-12-06 01:15:17.170598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.565 qpair failed and we were unable to recover it. 00:37:44.565 [2024-12-06 01:15:17.170706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.565 [2024-12-06 01:15:17.170739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.565 qpair failed and we were unable to recover it. 
00:37:44.565 [2024-12-06 01:15:17.170876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.565 [2024-12-06 01:15:17.170911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.565 qpair failed and we were unable to recover it. 00:37:44.565 [2024-12-06 01:15:17.171068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.565 [2024-12-06 01:15:17.171102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.565 qpair failed and we were unable to recover it. 00:37:44.565 [2024-12-06 01:15:17.171205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.565 [2024-12-06 01:15:17.171237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.565 qpair failed and we were unable to recover it. 00:37:44.565 [2024-12-06 01:15:17.171373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.565 [2024-12-06 01:15:17.171407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.565 qpair failed and we were unable to recover it. 00:37:44.565 [2024-12-06 01:15:17.171548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.565 [2024-12-06 01:15:17.171581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.565 qpair failed and we were unable to recover it. 00:37:44.565 [2024-12-06 01:15:17.171686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.565 [2024-12-06 01:15:17.171719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.565 qpair failed and we were unable to recover it. 00:37:44.565 [2024-12-06 01:15:17.171847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.565 [2024-12-06 01:15:17.171881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.565 qpair failed and we were unable to recover it. 00:37:44.565 [2024-12-06 01:15:17.172013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.565 [2024-12-06 01:15:17.172046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.565 qpair failed and we were unable to recover it. 00:37:44.565 [2024-12-06 01:15:17.172154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.565 [2024-12-06 01:15:17.172187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.565 qpair failed and we were unable to recover it. 00:37:44.565 [2024-12-06 01:15:17.172322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.565 [2024-12-06 01:15:17.172355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.565 qpair failed and we were unable to recover it. 
00:37:44.565 [2024-12-06 01:15:17.172515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.565 [2024-12-06 01:15:17.172549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.565 qpair failed and we were unable to recover it. 00:37:44.565 [2024-12-06 01:15:17.172672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.565 [2024-12-06 01:15:17.172705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.565 qpair failed and we were unable to recover it. 00:37:44.565 [2024-12-06 01:15:17.172866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.565 [2024-12-06 01:15:17.172900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.565 qpair failed and we were unable to recover it. 00:37:44.565 [2024-12-06 01:15:17.173035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.565 [2024-12-06 01:15:17.173073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.565 qpair failed and we were unable to recover it. 00:37:44.565 [2024-12-06 01:15:17.173208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.565 [2024-12-06 01:15:17.173241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.565 qpair failed and we were unable to recover it. 00:37:44.565 [2024-12-06 01:15:17.173349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.566 [2024-12-06 01:15:17.173382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.566 qpair failed and we were unable to recover it. 00:37:44.566 [2024-12-06 01:15:17.173552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.566 [2024-12-06 01:15:17.173586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.566 qpair failed and we were unable to recover it. 00:37:44.566 [2024-12-06 01:15:17.173714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.566 [2024-12-06 01:15:17.173747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.566 qpair failed and we were unable to recover it. 00:37:44.566 [2024-12-06 01:15:17.173880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.566 [2024-12-06 01:15:17.173914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.566 qpair failed and we were unable to recover it. 00:37:44.566 [2024-12-06 01:15:17.174035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.566 [2024-12-06 01:15:17.174084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.566 qpair failed and we were unable to recover it. 
00:37:44.566 [2024-12-06 01:15:17.174233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.566 [2024-12-06 01:15:17.174269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.566 qpair failed and we were unable to recover it. 00:37:44.566 [2024-12-06 01:15:17.174419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.566 [2024-12-06 01:15:17.174454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.566 qpair failed and we were unable to recover it. 00:37:44.566 [2024-12-06 01:15:17.174557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.566 [2024-12-06 01:15:17.174592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.566 qpair failed and we were unable to recover it. 00:37:44.566 [2024-12-06 01:15:17.174752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.566 [2024-12-06 01:15:17.174786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.566 qpair failed and we were unable to recover it. 00:37:44.566 [2024-12-06 01:15:17.174942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.566 [2024-12-06 01:15:17.174991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.566 qpair failed and we were unable to recover it. 00:37:44.566 [2024-12-06 01:15:17.175134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.566 [2024-12-06 01:15:17.175171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.566 qpair failed and we were unable to recover it. 00:37:44.566 [2024-12-06 01:15:17.175280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.566 [2024-12-06 01:15:17.175315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.566 qpair failed and we were unable to recover it. 00:37:44.566 [2024-12-06 01:15:17.175436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.566 [2024-12-06 01:15:17.175470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.566 qpair failed and we were unable to recover it. 00:37:44.566 [2024-12-06 01:15:17.175606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.566 [2024-12-06 01:15:17.175639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.566 qpair failed and we were unable to recover it. 00:37:44.566 [2024-12-06 01:15:17.175767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.566 [2024-12-06 01:15:17.175805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.566 qpair failed and we were unable to recover it. 
00:37:44.566 [2024-12-06 01:15:17.175969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.566 [2024-12-06 01:15:17.176002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.566 qpair failed and we were unable to recover it. 00:37:44.566 [2024-12-06 01:15:17.176139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.566 [2024-12-06 01:15:17.176172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.566 qpair failed and we were unable to recover it. 00:37:44.566 [2024-12-06 01:15:17.176302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.566 [2024-12-06 01:15:17.176336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.566 qpair failed and we were unable to recover it. 00:37:44.566 [2024-12-06 01:15:17.176473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.566 [2024-12-06 01:15:17.176506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.566 qpair failed and we were unable to recover it. 00:37:44.566 [2024-12-06 01:15:17.176626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.566 [2024-12-06 01:15:17.176659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.566 qpair failed and we were unable to recover it. 00:37:44.566 [2024-12-06 01:15:17.176797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.566 [2024-12-06 01:15:17.176838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.566 qpair failed and we were unable to recover it. 00:37:44.566 [2024-12-06 01:15:17.177019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.566 [2024-12-06 01:15:17.177067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.566 qpair failed and we were unable to recover it. 00:37:44.566 [2024-12-06 01:15:17.177224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.566 [2024-12-06 01:15:17.177261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.566 qpair failed and we were unable to recover it. 00:37:44.566 [2024-12-06 01:15:17.177374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.566 [2024-12-06 01:15:17.177420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.566 qpair failed and we were unable to recover it. 00:37:44.566 [2024-12-06 01:15:17.177562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.566 [2024-12-06 01:15:17.177597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.566 qpair failed and we were unable to recover it. 
00:37:44.566 [2024-12-06 01:15:17.177710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.566 [2024-12-06 01:15:17.177745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.566 qpair failed and we were unable to recover it. 00:37:44.566 [2024-12-06 01:15:17.177854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.566 [2024-12-06 01:15:17.177888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.566 qpair failed and we were unable to recover it. 00:37:44.566 [2024-12-06 01:15:17.178009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.566 [2024-12-06 01:15:17.178042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.566 qpair failed and we were unable to recover it. 00:37:44.566 [2024-12-06 01:15:17.178152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.566 [2024-12-06 01:15:17.178186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.566 qpair failed and we were unable to recover it. 00:37:44.566 [2024-12-06 01:15:17.178324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.566 [2024-12-06 01:15:17.178357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.566 qpair failed and we were unable to recover it. 00:37:44.566 [2024-12-06 01:15:17.178492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.566 [2024-12-06 01:15:17.178525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.566 qpair failed and we were unable to recover it. 00:37:44.566 [2024-12-06 01:15:17.178666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.566 [2024-12-06 01:15:17.178701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.566 qpair failed and we were unable to recover it. 00:37:44.566 [2024-12-06 01:15:17.178806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.566 [2024-12-06 01:15:17.178846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.566 qpair failed and we were unable to recover it. 00:37:44.566 [2024-12-06 01:15:17.178962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.566 [2024-12-06 01:15:17.178998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.566 qpair failed and we were unable to recover it. 00:37:44.566 [2024-12-06 01:15:17.179112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.567 [2024-12-06 01:15:17.179146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.567 qpair failed and we were unable to recover it. 
00:37:44.567 [2024-12-06 01:15:17.179310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.567 [2024-12-06 01:15:17.179344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.567 qpair failed and we were unable to recover it. 00:37:44.567 [2024-12-06 01:15:17.179480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.567 [2024-12-06 01:15:17.179513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.567 qpair failed and we were unable to recover it. 00:37:44.567 [2024-12-06 01:15:17.179617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.567 [2024-12-06 01:15:17.179651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.567 qpair failed and we were unable to recover it. 00:37:44.567 [2024-12-06 01:15:17.179770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.567 [2024-12-06 01:15:17.179810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.567 qpair failed and we were unable to recover it. 00:37:44.567 [2024-12-06 01:15:17.179975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.567 [2024-12-06 01:15:17.180024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.567 qpair failed and we were unable to recover it. 00:37:44.567 [2024-12-06 01:15:17.180212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.567 [2024-12-06 01:15:17.180250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.567 qpair failed and we were unable to recover it. 00:37:44.567 [2024-12-06 01:15:17.180363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.567 [2024-12-06 01:15:17.180429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.567 qpair failed and we were unable to recover it. 00:37:44.567 [2024-12-06 01:15:17.180565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.567 [2024-12-06 01:15:17.180600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.567 qpair failed and we were unable to recover it. 00:37:44.567 [2024-12-06 01:15:17.180821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.567 [2024-12-06 01:15:17.180859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.567 qpair failed and we were unable to recover it. 00:37:44.567 [2024-12-06 01:15:17.181000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.567 [2024-12-06 01:15:17.181036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.567 qpair failed and we were unable to recover it. 
00:37:44.567 [2024-12-06 01:15:17.181171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.567 [2024-12-06 01:15:17.181205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.567 qpair failed and we were unable to recover it. 00:37:44.567 [2024-12-06 01:15:17.181313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.567 [2024-12-06 01:15:17.181347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.567 qpair failed and we were unable to recover it. 00:37:44.567 [2024-12-06 01:15:17.181452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.567 [2024-12-06 01:15:17.181486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.567 qpair failed and we were unable to recover it. 00:37:44.567 [2024-12-06 01:15:17.181620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.567 [2024-12-06 01:15:17.181670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.567 qpair failed and we were unable to recover it. 00:37:44.567 [2024-12-06 01:15:17.181821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.567 [2024-12-06 01:15:17.181858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.567 qpair failed and we were unable to recover it. 00:37:44.567 [2024-12-06 01:15:17.181970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.567 [2024-12-06 01:15:17.182004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.567 qpair failed and we were unable to recover it. 00:37:44.567 [2024-12-06 01:15:17.182121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.567 [2024-12-06 01:15:17.182154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.567 qpair failed and we were unable to recover it. 00:37:44.567 [2024-12-06 01:15:17.182292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.567 [2024-12-06 01:15:17.182325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.567 qpair failed and we were unable to recover it. 00:37:44.567 [2024-12-06 01:15:17.182427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.567 [2024-12-06 01:15:17.182460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.567 qpair failed and we were unable to recover it. 00:37:44.567 [2024-12-06 01:15:17.182557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.567 [2024-12-06 01:15:17.182590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.567 qpair failed and we were unable to recover it. 
00:37:44.567 [2024-12-06 01:15:17.182700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.567 [2024-12-06 01:15:17.182733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.567 qpair failed and we were unable to recover it. 00:37:44.567 [2024-12-06 01:15:17.182841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.567 [2024-12-06 01:15:17.182876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.567 qpair failed and we were unable to recover it. 00:37:44.567 [2024-12-06 01:15:17.183016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.567 [2024-12-06 01:15:17.183049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.567 qpair failed and we were unable to recover it. 00:37:44.567 [2024-12-06 01:15:17.183152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.567 [2024-12-06 01:15:17.183185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.567 qpair failed and we were unable to recover it. 00:37:44.567 [2024-12-06 01:15:17.183297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.567 [2024-12-06 01:15:17.183331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.567 qpair failed and we were unable to recover it. 00:37:44.567 [2024-12-06 01:15:17.183437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.567 [2024-12-06 01:15:17.183470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.567 qpair failed and we were unable to recover it. 00:37:44.567 [2024-12-06 01:15:17.183602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.567 [2024-12-06 01:15:17.183635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.567 qpair failed and we were unable to recover it. 00:37:44.567 [2024-12-06 01:15:17.183768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.567 [2024-12-06 01:15:17.183802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.567 qpair failed and we were unable to recover it. 00:37:44.567 [2024-12-06 01:15:17.183953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.567 [2024-12-06 01:15:17.183986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.567 qpair failed and we were unable to recover it. 00:37:44.567 [2024-12-06 01:15:17.184095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.567 [2024-12-06 01:15:17.184129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.567 qpair failed and we were unable to recover it. 
00:37:44.567 [2024-12-06 01:15:17.184248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.567 [2024-12-06 01:15:17.184282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.567 qpair failed and we were unable to recover it. 00:37:44.567 [2024-12-06 01:15:17.184409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.567 [2024-12-06 01:15:17.184442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.567 qpair failed and we were unable to recover it. 00:37:44.567 [2024-12-06 01:15:17.184556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.567 [2024-12-06 01:15:17.184590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.567 qpair failed and we were unable to recover it. 00:37:44.568 [2024-12-06 01:15:17.184717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.568 [2024-12-06 01:15:17.184767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.568 qpair failed and we were unable to recover it. 00:37:44.568 [2024-12-06 01:15:17.184946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.568 [2024-12-06 01:15:17.184980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.568 qpair failed and we were unable to recover it. 00:37:44.568 [2024-12-06 01:15:17.185114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.568 [2024-12-06 01:15:17.185149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.568 qpair failed and we were unable to recover it. 00:37:44.568 [2024-12-06 01:15:17.185265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.568 [2024-12-06 01:15:17.185299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.568 qpair failed and we were unable to recover it. 00:37:44.568 [2024-12-06 01:15:17.185445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.568 [2024-12-06 01:15:17.185479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.568 qpair failed and we were unable to recover it. 00:37:44.568 [2024-12-06 01:15:17.185594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.568 [2024-12-06 01:15:17.185629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.568 qpair failed and we were unable to recover it. 00:37:44.568 [2024-12-06 01:15:17.185730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.568 [2024-12-06 01:15:17.185764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.568 qpair failed and we were unable to recover it. 
00:37:44.568 [2024-12-06 01:15:17.185897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.568 [2024-12-06 01:15:17.185946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.568 qpair failed and we were unable to recover it. 00:37:44.568 [2024-12-06 01:15:17.186068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.568 [2024-12-06 01:15:17.186104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.568 qpair failed and we were unable to recover it. 00:37:44.568 [2024-12-06 01:15:17.186248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.568 [2024-12-06 01:15:17.186282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.568 qpair failed and we were unable to recover it. 00:37:44.568 [2024-12-06 01:15:17.186423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.568 [2024-12-06 01:15:17.186463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.568 qpair failed and we were unable to recover it. 00:37:44.568 [2024-12-06 01:15:17.186564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.568 [2024-12-06 01:15:17.186598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.568 qpair failed and we were unable to recover it. 00:37:44.568 [2024-12-06 01:15:17.186734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.568 [2024-12-06 01:15:17.186769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.568 qpair failed and we were unable to recover it. 00:37:44.568 [2024-12-06 01:15:17.186910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.568 [2024-12-06 01:15:17.186944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.568 qpair failed and we were unable to recover it. 00:37:44.568 [2024-12-06 01:15:17.187055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.568 [2024-12-06 01:15:17.187088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.568 qpair failed and we were unable to recover it. 00:37:44.568 [2024-12-06 01:15:17.187203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.568 [2024-12-06 01:15:17.187238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.568 qpair failed and we were unable to recover it. 00:37:44.568 [2024-12-06 01:15:17.187374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.568 [2024-12-06 01:15:17.187408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.568 qpair failed and we were unable to recover it. 
00:37:44.568 [2024-12-06 01:15:17.187536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.568 [2024-12-06 01:15:17.187569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.568 qpair failed and we were unable to recover it. 00:37:44.568 [2024-12-06 01:15:17.187672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.568 [2024-12-06 01:15:17.187707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.568 qpair failed and we were unable to recover it. 00:37:44.568 [2024-12-06 01:15:17.187851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.568 [2024-12-06 01:15:17.187886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.568 qpair failed and we were unable to recover it. 00:37:44.568 [2024-12-06 01:15:17.187994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.568 [2024-12-06 01:15:17.188028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.568 qpair failed and we were unable to recover it. 00:37:44.568 [2024-12-06 01:15:17.188135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.568 [2024-12-06 01:15:17.188169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.568 qpair failed and we were unable to recover it. 00:37:44.568 [2024-12-06 01:15:17.188305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.568 [2024-12-06 01:15:17.188338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.568 qpair failed and we were unable to recover it. 00:37:44.568 [2024-12-06 01:15:17.188472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.568 [2024-12-06 01:15:17.188508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.568 qpair failed and we were unable to recover it. 00:37:44.568 [2024-12-06 01:15:17.188653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.568 [2024-12-06 01:15:17.188687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.568 qpair failed and we were unable to recover it. 00:37:44.568 [2024-12-06 01:15:17.188827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.568 [2024-12-06 01:15:17.188861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.568 qpair failed and we were unable to recover it. 00:37:44.568 [2024-12-06 01:15:17.188973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.568 [2024-12-06 01:15:17.189006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.568 qpair failed and we were unable to recover it. 
00:37:44.568 [2024-12-06 01:15:17.189147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.568 [2024-12-06 01:15:17.189182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.568 qpair failed and we were unable to recover it. 00:37:44.568 [2024-12-06 01:15:17.189311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.568 [2024-12-06 01:15:17.189344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.568 qpair failed and we were unable to recover it. 00:37:44.568 [2024-12-06 01:15:17.189451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.568 [2024-12-06 01:15:17.189485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.568 qpair failed and we were unable to recover it. 00:37:44.568 [2024-12-06 01:15:17.189625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.568 [2024-12-06 01:15:17.189659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.568 qpair failed and we were unable to recover it. 00:37:44.568 [2024-12-06 01:15:17.189768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.568 [2024-12-06 01:15:17.189802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.568 qpair failed and we were unable to recover it. 00:37:44.568 [2024-12-06 01:15:17.189946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.568 [2024-12-06 01:15:17.189980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.569 qpair failed and we were unable to recover it. 00:37:44.569 [2024-12-06 01:15:17.190086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.569 [2024-12-06 01:15:17.190120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.569 qpair failed and we were unable to recover it. 00:37:44.569 [2024-12-06 01:15:17.190253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.569 [2024-12-06 01:15:17.190286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.569 qpair failed and we were unable to recover it. 00:37:44.569 [2024-12-06 01:15:17.190432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.569 [2024-12-06 01:15:17.190465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.569 qpair failed and we were unable to recover it. 00:37:44.569 [2024-12-06 01:15:17.190577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.569 [2024-12-06 01:15:17.190610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.569 qpair failed and we were unable to recover it. 
00:37:44.569 [2024-12-06 01:15:17.190743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.569 [2024-12-06 01:15:17.190776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.569 qpair failed and we were unable to recover it. 00:37:44.569 [2024-12-06 01:15:17.190900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.569 [2024-12-06 01:15:17.190934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.569 qpair failed and we were unable to recover it. 00:37:44.569 [2024-12-06 01:15:17.191074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.569 [2024-12-06 01:15:17.191108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.569 qpair failed and we were unable to recover it. 00:37:44.569 [2024-12-06 01:15:17.191211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.569 [2024-12-06 01:15:17.191245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.569 qpair failed and we were unable to recover it. 00:37:44.569 [2024-12-06 01:15:17.191355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.569 [2024-12-06 01:15:17.191388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.569 qpair failed and we were unable to recover it. 00:37:44.569 [2024-12-06 01:15:17.191493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.569 [2024-12-06 01:15:17.191527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.569 qpair failed and we were unable to recover it. 00:37:44.569 [2024-12-06 01:15:17.191640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.569 [2024-12-06 01:15:17.191676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.569 qpair failed and we were unable to recover it. 00:37:44.569 [2024-12-06 01:15:17.191810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.569 [2024-12-06 01:15:17.191850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.569 qpair failed and we were unable to recover it. 00:37:44.569 [2024-12-06 01:15:17.191957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.569 [2024-12-06 01:15:17.191991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.569 qpair failed and we were unable to recover it. 00:37:44.569 [2024-12-06 01:15:17.192114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.569 [2024-12-06 01:15:17.192148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.569 qpair failed and we were unable to recover it. 
00:37:44.569 [2024-12-06 01:15:17.192283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.569 [2024-12-06 01:15:17.192317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.569 qpair failed and we were unable to recover it. 00:37:44.569 [2024-12-06 01:15:17.192429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.569 [2024-12-06 01:15:17.192463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.569 qpair failed and we were unable to recover it. 00:37:44.569 [2024-12-06 01:15:17.192602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.569 [2024-12-06 01:15:17.192636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.569 qpair failed and we were unable to recover it. 00:37:44.569 [2024-12-06 01:15:17.192754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.569 [2024-12-06 01:15:17.192792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.569 qpair failed and we were unable to recover it. 00:37:44.569 [2024-12-06 01:15:17.192914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.569 [2024-12-06 01:15:17.192949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.569 qpair failed and we were unable to recover it. 00:37:44.569 [2024-12-06 01:15:17.193118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.569 [2024-12-06 01:15:17.193151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.569 qpair failed and we were unable to recover it. 00:37:44.569 [2024-12-06 01:15:17.193258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.569 [2024-12-06 01:15:17.193291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.569 qpair failed and we were unable to recover it. 00:37:44.569 [2024-12-06 01:15:17.193397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.569 [2024-12-06 01:15:17.193431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.569 qpair failed and we were unable to recover it. 00:37:44.569 [2024-12-06 01:15:17.193570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.569 [2024-12-06 01:15:17.193603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.569 qpair failed and we were unable to recover it. 00:37:44.569 [2024-12-06 01:15:17.193738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.569 [2024-12-06 01:15:17.193771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.569 qpair failed and we were unable to recover it. 
00:37:44.569 [2024-12-06 01:15:17.193885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.569 [2024-12-06 01:15:17.193919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.569 qpair failed and we were unable to recover it. 00:37:44.569 [2024-12-06 01:15:17.194049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.569 [2024-12-06 01:15:17.194082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.569 qpair failed and we were unable to recover it. 00:37:44.569 [2024-12-06 01:15:17.194187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.569 [2024-12-06 01:15:17.194221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.569 qpair failed and we were unable to recover it. 00:37:44.569 [2024-12-06 01:15:17.194345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.569 [2024-12-06 01:15:17.194379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.569 qpair failed and we were unable to recover it. 00:37:44.569 [2024-12-06 01:15:17.194517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.569 [2024-12-06 01:15:17.194550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.569 qpair failed and we were unable to recover it. 00:37:44.569 [2024-12-06 01:15:17.194662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.569 [2024-12-06 01:15:17.194696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.569 qpair failed and we were unable to recover it. 00:37:44.569 [2024-12-06 01:15:17.194804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.569 [2024-12-06 01:15:17.194846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.569 qpair failed and we were unable to recover it. 00:37:44.569 [2024-12-06 01:15:17.194981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.569 [2024-12-06 01:15:17.195014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.569 qpair failed and we were unable to recover it. 00:37:44.569 [2024-12-06 01:15:17.195152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.569 [2024-12-06 01:15:17.195186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.569 qpair failed and we were unable to recover it. 00:37:44.570 [2024-12-06 01:15:17.195325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.570 [2024-12-06 01:15:17.195359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.570 qpair failed and we were unable to recover it. 
00:37:44.570 [2024-12-06 01:15:17.195462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.570 [2024-12-06 01:15:17.195496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.570 qpair failed and we were unable to recover it. 00:37:44.570 [2024-12-06 01:15:17.195603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.570 [2024-12-06 01:15:17.195637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.570 qpair failed and we were unable to recover it. 00:37:44.570 [2024-12-06 01:15:17.195770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.570 [2024-12-06 01:15:17.195804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.570 qpair failed and we were unable to recover it. 00:37:44.570 [2024-12-06 01:15:17.195929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.570 [2024-12-06 01:15:17.195963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.570 qpair failed and we were unable to recover it. 00:37:44.570 [2024-12-06 01:15:17.196095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.570 [2024-12-06 01:15:17.196129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.570 qpair failed and we were unable to recover it. 00:37:44.570 [2024-12-06 01:15:17.196270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.570 [2024-12-06 01:15:17.196303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.570 qpair failed and we were unable to recover it. 00:37:44.570 [2024-12-06 01:15:17.196421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.570 [2024-12-06 01:15:17.196456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.570 qpair failed and we were unable to recover it. 00:37:44.570 [2024-12-06 01:15:17.196590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.570 [2024-12-06 01:15:17.196623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.570 qpair failed and we were unable to recover it. 00:37:44.570 [2024-12-06 01:15:17.196735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.570 [2024-12-06 01:15:17.196771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.570 qpair failed and we were unable to recover it. 00:37:44.570 [2024-12-06 01:15:17.197029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.570 [2024-12-06 01:15:17.197063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.570 qpair failed and we were unable to recover it. 
00:37:44.570 [2024-12-06 01:15:17.197178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.570 [2024-12-06 01:15:17.197216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.570 qpair failed and we were unable to recover it. 00:37:44.570 [2024-12-06 01:15:17.197327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.570 [2024-12-06 01:15:17.197360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.570 qpair failed and we were unable to recover it. 00:37:44.570 [2024-12-06 01:15:17.197494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.570 [2024-12-06 01:15:17.197527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.570 qpair failed and we were unable to recover it. 00:37:44.570 [2024-12-06 01:15:17.197659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.570 [2024-12-06 01:15:17.197695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.570 qpair failed and we were unable to recover it. 00:37:44.570 [2024-12-06 01:15:17.197829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.570 [2024-12-06 01:15:17.197880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.570 qpair failed and we were unable to recover it. 00:37:44.570 [2024-12-06 01:15:17.198012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.570 [2024-12-06 01:15:17.198046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.570 qpair failed and we were unable to recover it. 00:37:44.570 [2024-12-06 01:15:17.198164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.570 [2024-12-06 01:15:17.198198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.570 qpair failed and we were unable to recover it. 00:37:44.570 [2024-12-06 01:15:17.198308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.570 [2024-12-06 01:15:17.198341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.570 qpair failed and we were unable to recover it. 00:37:44.570 [2024-12-06 01:15:17.198454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.570 [2024-12-06 01:15:17.198488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.570 qpair failed and we were unable to recover it. 00:37:44.570 [2024-12-06 01:15:17.198596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.570 [2024-12-06 01:15:17.198630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.570 qpair failed and we were unable to recover it. 
00:37:44.570 [2024-12-06 01:15:17.198759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.570 [2024-12-06 01:15:17.198792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.570 qpair failed and we were unable to recover it. 00:37:44.570 [2024-12-06 01:15:17.198939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.570 [2024-12-06 01:15:17.198973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.570 qpair failed and we were unable to recover it. 00:37:44.570 [2024-12-06 01:15:17.199107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.570 [2024-12-06 01:15:17.199140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.570 qpair failed and we were unable to recover it. 00:37:44.570 [2024-12-06 01:15:17.199240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.570 [2024-12-06 01:15:17.199273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.570 qpair failed and we were unable to recover it. 00:37:44.570 [2024-12-06 01:15:17.199394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.570 [2024-12-06 01:15:17.199428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.570 qpair failed and we were unable to recover it. 00:37:44.570 [2024-12-06 01:15:17.199564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.570 [2024-12-06 01:15:17.199597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.570 qpair failed and we were unable to recover it. 00:37:44.570 [2024-12-06 01:15:17.199732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.570 [2024-12-06 01:15:17.199765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.570 qpair failed and we were unable to recover it. 00:37:44.570 [2024-12-06 01:15:17.199892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.571 [2024-12-06 01:15:17.199926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.571 qpair failed and we were unable to recover it. 00:37:44.571 [2024-12-06 01:15:17.200065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.571 [2024-12-06 01:15:17.200098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.571 qpair failed and we were unable to recover it. 00:37:44.571 [2024-12-06 01:15:17.200207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.571 [2024-12-06 01:15:17.200240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.571 qpair failed and we were unable to recover it. 
00:37:44.571 [2024-12-06 01:15:17.200339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.571 [2024-12-06 01:15:17.200373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.571 qpair failed and we were unable to recover it. 00:37:44.571 [2024-12-06 01:15:17.200515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.571 [2024-12-06 01:15:17.200548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.571 qpair failed and we were unable to recover it. 00:37:44.571 [2024-12-06 01:15:17.200695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.571 [2024-12-06 01:15:17.200728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.571 qpair failed and we were unable to recover it. 00:37:44.571 [2024-12-06 01:15:17.200898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.571 [2024-12-06 01:15:17.200932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.571 qpair failed and we were unable to recover it. 00:37:44.571 [2024-12-06 01:15:17.201044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.571 [2024-12-06 01:15:17.201078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.571 qpair failed and we were unable to recover it. 00:37:44.571 [2024-12-06 01:15:17.201224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.571 [2024-12-06 01:15:17.201258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.571 qpair failed and we were unable to recover it. 00:37:44.571 [2024-12-06 01:15:17.201433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.571 [2024-12-06 01:15:17.201466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.571 qpair failed and we were unable to recover it. 00:37:44.571 [2024-12-06 01:15:17.201614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.571 [2024-12-06 01:15:17.201647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.571 qpair failed and we were unable to recover it. 00:37:44.571 [2024-12-06 01:15:17.201754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.571 [2024-12-06 01:15:17.201787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.571 qpair failed and we were unable to recover it. 00:37:44.571 [2024-12-06 01:15:17.201903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.571 [2024-12-06 01:15:17.201937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.571 qpair failed and we were unable to recover it. 
00:37:44.571 [2024-12-06 01:15:17.202097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.571 [2024-12-06 01:15:17.202145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.571 qpair failed and we were unable to recover it. 00:37:44.571 [2024-12-06 01:15:17.202313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.571 [2024-12-06 01:15:17.202349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.571 qpair failed and we were unable to recover it. 00:37:44.571 [2024-12-06 01:15:17.202488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.571 [2024-12-06 01:15:17.202522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.571 qpair failed and we were unable to recover it. 00:37:44.571 [2024-12-06 01:15:17.202664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.571 [2024-12-06 01:15:17.202698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.571 qpair failed and we were unable to recover it. 00:37:44.571 [2024-12-06 01:15:17.202806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.571 [2024-12-06 01:15:17.202849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.571 qpair failed and we were unable to recover it. 00:37:44.571 [2024-12-06 01:15:17.202977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.571 [2024-12-06 01:15:17.203011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.571 qpair failed and we were unable to recover it. 00:37:44.571 [2024-12-06 01:15:17.203143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.571 [2024-12-06 01:15:17.203176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.571 qpair failed and we were unable to recover it. 00:37:44.571 [2024-12-06 01:15:17.203296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.571 [2024-12-06 01:15:17.203332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.571 qpair failed and we were unable to recover it. 00:37:44.571 [2024-12-06 01:15:17.203461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.571 [2024-12-06 01:15:17.203495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.571 qpair failed and we were unable to recover it. 00:37:44.571 [2024-12-06 01:15:17.203637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.571 [2024-12-06 01:15:17.203687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.571 qpair failed and we were unable to recover it. 
00:37:44.571 [2024-12-06 01:15:17.203799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.571 [2024-12-06 01:15:17.203869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.571 qpair failed and we were unable to recover it. 00:37:44.571 [2024-12-06 01:15:17.203997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.571 [2024-12-06 01:15:17.204031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.571 qpair failed and we were unable to recover it. 00:37:44.571 [2024-12-06 01:15:17.204138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.571 [2024-12-06 01:15:17.204172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.571 qpair failed and we were unable to recover it. 00:37:44.571 [2024-12-06 01:15:17.204309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.571 [2024-12-06 01:15:17.204342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.571 qpair failed and we were unable to recover it. 00:37:44.571 [2024-12-06 01:15:17.204444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.571 [2024-12-06 01:15:17.204478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.571 qpair failed and we were unable to recover it. 00:37:44.571 [2024-12-06 01:15:17.204593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.571 [2024-12-06 01:15:17.204626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.571 qpair failed and we were unable to recover it. 00:37:44.571 [2024-12-06 01:15:17.204767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.571 [2024-12-06 01:15:17.204802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.571 qpair failed and we were unable to recover it. 00:37:44.571 [2024-12-06 01:15:17.204922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.571 [2024-12-06 01:15:17.204956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.571 qpair failed and we were unable to recover it. 00:37:44.571 [2024-12-06 01:15:17.205092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.571 [2024-12-06 01:15:17.205126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.571 qpair failed and we were unable to recover it. 00:37:44.571 [2024-12-06 01:15:17.205234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.571 [2024-12-06 01:15:17.205267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.571 qpair failed and we were unable to recover it. 
00:37:44.571 [2024-12-06 01:15:17.205378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.571 [2024-12-06 01:15:17.205412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.571 qpair failed and we were unable to recover it. 00:37:44.571 [2024-12-06 01:15:17.205521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.572 [2024-12-06 01:15:17.205556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.572 qpair failed and we were unable to recover it. 00:37:44.572 [2024-12-06 01:15:17.205686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.572 [2024-12-06 01:15:17.205719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.572 qpair failed and we were unable to recover it. 00:37:44.572 [2024-12-06 01:15:17.205856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.572 [2024-12-06 01:15:17.205891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.572 qpair failed and we were unable to recover it. 00:37:44.572 [2024-12-06 01:15:17.206012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.572 [2024-12-06 01:15:17.206047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.572 qpair failed and we were unable to recover it. 00:37:44.572 [2024-12-06 01:15:17.206152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.572 [2024-12-06 01:15:17.206185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.572 qpair failed and we were unable to recover it. 00:37:44.572 [2024-12-06 01:15:17.206316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.572 [2024-12-06 01:15:17.206349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.572 qpair failed and we were unable to recover it. 00:37:44.572 [2024-12-06 01:15:17.206452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.572 [2024-12-06 01:15:17.206485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.572 qpair failed and we were unable to recover it. 00:37:44.572 [2024-12-06 01:15:17.206642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.572 [2024-12-06 01:15:17.206675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.572 qpair failed and we were unable to recover it. 00:37:44.572 [2024-12-06 01:15:17.206788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.572 [2024-12-06 01:15:17.206828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.572 qpair failed and we were unable to recover it. 
00:37:44.572 [2024-12-06 01:15:17.206939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.572 [2024-12-06 01:15:17.206972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.572 qpair failed and we were unable to recover it. 00:37:44.572 [2024-12-06 01:15:17.207073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.572 [2024-12-06 01:15:17.207106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.572 qpair failed and we were unable to recover it. 00:37:44.572 [2024-12-06 01:15:17.207261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.572 [2024-12-06 01:15:17.207294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.572 qpair failed and we were unable to recover it. 00:37:44.572 [2024-12-06 01:15:17.207424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.572 [2024-12-06 01:15:17.207457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.572 qpair failed and we were unable to recover it. 00:37:44.572 [2024-12-06 01:15:17.207557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.572 [2024-12-06 01:15:17.207590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.572 qpair failed and we were unable to recover it. 00:37:44.572 [2024-12-06 01:15:17.207719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.572 [2024-12-06 01:15:17.207753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.572 qpair failed and we were unable to recover it. 00:37:44.572 [2024-12-06 01:15:17.207878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.572 [2024-12-06 01:15:17.207915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.572 qpair failed and we were unable to recover it. 00:37:44.572 [2024-12-06 01:15:17.208040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.572 [2024-12-06 01:15:17.208076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.572 qpair failed and we were unable to recover it. 00:37:44.572 [2024-12-06 01:15:17.208183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.572 [2024-12-06 01:15:17.208216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.572 qpair failed and we were unable to recover it. 00:37:44.572 [2024-12-06 01:15:17.208339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.572 [2024-12-06 01:15:17.208373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.572 qpair failed and we were unable to recover it. 
00:37:44.572 [2024-12-06 01:15:17.208505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.572 [2024-12-06 01:15:17.208551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.572 qpair failed and we were unable to recover it. 00:37:44.572 [2024-12-06 01:15:17.208683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.572 [2024-12-06 01:15:17.208718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.572 qpair failed and we were unable to recover it. 00:37:44.572 [2024-12-06 01:15:17.208828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.572 [2024-12-06 01:15:17.208863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.572 qpair failed and we were unable to recover it. 00:37:44.572 [2024-12-06 01:15:17.208968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.572 [2024-12-06 01:15:17.209002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.572 qpair failed and we were unable to recover it. 00:37:44.572 [2024-12-06 01:15:17.209110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.572 [2024-12-06 01:15:17.209144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.572 qpair failed and we were unable to recover it. 00:37:44.572 [2024-12-06 01:15:17.209252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.572 [2024-12-06 01:15:17.209286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.572 qpair failed and we were unable to recover it. 00:37:44.572 [2024-12-06 01:15:17.209409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.572 [2024-12-06 01:15:17.209443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.572 qpair failed and we were unable to recover it. 00:37:44.572 [2024-12-06 01:15:17.209582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.572 [2024-12-06 01:15:17.209616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.572 qpair failed and we were unable to recover it. 00:37:44.572 [2024-12-06 01:15:17.209784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.572 [2024-12-06 01:15:17.209832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.572 qpair failed and we were unable to recover it. 00:37:44.572 [2024-12-06 01:15:17.209990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.572 [2024-12-06 01:15:17.210023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.572 qpair failed and we were unable to recover it. 
00:37:44.572 [2024-12-06 01:15:17.210134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.572 [2024-12-06 01:15:17.210172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.572 qpair failed and we were unable to recover it. 00:37:44.572 [2024-12-06 01:15:17.210273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.572 [2024-12-06 01:15:17.210306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.572 qpair failed and we were unable to recover it. 00:37:44.572 [2024-12-06 01:15:17.210412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.572 [2024-12-06 01:15:17.210445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.572 qpair failed and we were unable to recover it. 00:37:44.572 [2024-12-06 01:15:17.210577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.572 [2024-12-06 01:15:17.210610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.572 qpair failed and we were unable to recover it. 00:37:44.572 [2024-12-06 01:15:17.210713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.572 [2024-12-06 01:15:17.210746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.573 qpair failed and we were unable to recover it. 00:37:44.573 [2024-12-06 01:15:17.210874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.573 [2024-12-06 01:15:17.210908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.573 qpair failed and we were unable to recover it. 00:37:44.573 [2024-12-06 01:15:17.211046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.573 [2024-12-06 01:15:17.211080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.573 qpair failed and we were unable to recover it. 00:37:44.573 [2024-12-06 01:15:17.211211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.573 [2024-12-06 01:15:17.211244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.573 qpair failed and we were unable to recover it. 00:37:44.573 [2024-12-06 01:15:17.211341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.573 [2024-12-06 01:15:17.211374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.573 qpair failed and we were unable to recover it. 00:37:44.573 [2024-12-06 01:15:17.211490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.573 [2024-12-06 01:15:17.211523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.573 qpair failed and we were unable to recover it. 
00:37:44.573 [2024-12-06 01:15:17.211634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.573 [2024-12-06 01:15:17.211668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.573 qpair failed and we were unable to recover it.
00:37:44.573 [... the same three-line failure (posix.c:1054:posix_sock_create: connect() failed, errno = 111; nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: sock connection error with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats continuously from 01:15:17.211798 through 01:15:17.244773, alternating between tqpair=0x6150001f2f00 and tqpair=0x6150001ffe80; duplicate entries elided ...]
00:37:44.864 [2024-12-06 01:15:17.244917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.864 [2024-12-06 01:15:17.244951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.864 qpair failed and we were unable to recover it.
00:37:44.864 [2024-12-06 01:15:17.245058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.864 [2024-12-06 01:15:17.245091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.864 qpair failed and we were unable to recover it. 00:37:44.864 [2024-12-06 01:15:17.245245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.864 [2024-12-06 01:15:17.245279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.864 qpair failed and we were unable to recover it. 00:37:44.864 [2024-12-06 01:15:17.245423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.864 [2024-12-06 01:15:17.245458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.864 qpair failed and we were unable to recover it. 00:37:44.864 [2024-12-06 01:15:17.245572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.864 [2024-12-06 01:15:17.245606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.864 qpair failed and we were unable to recover it. 00:37:44.864 [2024-12-06 01:15:17.245719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.864 [2024-12-06 01:15:17.245769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.864 qpair failed and we were unable to recover it. 00:37:44.864 [2024-12-06 01:15:17.245951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.864 [2024-12-06 01:15:17.245985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.864 qpair failed and we were unable to recover it. 00:37:44.864 [2024-12-06 01:15:17.246099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.864 [2024-12-06 01:15:17.246132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.864 qpair failed and we were unable to recover it. 00:37:44.864 [2024-12-06 01:15:17.246270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.864 [2024-12-06 01:15:17.246303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.864 qpair failed and we were unable to recover it. 00:37:44.864 [2024-12-06 01:15:17.246457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.864 [2024-12-06 01:15:17.246491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.864 qpair failed and we were unable to recover it. 00:37:44.864 [2024-12-06 01:15:17.246596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.864 [2024-12-06 01:15:17.246629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.864 qpair failed and we were unable to recover it. 
00:37:44.864 [2024-12-06 01:15:17.246736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.864 [2024-12-06 01:15:17.246769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.864 qpair failed and we were unable to recover it. 00:37:44.864 [2024-12-06 01:15:17.246888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.864 [2024-12-06 01:15:17.246922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.864 qpair failed and we were unable to recover it. 00:37:44.864 [2024-12-06 01:15:17.247019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.864 [2024-12-06 01:15:17.247052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.864 qpair failed and we were unable to recover it. 00:37:44.864 [2024-12-06 01:15:17.247185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.864 [2024-12-06 01:15:17.247218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.864 qpair failed and we were unable to recover it. 00:37:44.864 [2024-12-06 01:15:17.247328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.864 [2024-12-06 01:15:17.247361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.864 qpair failed and we were unable to recover it. 00:37:44.864 [2024-12-06 01:15:17.247492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.864 [2024-12-06 01:15:17.247525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.864 qpair failed and we were unable to recover it. 00:37:44.864 [2024-12-06 01:15:17.247632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.864 [2024-12-06 01:15:17.247665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.864 qpair failed and we were unable to recover it. 00:37:44.864 [2024-12-06 01:15:17.247772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.864 [2024-12-06 01:15:17.247805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.864 qpair failed and we were unable to recover it. 00:37:44.864 [2024-12-06 01:15:17.247995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.864 [2024-12-06 01:15:17.248043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.864 qpair failed and we were unable to recover it. 00:37:44.864 [2024-12-06 01:15:17.248189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.864 [2024-12-06 01:15:17.248225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.864 qpair failed and we were unable to recover it. 
00:37:44.864 [2024-12-06 01:15:17.248364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.864 [2024-12-06 01:15:17.248404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.864 qpair failed and we were unable to recover it. 00:37:44.864 [2024-12-06 01:15:17.248548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.864 [2024-12-06 01:15:17.248582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.864 qpair failed and we were unable to recover it. 00:37:44.864 [2024-12-06 01:15:17.248706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.864 [2024-12-06 01:15:17.248740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.864 qpair failed and we were unable to recover it. 00:37:44.864 [2024-12-06 01:15:17.248852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.864 [2024-12-06 01:15:17.248888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.864 qpair failed and we were unable to recover it. 00:37:44.864 [2024-12-06 01:15:17.248993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.864 [2024-12-06 01:15:17.249027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.864 qpair failed and we were unable to recover it. 00:37:44.864 [2024-12-06 01:15:17.249134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.864 [2024-12-06 01:15:17.249167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.864 qpair failed and we were unable to recover it. 00:37:44.864 [2024-12-06 01:15:17.249304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.864 [2024-12-06 01:15:17.249338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.864 qpair failed and we were unable to recover it. 00:37:44.864 [2024-12-06 01:15:17.249465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.864 [2024-12-06 01:15:17.249498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.864 qpair failed and we were unable to recover it. 00:37:44.864 [2024-12-06 01:15:17.249608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.864 [2024-12-06 01:15:17.249642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.864 qpair failed and we were unable to recover it. 00:37:44.864 [2024-12-06 01:15:17.249749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.864 [2024-12-06 01:15:17.249782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.864 qpair failed and we were unable to recover it. 
00:37:44.864 [2024-12-06 01:15:17.249905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.864 [2024-12-06 01:15:17.249939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.864 qpair failed and we were unable to recover it. 00:37:44.864 [2024-12-06 01:15:17.250083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.864 [2024-12-06 01:15:17.250117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.864 qpair failed and we were unable to recover it. 00:37:44.864 [2024-12-06 01:15:17.250231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.864 [2024-12-06 01:15:17.250264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.864 qpair failed and we were unable to recover it. 00:37:44.864 [2024-12-06 01:15:17.250378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.864 [2024-12-06 01:15:17.250411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.864 qpair failed and we were unable to recover it. 00:37:44.864 [2024-12-06 01:15:17.250529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.864 [2024-12-06 01:15:17.250563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.864 qpair failed and we were unable to recover it. 00:37:44.864 [2024-12-06 01:15:17.250725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.864 [2024-12-06 01:15:17.250758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.864 qpair failed and we were unable to recover it. 00:37:44.864 [2024-12-06 01:15:17.250874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.864 [2024-12-06 01:15:17.250908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.864 qpair failed and we were unable to recover it. 00:37:44.864 [2024-12-06 01:15:17.251043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.864 [2024-12-06 01:15:17.251077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.864 qpair failed and we were unable to recover it. 00:37:44.864 [2024-12-06 01:15:17.251216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.864 [2024-12-06 01:15:17.251249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.864 qpair failed and we were unable to recover it. 00:37:44.864 [2024-12-06 01:15:17.251356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.864 [2024-12-06 01:15:17.251390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.864 qpair failed and we were unable to recover it. 
00:37:44.864 [2024-12-06 01:15:17.251514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.864 [2024-12-06 01:15:17.251547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.864 qpair failed and we were unable to recover it. 00:37:44.864 [2024-12-06 01:15:17.251682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.864 [2024-12-06 01:15:17.251715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.865 qpair failed and we were unable to recover it. 00:37:44.865 [2024-12-06 01:15:17.251881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.865 [2024-12-06 01:15:17.251915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.865 qpair failed and we were unable to recover it. 00:37:44.865 [2024-12-06 01:15:17.252019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.865 [2024-12-06 01:15:17.252052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.865 qpair failed and we were unable to recover it. 00:37:44.865 [2024-12-06 01:15:17.252178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.865 [2024-12-06 01:15:17.252212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.865 qpair failed and we were unable to recover it. 00:37:44.865 [2024-12-06 01:15:17.252318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.865 [2024-12-06 01:15:17.252352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.865 qpair failed and we were unable to recover it. 00:37:44.865 [2024-12-06 01:15:17.252496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.865 [2024-12-06 01:15:17.252529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.865 qpair failed and we were unable to recover it. 00:37:44.865 [2024-12-06 01:15:17.252699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.865 [2024-12-06 01:15:17.252732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.865 qpair failed and we were unable to recover it. 00:37:44.865 [2024-12-06 01:15:17.252849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.865 [2024-12-06 01:15:17.252883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.865 qpair failed and we were unable to recover it. 00:37:44.865 [2024-12-06 01:15:17.253013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.865 [2024-12-06 01:15:17.253046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.865 qpair failed and we were unable to recover it. 
00:37:44.865 [2024-12-06 01:15:17.253180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.865 [2024-12-06 01:15:17.253214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.865 qpair failed and we were unable to recover it. 00:37:44.865 [2024-12-06 01:15:17.253349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.865 [2024-12-06 01:15:17.253382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.865 qpair failed and we were unable to recover it. 00:37:44.865 [2024-12-06 01:15:17.253482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.865 [2024-12-06 01:15:17.253515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.865 qpair failed and we were unable to recover it. 00:37:44.865 [2024-12-06 01:15:17.253645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.865 [2024-12-06 01:15:17.253678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.865 qpair failed and we were unable to recover it. 00:37:44.865 [2024-12-06 01:15:17.253839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.865 [2024-12-06 01:15:17.253874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.865 qpair failed and we were unable to recover it. 00:37:44.865 [2024-12-06 01:15:17.254009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.865 [2024-12-06 01:15:17.254043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.865 qpair failed and we were unable to recover it. 00:37:44.865 [2024-12-06 01:15:17.254179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.865 [2024-12-06 01:15:17.254212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.865 qpair failed and we were unable to recover it. 00:37:44.865 [2024-12-06 01:15:17.254322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.865 [2024-12-06 01:15:17.254356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.865 qpair failed and we were unable to recover it. 00:37:44.865 [2024-12-06 01:15:17.254498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.865 [2024-12-06 01:15:17.254531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.865 qpair failed and we were unable to recover it. 00:37:44.865 [2024-12-06 01:15:17.254666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.865 [2024-12-06 01:15:17.254699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.865 qpair failed and we were unable to recover it. 
00:37:44.865 [2024-12-06 01:15:17.254833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.865 [2024-12-06 01:15:17.254871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.865 qpair failed and we were unable to recover it. 00:37:44.865 [2024-12-06 01:15:17.255081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.865 [2024-12-06 01:15:17.255114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.865 qpair failed and we were unable to recover it. 00:37:44.865 [2024-12-06 01:15:17.255250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.865 [2024-12-06 01:15:17.255283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.865 qpair failed and we were unable to recover it. 00:37:44.865 [2024-12-06 01:15:17.255394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.865 [2024-12-06 01:15:17.255427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.865 qpair failed and we were unable to recover it. 00:37:44.865 [2024-12-06 01:15:17.255588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.865 [2024-12-06 01:15:17.255621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.865 qpair failed and we were unable to recover it. 00:37:44.865 [2024-12-06 01:15:17.255754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.865 [2024-12-06 01:15:17.255788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.865 qpair failed and we were unable to recover it. 00:37:44.865 [2024-12-06 01:15:17.255962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.865 [2024-12-06 01:15:17.256011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.865 qpair failed and we were unable to recover it. 00:37:44.865 [2024-12-06 01:15:17.256132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.865 [2024-12-06 01:15:17.256170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.865 qpair failed and we were unable to recover it. 00:37:44.865 [2024-12-06 01:15:17.256305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.865 [2024-12-06 01:15:17.256340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.865 qpair failed and we were unable to recover it. 00:37:44.865 [2024-12-06 01:15:17.256476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.865 [2024-12-06 01:15:17.256511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.865 qpair failed and we were unable to recover it. 
00:37:44.865 [2024-12-06 01:15:17.256641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.865 [2024-12-06 01:15:17.256676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.865 qpair failed and we were unable to recover it. 00:37:44.865 [2024-12-06 01:15:17.256842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.865 [2024-12-06 01:15:17.256877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.865 qpair failed and we were unable to recover it. 00:37:44.865 [2024-12-06 01:15:17.256983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.865 [2024-12-06 01:15:17.257017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.865 qpair failed and we were unable to recover it. 00:37:44.865 [2024-12-06 01:15:17.257183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.865 [2024-12-06 01:15:17.257217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.865 qpair failed and we were unable to recover it. 00:37:44.865 [2024-12-06 01:15:17.257354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.865 [2024-12-06 01:15:17.257388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.865 qpair failed and we were unable to recover it. 00:37:44.865 [2024-12-06 01:15:17.257522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.865 [2024-12-06 01:15:17.257555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.865 qpair failed and we were unable to recover it. 00:37:44.865 [2024-12-06 01:15:17.257685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.865 [2024-12-06 01:15:17.257718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.865 qpair failed and we were unable to recover it. 00:37:44.865 [2024-12-06 01:15:17.257836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.865 [2024-12-06 01:15:17.257871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.865 qpair failed and we were unable to recover it. 00:37:44.865 [2024-12-06 01:15:17.258027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.865 [2024-12-06 01:15:17.258061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.865 qpair failed and we were unable to recover it. 00:37:44.865 [2024-12-06 01:15:17.258192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.865 [2024-12-06 01:15:17.258226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.865 qpair failed and we were unable to recover it. 
00:37:44.865 [2024-12-06 01:15:17.258363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.865 [2024-12-06 01:15:17.258397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.865 qpair failed and we were unable to recover it. 00:37:44.865 [2024-12-06 01:15:17.258511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.865 [2024-12-06 01:15:17.258545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.865 qpair failed and we were unable to recover it. 00:37:44.865 [2024-12-06 01:15:17.258674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.865 [2024-12-06 01:15:17.258708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.865 qpair failed and we were unable to recover it. 00:37:44.865 [2024-12-06 01:15:17.258825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.865 [2024-12-06 01:15:17.258858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.865 qpair failed and we were unable to recover it. 00:37:44.865 [2024-12-06 01:15:17.258995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.865 [2024-12-06 01:15:17.259029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.865 qpair failed and we were unable to recover it. 00:37:44.865 [2024-12-06 01:15:17.259168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.865 [2024-12-06 01:15:17.259202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.865 qpair failed and we were unable to recover it. 00:37:44.865 [2024-12-06 01:15:17.259311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.865 [2024-12-06 01:15:17.259345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.865 qpair failed and we were unable to recover it. 00:37:44.865 [2024-12-06 01:15:17.259492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.865 [2024-12-06 01:15:17.259526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.865 qpair failed and we were unable to recover it. 00:37:44.865 [2024-12-06 01:15:17.259664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.865 [2024-12-06 01:15:17.259697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.865 qpair failed and we were unable to recover it. 00:37:44.865 [2024-12-06 01:15:17.259850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.866 [2024-12-06 01:15:17.259884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.866 qpair failed and we were unable to recover it. 
00:37:44.866 [2024-12-06 01:15:17.259992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.866 [2024-12-06 01:15:17.260027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.866 qpair failed and we were unable to recover it. 00:37:44.866 [2024-12-06 01:15:17.260167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.866 [2024-12-06 01:15:17.260201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.866 qpair failed and we were unable to recover it. 00:37:44.866 [2024-12-06 01:15:17.260334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.866 [2024-12-06 01:15:17.260367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.866 qpair failed and we were unable to recover it. 00:37:44.866 [2024-12-06 01:15:17.260517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.866 [2024-12-06 01:15:17.260551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.866 qpair failed and we were unable to recover it. 00:37:44.866 [2024-12-06 01:15:17.260654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.866 [2024-12-06 01:15:17.260700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.866 qpair failed and we were unable to recover it. 00:37:44.866 [2024-12-06 01:15:17.260833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.866 [2024-12-06 01:15:17.260867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.866 qpair failed and we were unable to recover it. 00:37:44.866 [2024-12-06 01:15:17.261003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.866 [2024-12-06 01:15:17.261036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.866 qpair failed and we were unable to recover it. 00:37:44.866 [2024-12-06 01:15:17.261175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.866 [2024-12-06 01:15:17.261210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.866 qpair failed and we were unable to recover it. 00:37:44.866 [2024-12-06 01:15:17.261367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.866 [2024-12-06 01:15:17.261400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.866 qpair failed and we were unable to recover it. 00:37:44.866 [2024-12-06 01:15:17.261529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.866 [2024-12-06 01:15:17.261563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.866 qpair failed and we were unable to recover it. 
00:37:44.866 [2024-12-06 01:15:17.261696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.866 [2024-12-06 01:15:17.261734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.866 qpair failed and we were unable to recover it. 00:37:44.866 [2024-12-06 01:15:17.261878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.866 [2024-12-06 01:15:17.261913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.866 qpair failed and we were unable to recover it. 00:37:44.866 [2024-12-06 01:15:17.262036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.866 [2024-12-06 01:15:17.262084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.866 qpair failed and we were unable to recover it. 00:37:44.866 [2024-12-06 01:15:17.262222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.866 [2024-12-06 01:15:17.262258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.866 qpair failed and we were unable to recover it. 00:37:44.866 [2024-12-06 01:15:17.262367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.866 [2024-12-06 01:15:17.262401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.866 qpair failed and we were unable to recover it. 00:37:44.866 [2024-12-06 01:15:17.262535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.866 [2024-12-06 01:15:17.262569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.866 qpair failed and we were unable to recover it. 00:37:44.866 [2024-12-06 01:15:17.262734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.866 [2024-12-06 01:15:17.262768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.866 qpair failed and we were unable to recover it. 00:37:44.866 [2024-12-06 01:15:17.262885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.866 [2024-12-06 01:15:17.262919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.866 qpair failed and we were unable to recover it. 00:37:44.866 [2024-12-06 01:15:17.263034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.866 [2024-12-06 01:15:17.263068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.866 qpair failed and we were unable to recover it. 00:37:44.866 [2024-12-06 01:15:17.263203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.866 [2024-12-06 01:15:17.263238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.866 qpair failed and we were unable to recover it. 
00:37:44.866 [2024-12-06 01:15:17.263339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.866 [2024-12-06 01:15:17.263374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.866 qpair failed and we were unable to recover it. 00:37:44.866 [2024-12-06 01:15:17.263490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.866 [2024-12-06 01:15:17.263523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.866 qpair failed and we were unable to recover it. 00:37:44.866 [2024-12-06 01:15:17.263633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.866 [2024-12-06 01:15:17.263668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.866 qpair failed and we were unable to recover it. 00:37:44.866 [2024-12-06 01:15:17.263811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.866 [2024-12-06 01:15:17.263852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.866 qpair failed and we were unable to recover it. 00:37:44.866 [2024-12-06 01:15:17.263969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.866 [2024-12-06 01:15:17.264003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.866 qpair failed and we were unable to recover it. 00:37:44.866 [2024-12-06 01:15:17.264135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.866 [2024-12-06 01:15:17.264168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.866 qpair failed and we were unable to recover it. 00:37:44.866 [2024-12-06 01:15:17.264303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.866 [2024-12-06 01:15:17.264337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.866 qpair failed and we were unable to recover it. 00:37:44.866 [2024-12-06 01:15:17.264472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.866 [2024-12-06 01:15:17.264506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.866 qpair failed and we were unable to recover it. 00:37:44.866 [2024-12-06 01:15:17.264637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.866 [2024-12-06 01:15:17.264672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.866 qpair failed and we were unable to recover it. 00:37:44.866 [2024-12-06 01:15:17.264805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.866 [2024-12-06 01:15:17.264857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.866 qpair failed and we were unable to recover it. 
00:37:44.866 [2024-12-06 01:15:17.264964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.866 [2024-12-06 01:15:17.264998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.866 qpair failed and we were unable to recover it. 00:37:44.866 [2024-12-06 01:15:17.265137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.866 [2024-12-06 01:15:17.265172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.866 qpair failed and we were unable to recover it. 00:37:44.866 [2024-12-06 01:15:17.265308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.866 [2024-12-06 01:15:17.265341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.866 qpair failed and we were unable to recover it. 00:37:44.866 [2024-12-06 01:15:17.265474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.866 [2024-12-06 01:15:17.265507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.866 qpair failed and we were unable to recover it. 00:37:44.866 [2024-12-06 01:15:17.265622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.866 [2024-12-06 01:15:17.265656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.866 qpair failed and we were unable to recover it. 00:37:44.866 [2024-12-06 01:15:17.265800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.866 [2024-12-06 01:15:17.265845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.866 qpair failed and we were unable to recover it. 00:37:44.866 [2024-12-06 01:15:17.265944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.866 [2024-12-06 01:15:17.265978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.866 qpair failed and we were unable to recover it. 00:37:44.866 [2024-12-06 01:15:17.266088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.866 [2024-12-06 01:15:17.266122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.866 qpair failed and we were unable to recover it. 00:37:44.866 [2024-12-06 01:15:17.266256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.866 [2024-12-06 01:15:17.266289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.866 qpair failed and we were unable to recover it. 00:37:44.866 [2024-12-06 01:15:17.266416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.866 [2024-12-06 01:15:17.266449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.866 qpair failed and we were unable to recover it. 
00:37:44.866 [2024-12-06 01:15:17.266553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.866 [2024-12-06 01:15:17.266586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.866 qpair failed and we were unable to recover it. 00:37:44.866 [2024-12-06 01:15:17.266711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.866 [2024-12-06 01:15:17.266745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.866 qpair failed and we were unable to recover it. 00:37:44.866 [2024-12-06 01:15:17.266954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.866 [2024-12-06 01:15:17.266989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.866 qpair failed and we were unable to recover it. 00:37:44.866 [2024-12-06 01:15:17.267118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.866 [2024-12-06 01:15:17.267151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.866 qpair failed and we were unable to recover it. 00:37:44.866 [2024-12-06 01:15:17.267286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.866 [2024-12-06 01:15:17.267319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.866 qpair failed and we were unable to recover it. 00:37:44.866 [2024-12-06 01:15:17.267426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.866 [2024-12-06 01:15:17.267460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.866 qpair failed and we were unable to recover it. 00:37:44.866 [2024-12-06 01:15:17.267592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.867 [2024-12-06 01:15:17.267625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.867 qpair failed and we were unable to recover it. 00:37:44.867 [2024-12-06 01:15:17.267755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.867 [2024-12-06 01:15:17.267805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.867 qpair failed and we were unable to recover it. 00:37:44.867 [2024-12-06 01:15:17.267974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.867 [2024-12-06 01:15:17.268008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.867 qpair failed and we were unable to recover it. 00:37:44.867 [2024-12-06 01:15:17.268121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.867 [2024-12-06 01:15:17.268155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.867 qpair failed and we were unable to recover it. 
00:37:44.867 [2024-12-06 01:15:17.268362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.867 [2024-12-06 01:15:17.268400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.867 qpair failed and we were unable to recover it. 00:37:44.867 [2024-12-06 01:15:17.268559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.867 [2024-12-06 01:15:17.268593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.867 qpair failed and we were unable to recover it. 00:37:44.867 [2024-12-06 01:15:17.268692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.867 [2024-12-06 01:15:17.268726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.867 qpair failed and we were unable to recover it. 00:37:44.867 [2024-12-06 01:15:17.268863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.867 [2024-12-06 01:15:17.268897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.867 qpair failed and we were unable to recover it. 00:37:44.867 [2024-12-06 01:15:17.269009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.867 [2024-12-06 01:15:17.269044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.867 qpair failed and we were unable to recover it. 00:37:44.867 [2024-12-06 01:15:17.269213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.867 [2024-12-06 01:15:17.269247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.867 qpair failed and we were unable to recover it. 00:37:44.867 [2024-12-06 01:15:17.269384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.867 [2024-12-06 01:15:17.269418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.867 qpair failed and we were unable to recover it. 00:37:44.867 [2024-12-06 01:15:17.269524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.867 [2024-12-06 01:15:17.269559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.867 qpair failed and we were unable to recover it. 00:37:44.867 [2024-12-06 01:15:17.269690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.867 [2024-12-06 01:15:17.269723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.867 qpair failed and we were unable to recover it. 00:37:44.867 [2024-12-06 01:15:17.269839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.867 [2024-12-06 01:15:17.269873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.867 qpair failed and we were unable to recover it. 
00:37:44.867 [2024-12-06 01:15:17.270013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.867 [2024-12-06 01:15:17.270046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.867 qpair failed and we were unable to recover it. 00:37:44.867 [2024-12-06 01:15:17.270154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.867 [2024-12-06 01:15:17.270187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.867 qpair failed and we were unable to recover it. 00:37:44.867 [2024-12-06 01:15:17.270330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.867 [2024-12-06 01:15:17.270363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.867 qpair failed and we were unable to recover it. 00:37:44.867 [2024-12-06 01:15:17.270570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.867 [2024-12-06 01:15:17.270603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.867 qpair failed and we were unable to recover it. 00:37:44.867 [2024-12-06 01:15:17.270714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.867 [2024-12-06 01:15:17.270748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.867 qpair failed and we were unable to recover it. 00:37:44.867 [2024-12-06 01:15:17.270881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.867 [2024-12-06 01:15:17.270915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.867 qpair failed and we were unable to recover it. 00:37:44.867 [2024-12-06 01:15:17.271075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.867 [2024-12-06 01:15:17.271109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.867 qpair failed and we were unable to recover it. 00:37:44.867 [2024-12-06 01:15:17.271213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.867 [2024-12-06 01:15:17.271246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.867 qpair failed and we were unable to recover it. 00:37:44.867 [2024-12-06 01:15:17.271405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.867 [2024-12-06 01:15:17.271439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.867 qpair failed and we were unable to recover it. 00:37:44.867 [2024-12-06 01:15:17.271546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.867 [2024-12-06 01:15:17.271579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.867 qpair failed and we were unable to recover it. 
00:37:44.867 [2024-12-06 01:15:17.271824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.867 [2024-12-06 01:15:17.271874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.867 qpair failed and we were unable to recover it. 00:37:44.867 [2024-12-06 01:15:17.272016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.867 [2024-12-06 01:15:17.272052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.867 qpair failed and we were unable to recover it. 00:37:44.867 [2024-12-06 01:15:17.272215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.867 [2024-12-06 01:15:17.272251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.867 qpair failed and we were unable to recover it. 00:37:44.867 [2024-12-06 01:15:17.272420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.867 [2024-12-06 01:15:17.272455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.867 qpair failed and we were unable to recover it. 00:37:44.867 [2024-12-06 01:15:17.272591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.867 [2024-12-06 01:15:17.272625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.867 qpair failed and we were unable to recover it. 00:37:44.867 [2024-12-06 01:15:17.272837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.867 [2024-12-06 01:15:17.272873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.867 qpair failed and we were unable to recover it. 00:37:44.867 [2024-12-06 01:15:17.273005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.867 [2024-12-06 01:15:17.273040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.867 qpair failed and we were unable to recover it. 00:37:44.867 [2024-12-06 01:15:17.273187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.867 [2024-12-06 01:15:17.273221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.867 qpair failed and we were unable to recover it. 00:37:44.867 [2024-12-06 01:15:17.273359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.867 [2024-12-06 01:15:17.273394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.867 qpair failed and we were unable to recover it. 00:37:44.867 [2024-12-06 01:15:17.273529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.867 [2024-12-06 01:15:17.273564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.867 qpair failed and we were unable to recover it. 
00:37:44.867 [2024-12-06 01:15:17.273689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.867 [2024-12-06 01:15:17.273737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.867 qpair failed and we were unable to recover it. 00:37:44.867 [2024-12-06 01:15:17.273907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.867 [2024-12-06 01:15:17.273944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.867 qpair failed and we were unable to recover it. 00:37:44.867 [2024-12-06 01:15:17.274058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.867 [2024-12-06 01:15:17.274094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.867 qpair failed and we were unable to recover it. 00:37:44.867 [2024-12-06 01:15:17.274254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.867 [2024-12-06 01:15:17.274289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.867 qpair failed and we were unable to recover it. 00:37:44.867 [2024-12-06 01:15:17.274439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.867 [2024-12-06 01:15:17.274475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.867 qpair failed and we were unable to recover it. 00:37:44.867 [2024-12-06 01:15:17.274587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.867 [2024-12-06 01:15:17.274621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.867 qpair failed and we were unable to recover it. 00:37:44.867 [2024-12-06 01:15:17.274755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.867 [2024-12-06 01:15:17.274789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.867 qpair failed and we were unable to recover it. 00:37:44.867 [2024-12-06 01:15:17.274936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.867 [2024-12-06 01:15:17.274971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.867 qpair failed and we were unable to recover it. 00:37:44.867 [2024-12-06 01:15:17.275108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.867 [2024-12-06 01:15:17.275141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.867 qpair failed and we were unable to recover it. 00:37:44.868 [2024-12-06 01:15:17.275300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.868 [2024-12-06 01:15:17.275334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.868 qpair failed and we were unable to recover it. 
00:37:44.868 [2024-12-06 01:15:17.275464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.868 [2024-12-06 01:15:17.275503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.868 qpair failed and we were unable to recover it. 00:37:44.868 [2024-12-06 01:15:17.275645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.868 [2024-12-06 01:15:17.275679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.868 qpair failed and we were unable to recover it. 00:37:44.868 [2024-12-06 01:15:17.275789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.868 [2024-12-06 01:15:17.275829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.868 qpair failed and we were unable to recover it. 00:37:44.868 [2024-12-06 01:15:17.275965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.868 [2024-12-06 01:15:17.275999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.868 qpair failed and we were unable to recover it. 00:37:44.868 [2024-12-06 01:15:17.276135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.868 [2024-12-06 01:15:17.276168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.868 qpair failed and we were unable to recover it. 00:37:44.868 [2024-12-06 01:15:17.276303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.868 [2024-12-06 01:15:17.276337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.868 qpair failed and we were unable to recover it. 00:37:44.868 [2024-12-06 01:15:17.276466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.868 [2024-12-06 01:15:17.276499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.868 qpair failed and we were unable to recover it. 00:37:44.868 [2024-12-06 01:15:17.276604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.868 [2024-12-06 01:15:17.276638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.868 qpair failed and we were unable to recover it. 00:37:44.868 [2024-12-06 01:15:17.276768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.868 [2024-12-06 01:15:17.276802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.868 qpair failed and we were unable to recover it. 00:37:44.868 [2024-12-06 01:15:17.276925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.868 [2024-12-06 01:15:17.276958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.868 qpair failed and we were unable to recover it. 
00:37:44.868 [2024-12-06 01:15:17.277082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.868 [2024-12-06 01:15:17.277115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.868 qpair failed and we were unable to recover it. 00:37:44.868 [2024-12-06 01:15:17.277216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.868 [2024-12-06 01:15:17.277249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.868 qpair failed and we were unable to recover it. 00:37:44.868 [2024-12-06 01:15:17.277351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.868 [2024-12-06 01:15:17.277384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.868 qpair failed and we were unable to recover it. 00:37:44.868 [2024-12-06 01:15:17.277525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.868 [2024-12-06 01:15:17.277559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.868 qpair failed and we were unable to recover it. 00:37:44.868 [2024-12-06 01:15:17.277696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.868 [2024-12-06 01:15:17.277730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.868 qpair failed and we were unable to recover it. 00:37:44.868 [2024-12-06 01:15:17.277868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.868 [2024-12-06 01:15:17.277902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.868 qpair failed and we were unable to recover it. 00:37:44.868 [2024-12-06 01:15:17.278056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.868 [2024-12-06 01:15:17.278104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.868 qpair failed and we were unable to recover it. 00:37:44.868 [2024-12-06 01:15:17.278247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.868 [2024-12-06 01:15:17.278283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.868 qpair failed and we were unable to recover it. 00:37:44.868 [2024-12-06 01:15:17.278437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.868 [2024-12-06 01:15:17.278473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.868 qpair failed and we were unable to recover it. 00:37:44.868 [2024-12-06 01:15:17.278610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.868 [2024-12-06 01:15:17.278645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.868 qpair failed and we were unable to recover it. 
00:37:44.868 [2024-12-06 01:15:17.278751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.868 [2024-12-06 01:15:17.278785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.868 qpair failed and we were unable to recover it. 00:37:44.868 [2024-12-06 01:15:17.278930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.868 [2024-12-06 01:15:17.278965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.868 qpair failed and we were unable to recover it. 00:37:44.868 [2024-12-06 01:15:17.279134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.868 [2024-12-06 01:15:17.279168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.868 qpair failed and we were unable to recover it. 00:37:44.868 [2024-12-06 01:15:17.279298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.868 [2024-12-06 01:15:17.279331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.868 qpair failed and we were unable to recover it. 00:37:44.868 [2024-12-06 01:15:17.279543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.868 [2024-12-06 01:15:17.279577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.868 qpair failed and we were unable to recover it. 00:37:44.868 [2024-12-06 01:15:17.279708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.868 [2024-12-06 01:15:17.279741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.868 qpair failed and we were unable to recover it. 00:37:44.868 [2024-12-06 01:15:17.279903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.868 [2024-12-06 01:15:17.279937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.868 qpair failed and we were unable to recover it. 00:37:44.868 [2024-12-06 01:15:17.280043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.868 [2024-12-06 01:15:17.280082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.868 qpair failed and we were unable to recover it. 00:37:44.868 [2024-12-06 01:15:17.280194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.868 [2024-12-06 01:15:17.280228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.868 qpair failed and we were unable to recover it. 00:37:44.868 [2024-12-06 01:15:17.280389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.868 [2024-12-06 01:15:17.280422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.868 qpair failed and we were unable to recover it. 
00:37:44.868 [2024-12-06 01:15:17.280525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.868 [2024-12-06 01:15:17.280559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.868 qpair failed and we were unable to recover it. 00:37:44.868 [2024-12-06 01:15:17.280695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.868 [2024-12-06 01:15:17.280729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.868 qpair failed and we were unable to recover it. 00:37:44.868 [2024-12-06 01:15:17.280832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.868 [2024-12-06 01:15:17.280866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.868 qpair failed and we were unable to recover it. 00:37:44.868 [2024-12-06 01:15:17.280995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.868 [2024-12-06 01:15:17.281028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.868 qpair failed and we were unable to recover it. 00:37:44.868 [2024-12-06 01:15:17.281181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.868 [2024-12-06 01:15:17.281230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.868 qpair failed and we were unable to recover it. 00:37:44.868 [2024-12-06 01:15:17.281384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.868 [2024-12-06 01:15:17.281421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.868 qpair failed and we were unable to recover it. 00:37:44.868 [2024-12-06 01:15:17.281569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.868 [2024-12-06 01:15:17.281605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.868 qpair failed and we were unable to recover it. 00:37:44.868 [2024-12-06 01:15:17.281742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.868 [2024-12-06 01:15:17.281777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.868 qpair failed and we were unable to recover it. 00:37:44.868 [2024-12-06 01:15:17.281927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.868 [2024-12-06 01:15:17.281962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.868 qpair failed and we were unable to recover it. 00:37:44.868 [2024-12-06 01:15:17.282102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.868 [2024-12-06 01:15:17.282137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.868 qpair failed and we were unable to recover it. 
00:37:44.868 [2024-12-06 01:15:17.282269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.868 [2024-12-06 01:15:17.282304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.868 qpair failed and we were unable to recover it. 00:37:44.868 [2024-12-06 01:15:17.282441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.868 [2024-12-06 01:15:17.282476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.868 qpair failed and we were unable to recover it. 00:37:44.868 [2024-12-06 01:15:17.282614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.868 [2024-12-06 01:15:17.282649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.868 qpair failed and we were unable to recover it. 00:37:44.868 [2024-12-06 01:15:17.282762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.868 [2024-12-06 01:15:17.282797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.868 qpair failed and we were unable to recover it. 00:37:44.868 [2024-12-06 01:15:17.282964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.868 [2024-12-06 01:15:17.282998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.868 qpair failed and we were unable to recover it. 00:37:44.868 [2024-12-06 01:15:17.283133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.868 [2024-12-06 01:15:17.283166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.869 qpair failed and we were unable to recover it. 00:37:44.869 [2024-12-06 01:15:17.283276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.869 [2024-12-06 01:15:17.283309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.869 qpair failed and we were unable to recover it. 00:37:44.869 [2024-12-06 01:15:17.283439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.869 [2024-12-06 01:15:17.283472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.869 qpair failed and we were unable to recover it. 00:37:44.869 [2024-12-06 01:15:17.283596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.869 [2024-12-06 01:15:17.283629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.869 qpair failed and we were unable to recover it. 00:37:44.869 [2024-12-06 01:15:17.283742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.869 [2024-12-06 01:15:17.283775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.869 qpair failed and we were unable to recover it. 
00:37:44.869 [2024-12-06 01:15:17.283942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.869 [2024-12-06 01:15:17.283976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.869 qpair failed and we were unable to recover it. 00:37:44.869 [2024-12-06 01:15:17.284087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.869 [2024-12-06 01:15:17.284120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.869 qpair failed and we were unable to recover it. 00:37:44.869 [2024-12-06 01:15:17.284238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.869 [2024-12-06 01:15:17.284273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.869 qpair failed and we were unable to recover it. 00:37:44.869 [2024-12-06 01:15:17.284411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.869 [2024-12-06 01:15:17.284445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.869 qpair failed and we were unable to recover it. 00:37:44.869 [2024-12-06 01:15:17.284610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.869 [2024-12-06 01:15:17.284643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.869 qpair failed and we were unable to recover it. 00:37:44.869 [2024-12-06 01:15:17.284741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.869 [2024-12-06 01:15:17.284774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.869 qpair failed and we were unable to recover it. 00:37:44.869 [2024-12-06 01:15:17.284894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.869 [2024-12-06 01:15:17.284928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.869 qpair failed and we were unable to recover it. 00:37:44.869 [2024-12-06 01:15:17.285037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.869 [2024-12-06 01:15:17.285070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.869 qpair failed and we were unable to recover it. 00:37:44.869 [2024-12-06 01:15:17.285230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.869 [2024-12-06 01:15:17.285263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.869 qpair failed and we were unable to recover it. 00:37:44.869 [2024-12-06 01:15:17.285398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.869 [2024-12-06 01:15:17.285433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.869 qpair failed and we were unable to recover it. 
00:37:44.869 [2024-12-06 01:15:17.285545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.869 [2024-12-06 01:15:17.285578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.869 qpair failed and we were unable to recover it. 00:37:44.869 [2024-12-06 01:15:17.285727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.869 [2024-12-06 01:15:17.285765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.869 qpair failed and we were unable to recover it. 00:37:44.869 [2024-12-06 01:15:17.285899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.869 [2024-12-06 01:15:17.285933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.869 qpair failed and we were unable to recover it. 00:37:44.869 [2024-12-06 01:15:17.286064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.869 [2024-12-06 01:15:17.286097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.869 qpair failed and we were unable to recover it. 00:37:44.869 [2024-12-06 01:15:17.286224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.869 [2024-12-06 01:15:17.286257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.869 qpair failed and we were unable to recover it. 00:37:44.869 [2024-12-06 01:15:17.286368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.869 [2024-12-06 01:15:17.286402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.869 qpair failed and we were unable to recover it. 00:37:44.869 [2024-12-06 01:15:17.286563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.869 [2024-12-06 01:15:17.286597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.869 qpair failed and we were unable to recover it. 00:37:44.869 [2024-12-06 01:15:17.286715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.869 [2024-12-06 01:15:17.286754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.869 qpair failed and we were unable to recover it. 00:37:44.869 [2024-12-06 01:15:17.286863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.869 [2024-12-06 01:15:17.286898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.869 qpair failed and we were unable to recover it. 00:37:44.869 [2024-12-06 01:15:17.287033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.869 [2024-12-06 01:15:17.287066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.869 qpair failed and we were unable to recover it. 
00:37:44.869 [2024-12-06 01:15:17.287199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.869 [2024-12-06 01:15:17.287232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.869 qpair failed and we were unable to recover it. 00:37:44.869 [2024-12-06 01:15:17.287393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.869 [2024-12-06 01:15:17.287427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.869 qpair failed and we were unable to recover it. 00:37:44.869 [2024-12-06 01:15:17.287564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.869 [2024-12-06 01:15:17.287598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.869 qpair failed and we were unable to recover it. 00:37:44.869 [2024-12-06 01:15:17.287709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.869 [2024-12-06 01:15:17.287742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.869 qpair failed and we were unable to recover it. 00:37:44.869 [2024-12-06 01:15:17.287902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.869 [2024-12-06 01:15:17.287935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.869 qpair failed and we were unable to recover it. 00:37:44.869 [2024-12-06 01:15:17.288070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.869 [2024-12-06 01:15:17.288104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.869 qpair failed and we were unable to recover it. 00:37:44.869 [2024-12-06 01:15:17.288235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.869 [2024-12-06 01:15:17.288268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.869 qpair failed and we were unable to recover it. 00:37:44.869 [2024-12-06 01:15:17.288425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.869 [2024-12-06 01:15:17.288458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.869 qpair failed and we were unable to recover it. 00:37:44.869 [2024-12-06 01:15:17.288618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.869 [2024-12-06 01:15:17.288652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.869 qpair failed and we were unable to recover it. 00:37:44.869 [2024-12-06 01:15:17.288759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.869 [2024-12-06 01:15:17.288792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.869 qpair failed and we were unable to recover it. 
00:37:44.869 [2024-12-06 01:15:17.288958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.869 [2024-12-06 01:15:17.289006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.869 qpair failed and we were unable to recover it. 00:37:44.869 [2024-12-06 01:15:17.289189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.869 [2024-12-06 01:15:17.289226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.869 qpair failed and we were unable to recover it. 00:37:44.869 [2024-12-06 01:15:17.289334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.869 [2024-12-06 01:15:17.289368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.869 qpair failed and we were unable to recover it. 00:37:44.869 [2024-12-06 01:15:17.289475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.869 [2024-12-06 01:15:17.289509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.869 qpair failed and we were unable to recover it. 00:37:44.869 [2024-12-06 01:15:17.289640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.869 [2024-12-06 01:15:17.289674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.869 qpair failed and we were unable to recover it. 00:37:44.869 [2024-12-06 01:15:17.289841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.869 [2024-12-06 01:15:17.289876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.869 qpair failed and we were unable to recover it. 00:37:44.869 [2024-12-06 01:15:17.290008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.869 [2024-12-06 01:15:17.290042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.869 qpair failed and we were unable to recover it. 00:37:44.869 [2024-12-06 01:15:17.290147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.869 [2024-12-06 01:15:17.290182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.869 qpair failed and we were unable to recover it. 00:37:44.869 [2024-12-06 01:15:17.290299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.869 [2024-12-06 01:15:17.290332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.869 qpair failed and we were unable to recover it. 00:37:44.869 [2024-12-06 01:15:17.290489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.869 [2024-12-06 01:15:17.290523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.869 qpair failed and we were unable to recover it. 
00:37:44.869 [2024-12-06 01:15:17.290658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.869 [2024-12-06 01:15:17.290692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.869 qpair failed and we were unable to recover it. 00:37:44.869 [2024-12-06 01:15:17.290842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.869 [2024-12-06 01:15:17.290877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.869 qpair failed and we were unable to recover it. 00:37:44.869 [2024-12-06 01:15:17.291014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.869 [2024-12-06 01:15:17.291049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.869 qpair failed and we were unable to recover it. 00:37:44.870 [2024-12-06 01:15:17.291209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.870 [2024-12-06 01:15:17.291243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.870 qpair failed and we were unable to recover it. 00:37:44.870 [2024-12-06 01:15:17.291386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.870 [2024-12-06 01:15:17.291420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.870 qpair failed and we were unable to recover it. 00:37:44.870 [2024-12-06 01:15:17.291529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.870 [2024-12-06 01:15:17.291563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.870 qpair failed and we were unable to recover it. 00:37:44.870 [2024-12-06 01:15:17.291677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.870 [2024-12-06 01:15:17.291713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.870 qpair failed and we were unable to recover it. 00:37:44.870 [2024-12-06 01:15:17.291858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.870 [2024-12-06 01:15:17.291893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.870 qpair failed and we were unable to recover it. 00:37:44.870 [2024-12-06 01:15:17.292032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.870 [2024-12-06 01:15:17.292066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.870 qpair failed and we were unable to recover it. 00:37:44.870 [2024-12-06 01:15:17.292174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.870 [2024-12-06 01:15:17.292208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.870 qpair failed and we were unable to recover it. 
00:37:44.870 [2024-12-06 01:15:17.292334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.870 [2024-12-06 01:15:17.292370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.870 qpair failed and we were unable to recover it. 00:37:44.870 [2024-12-06 01:15:17.292510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.870 [2024-12-06 01:15:17.292546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.870 qpair failed and we were unable to recover it. 00:37:44.870 [2024-12-06 01:15:17.292676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.870 [2024-12-06 01:15:17.292710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.870 qpair failed and we were unable to recover it. 00:37:44.870 [2024-12-06 01:15:17.292852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.870 [2024-12-06 01:15:17.292886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.870 qpair failed and we were unable to recover it. 00:37:44.870 [2024-12-06 01:15:17.293003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.870 [2024-12-06 01:15:17.293037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.870 qpair failed and we were unable to recover it. 00:37:44.870 [2024-12-06 01:15:17.293171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.870 [2024-12-06 01:15:17.293205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.870 qpair failed and we were unable to recover it. 00:37:44.870 [2024-12-06 01:15:17.293359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.870 [2024-12-06 01:15:17.293393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.870 qpair failed and we were unable to recover it. 00:37:44.870 [2024-12-06 01:15:17.293531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.870 [2024-12-06 01:15:17.293569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.870 qpair failed and we were unable to recover it. 00:37:44.870 [2024-12-06 01:15:17.293677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.870 [2024-12-06 01:15:17.293710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.870 qpair failed and we were unable to recover it. 00:37:44.870 [2024-12-06 01:15:17.293810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.870 [2024-12-06 01:15:17.293850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.870 qpair failed and we were unable to recover it. 
00:37:44.870 [2024-12-06 01:15:17.293964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.870 [2024-12-06 01:15:17.293997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.870 qpair failed and we were unable to recover it. 00:37:44.870 [2024-12-06 01:15:17.294093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.870 [2024-12-06 01:15:17.294127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.870 qpair failed and we were unable to recover it. 00:37:44.870 [2024-12-06 01:15:17.294260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.870 [2024-12-06 01:15:17.294294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.870 qpair failed and we were unable to recover it. 00:37:44.870 [2024-12-06 01:15:17.294415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.870 [2024-12-06 01:15:17.294449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.870 qpair failed and we were unable to recover it. 00:37:44.870 [2024-12-06 01:15:17.294582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.870 [2024-12-06 01:15:17.294615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.870 qpair failed and we were unable to recover it. 00:37:44.870 [2024-12-06 01:15:17.294729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.870 [2024-12-06 01:15:17.294762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.870 qpair failed and we were unable to recover it. 00:37:44.870 [2024-12-06 01:15:17.294935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.870 [2024-12-06 01:15:17.294970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.870 qpair failed and we were unable to recover it. 00:37:44.870 [2024-12-06 01:15:17.295139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.870 [2024-12-06 01:15:17.295173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.870 qpair failed and we were unable to recover it. 00:37:44.870 [2024-12-06 01:15:17.295305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.870 [2024-12-06 01:15:17.295350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.870 qpair failed and we were unable to recover it. 00:37:44.870 [2024-12-06 01:15:17.295486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.870 [2024-12-06 01:15:17.295520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.870 qpair failed and we were unable to recover it. 
00:37:44.870 [2024-12-06 01:15:17.295682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.870 [2024-12-06 01:15:17.295717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.870 qpair failed and we were unable to recover it.
00:37:44.870 [2024-12-06 01:15:17.296895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.870 [2024-12-06 01:15:17.296930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.870 qpair failed and we were unable to recover it.
[The same three-message sequence (connect() failed, errno = 111; sock connection error; qpair failed and we were unable to recover it) repeats back-to-back for tqpair=0x6150001f2f00 and tqpair=0x6150001ffe80, all against addr=10.0.0.2, port=4420, through 2024-12-06 01:15:17.331.]
00:37:44.875 [2024-12-06 01:15:17.331762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.875 [2024-12-06 01:15:17.331795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.875 qpair failed and we were unable to recover it. 00:37:44.875 [2024-12-06 01:15:17.331907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.875 [2024-12-06 01:15:17.331940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.875 qpair failed and we were unable to recover it. 00:37:44.875 [2024-12-06 01:15:17.332083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.875 [2024-12-06 01:15:17.332117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.875 qpair failed and we were unable to recover it. 00:37:44.875 [2024-12-06 01:15:17.332224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.875 [2024-12-06 01:15:17.332258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.875 qpair failed and we were unable to recover it. 00:37:44.875 [2024-12-06 01:15:17.332394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.875 [2024-12-06 01:15:17.332428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.875 qpair failed and we were unable to recover it. 00:37:44.875 [2024-12-06 01:15:17.332562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.875 [2024-12-06 01:15:17.332595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.875 qpair failed and we were unable to recover it. 00:37:44.875 [2024-12-06 01:15:17.332744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.875 [2024-12-06 01:15:17.332781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.875 qpair failed and we were unable to recover it. 00:37:44.875 [2024-12-06 01:15:17.332917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.875 [2024-12-06 01:15:17.332951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.875 qpair failed and we were unable to recover it. 00:37:44.875 [2024-12-06 01:15:17.333084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.875 [2024-12-06 01:15:17.333118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.875 qpair failed and we were unable to recover it. 00:37:44.875 [2024-12-06 01:15:17.333224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.875 [2024-12-06 01:15:17.333258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.875 qpair failed and we were unable to recover it. 
00:37:44.875 [2024-12-06 01:15:17.333420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.875 [2024-12-06 01:15:17.333454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.875 qpair failed and we were unable to recover it. 00:37:44.875 [2024-12-06 01:15:17.333601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.875 [2024-12-06 01:15:17.333635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.875 qpair failed and we were unable to recover it. 00:37:44.875 [2024-12-06 01:15:17.333760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.875 [2024-12-06 01:15:17.333793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.875 qpair failed and we were unable to recover it. 00:37:44.875 [2024-12-06 01:15:17.333931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.875 [2024-12-06 01:15:17.333965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.875 qpair failed and we were unable to recover it. 00:37:44.875 [2024-12-06 01:15:17.334098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.875 [2024-12-06 01:15:17.334133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.875 qpair failed and we were unable to recover it. 00:37:44.875 [2024-12-06 01:15:17.334269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.875 [2024-12-06 01:15:17.334302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.875 qpair failed and we were unable to recover it. 00:37:44.875 [2024-12-06 01:15:17.334460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.875 [2024-12-06 01:15:17.334493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.875 qpair failed and we were unable to recover it. 00:37:44.875 [2024-12-06 01:15:17.334628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.875 [2024-12-06 01:15:17.334666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.875 qpair failed and we were unable to recover it. 00:37:44.875 [2024-12-06 01:15:17.334802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.875 [2024-12-06 01:15:17.334841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.875 qpair failed and we were unable to recover it. 00:37:44.875 [2024-12-06 01:15:17.334991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.875 [2024-12-06 01:15:17.335025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.875 qpair failed and we were unable to recover it. 
00:37:44.875 [2024-12-06 01:15:17.335152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.875 [2024-12-06 01:15:17.335185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.875 qpair failed and we were unable to recover it. 00:37:44.875 [2024-12-06 01:15:17.335344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.875 [2024-12-06 01:15:17.335377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.875 qpair failed and we were unable to recover it. 00:37:44.875 [2024-12-06 01:15:17.335475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.875 [2024-12-06 01:15:17.335509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.875 qpair failed and we were unable to recover it. 00:37:44.875 [2024-12-06 01:15:17.335651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.875 [2024-12-06 01:15:17.335685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.875 qpair failed and we were unable to recover it. 00:37:44.875 [2024-12-06 01:15:17.335827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.875 [2024-12-06 01:15:17.335870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.875 qpair failed and we were unable to recover it. 00:37:44.875 [2024-12-06 01:15:17.335986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.875 [2024-12-06 01:15:17.336020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.875 qpair failed and we were unable to recover it. 00:37:44.875 [2024-12-06 01:15:17.336152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.875 [2024-12-06 01:15:17.336187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.875 qpair failed and we were unable to recover it. 00:37:44.875 [2024-12-06 01:15:17.336318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.875 [2024-12-06 01:15:17.336351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.875 qpair failed and we were unable to recover it. 00:37:44.875 [2024-12-06 01:15:17.336523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.875 [2024-12-06 01:15:17.336557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.875 qpair failed and we were unable to recover it. 00:37:44.875 [2024-12-06 01:15:17.336657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.875 [2024-12-06 01:15:17.336691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.875 qpair failed and we were unable to recover it. 
00:37:44.875 [2024-12-06 01:15:17.336829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.875 [2024-12-06 01:15:17.336863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.875 qpair failed and we were unable to recover it. 00:37:44.875 [2024-12-06 01:15:17.337012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.875 [2024-12-06 01:15:17.337046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.875 qpair failed and we were unable to recover it. 00:37:44.875 [2024-12-06 01:15:17.337154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.875 [2024-12-06 01:15:17.337188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.875 qpair failed and we were unable to recover it. 00:37:44.875 [2024-12-06 01:15:17.337346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.875 [2024-12-06 01:15:17.337380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.875 qpair failed and we were unable to recover it. 00:37:44.876 [2024-12-06 01:15:17.337516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.876 [2024-12-06 01:15:17.337549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.876 qpair failed and we were unable to recover it. 00:37:44.876 [2024-12-06 01:15:17.337683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.876 [2024-12-06 01:15:17.337716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.876 qpair failed and we were unable to recover it. 00:37:44.876 [2024-12-06 01:15:17.337883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.876 [2024-12-06 01:15:17.337917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.876 qpair failed and we were unable to recover it. 00:37:44.876 [2024-12-06 01:15:17.338026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.876 [2024-12-06 01:15:17.338060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.876 qpair failed and we were unable to recover it. 00:37:44.876 [2024-12-06 01:15:17.338164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.876 [2024-12-06 01:15:17.338196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.876 qpair failed and we were unable to recover it. 00:37:44.876 [2024-12-06 01:15:17.338311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.876 [2024-12-06 01:15:17.338345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.876 qpair failed and we were unable to recover it. 
00:37:44.876 [2024-12-06 01:15:17.338478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.876 [2024-12-06 01:15:17.338513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.876 qpair failed and we were unable to recover it. 00:37:44.876 [2024-12-06 01:15:17.338648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.876 [2024-12-06 01:15:17.338681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.876 qpair failed and we were unable to recover it. 00:37:44.876 [2024-12-06 01:15:17.338829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.876 [2024-12-06 01:15:17.338863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.876 qpair failed and we were unable to recover it. 00:37:44.876 [2024-12-06 01:15:17.338979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.876 [2024-12-06 01:15:17.339013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.876 qpair failed and we were unable to recover it. 00:37:44.876 [2024-12-06 01:15:17.339129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.876 [2024-12-06 01:15:17.339163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.876 qpair failed and we were unable to recover it. 00:37:44.876 [2024-12-06 01:15:17.339324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.876 [2024-12-06 01:15:17.339357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.876 qpair failed and we were unable to recover it. 00:37:44.876 [2024-12-06 01:15:17.339467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.876 [2024-12-06 01:15:17.339500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.876 qpair failed and we were unable to recover it. 00:37:44.876 [2024-12-06 01:15:17.339668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.876 [2024-12-06 01:15:17.339702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.876 qpair failed and we were unable to recover it. 00:37:44.876 [2024-12-06 01:15:17.339835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.876 [2024-12-06 01:15:17.339869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.876 qpair failed and we were unable to recover it. 00:37:44.876 [2024-12-06 01:15:17.340031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.876 [2024-12-06 01:15:17.340064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.876 qpair failed and we were unable to recover it. 
00:37:44.876 [2024-12-06 01:15:17.340196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.876 [2024-12-06 01:15:17.340229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.876 qpair failed and we were unable to recover it. 00:37:44.876 [2024-12-06 01:15:17.340372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.876 [2024-12-06 01:15:17.340406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.876 qpair failed and we were unable to recover it. 00:37:44.876 [2024-12-06 01:15:17.340564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.876 [2024-12-06 01:15:17.340597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.876 qpair failed and we were unable to recover it. 00:37:44.876 [2024-12-06 01:15:17.340729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.876 [2024-12-06 01:15:17.340763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.876 qpair failed and we were unable to recover it. 00:37:44.876 [2024-12-06 01:15:17.340927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.876 [2024-12-06 01:15:17.340961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.876 qpair failed and we were unable to recover it. 00:37:44.876 [2024-12-06 01:15:17.341099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.876 [2024-12-06 01:15:17.341132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.876 qpair failed and we were unable to recover it. 00:37:44.876 [2024-12-06 01:15:17.341235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.876 [2024-12-06 01:15:17.341268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.876 qpair failed and we were unable to recover it. 00:37:44.876 [2024-12-06 01:15:17.341415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.876 [2024-12-06 01:15:17.341454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.876 qpair failed and we were unable to recover it. 00:37:44.876 [2024-12-06 01:15:17.341598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.876 [2024-12-06 01:15:17.341632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.876 qpair failed and we were unable to recover it. 00:37:44.876 [2024-12-06 01:15:17.341802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.876 [2024-12-06 01:15:17.341842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.876 qpair failed and we were unable to recover it. 
00:37:44.876 [2024-12-06 01:15:17.341944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.876 [2024-12-06 01:15:17.341978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.876 qpair failed and we were unable to recover it. 00:37:44.876 [2024-12-06 01:15:17.342094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.876 [2024-12-06 01:15:17.342128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.876 qpair failed and we were unable to recover it. 00:37:44.876 [2024-12-06 01:15:17.342270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.876 [2024-12-06 01:15:17.342304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.876 qpair failed and we were unable to recover it. 00:37:44.876 [2024-12-06 01:15:17.342408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.876 [2024-12-06 01:15:17.342441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.876 qpair failed and we were unable to recover it. 00:37:44.876 [2024-12-06 01:15:17.342604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.876 [2024-12-06 01:15:17.342638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.876 qpair failed and we were unable to recover it. 00:37:44.876 [2024-12-06 01:15:17.342775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.876 [2024-12-06 01:15:17.342808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.876 qpair failed and we were unable to recover it. 00:37:44.876 [2024-12-06 01:15:17.342965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.876 [2024-12-06 01:15:17.342999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.876 qpair failed and we were unable to recover it. 00:37:44.876 [2024-12-06 01:15:17.343160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.876 [2024-12-06 01:15:17.343194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.876 qpair failed and we were unable to recover it. 00:37:44.876 [2024-12-06 01:15:17.343292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.876 [2024-12-06 01:15:17.343326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.876 qpair failed and we were unable to recover it. 00:37:44.876 [2024-12-06 01:15:17.343466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.876 [2024-12-06 01:15:17.343500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.876 qpair failed and we were unable to recover it. 
00:37:44.876 [2024-12-06 01:15:17.343631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.876 [2024-12-06 01:15:17.343665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.876 qpair failed and we were unable to recover it. 00:37:44.876 [2024-12-06 01:15:17.343832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.876 [2024-12-06 01:15:17.343866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.876 qpair failed and we were unable to recover it. 00:37:44.876 [2024-12-06 01:15:17.343999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.876 [2024-12-06 01:15:17.344032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.876 qpair failed and we were unable to recover it. 00:37:44.876 [2024-12-06 01:15:17.344180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.876 [2024-12-06 01:15:17.344213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.876 qpair failed and we were unable to recover it. 00:37:44.876 [2024-12-06 01:15:17.344345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.876 [2024-12-06 01:15:17.344378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.876 qpair failed and we were unable to recover it. 00:37:44.876 [2024-12-06 01:15:17.344515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.876 [2024-12-06 01:15:17.344549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.876 qpair failed and we were unable to recover it. 00:37:44.876 [2024-12-06 01:15:17.344686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.877 [2024-12-06 01:15:17.344719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.877 qpair failed and we were unable to recover it. 00:37:44.877 [2024-12-06 01:15:17.344857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.877 [2024-12-06 01:15:17.344891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.877 qpair failed and we were unable to recover it. 00:37:44.877 [2024-12-06 01:15:17.344995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.877 [2024-12-06 01:15:17.345028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.877 qpair failed and we were unable to recover it. 00:37:44.877 [2024-12-06 01:15:17.345139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.877 [2024-12-06 01:15:17.345173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.877 qpair failed and we were unable to recover it. 
00:37:44.877 [2024-12-06 01:15:17.345331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.877 [2024-12-06 01:15:17.345364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.877 qpair failed and we were unable to recover it. 00:37:44.877 [2024-12-06 01:15:17.345470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.877 [2024-12-06 01:15:17.345504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.877 qpair failed and we were unable to recover it. 00:37:44.877 [2024-12-06 01:15:17.345611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.877 [2024-12-06 01:15:17.345648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.877 qpair failed and we were unable to recover it. 00:37:44.877 [2024-12-06 01:15:17.345821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.877 [2024-12-06 01:15:17.345856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.877 qpair failed and we were unable to recover it. 00:37:44.877 [2024-12-06 01:15:17.345969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.877 [2024-12-06 01:15:17.346003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.877 qpair failed and we were unable to recover it. 00:37:44.877 [2024-12-06 01:15:17.346140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.877 [2024-12-06 01:15:17.346174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.877 qpair failed and we were unable to recover it. 00:37:44.877 [2024-12-06 01:15:17.346277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.877 [2024-12-06 01:15:17.346310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.877 qpair failed and we were unable to recover it. 00:37:44.877 [2024-12-06 01:15:17.346506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.877 [2024-12-06 01:15:17.346540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.877 qpair failed and we were unable to recover it. 00:37:44.877 [2024-12-06 01:15:17.346700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.877 [2024-12-06 01:15:17.346745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.877 qpair failed and we were unable to recover it. 00:37:44.877 [2024-12-06 01:15:17.346909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.877 [2024-12-06 01:15:17.346944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.877 qpair failed and we were unable to recover it. 
00:37:44.877 [2024-12-06 01:15:17.347088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.877 [2024-12-06 01:15:17.347121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.877 qpair failed and we were unable to recover it. 00:37:44.877 [2024-12-06 01:15:17.347258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.877 [2024-12-06 01:15:17.347293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.877 qpair failed and we were unable to recover it. 00:37:44.877 [2024-12-06 01:15:17.347398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.877 [2024-12-06 01:15:17.347433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.877 qpair failed and we were unable to recover it. 00:37:44.877 [2024-12-06 01:15:17.347570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.877 [2024-12-06 01:15:17.347603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.877 qpair failed and we were unable to recover it. 00:37:44.877 [2024-12-06 01:15:17.347708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.877 [2024-12-06 01:15:17.347743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.877 qpair failed and we were unable to recover it. 00:37:44.877 [2024-12-06 01:15:17.347881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.877 [2024-12-06 01:15:17.347917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.877 qpair failed and we were unable to recover it. 00:37:44.877 [2024-12-06 01:15:17.348015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.877 [2024-12-06 01:15:17.348049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.877 qpair failed and we were unable to recover it. 00:37:44.877 [2024-12-06 01:15:17.348185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.877 [2024-12-06 01:15:17.348223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.877 qpair failed and we were unable to recover it. 00:37:44.877 [2024-12-06 01:15:17.348323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.877 [2024-12-06 01:15:17.348357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.877 qpair failed and we were unable to recover it. 00:37:44.877 [2024-12-06 01:15:17.348503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.877 [2024-12-06 01:15:17.348536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.877 qpair failed and we were unable to recover it. 
00:37:44.877 [2024-12-06 01:15:17.348668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.877 [2024-12-06 01:15:17.348702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.877 qpair failed and we were unable to recover it. 00:37:44.877 [2024-12-06 01:15:17.348829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.877 [2024-12-06 01:15:17.348863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.877 qpair failed and we were unable to recover it. 00:37:44.877 [2024-12-06 01:15:17.348972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.877 [2024-12-06 01:15:17.349006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.877 qpair failed and we were unable to recover it. 00:37:44.877 [2024-12-06 01:15:17.349169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.877 [2024-12-06 01:15:17.349203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.877 qpair failed and we were unable to recover it. 00:37:44.877 [2024-12-06 01:15:17.349337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.877 [2024-12-06 01:15:17.349371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.877 qpair failed and we were unable to recover it. 00:37:44.877 [2024-12-06 01:15:17.349496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.877 [2024-12-06 01:15:17.349530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.877 qpair failed and we were unable to recover it. 00:37:44.877 [2024-12-06 01:15:17.349689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.877 [2024-12-06 01:15:17.349723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.877 qpair failed and we were unable to recover it. 00:37:44.877 [2024-12-06 01:15:17.349871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.877 [2024-12-06 01:15:17.349905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.877 qpair failed and we were unable to recover it. 00:37:44.877 [2024-12-06 01:15:17.350043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.877 [2024-12-06 01:15:17.350078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.877 qpair failed and we were unable to recover it. 00:37:44.877 [2024-12-06 01:15:17.350238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.877 [2024-12-06 01:15:17.350272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.877 qpair failed and we were unable to recover it. 
00:37:44.877 [2024-12-06 01:15:17.350405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.877 [2024-12-06 01:15:17.350438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.877 qpair failed and we were unable to recover it. 00:37:44.877 [2024-12-06 01:15:17.350574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.877 [2024-12-06 01:15:17.350608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.877 qpair failed and we were unable to recover it. 00:37:44.877 [2024-12-06 01:15:17.350735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.877 [2024-12-06 01:15:17.350769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.877 qpair failed and we were unable to recover it. 00:37:44.877 [2024-12-06 01:15:17.350914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.877 [2024-12-06 01:15:17.350948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.877 qpair failed and we were unable to recover it. 00:37:44.877 [2024-12-06 01:15:17.351108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.878 [2024-12-06 01:15:17.351142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.878 qpair failed and we were unable to recover it. 00:37:44.878 [2024-12-06 01:15:17.351282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.878 [2024-12-06 01:15:17.351317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.878 qpair failed and we were unable to recover it. 00:37:44.878 [2024-12-06 01:15:17.351415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.878 [2024-12-06 01:15:17.351449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.878 qpair failed and we were unable to recover it. 00:37:44.878 [2024-12-06 01:15:17.351587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.878 [2024-12-06 01:15:17.351621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.878 qpair failed and we were unable to recover it. 00:37:44.878 [2024-12-06 01:15:17.351731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.878 [2024-12-06 01:15:17.351765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.878 qpair failed and we were unable to recover it. 00:37:44.878 [2024-12-06 01:15:17.351907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.878 [2024-12-06 01:15:17.351941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.878 qpair failed and we were unable to recover it. 
00:37:44.878 [2024-12-06 01:15:17.352049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.878 [2024-12-06 01:15:17.352083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.878 qpair failed and we were unable to recover it. 00:37:44.878 [2024-12-06 01:15:17.352219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.878 [2024-12-06 01:15:17.352253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.878 qpair failed and we were unable to recover it. 00:37:44.878 [2024-12-06 01:15:17.352369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.878 [2024-12-06 01:15:17.352402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.878 qpair failed and we were unable to recover it. 00:37:44.878 [2024-12-06 01:15:17.352505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.878 [2024-12-06 01:15:17.352538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.878 qpair failed and we were unable to recover it. 00:37:44.878 [2024-12-06 01:15:17.352679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.878 [2024-12-06 01:15:17.352714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.878 qpair failed and we were unable to recover it. 00:37:44.878 [2024-12-06 01:15:17.352825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.878 [2024-12-06 01:15:17.352860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.878 qpair failed and we were unable to recover it. 00:37:44.878 [2024-12-06 01:15:17.352997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.878 [2024-12-06 01:15:17.353031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.878 qpair failed and we were unable to recover it. 00:37:44.878 [2024-12-06 01:15:17.353142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.878 [2024-12-06 01:15:17.353175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.878 qpair failed and we were unable to recover it. 00:37:44.878 [2024-12-06 01:15:17.353301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.878 [2024-12-06 01:15:17.353335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.878 qpair failed and we were unable to recover it. 00:37:44.878 [2024-12-06 01:15:17.353448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.878 [2024-12-06 01:15:17.353482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.878 qpair failed and we were unable to recover it. 
00:37:44.878 [2024-12-06 01:15:17.353622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.878 [2024-12-06 01:15:17.353655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.878 qpair failed and we were unable to recover it.
00:37:44.878 [... the same three-message sequence (posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111; nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.) repeats for every subsequent reconnect attempt, from [2024-12-06 01:15:17.353790] through [2024-12-06 01:15:17.388661], elapsed time 00:37:44.878 - 00:37:44.896 ...]
00:37:44.896 [2024-12-06 01:15:17.388770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.896 [2024-12-06 01:15:17.388803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.896 qpair failed and we were unable to recover it. 00:37:44.896 [2024-12-06 01:15:17.388921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.896 [2024-12-06 01:15:17.388954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.896 qpair failed and we were unable to recover it. 00:37:44.896 [2024-12-06 01:15:17.389084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.896 [2024-12-06 01:15:17.389118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.896 qpair failed and we were unable to recover it. 00:37:44.896 [2024-12-06 01:15:17.389257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.896 [2024-12-06 01:15:17.389291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.896 qpair failed and we were unable to recover it. 00:37:44.896 [2024-12-06 01:15:17.389422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.896 [2024-12-06 01:15:17.389466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.896 qpair failed and we were unable to recover it. 00:37:44.896 [2024-12-06 01:15:17.389576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.896 [2024-12-06 01:15:17.389610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.896 qpair failed and we were unable to recover it. 00:37:44.896 [2024-12-06 01:15:17.389747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.896 [2024-12-06 01:15:17.389780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.896 qpair failed and we were unable to recover it. 00:37:44.896 [2024-12-06 01:15:17.389914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.896 [2024-12-06 01:15:17.389948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.896 qpair failed and we were unable to recover it. 00:37:44.896 [2024-12-06 01:15:17.390107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.896 [2024-12-06 01:15:17.390141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.896 qpair failed and we were unable to recover it. 00:37:44.896 [2024-12-06 01:15:17.390254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.896 [2024-12-06 01:15:17.390287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.896 qpair failed and we were unable to recover it. 
00:37:44.896 [2024-12-06 01:15:17.390448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.896 [2024-12-06 01:15:17.390481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.896 qpair failed and we were unable to recover it. 00:37:44.896 [2024-12-06 01:15:17.390616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.896 [2024-12-06 01:15:17.390650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.896 qpair failed and we were unable to recover it. 00:37:44.896 [2024-12-06 01:15:17.390757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.896 [2024-12-06 01:15:17.390790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.896 qpair failed and we were unable to recover it. 00:37:44.896 [2024-12-06 01:15:17.390930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.896 [2024-12-06 01:15:17.390964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.896 qpair failed and we were unable to recover it. 00:37:44.896 [2024-12-06 01:15:17.391072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.896 [2024-12-06 01:15:17.391105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.896 qpair failed and we were unable to recover it. 00:37:44.896 [2024-12-06 01:15:17.391215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.896 [2024-12-06 01:15:17.391248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.896 qpair failed and we were unable to recover it. 00:37:44.896 [2024-12-06 01:15:17.391405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.896 [2024-12-06 01:15:17.391439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.896 qpair failed and we were unable to recover it. 00:37:44.896 [2024-12-06 01:15:17.391549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.896 [2024-12-06 01:15:17.391582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.896 qpair failed and we were unable to recover it. 00:37:44.896 [2024-12-06 01:15:17.391685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.896 [2024-12-06 01:15:17.391720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.896 qpair failed and we were unable to recover it. 00:37:44.896 [2024-12-06 01:15:17.391879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.896 [2024-12-06 01:15:17.391913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.896 qpair failed and we were unable to recover it. 
00:37:44.896 [2024-12-06 01:15:17.392047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.896 [2024-12-06 01:15:17.392081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.896 qpair failed and we were unable to recover it. 00:37:44.896 [2024-12-06 01:15:17.392212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.896 [2024-12-06 01:15:17.392246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.896 qpair failed and we were unable to recover it. 00:37:44.896 [2024-12-06 01:15:17.392381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.896 [2024-12-06 01:15:17.392415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.896 qpair failed and we were unable to recover it. 00:37:44.896 [2024-12-06 01:15:17.392547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.896 [2024-12-06 01:15:17.392580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.896 qpair failed and we were unable to recover it. 00:37:44.896 [2024-12-06 01:15:17.392743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.896 [2024-12-06 01:15:17.392782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.896 qpair failed and we were unable to recover it. 00:37:44.896 [2024-12-06 01:15:17.392891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.896 [2024-12-06 01:15:17.392926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.896 qpair failed and we were unable to recover it. 00:37:44.896 [2024-12-06 01:15:17.393061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.896 [2024-12-06 01:15:17.393094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.896 qpair failed and we were unable to recover it. 00:37:44.896 [2024-12-06 01:15:17.393225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.896 [2024-12-06 01:15:17.393259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.896 qpair failed and we were unable to recover it. 00:37:44.896 [2024-12-06 01:15:17.393417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.896 [2024-12-06 01:15:17.393451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.896 qpair failed and we were unable to recover it. 00:37:44.896 [2024-12-06 01:15:17.393556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.897 [2024-12-06 01:15:17.393591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.897 qpair failed and we were unable to recover it. 
00:37:44.897 [2024-12-06 01:15:17.393724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.897 [2024-12-06 01:15:17.393758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.897 qpair failed and we were unable to recover it. 00:37:44.897 [2024-12-06 01:15:17.393900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.897 [2024-12-06 01:15:17.393934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.897 qpair failed and we were unable to recover it. 00:37:44.897 [2024-12-06 01:15:17.394067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.897 [2024-12-06 01:15:17.394100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.897 qpair failed and we were unable to recover it. 00:37:44.897 [2024-12-06 01:15:17.394200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.897 [2024-12-06 01:15:17.394233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.897 qpair failed and we were unable to recover it. 00:37:44.897 [2024-12-06 01:15:17.394365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.897 [2024-12-06 01:15:17.394398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.897 qpair failed and we were unable to recover it. 00:37:44.897 [2024-12-06 01:15:17.394556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.897 [2024-12-06 01:15:17.394590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.897 qpair failed and we were unable to recover it. 00:37:44.897 [2024-12-06 01:15:17.394726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.897 [2024-12-06 01:15:17.394759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.897 qpair failed and we were unable to recover it. 00:37:44.897 [2024-12-06 01:15:17.394868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.897 [2024-12-06 01:15:17.394902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.897 qpair failed and we were unable to recover it. 00:37:44.897 [2024-12-06 01:15:17.395070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.897 [2024-12-06 01:15:17.395104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.897 qpair failed and we were unable to recover it. 00:37:44.897 [2024-12-06 01:15:17.395233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.897 [2024-12-06 01:15:17.395267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.897 qpair failed and we were unable to recover it. 
00:37:44.897 [2024-12-06 01:15:17.395398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.897 [2024-12-06 01:15:17.395432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.897 qpair failed and we were unable to recover it. 00:37:44.897 [2024-12-06 01:15:17.395591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.897 [2024-12-06 01:15:17.395624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.897 qpair failed and we were unable to recover it. 00:37:44.897 [2024-12-06 01:15:17.395729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.897 [2024-12-06 01:15:17.395764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.897 qpair failed and we were unable to recover it. 00:37:44.897 [2024-12-06 01:15:17.395876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.897 [2024-12-06 01:15:17.395910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.897 qpair failed and we were unable to recover it. 00:37:44.897 [2024-12-06 01:15:17.396049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.897 [2024-12-06 01:15:17.396083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.897 qpair failed and we were unable to recover it. 00:37:44.897 [2024-12-06 01:15:17.396226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.897 [2024-12-06 01:15:17.396260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.897 qpair failed and we were unable to recover it. 00:37:44.897 [2024-12-06 01:15:17.396391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.897 [2024-12-06 01:15:17.396425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.897 qpair failed and we were unable to recover it. 00:37:44.897 [2024-12-06 01:15:17.396530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.897 [2024-12-06 01:15:17.396563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.897 qpair failed and we were unable to recover it. 00:37:44.897 [2024-12-06 01:15:17.396655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.897 [2024-12-06 01:15:17.396689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.897 qpair failed and we were unable to recover it. 00:37:44.897 [2024-12-06 01:15:17.396798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.897 [2024-12-06 01:15:17.396838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.897 qpair failed and we were unable to recover it. 
00:37:44.897 [2024-12-06 01:15:17.396971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.897 [2024-12-06 01:15:17.397005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.897 qpair failed and we were unable to recover it. 00:37:44.897 [2024-12-06 01:15:17.397138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.897 [2024-12-06 01:15:17.397171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.897 qpair failed and we were unable to recover it. 00:37:44.897 [2024-12-06 01:15:17.397268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.897 [2024-12-06 01:15:17.397302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.897 qpair failed and we were unable to recover it. 00:37:44.897 [2024-12-06 01:15:17.397415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.897 [2024-12-06 01:15:17.397449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.897 qpair failed and we were unable to recover it. 00:37:44.897 [2024-12-06 01:15:17.397556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.897 [2024-12-06 01:15:17.397589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.897 qpair failed and we were unable to recover it. 00:37:44.897 [2024-12-06 01:15:17.397726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.897 [2024-12-06 01:15:17.397759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.897 qpair failed and we were unable to recover it. 00:37:44.897 [2024-12-06 01:15:17.397870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.897 [2024-12-06 01:15:17.397904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.897 qpair failed and we were unable to recover it. 00:37:44.897 [2024-12-06 01:15:17.398033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.897 [2024-12-06 01:15:17.398066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.897 qpair failed and we were unable to recover it. 00:37:44.897 [2024-12-06 01:15:17.398164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.897 [2024-12-06 01:15:17.398197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.897 qpair failed and we were unable to recover it. 00:37:44.897 [2024-12-06 01:15:17.398360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.897 [2024-12-06 01:15:17.398393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.897 qpair failed and we were unable to recover it. 
00:37:44.897 [2024-12-06 01:15:17.398506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.897 [2024-12-06 01:15:17.398539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.897 qpair failed and we were unable to recover it. 00:37:44.897 [2024-12-06 01:15:17.398635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.897 [2024-12-06 01:15:17.398668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.897 qpair failed and we were unable to recover it. 00:37:44.897 [2024-12-06 01:15:17.398773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.897 [2024-12-06 01:15:17.398806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.897 qpair failed and we were unable to recover it. 00:37:44.897 [2024-12-06 01:15:17.398953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.897 [2024-12-06 01:15:17.398987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.897 qpair failed and we were unable to recover it. 00:37:44.897 [2024-12-06 01:15:17.399123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.897 [2024-12-06 01:15:17.399161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.897 qpair failed and we were unable to recover it. 00:37:44.897 [2024-12-06 01:15:17.399268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.897 [2024-12-06 01:15:17.399302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.897 qpair failed and we were unable to recover it. 00:37:44.897 [2024-12-06 01:15:17.399461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.897 [2024-12-06 01:15:17.399494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.897 qpair failed and we were unable to recover it. 00:37:44.897 [2024-12-06 01:15:17.399628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.897 [2024-12-06 01:15:17.399662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.897 qpair failed and we were unable to recover it. 00:37:44.897 [2024-12-06 01:15:17.399774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.897 [2024-12-06 01:15:17.399825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.897 qpair failed and we were unable to recover it. 00:37:44.897 [2024-12-06 01:15:17.399963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.897 [2024-12-06 01:15:17.399996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.897 qpair failed and we were unable to recover it. 
00:37:44.897 [2024-12-06 01:15:17.400096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.897 [2024-12-06 01:15:17.400130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.897 qpair failed and we were unable to recover it. 00:37:44.897 [2024-12-06 01:15:17.400263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.897 [2024-12-06 01:15:17.400298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.897 qpair failed and we were unable to recover it. 00:37:44.897 [2024-12-06 01:15:17.400404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.897 [2024-12-06 01:15:17.400437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.897 qpair failed and we were unable to recover it. 00:37:44.897 [2024-12-06 01:15:17.400540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.897 [2024-12-06 01:15:17.400573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.897 qpair failed and we were unable to recover it. 00:37:44.897 [2024-12-06 01:15:17.400731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.897 [2024-12-06 01:15:17.400773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.897 qpair failed and we were unable to recover it. 00:37:44.897 [2024-12-06 01:15:17.400933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.897 [2024-12-06 01:15:17.400968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.897 qpair failed and we were unable to recover it. 00:37:44.897 [2024-12-06 01:15:17.401110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.897 [2024-12-06 01:15:17.401144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.897 qpair failed and we were unable to recover it. 00:37:44.897 [2024-12-06 01:15:17.401282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.897 [2024-12-06 01:15:17.401316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.897 qpair failed and we were unable to recover it. 00:37:44.897 [2024-12-06 01:15:17.401425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.897 [2024-12-06 01:15:17.401459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.897 qpair failed and we were unable to recover it. 00:37:44.897 [2024-12-06 01:15:17.401619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.897 [2024-12-06 01:15:17.401652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.897 qpair failed and we were unable to recover it. 
00:37:44.897 [2024-12-06 01:15:17.401792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.897 [2024-12-06 01:15:17.401831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.897 qpair failed and we were unable to recover it. 00:37:44.897 [2024-12-06 01:15:17.401970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.897 [2024-12-06 01:15:17.402003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.897 qpair failed and we were unable to recover it. 00:37:44.897 [2024-12-06 01:15:17.402107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.897 [2024-12-06 01:15:17.402141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.897 qpair failed and we were unable to recover it. 00:37:44.897 [2024-12-06 01:15:17.402303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.897 [2024-12-06 01:15:17.402337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.897 qpair failed and we were unable to recover it. 00:37:44.897 [2024-12-06 01:15:17.402433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.897 [2024-12-06 01:15:17.402467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.897 qpair failed and we were unable to recover it. 00:37:44.898 [2024-12-06 01:15:17.402601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.898 [2024-12-06 01:15:17.402636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.898 qpair failed and we were unable to recover it. 00:37:44.898 [2024-12-06 01:15:17.402748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.898 [2024-12-06 01:15:17.402781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.898 qpair failed and we were unable to recover it. 00:37:44.898 [2024-12-06 01:15:17.402927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.898 [2024-12-06 01:15:17.402961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.898 qpair failed and we were unable to recover it. 00:37:44.898 [2024-12-06 01:15:17.403098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.898 [2024-12-06 01:15:17.403132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.898 qpair failed and we were unable to recover it. 00:37:44.898 [2024-12-06 01:15:17.403269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.898 [2024-12-06 01:15:17.403302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.898 qpair failed and we were unable to recover it. 
00:37:44.898 [2024-12-06 01:15:17.403405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.898 [2024-12-06 01:15:17.403439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.898 qpair failed and we were unable to recover it. 00:37:44.898 [2024-12-06 01:15:17.403606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.898 [2024-12-06 01:15:17.403639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.898 qpair failed and we were unable to recover it. 00:37:44.898 [2024-12-06 01:15:17.403796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.898 [2024-12-06 01:15:17.403836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.898 qpair failed and we were unable to recover it. 00:37:44.898 [2024-12-06 01:15:17.403977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.898 [2024-12-06 01:15:17.404010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.898 qpair failed and we were unable to recover it. 00:37:44.898 [2024-12-06 01:15:17.404136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.898 [2024-12-06 01:15:17.404169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.898 qpair failed and we were unable to recover it. 00:37:44.898 [2024-12-06 01:15:17.404302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.898 [2024-12-06 01:15:17.404335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.898 qpair failed and we were unable to recover it. 00:37:44.898 [2024-12-06 01:15:17.404466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.898 [2024-12-06 01:15:17.404499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.898 qpair failed and we were unable to recover it. 00:37:44.898 [2024-12-06 01:15:17.404592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.898 [2024-12-06 01:15:17.404625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.898 qpair failed and we were unable to recover it. 00:37:44.898 [2024-12-06 01:15:17.404731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.898 [2024-12-06 01:15:17.404765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.898 qpair failed and we were unable to recover it. 00:37:44.898 [2024-12-06 01:15:17.404905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.898 [2024-12-06 01:15:17.404940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.898 qpair failed and we were unable to recover it. 
00:37:44.898 [2024-12-06 01:15:17.405100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.898 [2024-12-06 01:15:17.405134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.898 qpair failed and we were unable to recover it. 00:37:44.898 [2024-12-06 01:15:17.405267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.898 [2024-12-06 01:15:17.405300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.898 qpair failed and we were unable to recover it. 00:37:44.898 [2024-12-06 01:15:17.405408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.898 [2024-12-06 01:15:17.405442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.898 qpair failed and we were unable to recover it. 00:37:44.898 [2024-12-06 01:15:17.405572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.898 [2024-12-06 01:15:17.405606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.898 qpair failed and we were unable to recover it. 00:37:44.898 [2024-12-06 01:15:17.405737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.898 [2024-12-06 01:15:17.405774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.898 qpair failed and we were unable to recover it. 00:37:44.898 [2024-12-06 01:15:17.405917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.898 [2024-12-06 01:15:17.405950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.898 qpair failed and we were unable to recover it. 00:37:44.898 [2024-12-06 01:15:17.406113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.898 [2024-12-06 01:15:17.406147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.898 qpair failed and we were unable to recover it. 00:37:44.898 [2024-12-06 01:15:17.406276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.898 [2024-12-06 01:15:17.406309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.898 qpair failed and we were unable to recover it. 00:37:44.898 [2024-12-06 01:15:17.406465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.898 [2024-12-06 01:15:17.406498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.898 qpair failed and we were unable to recover it. 00:37:44.898 [2024-12-06 01:15:17.406629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.898 [2024-12-06 01:15:17.406662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.898 qpair failed and we were unable to recover it. 
00:37:44.898 [2024-12-06 01:15:17.406767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.898 [2024-12-06 01:15:17.406800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.898 qpair failed and we were unable to recover it. 00:37:44.898 [2024-12-06 01:15:17.406952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.898 [2024-12-06 01:15:17.406985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.898 qpair failed and we were unable to recover it. 00:37:44.898 [2024-12-06 01:15:17.407144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.898 [2024-12-06 01:15:17.407177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.898 qpair failed and we were unable to recover it. 00:37:44.898 [2024-12-06 01:15:17.407314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.898 [2024-12-06 01:15:17.407348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.898 qpair failed and we were unable to recover it. 00:37:44.898 [2024-12-06 01:15:17.407510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.898 [2024-12-06 01:15:17.407544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.898 qpair failed and we were unable to recover it. 00:37:44.898 [2024-12-06 01:15:17.407641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.898 [2024-12-06 01:15:17.407674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.898 qpair failed and we were unable to recover it. 00:37:44.898 [2024-12-06 01:15:17.407823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.898 [2024-12-06 01:15:17.407865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.898 qpair failed and we were unable to recover it. 00:37:44.898 [2024-12-06 01:15:17.407998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.898 [2024-12-06 01:15:17.408033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.898 qpair failed and we were unable to recover it. 00:37:44.898 [2024-12-06 01:15:17.408167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.898 [2024-12-06 01:15:17.408200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.898 qpair failed and we were unable to recover it. 00:37:44.898 [2024-12-06 01:15:17.408325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.898 [2024-12-06 01:15:17.408358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.898 qpair failed and we were unable to recover it. 
00:37:44.898 [2024-12-06 01:15:17.408458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.898 [2024-12-06 01:15:17.408492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.898 qpair failed and we were unable to recover it. 00:37:44.898 [2024-12-06 01:15:17.408600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.898 [2024-12-06 01:15:17.408634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.898 qpair failed and we were unable to recover it. 00:37:44.898 [2024-12-06 01:15:17.408801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.898 [2024-12-06 01:15:17.408843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.898 qpair failed and we were unable to recover it. 00:37:44.898 [2024-12-06 01:15:17.408952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.898 [2024-12-06 01:15:17.408987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.898 qpair failed and we were unable to recover it. 00:37:44.898 [2024-12-06 01:15:17.409098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.898 [2024-12-06 01:15:17.409132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.898 qpair failed and we were unable to recover it. 00:37:44.898 [2024-12-06 01:15:17.409262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.898 [2024-12-06 01:15:17.409295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.898 qpair failed and we were unable to recover it. 00:37:44.898 [2024-12-06 01:15:17.409453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.898 [2024-12-06 01:15:17.409486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.898 qpair failed and we were unable to recover it. 00:37:44.898 [2024-12-06 01:15:17.409617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.898 [2024-12-06 01:15:17.409650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.898 qpair failed and we were unable to recover it. 00:37:44.898 [2024-12-06 01:15:17.409786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.898 [2024-12-06 01:15:17.409826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.898 qpair failed and we were unable to recover it. 00:37:44.898 [2024-12-06 01:15:17.409921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.898 [2024-12-06 01:15:17.409954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.898 qpair failed and we were unable to recover it. 
00:37:44.898 [2024-12-06 01:15:17.410111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.898 [2024-12-06 01:15:17.410144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.898 qpair failed and we were unable to recover it.
[The same three-line sequence -- posix_sock_create connect() failed with errno = 111 (connection refused), nvme_tcp_qpair_connect_sock reporting a sock connection error, and "qpair failed and we were unable to recover it." -- repeats continuously from 01:15:17.410 through 01:15:17.444, cycling through tqpair values 0x6150001ffe80, 0x615000210000, and 0x6150001f2f00, all targeting addr=10.0.0.2, port=4420.]
00:37:44.903 [2024-12-06 01:15:17.444690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.903 [2024-12-06 01:15:17.444722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.903 qpair failed and we were unable to recover it. 00:37:44.903 [2024-12-06 01:15:17.444843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.903 [2024-12-06 01:15:17.444878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.903 qpair failed and we were unable to recover it. 00:37:44.903 [2024-12-06 01:15:17.444981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.903 [2024-12-06 01:15:17.445014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.903 qpair failed and we were unable to recover it. 00:37:44.903 [2024-12-06 01:15:17.445153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.903 [2024-12-06 01:15:17.445186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.903 qpair failed and we were unable to recover it. 00:37:44.903 [2024-12-06 01:15:17.445298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.903 [2024-12-06 01:15:17.445337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.903 qpair failed and we were unable to recover it. 00:37:44.903 [2024-12-06 01:15:17.445442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.903 [2024-12-06 01:15:17.445475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.903 qpair failed and we were unable to recover it. 00:37:44.903 [2024-12-06 01:15:17.445688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.903 [2024-12-06 01:15:17.445721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.903 qpair failed and we were unable to recover it. 00:37:44.903 [2024-12-06 01:15:17.445831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.903 [2024-12-06 01:15:17.445866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.903 qpair failed and we were unable to recover it. 00:37:44.903 [2024-12-06 01:15:17.445974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.903 [2024-12-06 01:15:17.446007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.903 qpair failed and we were unable to recover it. 00:37:44.903 [2024-12-06 01:15:17.446125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.903 [2024-12-06 01:15:17.446159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.903 qpair failed and we were unable to recover it. 
00:37:44.903 [2024-12-06 01:15:17.446371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.903 [2024-12-06 01:15:17.446405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.903 qpair failed and we were unable to recover it. 00:37:44.903 [2024-12-06 01:15:17.446519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.903 [2024-12-06 01:15:17.446554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.903 qpair failed and we were unable to recover it. 00:37:44.903 [2024-12-06 01:15:17.446686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.903 [2024-12-06 01:15:17.446719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.903 qpair failed and we were unable to recover it. 00:37:44.903 [2024-12-06 01:15:17.446833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.903 [2024-12-06 01:15:17.446867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.903 qpair failed and we were unable to recover it. 00:37:44.903 [2024-12-06 01:15:17.446990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.903 [2024-12-06 01:15:17.447029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.903 qpair failed and we were unable to recover it. 00:37:44.903 [2024-12-06 01:15:17.447167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.903 [2024-12-06 01:15:17.447202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.903 qpair failed and we were unable to recover it. 00:37:44.903 [2024-12-06 01:15:17.447340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.903 [2024-12-06 01:15:17.447374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.903 qpair failed and we were unable to recover it. 00:37:44.903 [2024-12-06 01:15:17.447536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.903 [2024-12-06 01:15:17.447570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.903 qpair failed and we were unable to recover it. 00:37:44.903 [2024-12-06 01:15:17.447694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.903 [2024-12-06 01:15:17.447728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.903 qpair failed and we were unable to recover it. 00:37:44.903 [2024-12-06 01:15:17.447844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.903 [2024-12-06 01:15:17.447878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.903 qpair failed and we were unable to recover it. 
00:37:44.903 [2024-12-06 01:15:17.447993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.903 [2024-12-06 01:15:17.448031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.903 qpair failed and we were unable to recover it. 00:37:44.903 [2024-12-06 01:15:17.448157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.903 [2024-12-06 01:15:17.448191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.903 qpair failed and we were unable to recover it. 00:37:44.903 [2024-12-06 01:15:17.448308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.903 [2024-12-06 01:15:17.448343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.903 qpair failed and we were unable to recover it. 00:37:44.903 [2024-12-06 01:15:17.448460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.903 [2024-12-06 01:15:17.448493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.903 qpair failed and we were unable to recover it. 00:37:44.903 [2024-12-06 01:15:17.448608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.903 [2024-12-06 01:15:17.448642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.903 qpair failed and we were unable to recover it. 00:37:44.903 [2024-12-06 01:15:17.448753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.903 [2024-12-06 01:15:17.448787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.903 qpair failed and we were unable to recover it. 00:37:44.903 [2024-12-06 01:15:17.448912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.903 [2024-12-06 01:15:17.448948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.903 qpair failed and we were unable to recover it. 00:37:44.903 [2024-12-06 01:15:17.449060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.903 [2024-12-06 01:15:17.449104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.903 qpair failed and we were unable to recover it. 00:37:44.903 [2024-12-06 01:15:17.449207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.903 [2024-12-06 01:15:17.449240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.903 qpair failed and we were unable to recover it. 00:37:44.903 [2024-12-06 01:15:17.449357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.903 [2024-12-06 01:15:17.449391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.903 qpair failed and we were unable to recover it. 
00:37:44.903 [2024-12-06 01:15:17.449535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.903 [2024-12-06 01:15:17.449568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.903 qpair failed and we were unable to recover it. 00:37:44.903 [2024-12-06 01:15:17.449684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.903 [2024-12-06 01:15:17.449719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.903 qpair failed and we were unable to recover it. 00:37:44.903 [2024-12-06 01:15:17.449839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.903 [2024-12-06 01:15:17.449876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.903 qpair failed and we were unable to recover it. 00:37:44.903 [2024-12-06 01:15:17.450002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.903 [2024-12-06 01:15:17.450035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.903 qpair failed and we were unable to recover it. 00:37:44.903 [2024-12-06 01:15:17.450166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.903 [2024-12-06 01:15:17.450200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.903 qpair failed and we were unable to recover it. 00:37:44.903 [2024-12-06 01:15:17.450315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.903 [2024-12-06 01:15:17.450359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.903 qpair failed and we were unable to recover it. 00:37:44.903 [2024-12-06 01:15:17.450491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.903 [2024-12-06 01:15:17.450527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.903 qpair failed and we were unable to recover it. 00:37:44.903 [2024-12-06 01:15:17.450661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.903 [2024-12-06 01:15:17.450695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.903 qpair failed and we were unable to recover it. 00:37:44.903 [2024-12-06 01:15:17.450812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.903 [2024-12-06 01:15:17.450854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.903 qpair failed and we were unable to recover it. 00:37:44.903 [2024-12-06 01:15:17.450962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.903 [2024-12-06 01:15:17.450996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.903 qpair failed and we were unable to recover it. 
00:37:44.903 [2024-12-06 01:15:17.451152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.903 [2024-12-06 01:15:17.451186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.903 qpair failed and we were unable to recover it. 00:37:44.903 [2024-12-06 01:15:17.451333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.903 [2024-12-06 01:15:17.451367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.903 qpair failed and we were unable to recover it. 00:37:44.903 [2024-12-06 01:15:17.451489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.903 [2024-12-06 01:15:17.451523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.903 qpair failed and we were unable to recover it. 00:37:44.903 [2024-12-06 01:15:17.451628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.903 [2024-12-06 01:15:17.451661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.903 qpair failed and we were unable to recover it. 00:37:44.903 [2024-12-06 01:15:17.451769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.903 [2024-12-06 01:15:17.451809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.903 qpair failed and we were unable to recover it. 00:37:44.903 [2024-12-06 01:15:17.451937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.903 [2024-12-06 01:15:17.451971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.903 qpair failed and we were unable to recover it. 00:37:44.903 [2024-12-06 01:15:17.452084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.903 [2024-12-06 01:15:17.452124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.903 qpair failed and we were unable to recover it. 00:37:44.903 [2024-12-06 01:15:17.452231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.903 [2024-12-06 01:15:17.452264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.903 qpair failed and we were unable to recover it. 00:37:44.903 [2024-12-06 01:15:17.452365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.903 [2024-12-06 01:15:17.452399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.904 qpair failed and we were unable to recover it. 00:37:44.904 [2024-12-06 01:15:17.452534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.904 [2024-12-06 01:15:17.452568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.904 qpair failed and we were unable to recover it. 
00:37:44.904 [2024-12-06 01:15:17.452675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.904 [2024-12-06 01:15:17.452708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.904 qpair failed and we were unable to recover it. 00:37:44.904 [2024-12-06 01:15:17.452853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.904 [2024-12-06 01:15:17.452887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.904 qpair failed and we were unable to recover it. 00:37:44.904 [2024-12-06 01:15:17.452998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.904 [2024-12-06 01:15:17.453031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.904 qpair failed and we were unable to recover it. 00:37:44.904 [2024-12-06 01:15:17.453146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.904 [2024-12-06 01:15:17.453181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.904 qpair failed and we were unable to recover it. 00:37:44.904 [2024-12-06 01:15:17.453355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.904 [2024-12-06 01:15:17.453392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.904 qpair failed and we were unable to recover it. 00:37:44.904 [2024-12-06 01:15:17.453500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.904 [2024-12-06 01:15:17.453535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.904 qpair failed and we were unable to recover it. 00:37:44.904 [2024-12-06 01:15:17.453669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.904 [2024-12-06 01:15:17.453702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.904 qpair failed and we were unable to recover it. 00:37:44.904 [2024-12-06 01:15:17.453811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.904 [2024-12-06 01:15:17.453853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.904 qpair failed and we were unable to recover it. 00:37:44.904 [2024-12-06 01:15:17.453975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.904 [2024-12-06 01:15:17.454010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.904 qpair failed and we were unable to recover it. 00:37:44.904 [2024-12-06 01:15:17.454164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.904 [2024-12-06 01:15:17.454212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.904 qpair failed and we were unable to recover it. 
00:37:44.904 [2024-12-06 01:15:17.454362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.904 [2024-12-06 01:15:17.454399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.904 qpair failed and we were unable to recover it. 00:37:44.904 [2024-12-06 01:15:17.454530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.904 [2024-12-06 01:15:17.454565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.904 qpair failed and we were unable to recover it. 00:37:44.904 [2024-12-06 01:15:17.454674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.904 [2024-12-06 01:15:17.454709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.904 qpair failed and we were unable to recover it. 00:37:44.904 [2024-12-06 01:15:17.454853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.904 [2024-12-06 01:15:17.454887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.904 qpair failed and we were unable to recover it. 00:37:44.904 [2024-12-06 01:15:17.454996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.904 [2024-12-06 01:15:17.455036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.904 qpair failed and we were unable to recover it. 00:37:44.904 [2024-12-06 01:15:17.455155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.904 [2024-12-06 01:15:17.455188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.904 qpair failed and we were unable to recover it. 00:37:44.904 [2024-12-06 01:15:17.455319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.904 [2024-12-06 01:15:17.455352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.904 qpair failed and we were unable to recover it. 00:37:44.904 [2024-12-06 01:15:17.455458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.904 [2024-12-06 01:15:17.455492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.904 qpair failed and we were unable to recover it. 00:37:44.904 [2024-12-06 01:15:17.455619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.904 [2024-12-06 01:15:17.455653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.904 qpair failed and we were unable to recover it. 00:37:44.904 [2024-12-06 01:15:17.455771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.904 [2024-12-06 01:15:17.455810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.904 qpair failed and we were unable to recover it. 
00:37:44.904 [2024-12-06 01:15:17.455929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.904 [2024-12-06 01:15:17.455963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.904 qpair failed and we were unable to recover it. 00:37:44.904 [2024-12-06 01:15:17.456107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.904 [2024-12-06 01:15:17.456147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.904 qpair failed and we were unable to recover it. 00:37:44.904 [2024-12-06 01:15:17.456284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.904 [2024-12-06 01:15:17.456317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.904 qpair failed and we were unable to recover it. 00:37:44.904 [2024-12-06 01:15:17.456426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.904 [2024-12-06 01:15:17.456461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.904 qpair failed and we were unable to recover it. 00:37:44.904 [2024-12-06 01:15:17.456605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.904 [2024-12-06 01:15:17.456640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.904 qpair failed and we were unable to recover it. 00:37:44.904 [2024-12-06 01:15:17.456776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.904 [2024-12-06 01:15:17.456810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.904 qpair failed and we were unable to recover it. 00:37:44.904 [2024-12-06 01:15:17.456926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.904 [2024-12-06 01:15:17.456959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.904 qpair failed and we were unable to recover it. 00:37:44.904 [2024-12-06 01:15:17.457104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.904 [2024-12-06 01:15:17.457142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.904 qpair failed and we were unable to recover it. 00:37:44.904 [2024-12-06 01:15:17.457282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.904 [2024-12-06 01:15:17.457319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.904 qpair failed and we were unable to recover it. 00:37:44.904 [2024-12-06 01:15:17.457439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.904 [2024-12-06 01:15:17.457474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.904 qpair failed and we were unable to recover it. 
00:37:44.904 [2024-12-06 01:15:17.457590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.904 [2024-12-06 01:15:17.457623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.904 qpair failed and we were unable to recover it. 00:37:44.904 [2024-12-06 01:15:17.457734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.904 [2024-12-06 01:15:17.457767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.904 qpair failed and we were unable to recover it. 00:37:44.904 [2024-12-06 01:15:17.457894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.904 [2024-12-06 01:15:17.457928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.904 qpair failed and we were unable to recover it. 00:37:44.904 [2024-12-06 01:15:17.458035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.904 [2024-12-06 01:15:17.458068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.904 qpair failed and we were unable to recover it. 00:37:44.904 [2024-12-06 01:15:17.458176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.904 [2024-12-06 01:15:17.458209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.904 qpair failed and we were unable to recover it. 00:37:44.904 [2024-12-06 01:15:17.458314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.904 [2024-12-06 01:15:17.458347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.904 qpair failed and we were unable to recover it. 00:37:44.904 [2024-12-06 01:15:17.458466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.904 [2024-12-06 01:15:17.458499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.904 qpair failed and we were unable to recover it. 00:37:44.904 [2024-12-06 01:15:17.458609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.904 [2024-12-06 01:15:17.458642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.904 qpair failed and we were unable to recover it. 00:37:44.904 [2024-12-06 01:15:17.458771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.904 [2024-12-06 01:15:17.458808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.904 qpair failed and we were unable to recover it. 00:37:44.904 [2024-12-06 01:15:17.458924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.904 [2024-12-06 01:15:17.458957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.904 qpair failed and we were unable to recover it. 
00:37:44.904 [2024-12-06 01:15:17.459052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.904 [2024-12-06 01:15:17.459085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.904 qpair failed and we were unable to recover it. 00:37:44.904 [2024-12-06 01:15:17.459214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.904 [2024-12-06 01:15:17.459248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.904 qpair failed and we were unable to recover it. 00:37:44.904 [2024-12-06 01:15:17.459353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.904 [2024-12-06 01:15:17.459386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.904 qpair failed and we were unable to recover it. 00:37:44.904 [2024-12-06 01:15:17.459523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.904 [2024-12-06 01:15:17.459556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.904 qpair failed and we were unable to recover it. 00:37:44.904 [2024-12-06 01:15:17.459653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.904 [2024-12-06 01:15:17.459686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.904 qpair failed and we were unable to recover it. 00:37:44.904 [2024-12-06 01:15:17.459789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.904 [2024-12-06 01:15:17.459835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.904 qpair failed and we were unable to recover it. 00:37:44.904 [2024-12-06 01:15:17.459968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.904 [2024-12-06 01:15:17.460001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.904 qpair failed and we were unable to recover it. 00:37:44.904 [2024-12-06 01:15:17.460122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.904 [2024-12-06 01:15:17.460156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.904 qpair failed and we were unable to recover it. 00:37:44.904 [2024-12-06 01:15:17.460296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.904 [2024-12-06 01:15:17.460329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.904 qpair failed and we were unable to recover it. 00:37:44.904 [2024-12-06 01:15:17.460466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.904 [2024-12-06 01:15:17.460499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.904 qpair failed and we were unable to recover it. 
00:37:44.904 [2024-12-06 01:15:17.460611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.904 [2024-12-06 01:15:17.460645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.904 qpair failed and we were unable to recover it. 00:37:44.904 [2024-12-06 01:15:17.460789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.904 [2024-12-06 01:15:17.460833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.904 qpair failed and we were unable to recover it. 00:37:44.904 [2024-12-06 01:15:17.460946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.904 [2024-12-06 01:15:17.460978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.904 qpair failed and we were unable to recover it. 00:37:44.904 [2024-12-06 01:15:17.461085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.904 [2024-12-06 01:15:17.461123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.904 qpair failed and we were unable to recover it. 00:37:44.904 [2024-12-06 01:15:17.461224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.905 [2024-12-06 01:15:17.461257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.905 qpair failed and we were unable to recover it. 00:37:44.905 [2024-12-06 01:15:17.461358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.905 [2024-12-06 01:15:17.461391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.905 qpair failed and we were unable to recover it. 00:37:44.905 [2024-12-06 01:15:17.461502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.905 [2024-12-06 01:15:17.461536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.905 qpair failed and we were unable to recover it. 00:37:44.905 [2024-12-06 01:15:17.461637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.905 [2024-12-06 01:15:17.461670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.905 qpair failed and we were unable to recover it. 00:37:44.905 [2024-12-06 01:15:17.461808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.905 [2024-12-06 01:15:17.461855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.905 qpair failed and we were unable to recover it. 00:37:44.905 [2024-12-06 01:15:17.461968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.905 [2024-12-06 01:15:17.462002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.905 qpair failed and we were unable to recover it. 
00:37:44.905 [2024-12-06 01:15:17.462112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.905 [2024-12-06 01:15:17.462145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.905 qpair failed and we were unable to recover it. 00:37:44.905 [2024-12-06 01:15:17.462252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.905 [2024-12-06 01:15:17.462294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.905 qpair failed and we were unable to recover it. 00:37:44.905 [2024-12-06 01:15:17.462410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.905 [2024-12-06 01:15:17.462446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.905 qpair failed and we were unable to recover it. 00:37:44.905 [2024-12-06 01:15:17.462555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.905 [2024-12-06 01:15:17.462588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.905 qpair failed and we were unable to recover it. 00:37:44.905 [2024-12-06 01:15:17.462752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.905 [2024-12-06 01:15:17.462786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.905 qpair failed and we were unable to recover it. 00:37:44.905 [2024-12-06 01:15:17.462933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.905 [2024-12-06 01:15:17.462967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.905 qpair failed and we were unable to recover it. 00:37:44.905 [2024-12-06 01:15:17.463074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.905 [2024-12-06 01:15:17.463108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.905 qpair failed and we were unable to recover it. 00:37:44.905 [2024-12-06 01:15:17.463246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.905 [2024-12-06 01:15:17.463279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.905 qpair failed and we were unable to recover it. 00:37:44.905 [2024-12-06 01:15:17.463390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.905 [2024-12-06 01:15:17.463424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.905 qpair failed and we were unable to recover it. 00:37:44.905 [2024-12-06 01:15:17.463531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.905 [2024-12-06 01:15:17.463565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.905 qpair failed and we were unable to recover it. 
00:37:44.905 [2024-12-06 01:15:17.463665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.905 [2024-12-06 01:15:17.463698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.905 qpair failed and we were unable to recover it. 00:37:44.905 [2024-12-06 01:15:17.463845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.905 [2024-12-06 01:15:17.463879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.905 qpair failed and we were unable to recover it. 00:37:44.905 [2024-12-06 01:15:17.463988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.905 [2024-12-06 01:15:17.464021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.905 qpair failed and we were unable to recover it. 00:37:44.905 [2024-12-06 01:15:17.464144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.905 [2024-12-06 01:15:17.464176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.905 qpair failed and we were unable to recover it. 00:37:44.905 [2024-12-06 01:15:17.464279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.905 [2024-12-06 01:15:17.464312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.905 qpair failed and we were unable to recover it. 00:37:44.905 [2024-12-06 01:15:17.464431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.905 [2024-12-06 01:15:17.464465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.905 qpair failed and we were unable to recover it. 00:37:44.905 [2024-12-06 01:15:17.464622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.905 [2024-12-06 01:15:17.464656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.905 qpair failed and we were unable to recover it. 00:37:44.905 [2024-12-06 01:15:17.464769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.905 [2024-12-06 01:15:17.464806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.905 qpair failed and we were unable to recover it. 00:37:44.905 [2024-12-06 01:15:17.464931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.905 [2024-12-06 01:15:17.464965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.905 qpair failed and we were unable to recover it. 00:37:44.905 [2024-12-06 01:15:17.465075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.905 [2024-12-06 01:15:17.465108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.905 qpair failed and we were unable to recover it. 
00:37:44.905 [2024-12-06 01:15:17.465227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.905 [2024-12-06 01:15:17.465262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.905 qpair failed and we were unable to recover it.
00:37:44.905 [... the same three-line failure sequence (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error; "qpair failed and we were unable to recover it.") repeats continuously from 01:15:17.465410 through 01:15:17.499165 (log offsets 00:37:44.905-00:37:44.917), cycling over tqpair=0x615000210000, tqpair=0x6150001f2f00, and tqpair=0x6150001ffe80, all targeting addr=10.0.0.2, port=4420 ...]
00:37:44.918 [2024-12-06 01:15:17.499301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.918 [2024-12-06 01:15:17.499335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.918 qpair failed and we were unable to recover it. 00:37:44.918 [2024-12-06 01:15:17.499448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.918 [2024-12-06 01:15:17.499482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.918 qpair failed and we were unable to recover it. 00:37:44.918 [2024-12-06 01:15:17.499589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.918 [2024-12-06 01:15:17.499623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.918 qpair failed and we were unable to recover it. 00:37:44.918 [2024-12-06 01:15:17.499727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.918 [2024-12-06 01:15:17.499766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.918 qpair failed and we were unable to recover it. 00:37:44.918 [2024-12-06 01:15:17.499910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.918 [2024-12-06 01:15:17.499945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.918 qpair failed and we were unable to recover it. 00:37:44.918 [2024-12-06 01:15:17.500049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.918 [2024-12-06 01:15:17.500083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.918 qpair failed and we were unable to recover it. 00:37:44.918 [2024-12-06 01:15:17.500239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.918 [2024-12-06 01:15:17.500272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.918 qpair failed and we were unable to recover it. 00:37:44.919 [2024-12-06 01:15:17.500404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.919 [2024-12-06 01:15:17.500443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.919 qpair failed and we were unable to recover it. 00:37:44.919 [2024-12-06 01:15:17.500551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.919 [2024-12-06 01:15:17.500586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.919 qpair failed and we were unable to recover it. 00:37:44.919 [2024-12-06 01:15:17.500747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.919 [2024-12-06 01:15:17.500781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.919 qpair failed and we were unable to recover it. 
00:37:44.919 [2024-12-06 01:15:17.500906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.919 [2024-12-06 01:15:17.500942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.919 qpair failed and we were unable to recover it. 00:37:44.919 [2024-12-06 01:15:17.501049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.919 [2024-12-06 01:15:17.501089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.919 qpair failed and we were unable to recover it. 00:37:44.919 [2024-12-06 01:15:17.501219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.919 [2024-12-06 01:15:17.501252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.919 qpair failed and we were unable to recover it. 00:37:44.919 [2024-12-06 01:15:17.501354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.919 [2024-12-06 01:15:17.501387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.919 qpair failed and we were unable to recover it. 00:37:44.919 [2024-12-06 01:15:17.501497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.919 [2024-12-06 01:15:17.501530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.919 qpair failed and we were unable to recover it. 00:37:44.919 [2024-12-06 01:15:17.501633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.919 [2024-12-06 01:15:17.501666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.919 qpair failed and we were unable to recover it. 00:37:44.919 [2024-12-06 01:15:17.501768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.919 [2024-12-06 01:15:17.501811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.919 qpair failed and we were unable to recover it. 00:37:44.919 [2024-12-06 01:15:17.501919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.919 [2024-12-06 01:15:17.501953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.919 qpair failed and we were unable to recover it. 00:37:44.920 [2024-12-06 01:15:17.502178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.920 [2024-12-06 01:15:17.502213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.920 qpair failed and we were unable to recover it. 00:37:44.920 [2024-12-06 01:15:17.502357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.920 [2024-12-06 01:15:17.502391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.920 qpair failed and we were unable to recover it. 
00:37:44.920 [2024-12-06 01:15:17.502522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.920 [2024-12-06 01:15:17.502556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.920 qpair failed and we were unable to recover it. 00:37:44.920 [2024-12-06 01:15:17.502659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.920 [2024-12-06 01:15:17.502693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.920 qpair failed and we were unable to recover it. 00:37:44.920 [2024-12-06 01:15:17.502833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.920 [2024-12-06 01:15:17.502871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.920 qpair failed and we were unable to recover it. 00:37:44.920 [2024-12-06 01:15:17.503047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.920 [2024-12-06 01:15:17.503095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.920 qpair failed and we were unable to recover it. 00:37:44.920 [2024-12-06 01:15:17.503204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.920 [2024-12-06 01:15:17.503241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.920 qpair failed and we were unable to recover it. 00:37:44.920 [2024-12-06 01:15:17.503384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.920 [2024-12-06 01:15:17.503418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.920 qpair failed and we were unable to recover it. 00:37:44.920 [2024-12-06 01:15:17.503567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.920 [2024-12-06 01:15:17.503602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.920 qpair failed and we were unable to recover it. 00:37:44.920 [2024-12-06 01:15:17.503762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.920 [2024-12-06 01:15:17.503796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.920 qpair failed and we were unable to recover it. 00:37:44.920 [2024-12-06 01:15:17.503909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.920 [2024-12-06 01:15:17.503942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.920 qpair failed and we were unable to recover it. 00:37:44.920 [2024-12-06 01:15:17.504048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.921 [2024-12-06 01:15:17.504082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.921 qpair failed and we were unable to recover it. 
00:37:44.921 [2024-12-06 01:15:17.504188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.921 [2024-12-06 01:15:17.504222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.921 qpair failed and we were unable to recover it. 00:37:44.921 [2024-12-06 01:15:17.504339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.921 [2024-12-06 01:15:17.504373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.921 qpair failed and we were unable to recover it. 00:37:44.921 [2024-12-06 01:15:17.504514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.921 [2024-12-06 01:15:17.504547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.921 qpair failed and we were unable to recover it. 00:37:44.921 [2024-12-06 01:15:17.504656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.921 [2024-12-06 01:15:17.504690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.921 qpair failed and we were unable to recover it. 00:37:44.921 [2024-12-06 01:15:17.504786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.921 [2024-12-06 01:15:17.504825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.921 qpair failed and we were unable to recover it. 00:37:44.921 [2024-12-06 01:15:17.504993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.921 [2024-12-06 01:15:17.505027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.921 qpair failed and we were unable to recover it. 00:37:44.921 [2024-12-06 01:15:17.505129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.921 [2024-12-06 01:15:17.505162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.921 qpair failed and we were unable to recover it. 00:37:44.922 [2024-12-06 01:15:17.505338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.922 [2024-12-06 01:15:17.505371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.922 qpair failed and we were unable to recover it. 00:37:44.922 [2024-12-06 01:15:17.505497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.922 [2024-12-06 01:15:17.505530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.922 qpair failed and we were unable to recover it. 00:37:44.922 [2024-12-06 01:15:17.505662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.922 [2024-12-06 01:15:17.505695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.922 qpair failed and we were unable to recover it. 
00:37:44.922 [2024-12-06 01:15:17.505802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.922 [2024-12-06 01:15:17.505847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.922 qpair failed and we were unable to recover it. 00:37:44.922 [2024-12-06 01:15:17.505962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.922 [2024-12-06 01:15:17.505995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.922 qpair failed and we were unable to recover it. 00:37:44.922 [2024-12-06 01:15:17.506122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.922 [2024-12-06 01:15:17.506170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.922 qpair failed and we were unable to recover it. 00:37:44.922 [2024-12-06 01:15:17.506322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.922 [2024-12-06 01:15:17.506358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.922 qpair failed and we were unable to recover it. 00:37:44.922 [2024-12-06 01:15:17.506464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.922 [2024-12-06 01:15:17.506498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.922 qpair failed and we were unable to recover it. 00:37:44.922 [2024-12-06 01:15:17.506711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.922 [2024-12-06 01:15:17.506745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.922 qpair failed and we were unable to recover it. 00:37:44.922 [2024-12-06 01:15:17.506868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.922 [2024-12-06 01:15:17.506903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.922 qpair failed and we were unable to recover it. 00:37:44.923 [2024-12-06 01:15:17.507040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.923 [2024-12-06 01:15:17.507075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.923 qpair failed and we were unable to recover it. 00:37:44.923 [2024-12-06 01:15:17.507192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.923 [2024-12-06 01:15:17.507230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.923 qpair failed and we were unable to recover it. 00:37:44.923 [2024-12-06 01:15:17.507351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.923 [2024-12-06 01:15:17.507384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.923 qpair failed and we were unable to recover it. 
00:37:44.923 [2024-12-06 01:15:17.507556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.923 [2024-12-06 01:15:17.507590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.923 qpair failed and we were unable to recover it. 00:37:44.923 [2024-12-06 01:15:17.507697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.923 [2024-12-06 01:15:17.507735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.923 qpair failed and we were unable to recover it. 00:37:44.923 [2024-12-06 01:15:17.507877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.923 [2024-12-06 01:15:17.507912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.923 qpair failed and we were unable to recover it. 00:37:44.923 [2024-12-06 01:15:17.508033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.923 [2024-12-06 01:15:17.508067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.923 qpair failed and we were unable to recover it. 00:37:44.924 [2024-12-06 01:15:17.508205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.924 [2024-12-06 01:15:17.508241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.924 qpair failed and we were unable to recover it. 00:37:44.924 [2024-12-06 01:15:17.508373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.924 [2024-12-06 01:15:17.508407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.924 qpair failed and we were unable to recover it. 00:37:44.924 [2024-12-06 01:15:17.508516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.924 [2024-12-06 01:15:17.508549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.924 qpair failed and we were unable to recover it. 00:37:44.924 [2024-12-06 01:15:17.508689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.924 [2024-12-06 01:15:17.508722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.924 qpair failed and we were unable to recover it. 00:37:44.924 [2024-12-06 01:15:17.508835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.924 [2024-12-06 01:15:17.508870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.924 qpair failed and we were unable to recover it. 00:37:44.924 [2024-12-06 01:15:17.508977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.924 [2024-12-06 01:15:17.509010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.924 qpair failed and we were unable to recover it. 
00:37:44.924 [2024-12-06 01:15:17.509152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.924 [2024-12-06 01:15:17.509185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.924 qpair failed and we were unable to recover it. 00:37:44.924 [2024-12-06 01:15:17.509287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.924 [2024-12-06 01:15:17.509321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.924 qpair failed and we were unable to recover it. 00:37:44.924 [2024-12-06 01:15:17.509427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.924 [2024-12-06 01:15:17.509460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.924 qpair failed and we were unable to recover it. 00:37:44.924 [2024-12-06 01:15:17.509559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.924 [2024-12-06 01:15:17.509592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.925 qpair failed and we were unable to recover it. 00:37:44.925 [2024-12-06 01:15:17.509697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.925 [2024-12-06 01:15:17.509730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.925 qpair failed and we were unable to recover it. 00:37:44.925 [2024-12-06 01:15:17.509843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.925 [2024-12-06 01:15:17.509877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.925 qpair failed and we were unable to recover it. 00:37:44.925 [2024-12-06 01:15:17.510052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.925 [2024-12-06 01:15:17.510088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.925 qpair failed and we were unable to recover it. 00:37:44.925 [2024-12-06 01:15:17.510225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.925 [2024-12-06 01:15:17.510259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.925 qpair failed and we were unable to recover it. 00:37:44.925 [2024-12-06 01:15:17.510371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.925 [2024-12-06 01:15:17.510408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.925 qpair failed and we were unable to recover it. 00:37:44.925 [2024-12-06 01:15:17.510528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.925 [2024-12-06 01:15:17.510562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.925 qpair failed and we were unable to recover it. 
00:37:44.925 [2024-12-06 01:15:17.510678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.925 [2024-12-06 01:15:17.510713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.925 qpair failed and we were unable to recover it. 00:37:44.925 [2024-12-06 01:15:17.510828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.926 [2024-12-06 01:15:17.510863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.926 qpair failed and we were unable to recover it. 00:37:44.926 [2024-12-06 01:15:17.510968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.926 [2024-12-06 01:15:17.511001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.926 qpair failed and we were unable to recover it. 00:37:44.926 [2024-12-06 01:15:17.511105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.926 [2024-12-06 01:15:17.511138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.926 qpair failed and we were unable to recover it. 00:37:44.926 [2024-12-06 01:15:17.511268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.926 [2024-12-06 01:15:17.511301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.926 qpair failed and we were unable to recover it. 00:37:44.926 [2024-12-06 01:15:17.511407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.926 [2024-12-06 01:15:17.511440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.926 qpair failed and we were unable to recover it. 00:37:44.926 [2024-12-06 01:15:17.511553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.926 [2024-12-06 01:15:17.511586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.926 qpair failed and we were unable to recover it. 00:37:44.926 [2024-12-06 01:15:17.511697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.926 [2024-12-06 01:15:17.511730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.926 qpair failed and we were unable to recover it. 00:37:44.926 [2024-12-06 01:15:17.511874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.926 [2024-12-06 01:15:17.511910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.926 qpair failed and we were unable to recover it. 00:37:44.926 [2024-12-06 01:15:17.512020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.926 [2024-12-06 01:15:17.512064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.926 qpair failed and we were unable to recover it. 
00:37:44.926 [2024-12-06 01:15:17.512209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.926 [2024-12-06 01:15:17.512242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.926 qpair failed and we were unable to recover it. 00:37:44.926 [2024-12-06 01:15:17.512345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.926 [2024-12-06 01:15:17.512380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.927 qpair failed and we were unable to recover it. 00:37:44.927 [2024-12-06 01:15:17.512517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.927 [2024-12-06 01:15:17.512551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.927 qpair failed and we were unable to recover it. 00:37:44.927 [2024-12-06 01:15:17.512686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.927 [2024-12-06 01:15:17.512719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.927 qpair failed and we were unable to recover it. 00:37:44.927 [2024-12-06 01:15:17.512830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.927 [2024-12-06 01:15:17.512865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.927 qpair failed and we were unable to recover it. 00:37:44.927 [2024-12-06 01:15:17.512987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.927 [2024-12-06 01:15:17.513020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.927 qpair failed and we were unable to recover it. 00:37:44.927 [2024-12-06 01:15:17.513129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.927 [2024-12-06 01:15:17.513164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.927 qpair failed and we were unable to recover it. 00:37:44.927 [2024-12-06 01:15:17.513296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.927 [2024-12-06 01:15:17.513330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.927 qpair failed and we were unable to recover it. 00:37:44.928 [2024-12-06 01:15:17.513464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.928 [2024-12-06 01:15:17.513498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.928 qpair failed and we were unable to recover it. 00:37:44.928 [2024-12-06 01:15:17.513606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.928 [2024-12-06 01:15:17.513642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.928 qpair failed and we were unable to recover it. 
00:37:44.928 [2024-12-06 01:15:17.513776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.928 [2024-12-06 01:15:17.513810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.928 qpair failed and we were unable to recover it. 00:37:44.928 [2024-12-06 01:15:17.513939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.928 [2024-12-06 01:15:17.513992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.928 qpair failed and we were unable to recover it. 00:37:44.928 [2024-12-06 01:15:17.514161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.928 [2024-12-06 01:15:17.514198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.928 qpair failed and we were unable to recover it. 00:37:44.928 [2024-12-06 01:15:17.514360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.928 [2024-12-06 01:15:17.514394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.928 qpair failed and we were unable to recover it. 00:37:44.928 [2024-12-06 01:15:17.514531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.928 [2024-12-06 01:15:17.514566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.928 qpair failed and we were unable to recover it. 00:37:44.928 [2024-12-06 01:15:17.514681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.928 [2024-12-06 01:15:17.514715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.928 qpair failed and we were unable to recover it. 00:37:44.928 [2024-12-06 01:15:17.514827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.928 [2024-12-06 01:15:17.514861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.928 qpair failed and we were unable to recover it. 00:37:44.929 [2024-12-06 01:15:17.514967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.929 [2024-12-06 01:15:17.515002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.929 qpair failed and we were unable to recover it. 00:37:44.929 [2024-12-06 01:15:17.515120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.929 [2024-12-06 01:15:17.515153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.929 qpair failed and we were unable to recover it. 00:37:44.929 [2024-12-06 01:15:17.515270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.929 [2024-12-06 01:15:17.515303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.929 qpair failed and we were unable to recover it. 
00:37:44.929 [2024-12-06 01:15:17.515441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.929 [2024-12-06 01:15:17.515477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.929 qpair failed and we were unable to recover it. 00:37:44.929 [2024-12-06 01:15:17.515589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.929 [2024-12-06 01:15:17.515624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.929 qpair failed and we were unable to recover it. 00:37:44.929 [2024-12-06 01:15:17.515739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.929 [2024-12-06 01:15:17.515775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.929 qpair failed and we were unable to recover it. 00:37:44.929 [2024-12-06 01:15:17.515916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.929 [2024-12-06 01:15:17.515952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.929 qpair failed and we were unable to recover it. 00:37:44.929 [2024-12-06 01:15:17.516055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.929 [2024-12-06 01:15:17.516088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.929 qpair failed and we were unable to recover it. 00:37:44.929 [2024-12-06 01:15:17.516210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.929 [2024-12-06 01:15:17.516244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.929 qpair failed and we were unable to recover it. 00:37:44.929 [2024-12-06 01:15:17.516375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.929 [2024-12-06 01:15:17.516408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.929 qpair failed and we were unable to recover it. 00:37:44.929 [2024-12-06 01:15:17.516518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.929 [2024-12-06 01:15:17.516551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.929 qpair failed and we were unable to recover it. 00:37:44.929 [2024-12-06 01:15:17.516687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.929 [2024-12-06 01:15:17.516721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.929 qpair failed and we were unable to recover it. 00:37:44.930 [2024-12-06 01:15:17.516837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.930 [2024-12-06 01:15:17.516872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.930 qpair failed and we were unable to recover it. 
00:37:44.930 [2024-12-06 01:15:17.517033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.930 [2024-12-06 01:15:17.517068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.930 qpair failed and we were unable to recover it. 00:37:44.930 [2024-12-06 01:15:17.517203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.930 [2024-12-06 01:15:17.517236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.930 qpair failed and we were unable to recover it. 00:37:44.930 [2024-12-06 01:15:17.517339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.930 [2024-12-06 01:15:17.517376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.930 qpair failed and we were unable to recover it. 00:37:44.930 [2024-12-06 01:15:17.517514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.930 [2024-12-06 01:15:17.517546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.930 qpair failed and we were unable to recover it. 00:37:44.930 [2024-12-06 01:15:17.517665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.930 [2024-12-06 01:15:17.517701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.930 qpair failed and we were unable to recover it. 00:37:44.930 [2024-12-06 01:15:17.517854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.930 [2024-12-06 01:15:17.517903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.930 qpair failed and we were unable to recover it. 00:37:44.930 [2024-12-06 01:15:17.518025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.930 [2024-12-06 01:15:17.518061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.930 qpair failed and we were unable to recover it. 00:37:44.930 [2024-12-06 01:15:17.518182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.930 [2024-12-06 01:15:17.518217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.930 qpair failed and we were unable to recover it. 00:37:44.930 [2024-12-06 01:15:17.518327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.930 [2024-12-06 01:15:17.518362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.930 qpair failed and we were unable to recover it. 00:37:44.930 [2024-12-06 01:15:17.518520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.930 [2024-12-06 01:15:17.518554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.930 qpair failed and we were unable to recover it. 
00:37:44.930 [2024-12-06 01:15:17.518692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.930 [2024-12-06 01:15:17.518725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.931 qpair failed and we were unable to recover it. 00:37:44.931 [2024-12-06 01:15:17.518831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.931 [2024-12-06 01:15:17.518865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.931 qpair failed and we were unable to recover it. 00:37:44.931 [2024-12-06 01:15:17.518973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.931 [2024-12-06 01:15:17.519006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.931 qpair failed and we were unable to recover it. 00:37:44.931 [2024-12-06 01:15:17.519137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.931 [2024-12-06 01:15:17.519171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.931 qpair failed and we were unable to recover it. 00:37:44.931 [2024-12-06 01:15:17.519283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.931 [2024-12-06 01:15:17.519316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.931 qpair failed and we were unable to recover it. 00:37:44.931 [2024-12-06 01:15:17.519453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.931 [2024-12-06 01:15:17.519487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.931 qpair failed and we were unable to recover it. 00:37:44.931 [2024-12-06 01:15:17.519597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.931 [2024-12-06 01:15:17.519631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.931 qpair failed and we were unable to recover it. 00:37:44.931 [2024-12-06 01:15:17.519758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.931 [2024-12-06 01:15:17.519792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.931 qpair failed and we were unable to recover it. 00:37:44.931 [2024-12-06 01:15:17.519939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.931 [2024-12-06 01:15:17.519972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.931 qpair failed and we were unable to recover it. 00:37:44.931 [2024-12-06 01:15:17.520076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.931 [2024-12-06 01:15:17.520110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.932 qpair failed and we were unable to recover it. 
00:37:44.932 [2024-12-06 01:15:17.520275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.932 [2024-12-06 01:15:17.520308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.932 qpair failed and we were unable to recover it. 00:37:44.932 [2024-12-06 01:15:17.520444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.932 [2024-12-06 01:15:17.520483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.932 qpair failed and we were unable to recover it. 00:37:44.932 [2024-12-06 01:15:17.520588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.932 [2024-12-06 01:15:17.520621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.932 qpair failed and we were unable to recover it. 00:37:44.932 [2024-12-06 01:15:17.520746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.932 [2024-12-06 01:15:17.520779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.932 qpair failed and we were unable to recover it. 00:37:44.932 [2024-12-06 01:15:17.520914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.932 [2024-12-06 01:15:17.520963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.932 qpair failed and we were unable to recover it. 00:37:44.932 [2024-12-06 01:15:17.521080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.932 [2024-12-06 01:15:17.521116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.932 qpair failed and we were unable to recover it. 00:37:44.932 [2024-12-06 01:15:17.521251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.933 [2024-12-06 01:15:17.521286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.933 qpair failed and we were unable to recover it. 00:37:44.934 [2024-12-06 01:15:17.521396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.934 [2024-12-06 01:15:17.521430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.934 qpair failed and we were unable to recover it. 00:37:44.934 [2024-12-06 01:15:17.521545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.934 [2024-12-06 01:15:17.521580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.934 qpair failed and we were unable to recover it. 00:37:44.934 [2024-12-06 01:15:17.521690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.934 [2024-12-06 01:15:17.521724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.934 qpair failed and we were unable to recover it. 
00:37:44.934 [2024-12-06 01:15:17.521835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.934 [2024-12-06 01:15:17.521870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.934 qpair failed and we were unable to recover it. 00:37:44.934 [2024-12-06 01:15:17.521976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.935 [2024-12-06 01:15:17.522011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.935 qpair failed and we were unable to recover it. 00:37:44.935 [2024-12-06 01:15:17.522113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.935 [2024-12-06 01:15:17.522148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.935 qpair failed and we were unable to recover it. 00:37:44.935 [2024-12-06 01:15:17.522264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.935 [2024-12-06 01:15:17.522298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.935 qpair failed and we were unable to recover it. 00:37:44.935 [2024-12-06 01:15:17.522408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.935 [2024-12-06 01:15:17.522442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.935 qpair failed and we were unable to recover it. 00:37:44.935 [2024-12-06 01:15:17.522590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.935 [2024-12-06 01:15:17.522624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.935 qpair failed and we were unable to recover it. 00:37:44.935 [2024-12-06 01:15:17.522725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.935 [2024-12-06 01:15:17.522757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.935 qpair failed and we were unable to recover it. 00:37:44.935 [2024-12-06 01:15:17.522880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.935 [2024-12-06 01:15:17.522917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.935 qpair failed and we were unable to recover it. 00:37:44.935 [2024-12-06 01:15:17.523024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.935 [2024-12-06 01:15:17.523062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.935 qpair failed and we were unable to recover it. 00:37:44.935 [2024-12-06 01:15:17.523171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.935 [2024-12-06 01:15:17.523204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.935 qpair failed and we were unable to recover it. 
00:37:44.935 [2024-12-06 01:15:17.523315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.935 [2024-12-06 01:15:17.523348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.935 qpair failed and we were unable to recover it. 00:37:44.935 [2024-12-06 01:15:17.523459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.935 [2024-12-06 01:15:17.523492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.935 qpair failed and we were unable to recover it. 00:37:44.935 [2024-12-06 01:15:17.523621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.935 [2024-12-06 01:15:17.523655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.935 qpair failed and we were unable to recover it. 00:37:44.935 [2024-12-06 01:15:17.523775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.935 [2024-12-06 01:15:17.523809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.935 qpair failed and we were unable to recover it. 00:37:44.935 [2024-12-06 01:15:17.523938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.935 [2024-12-06 01:15:17.523974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.935 qpair failed and we were unable to recover it. 00:37:44.935 [2024-12-06 01:15:17.524092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.935 [2024-12-06 01:15:17.524127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.935 qpair failed and we were unable to recover it. 00:37:44.935 [2024-12-06 01:15:17.524234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.935 [2024-12-06 01:15:17.524268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.935 qpair failed and we were unable to recover it. 00:37:44.935 [2024-12-06 01:15:17.524407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.935 [2024-12-06 01:15:17.524440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.935 qpair failed and we were unable to recover it. 00:37:44.935 [2024-12-06 01:15:17.524557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.936 [2024-12-06 01:15:17.524593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.936 qpair failed and we were unable to recover it. 00:37:44.936 [2024-12-06 01:15:17.524727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.936 [2024-12-06 01:15:17.524760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.936 qpair failed and we were unable to recover it. 
00:37:44.936 [2024-12-06 01:15:17.524873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.936 [2024-12-06 01:15:17.524907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.936 qpair failed and we were unable to recover it. 00:37:44.936 [2024-12-06 01:15:17.525012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.936 [2024-12-06 01:15:17.525046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.936 qpair failed and we were unable to recover it. 00:37:44.936 [2024-12-06 01:15:17.525187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.936 [2024-12-06 01:15:17.525220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.936 qpair failed and we were unable to recover it. 00:37:44.936 [2024-12-06 01:15:17.525324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.936 [2024-12-06 01:15:17.525361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.936 qpair failed and we were unable to recover it. 00:37:44.936 [2024-12-06 01:15:17.525510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.936 [2024-12-06 01:15:17.525547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.936 qpair failed and we were unable to recover it. 00:37:44.936 [2024-12-06 01:15:17.525650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.936 [2024-12-06 01:15:17.525682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.936 qpair failed and we were unable to recover it. 00:37:44.936 [2024-12-06 01:15:17.525787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.936 [2024-12-06 01:15:17.525832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.936 qpair failed and we were unable to recover it. 00:37:44.936 [2024-12-06 01:15:17.525956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.936 [2024-12-06 01:15:17.525990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.936 qpair failed and we were unable to recover it. 00:37:44.936 [2024-12-06 01:15:17.526122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.936 [2024-12-06 01:15:17.526156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.936 qpair failed and we were unable to recover it. 00:37:44.936 [2024-12-06 01:15:17.526275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.936 [2024-12-06 01:15:17.526308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.936 qpair failed and we were unable to recover it. 
00:37:44.936 [2024-12-06 01:15:17.526471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.936 [2024-12-06 01:15:17.526506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.937 qpair failed and we were unable to recover it. 00:37:44.937 [2024-12-06 01:15:17.526613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.937 [2024-12-06 01:15:17.526651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.937 qpair failed and we were unable to recover it. 00:37:44.937 [2024-12-06 01:15:17.526786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.937 [2024-12-06 01:15:17.526825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.937 qpair failed and we were unable to recover it. 00:37:44.937 [2024-12-06 01:15:17.526959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.937 [2024-12-06 01:15:17.526993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.937 qpair failed and we were unable to recover it. 00:37:44.937 [2024-12-06 01:15:17.527176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.937 [2024-12-06 01:15:17.527215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.937 qpair failed and we were unable to recover it. 00:37:44.937 [2024-12-06 01:15:17.527351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.937 [2024-12-06 01:15:17.527385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.937 qpair failed and we were unable to recover it. 00:37:44.937 [2024-12-06 01:15:17.527497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.937 [2024-12-06 01:15:17.527529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.937 qpair failed and we were unable to recover it. 00:37:44.937 [2024-12-06 01:15:17.527662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.937 [2024-12-06 01:15:17.527694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.937 qpair failed and we were unable to recover it. 00:37:44.937 [2024-12-06 01:15:17.527832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.937 [2024-12-06 01:15:17.527867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.937 qpair failed and we were unable to recover it. 00:37:44.937 [2024-12-06 01:15:17.527979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.937 [2024-12-06 01:15:17.528012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.937 qpair failed and we were unable to recover it. 
00:37:44.937 [2024-12-06 01:15:17.528124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.937 [2024-12-06 01:15:17.528158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.937 qpair failed and we were unable to recover it. 00:37:44.937 [2024-12-06 01:15:17.528298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.937 [2024-12-06 01:15:17.528331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.937 qpair failed and we were unable to recover it. 00:37:44.937 [2024-12-06 01:15:17.528450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.937 [2024-12-06 01:15:17.528483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.937 qpair failed and we were unable to recover it. 00:37:44.937 [2024-12-06 01:15:17.528583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.937 [2024-12-06 01:15:17.528619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.937 qpair failed and we were unable to recover it. 00:37:44.937 [2024-12-06 01:15:17.528784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.937 [2024-12-06 01:15:17.528826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.937 qpair failed and we were unable to recover it. 00:37:44.937 [2024-12-06 01:15:17.528966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.937 [2024-12-06 01:15:17.529000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.937 qpair failed and we were unable to recover it. 00:37:44.937 [2024-12-06 01:15:17.529122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.937 [2024-12-06 01:15:17.529156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.937 qpair failed and we were unable to recover it. 00:37:44.937 [2024-12-06 01:15:17.529315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.937 [2024-12-06 01:15:17.529349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.937 qpair failed and we were unable to recover it. 00:37:44.937 [2024-12-06 01:15:17.529469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.937 [2024-12-06 01:15:17.529502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.937 qpair failed and we were unable to recover it. 00:37:44.937 [2024-12-06 01:15:17.529613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.937 [2024-12-06 01:15:17.529649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.937 qpair failed and we were unable to recover it. 
00:37:44.937 [2024-12-06 01:15:17.529800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.937 [2024-12-06 01:15:17.529850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.937 qpair failed and we were unable to recover it. 00:37:44.937 [2024-12-06 01:15:17.529964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.937 [2024-12-06 01:15:17.529998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.937 qpair failed and we were unable to recover it. 00:37:44.937 [2024-12-06 01:15:17.530169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.937 [2024-12-06 01:15:17.530202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.937 qpair failed and we were unable to recover it. 00:37:44.937 [2024-12-06 01:15:17.530337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.937 [2024-12-06 01:15:17.530371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.937 qpair failed and we were unable to recover it. 00:37:44.937 [2024-12-06 01:15:17.530531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.937 [2024-12-06 01:15:17.530564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.937 qpair failed and we were unable to recover it. 00:37:44.937 [2024-12-06 01:15:17.530669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.937 [2024-12-06 01:15:17.530702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.937 qpair failed and we were unable to recover it. 00:37:44.937 [2024-12-06 01:15:17.530809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.937 [2024-12-06 01:15:17.530850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.937 qpair failed and we were unable to recover it. 00:37:44.937 [2024-12-06 01:15:17.530961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.937 [2024-12-06 01:15:17.530994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.937 qpair failed and we were unable to recover it. 00:37:44.937 [2024-12-06 01:15:17.531114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.937 [2024-12-06 01:15:17.531149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.937 qpair failed and we were unable to recover it. 00:37:44.937 [2024-12-06 01:15:17.531281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.937 [2024-12-06 01:15:17.531316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.937 qpair failed and we were unable to recover it. 
00:37:44.937 [2024-12-06 01:15:17.531417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.937 [2024-12-06 01:15:17.531449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.937 qpair failed and we were unable to recover it. 00:37:44.937 [2024-12-06 01:15:17.531567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.937 [2024-12-06 01:15:17.531601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.937 qpair failed and we were unable to recover it. 00:37:44.937 [2024-12-06 01:15:17.531740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.937 [2024-12-06 01:15:17.531782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.937 qpair failed and we were unable to recover it. 00:37:44.937 [2024-12-06 01:15:17.531903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.937 [2024-12-06 01:15:17.531938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.937 qpair failed and we were unable to recover it. 00:37:44.937 [2024-12-06 01:15:17.532075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.937 [2024-12-06 01:15:17.532113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.937 qpair failed and we were unable to recover it. 00:37:44.937 [2024-12-06 01:15:17.532277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.937 [2024-12-06 01:15:17.532309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.937 qpair failed and we were unable to recover it. 00:37:44.937 [2024-12-06 01:15:17.532450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.937 [2024-12-06 01:15:17.532485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.937 qpair failed and we were unable to recover it. 00:37:44.937 [2024-12-06 01:15:17.532591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.937 [2024-12-06 01:15:17.532626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.937 qpair failed and we were unable to recover it. 00:37:44.937 [2024-12-06 01:15:17.532738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.937 [2024-12-06 01:15:17.532771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.937 qpair failed and we were unable to recover it. 00:37:44.937 [2024-12-06 01:15:17.532889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.937 [2024-12-06 01:15:17.532924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.937 qpair failed and we were unable to recover it. 
00:37:44.937 [2024-12-06 01:15:17.533040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.937 [2024-12-06 01:15:17.533074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.937 qpair failed and we were unable to recover it. 00:37:44.937 [2024-12-06 01:15:17.533205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.937 [2024-12-06 01:15:17.533238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.937 qpair failed and we were unable to recover it. 00:37:44.937 [2024-12-06 01:15:17.533345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.938 [2024-12-06 01:15:17.533379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.938 qpair failed and we were unable to recover it. 00:37:44.938 [2024-12-06 01:15:17.533488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.938 [2024-12-06 01:15:17.533522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.938 qpair failed and we were unable to recover it. 00:37:44.938 [2024-12-06 01:15:17.533666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.938 [2024-12-06 01:15:17.533700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.938 qpair failed and we were unable to recover it. 00:37:44.938 [2024-12-06 01:15:17.533860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.938 [2024-12-06 01:15:17.533894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.938 qpair failed and we were unable to recover it. 00:37:44.938 [2024-12-06 01:15:17.534004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.938 [2024-12-06 01:15:17.534037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.938 qpair failed and we were unable to recover it. 00:37:44.938 [2024-12-06 01:15:17.534169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.938 [2024-12-06 01:15:17.534203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.938 qpair failed and we were unable to recover it. 00:37:44.938 [2024-12-06 01:15:17.534340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.938 [2024-12-06 01:15:17.534379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.938 qpair failed and we were unable to recover it. 00:37:44.938 [2024-12-06 01:15:17.534494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.938 [2024-12-06 01:15:17.534527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.938 qpair failed and we were unable to recover it. 
00:37:44.938 [2024-12-06 01:15:17.534634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.938 [2024-12-06 01:15:17.534668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.938 qpair failed and we were unable to recover it. 00:37:44.938 [2024-12-06 01:15:17.534771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.938 [2024-12-06 01:15:17.534805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.938 qpair failed and we were unable to recover it. 00:37:44.938 [2024-12-06 01:15:17.534927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.938 [2024-12-06 01:15:17.534961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.938 qpair failed and we were unable to recover it. 00:37:44.938 [2024-12-06 01:15:17.535093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.938 [2024-12-06 01:15:17.535128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.938 qpair failed and we were unable to recover it. 00:37:44.938 [2024-12-06 01:15:17.535231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.938 [2024-12-06 01:15:17.535264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.938 qpair failed and we were unable to recover it. 00:37:44.938 [2024-12-06 01:15:17.535392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.938 [2024-12-06 01:15:17.535425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.938 qpair failed and we were unable to recover it. 00:37:44.938 [2024-12-06 01:15:17.535534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.938 [2024-12-06 01:15:17.535569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.938 qpair failed and we were unable to recover it. 00:37:44.938 [2024-12-06 01:15:17.535711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.938 [2024-12-06 01:15:17.535744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.938 qpair failed and we were unable to recover it. 00:37:44.938 [2024-12-06 01:15:17.535852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.938 [2024-12-06 01:15:17.535887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.938 qpair failed and we were unable to recover it. 00:37:44.938 [2024-12-06 01:15:17.536020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.938 [2024-12-06 01:15:17.536054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.938 qpair failed and we were unable to recover it. 
00:37:44.938 [2024-12-06 01:15:17.536159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.938 [2024-12-06 01:15:17.536193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.938 qpair failed and we were unable to recover it. 00:37:44.938 [2024-12-06 01:15:17.536304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.938 [2024-12-06 01:15:17.536338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.938 qpair failed and we were unable to recover it. 00:37:44.938 [2024-12-06 01:15:17.536475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.938 [2024-12-06 01:15:17.536508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.938 qpair failed and we were unable to recover it. 00:37:44.938 [2024-12-06 01:15:17.536607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.938 [2024-12-06 01:15:17.536640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.938 qpair failed and we were unable to recover it. 00:37:44.938 [2024-12-06 01:15:17.536766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.938 [2024-12-06 01:15:17.536805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.938 qpair failed and we were unable to recover it. 00:37:44.938 [2024-12-06 01:15:17.536926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.938 [2024-12-06 01:15:17.536987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.938 qpair failed and we were unable to recover it. 00:37:44.938 [2024-12-06 01:15:17.537089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.938 [2024-12-06 01:15:17.537122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.938 qpair failed and we were unable to recover it. 00:37:44.938 [2024-12-06 01:15:17.537268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.938 [2024-12-06 01:15:17.537302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.938 qpair failed and we were unable to recover it. 00:37:44.938 [2024-12-06 01:15:17.537414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.938 [2024-12-06 01:15:17.537457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.938 qpair failed and we were unable to recover it. 00:37:44.938 [2024-12-06 01:15:17.537570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.938 [2024-12-06 01:15:17.537604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.938 qpair failed and we were unable to recover it. 
00:37:44.938 [2024-12-06 01:15:17.537748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.938 [2024-12-06 01:15:17.537781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.938 qpair failed and we were unable to recover it. 00:37:44.938 [2024-12-06 01:15:17.537896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.938 [2024-12-06 01:15:17.537931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.938 qpair failed and we were unable to recover it. 00:37:44.938 [2024-12-06 01:15:17.538073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.938 [2024-12-06 01:15:17.538107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.938 qpair failed and we were unable to recover it. 00:37:44.938 [2024-12-06 01:15:17.538243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.938 [2024-12-06 01:15:17.538276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.938 qpair failed and we were unable to recover it. 00:37:44.938 [2024-12-06 01:15:17.538377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.938 [2024-12-06 01:15:17.538411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.938 qpair failed and we were unable to recover it. 00:37:44.938 [2024-12-06 01:15:17.538524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.938 [2024-12-06 01:15:17.538557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.938 qpair failed and we were unable to recover it. 00:37:44.938 [2024-12-06 01:15:17.538670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.938 [2024-12-06 01:15:17.538705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.938 qpair failed and we were unable to recover it. 00:37:44.938 [2024-12-06 01:15:17.538828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.938 [2024-12-06 01:15:17.538861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.938 qpair failed and we were unable to recover it. 00:37:44.938 [2024-12-06 01:15:17.538977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.938 [2024-12-06 01:15:17.539010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.938 qpair failed and we were unable to recover it. 00:37:44.938 [2024-12-06 01:15:17.539174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.938 [2024-12-06 01:15:17.539207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.938 qpair failed and we were unable to recover it. 
00:37:44.938 [2024-12-06 01:15:17.539344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.938 [2024-12-06 01:15:17.539381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.938 qpair failed and we were unable to recover it. 00:37:44.938 [2024-12-06 01:15:17.539498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.938 [2024-12-06 01:15:17.539536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.938 qpair failed and we were unable to recover it. 00:37:44.938 [2024-12-06 01:15:17.539671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.938 [2024-12-06 01:15:17.539705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.938 qpair failed and we were unable to recover it. 00:37:44.938 [2024-12-06 01:15:17.539818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.938 [2024-12-06 01:15:17.539853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.938 qpair failed and we were unable to recover it. 00:37:44.938 [2024-12-06 01:15:17.539998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.938 [2024-12-06 01:15:17.540033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.938 qpair failed and we were unable to recover it. 00:37:44.938 [2024-12-06 01:15:17.540213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.938 [2024-12-06 01:15:17.540268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.938 qpair failed and we were unable to recover it. 00:37:44.938 [2024-12-06 01:15:17.540385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.938 [2024-12-06 01:15:17.540422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.938 qpair failed and we were unable to recover it. 00:37:44.938 [2024-12-06 01:15:17.540536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.938 [2024-12-06 01:15:17.540571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.938 qpair failed and we were unable to recover it. 00:37:44.938 [2024-12-06 01:15:17.540674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.938 [2024-12-06 01:15:17.540708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.938 qpair failed and we were unable to recover it. 00:37:44.938 [2024-12-06 01:15:17.540810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.938 [2024-12-06 01:15:17.540850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.938 qpair failed and we were unable to recover it. 
00:37:44.938 [2024-12-06 01:15:17.540962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.938 [2024-12-06 01:15:17.540996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.938 qpair failed and we were unable to recover it. 00:37:44.938 [2024-12-06 01:15:17.541134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.938 [2024-12-06 01:15:17.541168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.938 qpair failed and we were unable to recover it. 00:37:45.208 [2024-12-06 01:15:17.541278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.208 [2024-12-06 01:15:17.541312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.208 qpair failed and we were unable to recover it. 00:37:45.208 [2024-12-06 01:15:17.541449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.208 [2024-12-06 01:15:17.541483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.208 qpair failed and we were unable to recover it. 00:37:45.208 [2024-12-06 01:15:17.541590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.208 [2024-12-06 01:15:17.541623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.208 qpair failed and we were unable to recover it. 00:37:45.208 [2024-12-06 01:15:17.541731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.208 [2024-12-06 01:15:17.541764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.208 qpair failed and we were unable to recover it. 00:37:45.208 [2024-12-06 01:15:17.541879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.208 [2024-12-06 01:15:17.541914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.208 qpair failed and we were unable to recover it. 00:37:45.208 [2024-12-06 01:15:17.542018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.208 [2024-12-06 01:15:17.542052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.208 qpair failed and we were unable to recover it. 00:37:45.208 [2024-12-06 01:15:17.542159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.208 [2024-12-06 01:15:17.542192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.208 qpair failed and we were unable to recover it. 00:37:45.208 [2024-12-06 01:15:17.542300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.208 [2024-12-06 01:15:17.542335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.208 qpair failed and we were unable to recover it. 
00:37:45.208 [2024-12-06 01:15:17.542437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.208 [2024-12-06 01:15:17.542471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.208 qpair failed and we were unable to recover it. 00:37:45.208 [2024-12-06 01:15:17.542593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.208 [2024-12-06 01:15:17.542630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.209 qpair failed and we were unable to recover it. 00:37:45.209 [2024-12-06 01:15:17.542734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.209 [2024-12-06 01:15:17.542768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.209 qpair failed and we were unable to recover it. 00:37:45.209 [2024-12-06 01:15:17.542880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.209 [2024-12-06 01:15:17.542915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.209 qpair failed and we were unable to recover it. 00:37:45.209 [2024-12-06 01:15:17.543027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.209 [2024-12-06 01:15:17.543064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.209 qpair failed and we were unable to recover it. 00:37:45.209 [2024-12-06 01:15:17.543198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.209 [2024-12-06 01:15:17.543231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.209 qpair failed and we were unable to recover it. 00:37:45.209 [2024-12-06 01:15:17.543328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.209 [2024-12-06 01:15:17.543361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.209 qpair failed and we were unable to recover it. 00:37:45.209 [2024-12-06 01:15:17.543508] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2780 is same with the state(6) to be set 00:37:45.209 [2024-12-06 01:15:17.543685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.209 [2024-12-06 01:15:17.543732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.209 qpair failed and we were unable to recover it. 00:37:45.209 [2024-12-06 01:15:17.543878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.209 [2024-12-06 01:15:17.543914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.209 qpair failed and we were unable to recover it. 
00:37:45.209 [2024-12-06 01:15:17.544014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.209 [2024-12-06 01:15:17.544059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.209 qpair failed and we were unable to recover it. 00:37:45.209 [2024-12-06 01:15:17.544165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.209 [2024-12-06 01:15:17.544200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.209 qpair failed and we were unable to recover it. 00:37:45.209 [2024-12-06 01:15:17.544334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.209 [2024-12-06 01:15:17.544367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.209 qpair failed and we were unable to recover it. 00:37:45.209 [2024-12-06 01:15:17.544473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.209 [2024-12-06 01:15:17.544507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.209 qpair failed and we were unable to recover it. 00:37:45.209 [2024-12-06 01:15:17.544614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.209 [2024-12-06 01:15:17.544650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.209 qpair failed and we were unable to recover it. 00:37:45.209 [2024-12-06 01:15:17.544769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.209 [2024-12-06 01:15:17.544824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.209 qpair failed and we were unable to recover it. 00:37:45.209 [2024-12-06 01:15:17.544962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.209 [2024-12-06 01:15:17.545012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.209 qpair failed and we were unable to recover it. 00:37:45.209 [2024-12-06 01:15:17.545120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.209 [2024-12-06 01:15:17.545155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.209 qpair failed and we were unable to recover it. 00:37:45.209 [2024-12-06 01:15:17.545286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.209 [2024-12-06 01:15:17.545320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.209 qpair failed and we were unable to recover it. 00:37:45.209 [2024-12-06 01:15:17.545417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.209 [2024-12-06 01:15:17.545451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.209 qpair failed and we were unable to recover it. 
00:37:45.209 [2024-12-06 01:15:17.545564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.209 [2024-12-06 01:15:17.545598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.209 qpair failed and we were unable to recover it. 00:37:45.209 [2024-12-06 01:15:17.545724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.209 [2024-12-06 01:15:17.545759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.209 qpair failed and we were unable to recover it. 00:37:45.209 [2024-12-06 01:15:17.545920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.209 [2024-12-06 01:15:17.545969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.209 qpair failed and we were unable to recover it. 00:37:45.209 [2024-12-06 01:15:17.546095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.209 [2024-12-06 01:15:17.546131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.209 qpair failed and we were unable to recover it. 00:37:45.209 [2024-12-06 01:15:17.546244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.209 [2024-12-06 01:15:17.546278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.209 qpair failed and we were unable to recover it. 00:37:45.209 [2024-12-06 01:15:17.546382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.209 [2024-12-06 01:15:17.546416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.209 qpair failed and we were unable to recover it. 00:37:45.209 [2024-12-06 01:15:17.546520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.209 [2024-12-06 01:15:17.546553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.209 qpair failed and we were unable to recover it. 00:37:45.209 [2024-12-06 01:15:17.546663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.209 [2024-12-06 01:15:17.546698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.209 qpair failed and we were unable to recover it. 00:37:45.209 [2024-12-06 01:15:17.546803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.209 [2024-12-06 01:15:17.546856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.209 qpair failed and we were unable to recover it. 00:37:45.209 [2024-12-06 01:15:17.547019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.209 [2024-12-06 01:15:17.547067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.209 qpair failed and we were unable to recover it. 
00:37:45.209 [2024-12-06 01:15:17.547213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.209 [2024-12-06 01:15:17.547250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.209 qpair failed and we were unable to recover it. 00:37:45.209 [2024-12-06 01:15:17.547361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.209 [2024-12-06 01:15:17.547396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.209 qpair failed and we were unable to recover it. 00:37:45.209 [2024-12-06 01:15:17.547508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.209 [2024-12-06 01:15:17.547542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.209 qpair failed and we were unable to recover it. 00:37:45.209 [2024-12-06 01:15:17.547656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.209 [2024-12-06 01:15:17.547692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.209 qpair failed and we were unable to recover it. 00:37:45.209 [2024-12-06 01:15:17.547832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.209 [2024-12-06 01:15:17.547869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.209 qpair failed and we were unable to recover it. 00:37:45.209 [2024-12-06 01:15:17.547974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.209 [2024-12-06 01:15:17.548014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.209 qpair failed and we were unable to recover it. 00:37:45.209 [2024-12-06 01:15:17.548143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.209 [2024-12-06 01:15:17.548191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.209 qpair failed and we were unable to recover it. 00:37:45.209 [2024-12-06 01:15:17.548306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.209 [2024-12-06 01:15:17.548344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.209 qpair failed and we were unable to recover it. 00:37:45.209 [2024-12-06 01:15:17.548483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.209 [2024-12-06 01:15:17.548518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.209 qpair failed and we were unable to recover it. 00:37:45.209 [2024-12-06 01:15:17.548620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.209 [2024-12-06 01:15:17.548654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.209 qpair failed and we were unable to recover it. 
00:37:45.209 [2024-12-06 01:15:17.548780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.209 [2024-12-06 01:15:17.548838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.209 qpair failed and we were unable to recover it. 00:37:45.209 [2024-12-06 01:15:17.548985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.209 [2024-12-06 01:15:17.549021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.209 qpair failed and we were unable to recover it. 00:37:45.209 [2024-12-06 01:15:17.549132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.209 [2024-12-06 01:15:17.549167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.209 qpair failed and we were unable to recover it. 00:37:45.209 [2024-12-06 01:15:17.549286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.209 [2024-12-06 01:15:17.549320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.209 qpair failed and we were unable to recover it. 00:37:45.209 [2024-12-06 01:15:17.549460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.209 [2024-12-06 01:15:17.549497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.209 qpair failed and we were unable to recover it. 00:37:45.209 [2024-12-06 01:15:17.549645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.209 [2024-12-06 01:15:17.549683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.209 qpair failed and we were unable to recover it. 00:37:45.209 [2024-12-06 01:15:17.549797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.209 [2024-12-06 01:15:17.549841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.209 qpair failed and we were unable to recover it. 00:37:45.209 [2024-12-06 01:15:17.549950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.209 [2024-12-06 01:15:17.549986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.209 qpair failed and we were unable to recover it. 00:37:45.209 [2024-12-06 01:15:17.550128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.209 [2024-12-06 01:15:17.550168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.209 qpair failed and we were unable to recover it. 00:37:45.209 [2024-12-06 01:15:17.550292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.209 [2024-12-06 01:15:17.550326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.209 qpair failed and we were unable to recover it. 
00:37:45.209 [2024-12-06 01:15:17.550430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.209 [2024-12-06 01:15:17.550465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.209 qpair failed and we were unable to recover it. 00:37:45.209 [2024-12-06 01:15:17.550570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.209 [2024-12-06 01:15:17.550607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.209 qpair failed and we were unable to recover it. 00:37:45.209 [2024-12-06 01:15:17.550730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.209 [2024-12-06 01:15:17.550765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.209 qpair failed and we were unable to recover it. 00:37:45.209 [2024-12-06 01:15:17.550930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.209 [2024-12-06 01:15:17.550969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.209 qpair failed and we were unable to recover it. 00:37:45.209 [2024-12-06 01:15:17.551099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.209 [2024-12-06 01:15:17.551132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.209 qpair failed and we were unable to recover it. 00:37:45.209 [2024-12-06 01:15:17.551284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.210 [2024-12-06 01:15:17.551317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.210 qpair failed and we were unable to recover it. 00:37:45.210 [2024-12-06 01:15:17.551458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.210 [2024-12-06 01:15:17.551492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.210 qpair failed and we were unable to recover it. 00:37:45.210 [2024-12-06 01:15:17.551626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.210 [2024-12-06 01:15:17.551661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.210 qpair failed and we were unable to recover it. 00:37:45.210 [2024-12-06 01:15:17.551769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.210 [2024-12-06 01:15:17.551803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.210 qpair failed and we were unable to recover it. 00:37:45.210 [2024-12-06 01:15:17.551966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.210 [2024-12-06 01:15:17.552002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.210 qpair failed and we were unable to recover it. 
00:37:45.210 [2024-12-06 01:15:17.552116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.210 [2024-12-06 01:15:17.552151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.210 qpair failed and we were unable to recover it. 00:37:45.210 [2024-12-06 01:15:17.552256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.210 [2024-12-06 01:15:17.552289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.210 qpair failed and we were unable to recover it. 00:37:45.210 [2024-12-06 01:15:17.552443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.210 [2024-12-06 01:15:17.552476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.210 qpair failed and we were unable to recover it. 00:37:45.210 [2024-12-06 01:15:17.552600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.210 [2024-12-06 01:15:17.552638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.210 qpair failed and we were unable to recover it. 00:37:45.210 [2024-12-06 01:15:17.552772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.210 [2024-12-06 01:15:17.552808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.210 qpair failed and we were unable to recover it. 00:37:45.210 [2024-12-06 01:15:17.552946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.210 [2024-12-06 01:15:17.552979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.210 qpair failed and we were unable to recover it. 00:37:45.210 [2024-12-06 01:15:17.553136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.210 [2024-12-06 01:15:17.553170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.210 qpair failed and we were unable to recover it. 00:37:45.210 [2024-12-06 01:15:17.553307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.210 [2024-12-06 01:15:17.553343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.210 qpair failed and we were unable to recover it. 00:37:45.210 [2024-12-06 01:15:17.553455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.210 [2024-12-06 01:15:17.553487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.210 qpair failed and we were unable to recover it. 00:37:45.210 [2024-12-06 01:15:17.553606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.210 [2024-12-06 01:15:17.553639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.210 qpair failed and we were unable to recover it. 
00:37:45.210 [2024-12-06 01:15:17.553782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.210 [2024-12-06 01:15:17.553827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.210 qpair failed and we were unable to recover it. 00:37:45.210 [2024-12-06 01:15:17.553951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.210 [2024-12-06 01:15:17.553984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.210 qpair failed and we were unable to recover it. 00:37:45.210 [2024-12-06 01:15:17.554094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.210 [2024-12-06 01:15:17.554131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.210 qpair failed and we were unable to recover it. 00:37:45.210 [2024-12-06 01:15:17.554237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.210 [2024-12-06 01:15:17.554270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.210 qpair failed and we were unable to recover it. 00:37:45.210 [2024-12-06 01:15:17.554403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.210 [2024-12-06 01:15:17.554437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.210 qpair failed and we were unable to recover it. 00:37:45.210 [2024-12-06 01:15:17.554580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.210 [2024-12-06 01:15:17.554618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.210 qpair failed and we were unable to recover it. 00:37:45.210 [2024-12-06 01:15:17.554801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.210 [2024-12-06 01:15:17.554857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.210 qpair failed and we were unable to recover it. 00:37:45.210 [2024-12-06 01:15:17.554979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.210 [2024-12-06 01:15:17.555019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.210 qpair failed and we were unable to recover it. 00:37:45.210 [2024-12-06 01:15:17.555163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.210 [2024-12-06 01:15:17.555199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.210 qpair failed and we were unable to recover it. 00:37:45.210 [2024-12-06 01:15:17.555308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.210 [2024-12-06 01:15:17.555343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.210 qpair failed and we were unable to recover it. 
00:37:45.210 [2024-12-06 01:15:17.555513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.210 [2024-12-06 01:15:17.555549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.210 qpair failed and we were unable to recover it. 00:37:45.210 [2024-12-06 01:15:17.555653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.210 [2024-12-06 01:15:17.555688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.210 qpair failed and we were unable to recover it. 00:37:45.210 [2024-12-06 01:15:17.555830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.210 [2024-12-06 01:15:17.555865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.210 qpair failed and we were unable to recover it. 00:37:45.210 [2024-12-06 01:15:17.556035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.210 [2024-12-06 01:15:17.556083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.210 qpair failed and we were unable to recover it. 00:37:45.210 [2024-12-06 01:15:17.556236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.210 [2024-12-06 01:15:17.556271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.210 qpair failed and we were unable to recover it. 00:37:45.210 [2024-12-06 01:15:17.556411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.210 [2024-12-06 01:15:17.556445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.210 qpair failed and we were unable to recover it. 00:37:45.210 [2024-12-06 01:15:17.556552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.210 [2024-12-06 01:15:17.556586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.210 qpair failed and we were unable to recover it. 00:37:45.210 [2024-12-06 01:15:17.556709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.210 [2024-12-06 01:15:17.556743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.210 qpair failed and we were unable to recover it. 00:37:45.210 [2024-12-06 01:15:17.556880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.210 [2024-12-06 01:15:17.556917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.210 qpair failed and we were unable to recover it. 00:37:45.210 [2024-12-06 01:15:17.557035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.210 [2024-12-06 01:15:17.557074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.210 qpair failed and we were unable to recover it. 
00:37:45.210 [2024-12-06 01:15:17.557249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.210 [2024-12-06 01:15:17.557297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.210 qpair failed and we were unable to recover it. 00:37:45.210 [2024-12-06 01:15:17.557415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.210 [2024-12-06 01:15:17.557452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.210 qpair failed and we were unable to recover it. 00:37:45.210 [2024-12-06 01:15:17.557566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.210 [2024-12-06 01:15:17.557600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.210 qpair failed and we were unable to recover it. 00:37:45.210 [2024-12-06 01:15:17.557730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.210 [2024-12-06 01:15:17.557764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.210 qpair failed and we were unable to recover it. 00:37:45.210 [2024-12-06 01:15:17.557890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.210 [2024-12-06 01:15:17.557925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.210 qpair failed and we were unable to recover it. 00:37:45.210 [2024-12-06 01:15:17.558075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.210 [2024-12-06 01:15:17.558109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.210 qpair failed and we were unable to recover it. 00:37:45.210 [2024-12-06 01:15:17.558257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.210 [2024-12-06 01:15:17.558291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.210 qpair failed and we were unable to recover it. 00:37:45.210 [2024-12-06 01:15:17.558426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.210 [2024-12-06 01:15:17.558462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.210 qpair failed and we were unable to recover it. 00:37:45.210 [2024-12-06 01:15:17.558567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.210 [2024-12-06 01:15:17.558602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.210 qpair failed and we were unable to recover it. 00:37:45.210 [2024-12-06 01:15:17.558742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.210 [2024-12-06 01:15:17.558782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.210 qpair failed and we were unable to recover it. 
00:37:45.210 [2024-12-06 01:15:17.558929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.210 [2024-12-06 01:15:17.558964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.210 qpair failed and we were unable to recover it. 00:37:45.210 [2024-12-06 01:15:17.559071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.210 [2024-12-06 01:15:17.559105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.210 qpair failed and we were unable to recover it. 00:37:45.210 [2024-12-06 01:15:17.559231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.210 [2024-12-06 01:15:17.559267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.210 qpair failed and we were unable to recover it. 00:37:45.210 [2024-12-06 01:15:17.559408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.210 [2024-12-06 01:15:17.559444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.210 qpair failed and we were unable to recover it. 00:37:45.210 [2024-12-06 01:15:17.559550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.210 [2024-12-06 01:15:17.559584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.210 qpair failed and we were unable to recover it. 00:37:45.210 [2024-12-06 01:15:17.559702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.210 [2024-12-06 01:15:17.559737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.210 qpair failed and we were unable to recover it. 00:37:45.210 [2024-12-06 01:15:17.559852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.210 [2024-12-06 01:15:17.559886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.210 qpair failed and we were unable to recover it. 00:37:45.210 [2024-12-06 01:15:17.560013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.211 [2024-12-06 01:15:17.560062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.211 qpair failed and we were unable to recover it. 00:37:45.211 [2024-12-06 01:15:17.560201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.211 [2024-12-06 01:15:17.560236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.211 qpair failed and we were unable to recover it. 00:37:45.211 [2024-12-06 01:15:17.560342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.211 [2024-12-06 01:15:17.560376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.211 qpair failed and we were unable to recover it. 
00:37:45.211 [2024-12-06 01:15:17.560488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.211 [2024-12-06 01:15:17.560521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.211 qpair failed and we were unable to recover it. 00:37:45.211 [2024-12-06 01:15:17.560625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.211 [2024-12-06 01:15:17.560660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.211 qpair failed and we were unable to recover it. 00:37:45.211 [2024-12-06 01:15:17.560797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.211 [2024-12-06 01:15:17.560838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.211 qpair failed and we were unable to recover it. 00:37:45.211 [2024-12-06 01:15:17.560980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.211 [2024-12-06 01:15:17.561016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.211 qpair failed and we were unable to recover it. 00:37:45.211 [2024-12-06 01:15:17.561129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.211 [2024-12-06 01:15:17.561162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.211 qpair failed and we were unable to recover it. 00:37:45.211 [2024-12-06 01:15:17.561275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.211 [2024-12-06 01:15:17.561313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.211 qpair failed and we were unable to recover it. 00:37:45.211 [2024-12-06 01:15:17.561449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.211 [2024-12-06 01:15:17.561484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.211 qpair failed and we were unable to recover it. 00:37:45.211 [2024-12-06 01:15:17.561607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.211 [2024-12-06 01:15:17.561645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.211 qpair failed and we were unable to recover it. 00:37:45.211 [2024-12-06 01:15:17.561754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.211 [2024-12-06 01:15:17.561801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.211 qpair failed and we were unable to recover it. 00:37:45.211 [2024-12-06 01:15:17.561973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.211 [2024-12-06 01:15:17.562008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.211 qpair failed and we were unable to recover it. 
00:37:45.211 [2024-12-06 01:15:17.562147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.211 [2024-12-06 01:15:17.562197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.211 qpair failed and we were unable to recover it. 00:37:45.211 [2024-12-06 01:15:17.562422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.211 [2024-12-06 01:15:17.562459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.211 qpair failed and we were unable to recover it. 00:37:45.211 [2024-12-06 01:15:17.562599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.211 [2024-12-06 01:15:17.562634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.211 qpair failed and we were unable to recover it. 00:37:45.211 [2024-12-06 01:15:17.562853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.211 [2024-12-06 01:15:17.562888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.211 qpair failed and we were unable to recover it. 00:37:45.211 [2024-12-06 01:15:17.563002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.211 [2024-12-06 01:15:17.563036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.211 qpair failed and we were unable to recover it. 00:37:45.211 [2024-12-06 01:15:17.563169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.211 [2024-12-06 01:15:17.563202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.211 qpair failed and we were unable to recover it. 00:37:45.211 [2024-12-06 01:15:17.563314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.211 [2024-12-06 01:15:17.563351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.211 qpair failed and we were unable to recover it. 00:37:45.211 [2024-12-06 01:15:17.563490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.211 [2024-12-06 01:15:17.563525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.211 qpair failed and we were unable to recover it. 00:37:45.211 [2024-12-06 01:15:17.563666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.211 [2024-12-06 01:15:17.563700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.211 qpair failed and we were unable to recover it. 00:37:45.211 [2024-12-06 01:15:17.563841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.211 [2024-12-06 01:15:17.563876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.211 qpair failed and we were unable to recover it. 
00:37:45.211 [2024-12-06 01:15:17.564006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.211 [2024-12-06 01:15:17.564038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.211 qpair failed and we were unable to recover it. 00:37:45.211 [2024-12-06 01:15:17.564145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.211 [2024-12-06 01:15:17.564180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.211 qpair failed and we were unable to recover it. 00:37:45.211 [2024-12-06 01:15:17.564318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.211 [2024-12-06 01:15:17.564352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.211 qpair failed and we were unable to recover it. 00:37:45.211 [2024-12-06 01:15:17.564490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.211 [2024-12-06 01:15:17.564523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.211 qpair failed and we were unable to recover it. 00:37:45.211 [2024-12-06 01:15:17.564661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.211 [2024-12-06 01:15:17.564695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.211 qpair failed and we were unable to recover it. 00:37:45.211 [2024-12-06 01:15:17.564834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.211 [2024-12-06 01:15:17.564871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.211 qpair failed and we were unable to recover it. 00:37:45.211 [2024-12-06 01:15:17.565006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.211 [2024-12-06 01:15:17.565040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.211 qpair failed and we were unable to recover it. 00:37:45.211 [2024-12-06 01:15:17.565152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.211 [2024-12-06 01:15:17.565185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.211 qpair failed and we were unable to recover it. 00:37:45.211 [2024-12-06 01:15:17.565316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.211 [2024-12-06 01:15:17.565349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.211 qpair failed and we were unable to recover it. 00:37:45.211 [2024-12-06 01:15:17.565487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.211 [2024-12-06 01:15:17.565521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.211 qpair failed and we were unable to recover it. 
00:37:45.211 [2024-12-06 01:15:17.565627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.211 [2024-12-06 01:15:17.565659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.211 qpair failed and we were unable to recover it. 00:37:45.211 [2024-12-06 01:15:17.565776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.211 [2024-12-06 01:15:17.565811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.211 qpair failed and we were unable to recover it. 00:37:45.211 [2024-12-06 01:15:17.566011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.211 [2024-12-06 01:15:17.566059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.211 qpair failed and we were unable to recover it. 00:37:45.211 [2024-12-06 01:15:17.566177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.211 [2024-12-06 01:15:17.566215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.211 qpair failed and we were unable to recover it. 00:37:45.211 [2024-12-06 01:15:17.566330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.211 [2024-12-06 01:15:17.566365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.211 qpair failed and we were unable to recover it. 00:37:45.211 [2024-12-06 01:15:17.566491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.211 [2024-12-06 01:15:17.566525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.211 qpair failed and we were unable to recover it. 00:37:45.211 [2024-12-06 01:15:17.566654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.211 [2024-12-06 01:15:17.566687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.211 qpair failed and we were unable to recover it. 00:37:45.211 [2024-12-06 01:15:17.566812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.211 [2024-12-06 01:15:17.566854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.211 qpair failed and we were unable to recover it. 00:37:45.211 [2024-12-06 01:15:17.566989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.211 [2024-12-06 01:15:17.567025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.211 qpair failed and we were unable to recover it. 00:37:45.211 [2024-12-06 01:15:17.567157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.211 [2024-12-06 01:15:17.567190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.211 qpair failed and we were unable to recover it. 
00:37:45.211 [2024-12-06 01:15:17.567296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.211 [2024-12-06 01:15:17.567329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.211 qpair failed and we were unable to recover it. 00:37:45.211 [2024-12-06 01:15:17.567448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.211 [2024-12-06 01:15:17.567482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.211 qpair failed and we were unable to recover it. 00:37:45.211 [2024-12-06 01:15:17.567591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.211 [2024-12-06 01:15:17.567626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.211 qpair failed and we were unable to recover it. 00:37:45.211 [2024-12-06 01:15:17.567741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.211 [2024-12-06 01:15:17.567777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.211 qpair failed and we were unable to recover it. 00:37:45.211 [2024-12-06 01:15:17.567902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.211 [2024-12-06 01:15:17.567949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.211 qpair failed and we were unable to recover it. 00:37:45.211 [2024-12-06 01:15:17.568079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.211 [2024-12-06 01:15:17.568134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.211 qpair failed and we were unable to recover it. 00:37:45.211 [2024-12-06 01:15:17.568294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.211 [2024-12-06 01:15:17.568330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.211 qpair failed and we were unable to recover it. 00:37:45.211 [2024-12-06 01:15:17.568447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.211 [2024-12-06 01:15:17.568481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.211 qpair failed and we were unable to recover it. 00:37:45.211 [2024-12-06 01:15:17.568591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.211 [2024-12-06 01:15:17.568624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.211 qpair failed and we were unable to recover it. 00:37:45.211 [2024-12-06 01:15:17.568737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.211 [2024-12-06 01:15:17.568771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.211 qpair failed and we were unable to recover it. 
00:37:45.211 [2024-12-06 01:15:17.568879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.211 [2024-12-06 01:15:17.568912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.211 qpair failed and we were unable to recover it. 00:37:45.211 [2024-12-06 01:15:17.569029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.212 [2024-12-06 01:15:17.569063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.212 qpair failed and we were unable to recover it. 00:37:45.212 [2024-12-06 01:15:17.569171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.212 [2024-12-06 01:15:17.569207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.212 qpair failed and we were unable to recover it. 00:37:45.212 [2024-12-06 01:15:17.569354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.212 [2024-12-06 01:15:17.569388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.212 qpair failed and we were unable to recover it. 00:37:45.212 [2024-12-06 01:15:17.569492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.212 [2024-12-06 01:15:17.569526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.212 qpair failed and we were unable to recover it. 00:37:45.212 [2024-12-06 01:15:17.569639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.212 [2024-12-06 01:15:17.569679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.212 qpair failed and we were unable to recover it. 00:37:45.212 [2024-12-06 01:15:17.569808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.212 [2024-12-06 01:15:17.569865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.212 qpair failed and we were unable to recover it. 00:37:45.212 [2024-12-06 01:15:17.570016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.212 [2024-12-06 01:15:17.570052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.212 qpair failed and we were unable to recover it. 00:37:45.212 [2024-12-06 01:15:17.570192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.212 [2024-12-06 01:15:17.570226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.212 qpair failed and we were unable to recover it. 00:37:45.212 [2024-12-06 01:15:17.570372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.212 [2024-12-06 01:15:17.570406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.212 qpair failed and we were unable to recover it. 
00:37:45.212 [2024-12-06 01:15:17.570514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.212 [2024-12-06 01:15:17.570547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.212 qpair failed and we were unable to recover it. 00:37:45.212 [2024-12-06 01:15:17.570684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.212 [2024-12-06 01:15:17.570720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.212 qpair failed and we were unable to recover it. 00:37:45.212 [2024-12-06 01:15:17.570853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.212 [2024-12-06 01:15:17.570887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.212 qpair failed and we were unable to recover it. 00:37:45.212 [2024-12-06 01:15:17.571000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.212 [2024-12-06 01:15:17.571034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.212 qpair failed and we were unable to recover it. 00:37:45.212 [2024-12-06 01:15:17.571140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.212 [2024-12-06 01:15:17.571174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.212 qpair failed and we were unable to recover it. 00:37:45.212 [2024-12-06 01:15:17.571315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.212 [2024-12-06 01:15:17.571348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.212 qpair failed and we were unable to recover it. 00:37:45.212 [2024-12-06 01:15:17.571456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.212 [2024-12-06 01:15:17.571490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.212 qpair failed and we were unable to recover it. 00:37:45.212 [2024-12-06 01:15:17.571649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.212 [2024-12-06 01:15:17.571683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.212 qpair failed and we were unable to recover it. 00:37:45.212 [2024-12-06 01:15:17.571872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.212 [2024-12-06 01:15:17.571922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.212 qpair failed and we were unable to recover it. 00:37:45.212 [2024-12-06 01:15:17.572052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.212 [2024-12-06 01:15:17.572099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.212 qpair failed and we were unable to recover it. 
00:37:45.212 [2024-12-06 01:15:17.572271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.212 [2024-12-06 01:15:17.572306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.212 qpair failed and we were unable to recover it. 00:37:45.212 [2024-12-06 01:15:17.572423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.212 [2024-12-06 01:15:17.572457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.212 qpair failed and we were unable to recover it. 00:37:45.212 [2024-12-06 01:15:17.572579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.212 [2024-12-06 01:15:17.572617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.212 qpair failed and we were unable to recover it. 00:37:45.212 [2024-12-06 01:15:17.572732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.212 [2024-12-06 01:15:17.572767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.212 qpair failed and we were unable to recover it. 00:37:45.212 [2024-12-06 01:15:17.572954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.212 [2024-12-06 01:15:17.572993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.212 qpair failed and we were unable to recover it. 00:37:45.212 [2024-12-06 01:15:17.573129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.212 [2024-12-06 01:15:17.573165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.212 qpair failed and we were unable to recover it. 00:37:45.212 [2024-12-06 01:15:17.573269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.212 [2024-12-06 01:15:17.573302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.212 qpair failed and we were unable to recover it. 00:37:45.212 [2024-12-06 01:15:17.573445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.212 [2024-12-06 01:15:17.573479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.212 qpair failed and we were unable to recover it. 00:37:45.212 [2024-12-06 01:15:17.573591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.212 [2024-12-06 01:15:17.573624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.212 qpair failed and we were unable to recover it. 00:37:45.212 [2024-12-06 01:15:17.573758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.212 [2024-12-06 01:15:17.573793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.212 qpair failed and we were unable to recover it. 
00:37:45.212 [2024-12-06 01:15:17.573925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.212 [2024-12-06 01:15:17.573972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.212 qpair failed and we were unable to recover it. 00:37:45.212 [2024-12-06 01:15:17.574088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.212 [2024-12-06 01:15:17.574125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.212 qpair failed and we were unable to recover it. 00:37:45.212 [2024-12-06 01:15:17.574227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.212 [2024-12-06 01:15:17.574262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.212 qpair failed and we were unable to recover it. 00:37:45.212 [2024-12-06 01:15:17.574427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.212 [2024-12-06 01:15:17.574460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.212 qpair failed and we were unable to recover it. 00:37:45.212 [2024-12-06 01:15:17.574573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.212 [2024-12-06 01:15:17.574607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.212 qpair failed and we were unable to recover it. 00:37:45.212 [2024-12-06 01:15:17.574714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.212 [2024-12-06 01:15:17.574755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.212 qpair failed and we were unable to recover it. 00:37:45.212 [2024-12-06 01:15:17.574871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.212 [2024-12-06 01:15:17.574907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.212 qpair failed and we were unable to recover it. 00:37:45.212 [2024-12-06 01:15:17.575016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.212 [2024-12-06 01:15:17.575049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.212 qpair failed and we were unable to recover it. 00:37:45.212 [2024-12-06 01:15:17.575151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.212 [2024-12-06 01:15:17.575184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.212 qpair failed and we were unable to recover it. 00:37:45.212 [2024-12-06 01:15:17.575321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.212 [2024-12-06 01:15:17.575359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.212 qpair failed and we were unable to recover it. 
00:37:45.212 [2024-12-06 01:15:17.575493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.212 [2024-12-06 01:15:17.575526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.212 qpair failed and we were unable to recover it. 00:37:45.212 [2024-12-06 01:15:17.575670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.212 [2024-12-06 01:15:17.575704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.212 qpair failed and we were unable to recover it. 00:37:45.212 [2024-12-06 01:15:17.575829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.212 [2024-12-06 01:15:17.575864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.212 qpair failed and we were unable to recover it. 00:37:45.212 [2024-12-06 01:15:17.576043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.212 [2024-12-06 01:15:17.576092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.212 qpair failed and we were unable to recover it. 00:37:45.212 [2024-12-06 01:15:17.576235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.212 [2024-12-06 01:15:17.576272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.212 qpair failed and we were unable to recover it. 00:37:45.212 [2024-12-06 01:15:17.576411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.212 [2024-12-06 01:15:17.576445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.212 qpair failed and we were unable to recover it. 00:37:45.212 [2024-12-06 01:15:17.576544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.212 [2024-12-06 01:15:17.576578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.212 qpair failed and we were unable to recover it. 00:37:45.212 [2024-12-06 01:15:17.576711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.212 [2024-12-06 01:15:17.576744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.212 qpair failed and we were unable to recover it. 00:37:45.213 [2024-12-06 01:15:17.576857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.213 [2024-12-06 01:15:17.576893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.213 qpair failed and we were unable to recover it. 00:37:45.213 [2024-12-06 01:15:17.577030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.213 [2024-12-06 01:15:17.577078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.213 qpair failed and we were unable to recover it. 
00:37:45.213 [2024-12-06 01:15:17.577220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.213 [2024-12-06 01:15:17.577255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.213 qpair failed and we were unable to recover it. 00:37:45.213 [2024-12-06 01:15:17.577366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.213 [2024-12-06 01:15:17.577401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.213 qpair failed and we were unable to recover it. 00:37:45.213 [2024-12-06 01:15:17.577540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.213 [2024-12-06 01:15:17.577575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.213 qpair failed and we were unable to recover it. 00:37:45.213 [2024-12-06 01:15:17.577679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.213 [2024-12-06 01:15:17.577712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.213 qpair failed and we were unable to recover it. 00:37:45.213 [2024-12-06 01:15:17.577830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.213 [2024-12-06 01:15:17.577866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.213 qpair failed and we were unable to recover it. 00:37:45.213 [2024-12-06 01:15:17.577986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.213 [2024-12-06 01:15:17.578021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.213 qpair failed and we were unable to recover it. 00:37:45.213 [2024-12-06 01:15:17.578171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.213 [2024-12-06 01:15:17.578221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.213 qpair failed and we were unable to recover it. 00:37:45.213 [2024-12-06 01:15:17.578333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.213 [2024-12-06 01:15:17.578368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.213 qpair failed and we were unable to recover it. 00:37:45.213 [2024-12-06 01:15:17.578503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.213 [2024-12-06 01:15:17.578537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.213 qpair failed and we were unable to recover it. 00:37:45.213 [2024-12-06 01:15:17.578668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.213 [2024-12-06 01:15:17.578701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.213 qpair failed and we were unable to recover it. 
00:37:45.213 [2024-12-06 01:15:17.578813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.213 [2024-12-06 01:15:17.578852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.213 qpair failed and we were unable to recover it. 00:37:45.213 [2024-12-06 01:15:17.578959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.213 [2024-12-06 01:15:17.578992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.213 qpair failed and we were unable to recover it. 00:37:45.213 [2024-12-06 01:15:17.579100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.213 [2024-12-06 01:15:17.579141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.213 qpair failed and we were unable to recover it. 00:37:45.213 [2024-12-06 01:15:17.579283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.213 [2024-12-06 01:15:17.579317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.213 qpair failed and we were unable to recover it. 00:37:45.213 [2024-12-06 01:15:17.579439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.213 [2024-12-06 01:15:17.579473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.213 qpair failed and we were unable to recover it. 00:37:45.213 [2024-12-06 01:15:17.579580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.213 [2024-12-06 01:15:17.579619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.213 qpair failed and we were unable to recover it. 00:37:45.213 [2024-12-06 01:15:17.579748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.213 [2024-12-06 01:15:17.579798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.213 qpair failed and we were unable to recover it. 00:37:45.213 [2024-12-06 01:15:17.579925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.213 [2024-12-06 01:15:17.579962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.213 qpair failed and we were unable to recover it. 00:37:45.213 [2024-12-06 01:15:17.580067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.213 [2024-12-06 01:15:17.580102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.213 qpair failed and we were unable to recover it. 00:37:45.213 [2024-12-06 01:15:17.580213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.213 [2024-12-06 01:15:17.580248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.213 qpair failed and we were unable to recover it. 
00:37:45.213 [2024-12-06 01:15:17.580379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.213 [2024-12-06 01:15:17.580411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.213 qpair failed and we were unable to recover it. 00:37:45.213 [2024-12-06 01:15:17.580512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.213 [2024-12-06 01:15:17.580546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.213 qpair failed and we were unable to recover it. 00:37:45.213 [2024-12-06 01:15:17.580657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.213 [2024-12-06 01:15:17.580692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.213 qpair failed and we were unable to recover it. 00:37:45.213 [2024-12-06 01:15:17.580878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.213 [2024-12-06 01:15:17.580927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.213 qpair failed and we were unable to recover it. 00:37:45.213 [2024-12-06 01:15:17.581076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.213 [2024-12-06 01:15:17.581113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.213 qpair failed and we were unable to recover it. 00:37:45.213 [2024-12-06 01:15:17.581229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.213 [2024-12-06 01:15:17.581266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.213 qpair failed and we were unable to recover it. 00:37:45.213 [2024-12-06 01:15:17.581410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.213 [2024-12-06 01:15:17.581445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.213 qpair failed and we were unable to recover it. 00:37:45.213 [2024-12-06 01:15:17.581551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.213 [2024-12-06 01:15:17.581586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.213 qpair failed and we were unable to recover it. 00:37:45.213 [2024-12-06 01:15:17.581715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.213 [2024-12-06 01:15:17.581763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.213 qpair failed and we were unable to recover it. 00:37:45.213 [2024-12-06 01:15:17.581922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.213 [2024-12-06 01:15:17.581958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.213 qpair failed and we were unable to recover it. 
00:37:45.213 [2024-12-06 01:15:17.582072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.213 [2024-12-06 01:15:17.582105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.213 qpair failed and we were unable to recover it. 00:37:45.213 [2024-12-06 01:15:17.582241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.213 [2024-12-06 01:15:17.582274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.213 qpair failed and we were unable to recover it. 00:37:45.213 [2024-12-06 01:15:17.582410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.213 [2024-12-06 01:15:17.582445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.213 qpair failed and we were unable to recover it. 00:37:45.213 [2024-12-06 01:15:17.582568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.213 [2024-12-06 01:15:17.582604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.213 qpair failed and we were unable to recover it. 00:37:45.213 [2024-12-06 01:15:17.582710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.213 [2024-12-06 01:15:17.582745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.213 qpair failed and we were unable to recover it. 00:37:45.213 [2024-12-06 01:15:17.582866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.213 [2024-12-06 01:15:17.582901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.213 qpair failed and we were unable to recover it. 00:37:45.213 [2024-12-06 01:15:17.583038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.213 [2024-12-06 01:15:17.583072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.213 qpair failed and we were unable to recover it. 00:37:45.213 [2024-12-06 01:15:17.583185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.213 [2024-12-06 01:15:17.583220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.213 qpair failed and we were unable to recover it. 00:37:45.213 [2024-12-06 01:15:17.583436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.213 [2024-12-06 01:15:17.583471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.213 qpair failed and we were unable to recover it. 00:37:45.213 [2024-12-06 01:15:17.583600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.213 [2024-12-06 01:15:17.583635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.213 qpair failed and we were unable to recover it. 
00:37:45.213 [2024-12-06 01:15:17.583767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.213 [2024-12-06 01:15:17.583825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.213 qpair failed and we were unable to recover it. 00:37:45.213 [2024-12-06 01:15:17.584010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.213 [2024-12-06 01:15:17.584057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.213 qpair failed and we were unable to recover it. 00:37:45.213 [2024-12-06 01:15:17.584178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.213 [2024-12-06 01:15:17.584214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.213 qpair failed and we were unable to recover it. 00:37:45.213 [2024-12-06 01:15:17.584324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.213 [2024-12-06 01:15:17.584359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.213 qpair failed and we were unable to recover it. 00:37:45.213 [2024-12-06 01:15:17.584481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.213 [2024-12-06 01:15:17.584514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.213 qpair failed and we were unable to recover it. 00:37:45.213 [2024-12-06 01:15:17.584624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.213 [2024-12-06 01:15:17.584657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.213 qpair failed and we were unable to recover it. 00:37:45.213 [2024-12-06 01:15:17.584821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.213 [2024-12-06 01:15:17.584856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.213 qpair failed and we were unable to recover it. 00:37:45.213 [2024-12-06 01:15:17.584967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.213 [2024-12-06 01:15:17.585017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.213 qpair failed and we were unable to recover it. 00:37:45.213 [2024-12-06 01:15:17.585178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.213 [2024-12-06 01:15:17.585212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.213 qpair failed and we were unable to recover it. 00:37:45.213 [2024-12-06 01:15:17.585314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.213 [2024-12-06 01:15:17.585349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.213 qpair failed and we were unable to recover it. 
00:37:45.213 [2024-12-06 01:15:17.585498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.213 [2024-12-06 01:15:17.585531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.213 qpair failed and we were unable to recover it. 00:37:45.213 [2024-12-06 01:15:17.585662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.213 [2024-12-06 01:15:17.585695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.213 qpair failed and we were unable to recover it. 00:37:45.213 [2024-12-06 01:15:17.585807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.214 [2024-12-06 01:15:17.585852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.214 qpair failed and we were unable to recover it. 00:37:45.214 [2024-12-06 01:15:17.585959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.214 [2024-12-06 01:15:17.585993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.214 qpair failed and we were unable to recover it. 00:37:45.214 [2024-12-06 01:15:17.586103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.214 [2024-12-06 01:15:17.586142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.214 qpair failed and we were unable to recover it. 00:37:45.214 [2024-12-06 01:15:17.586249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.214 [2024-12-06 01:15:17.586289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.214 qpair failed and we were unable to recover it. 00:37:45.214 [2024-12-06 01:15:17.586402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.214 [2024-12-06 01:15:17.586436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.214 qpair failed and we were unable to recover it. 00:37:45.214 [2024-12-06 01:15:17.586549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.214 [2024-12-06 01:15:17.586582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.214 qpair failed and we were unable to recover it. 00:37:45.214 [2024-12-06 01:15:17.586701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.214 [2024-12-06 01:15:17.586738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.214 qpair failed and we were unable to recover it. 00:37:45.214 [2024-12-06 01:15:17.586849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.214 [2024-12-06 01:15:17.586888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.214 qpair failed and we were unable to recover it. 
00:37:45.214 [2024-12-06 01:15:17.586988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.214 [2024-12-06 01:15:17.587022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.214 qpair failed and we were unable to recover it. 00:37:45.214 [2024-12-06 01:15:17.587171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.214 [2024-12-06 01:15:17.587205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.214 qpair failed and we were unable to recover it. 00:37:45.214 [2024-12-06 01:15:17.587320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.214 [2024-12-06 01:15:17.587353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.214 qpair failed and we were unable to recover it. 00:37:45.214 [2024-12-06 01:15:17.587490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.214 [2024-12-06 01:15:17.587523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.214 qpair failed and we were unable to recover it. 00:37:45.214 [2024-12-06 01:15:17.587670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.214 [2024-12-06 01:15:17.587719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.214 qpair failed and we were unable to recover it. 00:37:45.214 [2024-12-06 01:15:17.587917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.214 [2024-12-06 01:15:17.587956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.214 qpair failed and we were unable to recover it. 00:37:45.214 [2024-12-06 01:15:17.588073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.214 [2024-12-06 01:15:17.588130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.214 qpair failed and we were unable to recover it. 00:37:45.214 [2024-12-06 01:15:17.588243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.214 [2024-12-06 01:15:17.588279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.214 qpair failed and we were unable to recover it. 00:37:45.214 [2024-12-06 01:15:17.588391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.214 [2024-12-06 01:15:17.588425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.214 qpair failed and we were unable to recover it. 00:37:45.214 [2024-12-06 01:15:17.588644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.214 [2024-12-06 01:15:17.588678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.214 qpair failed and we were unable to recover it. 
00:37:45.214 [2024-12-06 01:15:17.588831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.214 [2024-12-06 01:15:17.588867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.214 qpair failed and we were unable to recover it. 00:37:45.214 [2024-12-06 01:15:17.589003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.214 [2024-12-06 01:15:17.589037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.214 qpair failed and we were unable to recover it. 00:37:45.214 [2024-12-06 01:15:17.589207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.214 [2024-12-06 01:15:17.589245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.214 qpair failed and we were unable to recover it. 00:37:45.214 [2024-12-06 01:15:17.589401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.214 [2024-12-06 01:15:17.589441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.214 qpair failed and we were unable to recover it. 00:37:45.214 [2024-12-06 01:15:17.589576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.214 [2024-12-06 01:15:17.589629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.214 qpair failed and we were unable to recover it. 00:37:45.214 [2024-12-06 01:15:17.589762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.214 [2024-12-06 01:15:17.589809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.214 qpair failed and we were unable to recover it. 00:37:45.214 [2024-12-06 01:15:17.589968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.214 [2024-12-06 01:15:17.590002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.214 qpair failed and we were unable to recover it. 00:37:45.214 [2024-12-06 01:15:17.590139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.214 [2024-12-06 01:15:17.590192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.214 qpair failed and we were unable to recover it. 00:37:45.214 [2024-12-06 01:15:17.590311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.214 [2024-12-06 01:15:17.590350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.214 qpair failed and we were unable to recover it. 00:37:45.214 [2024-12-06 01:15:17.590507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.214 [2024-12-06 01:15:17.590546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.214 qpair failed and we were unable to recover it. 
00:37:45.214 [2024-12-06 01:15:17.590689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.214 [2024-12-06 01:15:17.590729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.214 qpair failed and we were unable to recover it. 00:37:45.214 [2024-12-06 01:15:17.590899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.214 [2024-12-06 01:15:17.590935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.214 qpair failed and we were unable to recover it. 00:37:45.214 [2024-12-06 01:15:17.591102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.214 [2024-12-06 01:15:17.591177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.214 qpair failed and we were unable to recover it. 00:37:45.214 [2024-12-06 01:15:17.591299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.214 [2024-12-06 01:15:17.591334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.214 qpair failed and we were unable to recover it. 00:37:45.214 [2024-12-06 01:15:17.591444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.214 [2024-12-06 01:15:17.591478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.214 qpair failed and we were unable to recover it. 00:37:45.214 [2024-12-06 01:15:17.591608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.214 [2024-12-06 01:15:17.591643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.214 qpair failed and we were unable to recover it. 00:37:45.214 [2024-12-06 01:15:17.591751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.214 [2024-12-06 01:15:17.591786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.214 qpair failed and we were unable to recover it. 00:37:45.214 [2024-12-06 01:15:17.592008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.214 [2024-12-06 01:15:17.592043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.214 qpair failed and we were unable to recover it. 00:37:45.214 [2024-12-06 01:15:17.592191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.214 [2024-12-06 01:15:17.592253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.214 qpair failed and we were unable to recover it. 00:37:45.214 [2024-12-06 01:15:17.592401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.214 [2024-12-06 01:15:17.592439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.214 qpair failed and we were unable to recover it. 
00:37:45.214 [2024-12-06 01:15:17.592594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.214 [2024-12-06 01:15:17.592634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.214 qpair failed and we were unable to recover it. 00:37:45.214 [2024-12-06 01:15:17.592756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.214 [2024-12-06 01:15:17.592795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.214 qpair failed and we were unable to recover it. 00:37:45.214 [2024-12-06 01:15:17.592980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.214 [2024-12-06 01:15:17.593034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.214 qpair failed and we were unable to recover it. 00:37:45.214 [2024-12-06 01:15:17.593211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.214 [2024-12-06 01:15:17.593274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.214 qpair failed and we were unable to recover it. 00:37:45.214 [2024-12-06 01:15:17.593400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.214 [2024-12-06 01:15:17.593439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.214 qpair failed and we were unable to recover it. 00:37:45.214 [2024-12-06 01:15:17.593569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.214 [2024-12-06 01:15:17.593604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.214 qpair failed and we were unable to recover it. 00:37:45.214 [2024-12-06 01:15:17.593702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.214 [2024-12-06 01:15:17.593736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.214 qpair failed and we were unable to recover it. 00:37:45.214 [2024-12-06 01:15:17.593888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.214 [2024-12-06 01:15:17.593925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.214 qpair failed and we were unable to recover it. 00:37:45.214 [2024-12-06 01:15:17.594051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.214 [2024-12-06 01:15:17.594108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.214 qpair failed and we were unable to recover it. 00:37:45.214 [2024-12-06 01:15:17.594289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.214 [2024-12-06 01:15:17.594327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.214 qpair failed and we were unable to recover it. 
00:37:45.214 [2024-12-06 01:15:17.594446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.214 [2024-12-06 01:15:17.594480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.214 qpair failed and we were unable to recover it. 00:37:45.214 [2024-12-06 01:15:17.594611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.214 [2024-12-06 01:15:17.594645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.214 qpair failed and we were unable to recover it. 00:37:45.214 [2024-12-06 01:15:17.594808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.214 [2024-12-06 01:15:17.594849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.214 qpair failed and we were unable to recover it. 00:37:45.214 [2024-12-06 01:15:17.594986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.214 [2024-12-06 01:15:17.595022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.214 qpair failed and we were unable to recover it. 00:37:45.214 [2024-12-06 01:15:17.595158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.214 [2024-12-06 01:15:17.595193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.214 qpair failed and we were unable to recover it. 00:37:45.214 [2024-12-06 01:15:17.595304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.214 [2024-12-06 01:15:17.595339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.214 qpair failed and we were unable to recover it. 00:37:45.215 [2024-12-06 01:15:17.595459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.215 [2024-12-06 01:15:17.595494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.215 qpair failed and we were unable to recover it. 00:37:45.215 [2024-12-06 01:15:17.595635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.215 [2024-12-06 01:15:17.595669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.215 qpair failed and we were unable to recover it. 00:37:45.215 [2024-12-06 01:15:17.595785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.215 [2024-12-06 01:15:17.595836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.215 qpair failed and we were unable to recover it. 00:37:45.215 [2024-12-06 01:15:17.596013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.215 [2024-12-06 01:15:17.596048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.215 qpair failed and we were unable to recover it. 
00:37:45.215 [2024-12-06 01:15:17.596188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.215 [2024-12-06 01:15:17.596223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.215 qpair failed and we were unable to recover it. 00:37:45.215 [2024-12-06 01:15:17.596363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.215 [2024-12-06 01:15:17.596397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.215 qpair failed and we were unable to recover it. 00:37:45.215 [2024-12-06 01:15:17.596502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.215 [2024-12-06 01:15:17.596535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.215 qpair failed and we were unable to recover it. 00:37:45.215 [2024-12-06 01:15:17.596647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.215 [2024-12-06 01:15:17.596680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.215 qpair failed and we were unable to recover it. 00:37:45.215 [2024-12-06 01:15:17.596828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.215 [2024-12-06 01:15:17.596862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.215 qpair failed and we were unable to recover it. 00:37:45.215 [2024-12-06 01:15:17.596966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.215 [2024-12-06 01:15:17.596999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.215 qpair failed and we were unable to recover it. 00:37:45.215 [2024-12-06 01:15:17.597121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.215 [2024-12-06 01:15:17.597156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.215 qpair failed and we were unable to recover it. 00:37:45.215 [2024-12-06 01:15:17.597295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.215 [2024-12-06 01:15:17.597328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.215 qpair failed and we were unable to recover it. 00:37:45.215 [2024-12-06 01:15:17.597440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.215 [2024-12-06 01:15:17.597475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.215 qpair failed and we were unable to recover it. 00:37:45.215 [2024-12-06 01:15:17.597647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.215 [2024-12-06 01:15:17.597682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.215 qpair failed and we were unable to recover it. 
00:37:45.215 [2024-12-06 01:15:17.597841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.215 [2024-12-06 01:15:17.597876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.215 qpair failed and we were unable to recover it. 00:37:45.215 [2024-12-06 01:15:17.597991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.215 [2024-12-06 01:15:17.598026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.215 qpair failed and we were unable to recover it. 00:37:45.215 [2024-12-06 01:15:17.598148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.215 [2024-12-06 01:15:17.598183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.215 qpair failed and we were unable to recover it. 00:37:45.215 [2024-12-06 01:15:17.598334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.215 [2024-12-06 01:15:17.598388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.215 qpair failed and we were unable to recover it. 00:37:45.215 [2024-12-06 01:15:17.598596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.215 [2024-12-06 01:15:17.598649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.215 qpair failed and we were unable to recover it. 00:37:45.215 [2024-12-06 01:15:17.598752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.215 [2024-12-06 01:15:17.598787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.215 qpair failed and we were unable to recover it. 00:37:45.215 [2024-12-06 01:15:17.598909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.215 [2024-12-06 01:15:17.598943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.215 qpair failed and we were unable to recover it. 00:37:45.215 [2024-12-06 01:15:17.599080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.215 [2024-12-06 01:15:17.599113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.215 qpair failed and we were unable to recover it. 00:37:45.215 [2024-12-06 01:15:17.599219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.215 [2024-12-06 01:15:17.599252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.215 qpair failed and we were unable to recover it. 00:37:45.215 [2024-12-06 01:15:17.599385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.215 [2024-12-06 01:15:17.599418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.215 qpair failed and we were unable to recover it. 
00:37:45.215 [2024-12-06 01:15:17.599513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.215 [2024-12-06 01:15:17.599547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.215 qpair failed and we were unable to recover it. 00:37:45.215 [2024-12-06 01:15:17.599646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.215 [2024-12-06 01:15:17.599679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.215 qpair failed and we were unable to recover it. 00:37:45.215 [2024-12-06 01:15:17.599824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.215 [2024-12-06 01:15:17.599866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.215 qpair failed and we were unable to recover it. 00:37:45.215 [2024-12-06 01:15:17.600002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.215 [2024-12-06 01:15:17.600036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.215 qpair failed and we were unable to recover it. 00:37:45.215 [2024-12-06 01:15:17.600184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.215 [2024-12-06 01:15:17.600236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.215 qpair failed and we were unable to recover it. 00:37:45.215 [2024-12-06 01:15:17.600411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.215 [2024-12-06 01:15:17.600448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.215 qpair failed and we were unable to recover it. 00:37:45.215 [2024-12-06 01:15:17.600651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.215 [2024-12-06 01:15:17.600690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.215 qpair failed and we were unable to recover it. 00:37:45.215 [2024-12-06 01:15:17.600835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.215 [2024-12-06 01:15:17.600897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.215 qpair failed and we were unable to recover it. 00:37:45.215 [2024-12-06 01:15:17.601006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.215 [2024-12-06 01:15:17.601040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.215 qpair failed and we were unable to recover it. 00:37:45.215 [2024-12-06 01:15:17.601215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.215 [2024-12-06 01:15:17.601286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.215 qpair failed and we were unable to recover it. 
00:37:45.215 [2024-12-06 01:15:17.601514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.215 [2024-12-06 01:15:17.601552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.215 qpair failed and we were unable to recover it. 00:37:45.215 [2024-12-06 01:15:17.601688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.215 [2024-12-06 01:15:17.601723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.215 qpair failed and we were unable to recover it. 00:37:45.215 [2024-12-06 01:15:17.601843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.215 [2024-12-06 01:15:17.601878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.215 qpair failed and we were unable to recover it. 00:37:45.215 [2024-12-06 01:15:17.602004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.215 [2024-12-06 01:15:17.602038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.215 qpair failed and we were unable to recover it. 00:37:45.215 [2024-12-06 01:15:17.602213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.215 [2024-12-06 01:15:17.602247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.215 qpair failed and we were unable to recover it. 00:37:45.215 [2024-12-06 01:15:17.602381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.215 [2024-12-06 01:15:17.602414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.215 qpair failed and we were unable to recover it. 00:37:45.215 [2024-12-06 01:15:17.602533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.215 [2024-12-06 01:15:17.602566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.215 qpair failed and we were unable to recover it. 00:37:45.215 [2024-12-06 01:15:17.602713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.215 [2024-12-06 01:15:17.602748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.215 qpair failed and we were unable to recover it. 00:37:45.215 [2024-12-06 01:15:17.602916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.215 [2024-12-06 01:15:17.602964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.215 qpair failed and we were unable to recover it. 00:37:45.215 [2024-12-06 01:15:17.603081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.215 [2024-12-06 01:15:17.603117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.215 qpair failed and we were unable to recover it. 
00:37:45.215 [2024-12-06 01:15:17.603411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.215 [2024-12-06 01:15:17.603469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.215 qpair failed and we were unable to recover it. 00:37:45.215 [2024-12-06 01:15:17.603585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.215 [2024-12-06 01:15:17.603622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.215 qpair failed and we were unable to recover it. 00:37:45.215 [2024-12-06 01:15:17.603774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.215 [2024-12-06 01:15:17.603813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.215 qpair failed and we were unable to recover it. 00:37:45.215 [2024-12-06 01:15:17.603948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.215 [2024-12-06 01:15:17.603981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.215 qpair failed and we were unable to recover it. 00:37:45.215 [2024-12-06 01:15:17.604084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.216 [2024-12-06 01:15:17.604118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.216 qpair failed and we were unable to recover it. 00:37:45.216 [2024-12-06 01:15:17.604278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.216 [2024-12-06 01:15:17.604316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.216 qpair failed and we were unable to recover it. 00:37:45.216 [2024-12-06 01:15:17.604485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.216 [2024-12-06 01:15:17.604522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.216 qpair failed and we were unable to recover it. 00:37:45.216 [2024-12-06 01:15:17.604689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.216 [2024-12-06 01:15:17.604722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.216 qpair failed and we were unable to recover it. 00:37:45.216 [2024-12-06 01:15:17.604842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.216 [2024-12-06 01:15:17.604876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.216 qpair failed and we were unable to recover it. 00:37:45.216 [2024-12-06 01:15:17.605019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.216 [2024-12-06 01:15:17.605053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.216 qpair failed and we were unable to recover it. 
00:37:45.216 [2024-12-06 01:15:17.605233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.216 [2024-12-06 01:15:17.605271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.216 qpair failed and we were unable to recover it.
[... the same three-line sequence (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error; "qpair failed and we were unable to recover it.") repeats continuously from 01:15:17.605 through 01:15:17.644, cycling over tqpairs 0x6150001f2f00, 0x615000210000, 0x6150001ffe80 and 0x61500021ff00, always with addr=10.0.0.2, port=4420 ...]
00:37:45.219 [2024-12-06 01:15:17.644144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.219 [2024-12-06 01:15:17.644181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:45.219 qpair failed and we were unable to recover it.
00:37:45.219 [2024-12-06 01:15:17.644296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.220 [2024-12-06 01:15:17.644332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.220 qpair failed and we were unable to recover it. 00:37:45.220 [2024-12-06 01:15:17.644468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.220 [2024-12-06 01:15:17.644503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.220 qpair failed and we were unable to recover it. 00:37:45.220 [2024-12-06 01:15:17.644611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.220 [2024-12-06 01:15:17.644652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.220 qpair failed and we were unable to recover it. 00:37:45.220 [2024-12-06 01:15:17.644826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.220 [2024-12-06 01:15:17.644876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.220 qpair failed and we were unable to recover it. 00:37:45.220 [2024-12-06 01:15:17.645052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.220 [2024-12-06 01:15:17.645089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.220 qpair failed and we were unable to recover it. 00:37:45.220 [2024-12-06 01:15:17.645221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.220 [2024-12-06 01:15:17.645269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.220 qpair failed and we were unable to recover it. 00:37:45.220 [2024-12-06 01:15:17.645469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.220 [2024-12-06 01:15:17.645533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.220 qpair failed and we were unable to recover it. 00:37:45.220 [2024-12-06 01:15:17.645682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.220 [2024-12-06 01:15:17.645721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.220 qpair failed and we were unable to recover it. 00:37:45.220 [2024-12-06 01:15:17.645900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.220 [2024-12-06 01:15:17.645934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.220 qpair failed and we were unable to recover it. 00:37:45.220 [2024-12-06 01:15:17.646067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.220 [2024-12-06 01:15:17.646119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.220 qpair failed and we were unable to recover it. 
00:37:45.220 [2024-12-06 01:15:17.646310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.220 [2024-12-06 01:15:17.646344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.220 qpair failed and we were unable to recover it. 00:37:45.220 [2024-12-06 01:15:17.646495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.220 [2024-12-06 01:15:17.646530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.220 qpair failed and we were unable to recover it. 00:37:45.220 [2024-12-06 01:15:17.646676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.220 [2024-12-06 01:15:17.646711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.220 qpair failed and we were unable to recover it. 00:37:45.220 [2024-12-06 01:15:17.646849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.220 [2024-12-06 01:15:17.646898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.220 qpair failed and we were unable to recover it. 00:37:45.220 [2024-12-06 01:15:17.647057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.220 [2024-12-06 01:15:17.647109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.220 qpair failed and we were unable to recover it. 00:37:45.220 [2024-12-06 01:15:17.647252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.220 [2024-12-06 01:15:17.647289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.220 qpair failed and we were unable to recover it. 00:37:45.220 [2024-12-06 01:15:17.647414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.220 [2024-12-06 01:15:17.647450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.220 qpair failed and we were unable to recover it. 00:37:45.220 [2024-12-06 01:15:17.647611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.220 [2024-12-06 01:15:17.647645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.220 qpair failed and we were unable to recover it. 00:37:45.220 [2024-12-06 01:15:17.647784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.220 [2024-12-06 01:15:17.647832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.220 qpair failed and we were unable to recover it. 00:37:45.220 [2024-12-06 01:15:17.647995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.220 [2024-12-06 01:15:17.648030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.220 qpair failed and we were unable to recover it. 
00:37:45.220 [2024-12-06 01:15:17.648186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.220 [2024-12-06 01:15:17.648223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.220 qpair failed and we were unable to recover it. 00:37:45.220 [2024-12-06 01:15:17.648367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.220 [2024-12-06 01:15:17.648402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.220 qpair failed and we were unable to recover it. 00:37:45.220 [2024-12-06 01:15:17.648539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.220 [2024-12-06 01:15:17.648574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.220 qpair failed and we were unable to recover it. 00:37:45.220 [2024-12-06 01:15:17.648733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.220 [2024-12-06 01:15:17.648767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.220 qpair failed and we were unable to recover it. 00:37:45.220 [2024-12-06 01:15:17.648901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.220 [2024-12-06 01:15:17.648935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.220 qpair failed and we were unable to recover it. 00:37:45.220 [2024-12-06 01:15:17.649034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.220 [2024-12-06 01:15:17.649067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.220 qpair failed and we were unable to recover it. 00:37:45.220 [2024-12-06 01:15:17.649207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.220 [2024-12-06 01:15:17.649241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.220 qpair failed and we were unable to recover it. 00:37:45.220 [2024-12-06 01:15:17.649347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.220 [2024-12-06 01:15:17.649381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.220 qpair failed and we were unable to recover it. 00:37:45.220 [2024-12-06 01:15:17.649486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.220 [2024-12-06 01:15:17.649519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.220 qpair failed and we were unable to recover it. 00:37:45.220 [2024-12-06 01:15:17.649657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.220 [2024-12-06 01:15:17.649691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.220 qpair failed and we were unable to recover it. 
00:37:45.220 [2024-12-06 01:15:17.649852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.220 [2024-12-06 01:15:17.649901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.220 qpair failed and we were unable to recover it. 00:37:45.220 [2024-12-06 01:15:17.650016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.220 [2024-12-06 01:15:17.650053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.220 qpair failed and we were unable to recover it. 00:37:45.220 [2024-12-06 01:15:17.650182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.220 [2024-12-06 01:15:17.650217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.220 qpair failed and we were unable to recover it. 00:37:45.220 [2024-12-06 01:15:17.650383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.220 [2024-12-06 01:15:17.650418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.220 qpair failed and we were unable to recover it. 00:37:45.220 [2024-12-06 01:15:17.650633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.220 [2024-12-06 01:15:17.650667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.220 qpair failed and we were unable to recover it. 00:37:45.220 [2024-12-06 01:15:17.650810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.220 [2024-12-06 01:15:17.650851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.220 qpair failed and we were unable to recover it. 00:37:45.220 [2024-12-06 01:15:17.650992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.220 [2024-12-06 01:15:17.651026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.220 qpair failed and we were unable to recover it. 00:37:45.220 [2024-12-06 01:15:17.651181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.220 [2024-12-06 01:15:17.651230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.220 qpair failed and we were unable to recover it. 00:37:45.220 [2024-12-06 01:15:17.651348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.220 [2024-12-06 01:15:17.651385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.220 qpair failed and we were unable to recover it. 00:37:45.220 [2024-12-06 01:15:17.651543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.220 [2024-12-06 01:15:17.651592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.220 qpair failed and we were unable to recover it. 
00:37:45.220 [2024-12-06 01:15:17.651762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.220 [2024-12-06 01:15:17.651802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.220 qpair failed and we were unable to recover it. 00:37:45.220 [2024-12-06 01:15:17.651951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.220 [2024-12-06 01:15:17.651986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.220 qpair failed and we were unable to recover it. 00:37:45.220 [2024-12-06 01:15:17.652101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.220 [2024-12-06 01:15:17.652160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.220 qpair failed and we were unable to recover it. 00:37:45.220 [2024-12-06 01:15:17.652294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.220 [2024-12-06 01:15:17.652347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.220 qpair failed and we were unable to recover it. 00:37:45.220 [2024-12-06 01:15:17.652496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.220 [2024-12-06 01:15:17.652534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.220 qpair failed and we were unable to recover it. 00:37:45.220 [2024-12-06 01:15:17.652684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.220 [2024-12-06 01:15:17.652726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.220 qpair failed and we were unable to recover it. 00:37:45.220 [2024-12-06 01:15:17.652881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.220 [2024-12-06 01:15:17.652918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.220 qpair failed and we were unable to recover it. 00:37:45.220 [2024-12-06 01:15:17.653068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.220 [2024-12-06 01:15:17.653116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.220 qpair failed and we were unable to recover it. 00:37:45.220 [2024-12-06 01:15:17.653288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.220 [2024-12-06 01:15:17.653328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.220 qpair failed and we were unable to recover it. 00:37:45.220 [2024-12-06 01:15:17.653532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.220 [2024-12-06 01:15:17.653570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.220 qpair failed and we were unable to recover it. 
00:37:45.220 [2024-12-06 01:15:17.653734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.220 [2024-12-06 01:15:17.653768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.220 qpair failed and we were unable to recover it. 00:37:45.220 [2024-12-06 01:15:17.653912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.220 [2024-12-06 01:15:17.653947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.220 qpair failed and we were unable to recover it. 00:37:45.220 [2024-12-06 01:15:17.654102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.220 [2024-12-06 01:15:17.654151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.220 qpair failed and we were unable to recover it. 00:37:45.220 [2024-12-06 01:15:17.654301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.220 [2024-12-06 01:15:17.654337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.220 qpair failed and we were unable to recover it. 00:37:45.220 [2024-12-06 01:15:17.654479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.221 [2024-12-06 01:15:17.654514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.221 qpair failed and we were unable to recover it. 00:37:45.221 [2024-12-06 01:15:17.654649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.221 [2024-12-06 01:15:17.654683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.221 qpair failed and we were unable to recover it. 00:37:45.221 [2024-12-06 01:15:17.654858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.221 [2024-12-06 01:15:17.654894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.221 qpair failed and we were unable to recover it. 00:37:45.221 [2024-12-06 01:15:17.655050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.221 [2024-12-06 01:15:17.655107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.221 qpair failed and we were unable to recover it. 00:37:45.221 [2024-12-06 01:15:17.655223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.221 [2024-12-06 01:15:17.655259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.221 qpair failed and we were unable to recover it. 00:37:45.221 [2024-12-06 01:15:17.655400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.221 [2024-12-06 01:15:17.655433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.221 qpair failed and we were unable to recover it. 
00:37:45.221 [2024-12-06 01:15:17.655538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.221 [2024-12-06 01:15:17.655572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.221 qpair failed and we were unable to recover it. 00:37:45.221 [2024-12-06 01:15:17.655686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.221 [2024-12-06 01:15:17.655719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.221 qpair failed and we were unable to recover it. 00:37:45.221 [2024-12-06 01:15:17.655858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.221 [2024-12-06 01:15:17.655892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.221 qpair failed and we were unable to recover it. 00:37:45.221 [2024-12-06 01:15:17.656024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.221 [2024-12-06 01:15:17.656058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.221 qpair failed and we were unable to recover it. 00:37:45.221 [2024-12-06 01:15:17.656193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.221 [2024-12-06 01:15:17.656227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.221 qpair failed and we were unable to recover it. 00:37:45.221 [2024-12-06 01:15:17.656326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.221 [2024-12-06 01:15:17.656360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.221 qpair failed and we were unable to recover it. 00:37:45.221 [2024-12-06 01:15:17.656483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.221 [2024-12-06 01:15:17.656517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.221 qpair failed and we were unable to recover it. 00:37:45.221 [2024-12-06 01:15:17.656646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.221 [2024-12-06 01:15:17.656681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.221 qpair failed and we were unable to recover it. 00:37:45.221 [2024-12-06 01:15:17.656811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.221 [2024-12-06 01:15:17.656856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.221 qpair failed and we were unable to recover it. 00:37:45.221 [2024-12-06 01:15:17.656995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.221 [2024-12-06 01:15:17.657032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.221 qpair failed and we were unable to recover it. 
00:37:45.221 [2024-12-06 01:15:17.657192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.221 [2024-12-06 01:15:17.657240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.221 qpair failed and we were unable to recover it. 00:37:45.221 [2024-12-06 01:15:17.657358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.221 [2024-12-06 01:15:17.657394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.221 qpair failed and we were unable to recover it. 00:37:45.221 [2024-12-06 01:15:17.657534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.221 [2024-12-06 01:15:17.657570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.221 qpair failed and we were unable to recover it. 00:37:45.221 [2024-12-06 01:15:17.657712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.221 [2024-12-06 01:15:17.657746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.221 qpair failed and we were unable to recover it. 00:37:45.221 [2024-12-06 01:15:17.657915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.221 [2024-12-06 01:15:17.657964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.221 qpair failed and we were unable to recover it. 00:37:45.221 [2024-12-06 01:15:17.658079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.221 [2024-12-06 01:15:17.658121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.221 qpair failed and we were unable to recover it. 00:37:45.221 [2024-12-06 01:15:17.658256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.221 [2024-12-06 01:15:17.658289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.221 qpair failed and we were unable to recover it. 00:37:45.221 [2024-12-06 01:15:17.658397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.221 [2024-12-06 01:15:17.658432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.221 qpair failed and we were unable to recover it. 00:37:45.221 [2024-12-06 01:15:17.658535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.221 [2024-12-06 01:15:17.658569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.221 qpair failed and we were unable to recover it. 00:37:45.221 [2024-12-06 01:15:17.658711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.221 [2024-12-06 01:15:17.658747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.221 qpair failed and we were unable to recover it. 
00:37:45.221 [2024-12-06 01:15:17.658899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.221 [2024-12-06 01:15:17.658934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.221 qpair failed and we were unable to recover it. 00:37:45.221 [2024-12-06 01:15:17.659041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.221 [2024-12-06 01:15:17.659075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.221 qpair failed and we were unable to recover it. 00:37:45.221 [2024-12-06 01:15:17.659220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.221 [2024-12-06 01:15:17.659260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.221 qpair failed and we were unable to recover it. 00:37:45.221 [2024-12-06 01:15:17.659390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.221 [2024-12-06 01:15:17.659424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.221 qpair failed and we were unable to recover it. 00:37:45.221 [2024-12-06 01:15:17.659587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.221 [2024-12-06 01:15:17.659620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.221 qpair failed and we were unable to recover it. 00:37:45.221 [2024-12-06 01:15:17.659720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.221 [2024-12-06 01:15:17.659755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.221 qpair failed and we were unable to recover it. 00:37:45.221 [2024-12-06 01:15:17.659918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.221 [2024-12-06 01:15:17.659967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.221 qpair failed and we were unable to recover it. 00:37:45.221 [2024-12-06 01:15:17.660120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.221 [2024-12-06 01:15:17.660174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.221 qpair failed and we were unable to recover it. 00:37:45.221 [2024-12-06 01:15:17.660384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.221 [2024-12-06 01:15:17.660425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.221 qpair failed and we were unable to recover it. 00:37:45.221 [2024-12-06 01:15:17.660668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.221 [2024-12-06 01:15:17.660730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.221 qpair failed and we were unable to recover it. 
00:37:45.221 [2024-12-06 01:15:17.660956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.221 [2024-12-06 01:15:17.661002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.221 qpair failed and we were unable to recover it. 00:37:45.221 [2024-12-06 01:15:17.661177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.221 [2024-12-06 01:15:17.661212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.221 qpair failed and we were unable to recover it. 00:37:45.221 [2024-12-06 01:15:17.661342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.221 [2024-12-06 01:15:17.661381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.221 qpair failed and we were unable to recover it. 00:37:45.221 [2024-12-06 01:15:17.661569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.221 [2024-12-06 01:15:17.661628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.221 qpair failed and we were unable to recover it. 00:37:45.221 [2024-12-06 01:15:17.661752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.221 [2024-12-06 01:15:17.661787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.221 qpair failed and we were unable to recover it. 00:37:45.221 [2024-12-06 01:15:17.661947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.221 [2024-12-06 01:15:17.661982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.221 qpair failed and we were unable to recover it. 00:37:45.221 [2024-12-06 01:15:17.662141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.221 [2024-12-06 01:15:17.662179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.221 qpair failed and we were unable to recover it. 00:37:45.221 [2024-12-06 01:15:17.662386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.221 [2024-12-06 01:15:17.662425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.221 qpair failed and we were unable to recover it. 00:37:45.221 [2024-12-06 01:15:17.662539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.221 [2024-12-06 01:15:17.662579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.221 qpair failed and we were unable to recover it. 00:37:45.221 [2024-12-06 01:15:17.662742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.221 [2024-12-06 01:15:17.662795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.221 qpair failed and we were unable to recover it. 
00:37:45.221 [2024-12-06 01:15:17.662967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.221 [2024-12-06 01:15:17.663015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.221 qpair failed and we were unable to recover it. 00:37:45.221 [2024-12-06 01:15:17.663185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.221 [2024-12-06 01:15:17.663232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.221 qpair failed and we were unable to recover it. 00:37:45.221 [2024-12-06 01:15:17.663399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.221 [2024-12-06 01:15:17.663460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.221 qpair failed and we were unable to recover it. 00:37:45.221 [2024-12-06 01:15:17.663723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.222 [2024-12-06 01:15:17.663783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.222 qpair failed and we were unable to recover it. 00:37:45.222 [2024-12-06 01:15:17.663937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.222 [2024-12-06 01:15:17.663972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.222 qpair failed and we were unable to recover it. 00:37:45.222 [2024-12-06 01:15:17.664108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.222 [2024-12-06 01:15:17.664160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.222 qpair failed and we were unable to recover it. 00:37:45.222 [2024-12-06 01:15:17.664361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.222 [2024-12-06 01:15:17.664426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.222 qpair failed and we were unable to recover it. 00:37:45.222 [2024-12-06 01:15:17.664614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.222 [2024-12-06 01:15:17.664704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.222 qpair failed and we were unable to recover it. 00:37:45.222 [2024-12-06 01:15:17.664838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.222 [2024-12-06 01:15:17.664889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.222 qpair failed and we were unable to recover it. 00:37:45.222 [2024-12-06 01:15:17.665060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.222 [2024-12-06 01:15:17.665114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.222 qpair failed and we were unable to recover it. 
00:37:45.222 [2024-12-06 01:15:17.665322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.222 [2024-12-06 01:15:17.665362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.222 qpair failed and we were unable to recover it. 00:37:45.222 [2024-12-06 01:15:17.665574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.222 [2024-12-06 01:15:17.665609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.222 qpair failed and we were unable to recover it. 00:37:45.222 [2024-12-06 01:15:17.665808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.222 [2024-12-06 01:15:17.665869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.222 qpair failed and we were unable to recover it. 00:37:45.222 [2024-12-06 01:15:17.665998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.222 [2024-12-06 01:15:17.666033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.222 qpair failed and we were unable to recover it. 00:37:45.222 [2024-12-06 01:15:17.666146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.222 [2024-12-06 01:15:17.666181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.222 qpair failed and we were unable to recover it. 00:37:45.222 [2024-12-06 01:15:17.666318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.222 [2024-12-06 01:15:17.666353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.222 qpair failed and we were unable to recover it. 00:37:45.222 [2024-12-06 01:15:17.666504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.222 [2024-12-06 01:15:17.666552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.222 qpair failed and we were unable to recover it. 00:37:45.222 [2024-12-06 01:15:17.666677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.222 [2024-12-06 01:15:17.666714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.222 qpair failed and we were unable to recover it. 00:37:45.222 [2024-12-06 01:15:17.666856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.222 [2024-12-06 01:15:17.666891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.222 qpair failed and we were unable to recover it. 00:37:45.222 [2024-12-06 01:15:17.667055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.222 [2024-12-06 01:15:17.667088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.222 qpair failed and we were unable to recover it. 
00:37:45.222 [2024-12-06 01:15:17.667227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.222 [2024-12-06 01:15:17.667262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.222 qpair failed and we were unable to recover it. 00:37:45.222 [2024-12-06 01:15:17.667489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.222 [2024-12-06 01:15:17.667525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.222 qpair failed and we were unable to recover it. 00:37:45.222 [2024-12-06 01:15:17.667691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.222 [2024-12-06 01:15:17.667730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.222 qpair failed and we were unable to recover it. 00:37:45.222 [2024-12-06 01:15:17.667850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.222 [2024-12-06 01:15:17.667885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.222 qpair failed and we were unable to recover it. 00:37:45.222 [2024-12-06 01:15:17.668043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.222 [2024-12-06 01:15:17.668077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.222 qpair failed and we were unable to recover it. 00:37:45.222 [2024-12-06 01:15:17.668197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.222 [2024-12-06 01:15:17.668233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.222 qpair failed and we were unable to recover it. 00:37:45.222 [2024-12-06 01:15:17.668366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.222 [2024-12-06 01:15:17.668401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.222 qpair failed and we were unable to recover it. 00:37:45.222 [2024-12-06 01:15:17.668562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.222 [2024-12-06 01:15:17.668597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.222 qpair failed and we were unable to recover it. 00:37:45.222 [2024-12-06 01:15:17.668747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.222 [2024-12-06 01:15:17.668794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.222 qpair failed and we were unable to recover it. 00:37:45.222 [2024-12-06 01:15:17.668961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.222 [2024-12-06 01:15:17.669009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.222 qpair failed and we were unable to recover it. 
00:37:45.222 [2024-12-06 01:15:17.669167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.222 [2024-12-06 01:15:17.669202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.222 qpair failed and we were unable to recover it.
00:37:45.222 [2024-12-06 01:15:17.669304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.222 [2024-12-06 01:15:17.669338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.222 qpair failed and we were unable to recover it.
[... the same three-line pattern — posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error; "qpair failed and we were unable to recover it." — repeats continuously from 01:15:17.669 through 01:15:17.704 (console timestamps 00:37:45.222-00:37:45.226), cycling through tqpair handles 0x615000210000, 0x6150001ffe80, 0x6150001f2f00, and 0x61500021ff00, every attempt targeting addr=10.0.0.2, port=4420 ...]
00:37:45.226 [2024-12-06 01:15:17.704239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.226 [2024-12-06 01:15:17.704284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.226 qpair failed and we were unable to recover it. 00:37:45.226 [2024-12-06 01:15:17.704400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.226 [2024-12-06 01:15:17.704437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.226 qpair failed and we were unable to recover it. 00:37:45.226 [2024-12-06 01:15:17.704605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.226 [2024-12-06 01:15:17.704638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.226 qpair failed and we were unable to recover it. 00:37:45.226 [2024-12-06 01:15:17.704780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.226 [2024-12-06 01:15:17.704819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.226 qpair failed and we were unable to recover it. 00:37:45.226 [2024-12-06 01:15:17.704972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.226 [2024-12-06 01:15:17.705029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.226 qpair failed and we were unable to recover it. 00:37:45.226 [2024-12-06 01:15:17.705173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.226 [2024-12-06 01:15:17.705226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.226 qpair failed and we were unable to recover it. 00:37:45.226 [2024-12-06 01:15:17.705360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.226 [2024-12-06 01:15:17.705394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.226 qpair failed and we were unable to recover it. 00:37:45.226 [2024-12-06 01:15:17.705507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.226 [2024-12-06 01:15:17.705542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.226 qpair failed and we were unable to recover it. 00:37:45.226 [2024-12-06 01:15:17.705685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.226 [2024-12-06 01:15:17.705721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.226 qpair failed and we were unable to recover it. 00:37:45.226 [2024-12-06 01:15:17.705836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.226 [2024-12-06 01:15:17.705879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.226 qpair failed and we were unable to recover it. 
00:37:45.226 [2024-12-06 01:15:17.705986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.226 [2024-12-06 01:15:17.706019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.226 qpair failed and we were unable to recover it. 00:37:45.226 [2024-12-06 01:15:17.706147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.226 [2024-12-06 01:15:17.706184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.226 qpair failed and we were unable to recover it. 00:37:45.226 [2024-12-06 01:15:17.706325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.226 [2024-12-06 01:15:17.706358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.226 qpair failed and we were unable to recover it. 00:37:45.226 [2024-12-06 01:15:17.706570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.226 [2024-12-06 01:15:17.706605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.226 qpair failed and we were unable to recover it. 00:37:45.226 [2024-12-06 01:15:17.706722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.226 [2024-12-06 01:15:17.706755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.226 qpair failed and we were unable to recover it. 00:37:45.226 [2024-12-06 01:15:17.706897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.226 [2024-12-06 01:15:17.706931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.226 qpair failed and we were unable to recover it. 00:37:45.226 [2024-12-06 01:15:17.707038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.226 [2024-12-06 01:15:17.707071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.226 qpair failed and we were unable to recover it. 00:37:45.226 [2024-12-06 01:15:17.707219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.226 [2024-12-06 01:15:17.707252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.226 qpair failed and we were unable to recover it. 00:37:45.226 [2024-12-06 01:15:17.707418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.226 [2024-12-06 01:15:17.707452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.226 qpair failed and we were unable to recover it. 00:37:45.226 [2024-12-06 01:15:17.707558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.226 [2024-12-06 01:15:17.707596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.226 qpair failed and we were unable to recover it. 
00:37:45.226 [2024-12-06 01:15:17.707733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.226 [2024-12-06 01:15:17.707768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.226 qpair failed and we were unable to recover it. 00:37:45.226 [2024-12-06 01:15:17.707909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.226 [2024-12-06 01:15:17.707959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.226 qpair failed and we were unable to recover it. 00:37:45.226 [2024-12-06 01:15:17.708104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.226 [2024-12-06 01:15:17.708152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.226 qpair failed and we were unable to recover it. 00:37:45.226 [2024-12-06 01:15:17.708274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.226 [2024-12-06 01:15:17.708311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.226 qpair failed and we were unable to recover it. 00:37:45.226 [2024-12-06 01:15:17.708449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.226 [2024-12-06 01:15:17.708483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.226 qpair failed and we were unable to recover it. 00:37:45.226 [2024-12-06 01:15:17.708628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.226 [2024-12-06 01:15:17.708663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.226 qpair failed and we were unable to recover it. 00:37:45.226 [2024-12-06 01:15:17.708780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.227 [2024-12-06 01:15:17.708833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.227 qpair failed and we were unable to recover it. 00:37:45.227 [2024-12-06 01:15:17.708976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.227 [2024-12-06 01:15:17.709013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.227 qpair failed and we were unable to recover it. 00:37:45.227 [2024-12-06 01:15:17.709132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.227 [2024-12-06 01:15:17.709169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.227 qpair failed and we were unable to recover it. 00:37:45.227 [2024-12-06 01:15:17.709297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.227 [2024-12-06 01:15:17.709335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.227 qpair failed and we were unable to recover it. 
00:37:45.227 [2024-12-06 01:15:17.709493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.227 [2024-12-06 01:15:17.709552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.227 qpair failed and we were unable to recover it. 00:37:45.227 [2024-12-06 01:15:17.709690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.227 [2024-12-06 01:15:17.709724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.227 qpair failed and we were unable to recover it. 00:37:45.227 [2024-12-06 01:15:17.709860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.227 [2024-12-06 01:15:17.709894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.227 qpair failed and we were unable to recover it. 00:37:45.227 [2024-12-06 01:15:17.710000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.227 [2024-12-06 01:15:17.710033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.227 qpair failed and we were unable to recover it. 00:37:45.227 [2024-12-06 01:15:17.710172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.227 [2024-12-06 01:15:17.710206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.227 qpair failed and we were unable to recover it. 00:37:45.227 [2024-12-06 01:15:17.710348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.227 [2024-12-06 01:15:17.710381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.227 qpair failed and we were unable to recover it. 00:37:45.227 [2024-12-06 01:15:17.710500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.227 [2024-12-06 01:15:17.710533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.227 qpair failed and we were unable to recover it. 00:37:45.227 [2024-12-06 01:15:17.710660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.227 [2024-12-06 01:15:17.710710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.227 qpair failed and we were unable to recover it. 00:37:45.227 [2024-12-06 01:15:17.710849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.227 [2024-12-06 01:15:17.710919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.227 qpair failed and we were unable to recover it. 00:37:45.227 [2024-12-06 01:15:17.711090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.227 [2024-12-06 01:15:17.711144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.227 qpair failed and we were unable to recover it. 
00:37:45.227 [2024-12-06 01:15:17.711283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.227 [2024-12-06 01:15:17.711345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.227 qpair failed and we were unable to recover it. 00:37:45.227 [2024-12-06 01:15:17.711477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.227 [2024-12-06 01:15:17.711511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.227 qpair failed and we were unable to recover it. 00:37:45.227 [2024-12-06 01:15:17.711624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.227 [2024-12-06 01:15:17.711657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.227 qpair failed and we were unable to recover it. 00:37:45.227 [2024-12-06 01:15:17.711765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.227 [2024-12-06 01:15:17.711808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.227 qpair failed and we were unable to recover it. 00:37:45.227 [2024-12-06 01:15:17.711916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.227 [2024-12-06 01:15:17.711966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.227 qpair failed and we were unable to recover it. 00:37:45.227 [2024-12-06 01:15:17.712088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.227 [2024-12-06 01:15:17.712133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.227 qpair failed and we were unable to recover it. 00:37:45.227 [2024-12-06 01:15:17.712282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.227 [2024-12-06 01:15:17.712318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.227 qpair failed and we were unable to recover it. 00:37:45.227 [2024-12-06 01:15:17.712498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.227 [2024-12-06 01:15:17.712536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.227 qpair failed and we were unable to recover it. 00:37:45.227 [2024-12-06 01:15:17.712700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.227 [2024-12-06 01:15:17.712753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.227 qpair failed and we were unable to recover it. 00:37:45.227 [2024-12-06 01:15:17.712935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.227 [2024-12-06 01:15:17.712994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.227 qpair failed and we were unable to recover it. 
00:37:45.227 [2024-12-06 01:15:17.713163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.227 [2024-12-06 01:15:17.713217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.227 qpair failed and we were unable to recover it. 00:37:45.227 [2024-12-06 01:15:17.713407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.227 [2024-12-06 01:15:17.713470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.227 qpair failed and we were unable to recover it. 00:37:45.227 [2024-12-06 01:15:17.713627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.227 [2024-12-06 01:15:17.713662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.227 qpair failed and we were unable to recover it. 00:37:45.227 [2024-12-06 01:15:17.713775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.227 [2024-12-06 01:15:17.713809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.227 qpair failed and we were unable to recover it. 00:37:45.227 [2024-12-06 01:15:17.713934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.227 [2024-12-06 01:15:17.713970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.227 qpair failed and we were unable to recover it. 00:37:45.227 [2024-12-06 01:15:17.714112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.227 [2024-12-06 01:15:17.714166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.227 qpair failed and we were unable to recover it. 00:37:45.227 [2024-12-06 01:15:17.714404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.227 [2024-12-06 01:15:17.714466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.227 qpair failed and we were unable to recover it. 00:37:45.227 [2024-12-06 01:15:17.714604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.227 [2024-12-06 01:15:17.714669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.227 qpair failed and we were unable to recover it. 00:37:45.227 [2024-12-06 01:15:17.714780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.227 [2024-12-06 01:15:17.714822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.227 qpair failed and we were unable to recover it. 00:37:45.227 [2024-12-06 01:15:17.714940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.227 [2024-12-06 01:15:17.714975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.227 qpair failed and we were unable to recover it. 
00:37:45.227 [2024-12-06 01:15:17.715090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.227 [2024-12-06 01:15:17.715128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.227 qpair failed and we were unable to recover it. 00:37:45.227 [2024-12-06 01:15:17.715269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.227 [2024-12-06 01:15:17.715332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.227 qpair failed and we were unable to recover it. 00:37:45.227 [2024-12-06 01:15:17.715493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.227 [2024-12-06 01:15:17.715553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.227 qpair failed and we were unable to recover it. 00:37:45.227 [2024-12-06 01:15:17.715698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.227 [2024-12-06 01:15:17.715733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.227 qpair failed and we were unable to recover it. 00:37:45.227 [2024-12-06 01:15:17.715858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.227 [2024-12-06 01:15:17.715895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.227 qpair failed and we were unable to recover it. 00:37:45.227 [2024-12-06 01:15:17.716020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.227 [2024-12-06 01:15:17.716059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.227 qpair failed and we were unable to recover it. 00:37:45.227 [2024-12-06 01:15:17.716219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.227 [2024-12-06 01:15:17.716257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.227 qpair failed and we were unable to recover it. 00:37:45.227 [2024-12-06 01:15:17.716375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.227 [2024-12-06 01:15:17.716413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.227 qpair failed and we were unable to recover it. 00:37:45.227 [2024-12-06 01:15:17.716562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.227 [2024-12-06 01:15:17.716597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.227 qpair failed and we were unable to recover it. 00:37:45.227 [2024-12-06 01:15:17.716732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.227 [2024-12-06 01:15:17.716769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.227 qpair failed and we were unable to recover it. 
00:37:45.227 [2024-12-06 01:15:17.716928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.227 [2024-12-06 01:15:17.716981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.227 qpair failed and we were unable to recover it. 00:37:45.227 [2024-12-06 01:15:17.717132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.227 [2024-12-06 01:15:17.717172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.227 qpair failed and we were unable to recover it. 00:37:45.227 [2024-12-06 01:15:17.717296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.227 [2024-12-06 01:15:17.717330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.227 qpair failed and we were unable to recover it. 00:37:45.227 [2024-12-06 01:15:17.717486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.227 [2024-12-06 01:15:17.717521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.227 qpair failed and we were unable to recover it. 00:37:45.227 [2024-12-06 01:15:17.717647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.227 [2024-12-06 01:15:17.717681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.227 qpair failed and we were unable to recover it. 00:37:45.227 [2024-12-06 01:15:17.717821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.227 [2024-12-06 01:15:17.717887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.227 qpair failed and we were unable to recover it. 00:37:45.227 [2024-12-06 01:15:17.718011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.227 [2024-12-06 01:15:17.718050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.227 qpair failed and we were unable to recover it. 00:37:45.227 [2024-12-06 01:15:17.718237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.227 [2024-12-06 01:15:17.718276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.227 qpair failed and we were unable to recover it. 00:37:45.227 [2024-12-06 01:15:17.718386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.227 [2024-12-06 01:15:17.718425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.227 qpair failed and we were unable to recover it. 00:37:45.227 [2024-12-06 01:15:17.718557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.227 [2024-12-06 01:15:17.718611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.227 qpair failed and we were unable to recover it. 
00:37:45.227 [2024-12-06 01:15:17.718803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.227 [2024-12-06 01:15:17.718867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.227 qpair failed and we were unable to recover it. 00:37:45.227 [2024-12-06 01:15:17.719024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.228 [2024-12-06 01:15:17.719077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.228 qpair failed and we were unable to recover it. 00:37:45.228 [2024-12-06 01:15:17.719298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.228 [2024-12-06 01:15:17.719359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.228 qpair failed and we were unable to recover it. 00:37:45.228 [2024-12-06 01:15:17.719472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.228 [2024-12-06 01:15:17.719510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.228 qpair failed and we were unable to recover it. 00:37:45.228 [2024-12-06 01:15:17.719669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.228 [2024-12-06 01:15:17.719702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.228 qpair failed and we were unable to recover it. 00:37:45.228 [2024-12-06 01:15:17.719806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.228 [2024-12-06 01:15:17.719848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.228 qpair failed and we were unable to recover it. 00:37:45.228 [2024-12-06 01:15:17.719974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.228 [2024-12-06 01:15:17.720025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.228 qpair failed and we were unable to recover it. 00:37:45.228 [2024-12-06 01:15:17.720148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.228 [2024-12-06 01:15:17.720184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.228 qpair failed and we were unable to recover it. 00:37:45.228 [2024-12-06 01:15:17.720301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.228 [2024-12-06 01:15:17.720338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.228 qpair failed and we were unable to recover it. 00:37:45.228 [2024-12-06 01:15:17.720448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.228 [2024-12-06 01:15:17.720505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.228 qpair failed and we were unable to recover it. 
00:37:45.228 [2024-12-06 01:15:17.720624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.228 [2024-12-06 01:15:17.720663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.228 qpair failed and we were unable to recover it. 00:37:45.228 [2024-12-06 01:15:17.720829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.228 [2024-12-06 01:15:17.720900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.228 qpair failed and we were unable to recover it. 00:37:45.228 [2024-12-06 01:15:17.721059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.228 [2024-12-06 01:15:17.721100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.228 qpair failed and we were unable to recover it. 00:37:45.228 [2024-12-06 01:15:17.721245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.228 [2024-12-06 01:15:17.721284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.228 qpair failed and we were unable to recover it. 00:37:45.228 [2024-12-06 01:15:17.721408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.228 [2024-12-06 01:15:17.721447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.228 qpair failed and we were unable to recover it. 00:37:45.228 [2024-12-06 01:15:17.721578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.228 [2024-12-06 01:15:17.721632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.228 qpair failed and we were unable to recover it. 00:37:45.228 [2024-12-06 01:15:17.721756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.228 [2024-12-06 01:15:17.721804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.228 qpair failed and we were unable to recover it. 00:37:45.228 [2024-12-06 01:15:17.721982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.228 [2024-12-06 01:15:17.722038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.228 qpair failed and we were unable to recover it. 00:37:45.228 [2024-12-06 01:15:17.722168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.228 [2024-12-06 01:15:17.722222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.228 qpair failed and we were unable to recover it. 00:37:45.228 [2024-12-06 01:15:17.722368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.228 [2024-12-06 01:15:17.722402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.228 qpair failed and we were unable to recover it. 
00:37:45.228 [2024-12-06 01:15:17.722538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.228 [2024-12-06 01:15:17.722571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.228 qpair failed and we were unable to recover it. 00:37:45.228 [2024-12-06 01:15:17.722684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.228 [2024-12-06 01:15:17.722719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.228 qpair failed and we were unable to recover it. 00:37:45.228 [2024-12-06 01:15:17.722880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.228 [2024-12-06 01:15:17.722929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.228 qpair failed and we were unable to recover it. 00:37:45.228 [2024-12-06 01:15:17.723086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.228 [2024-12-06 01:15:17.723149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.228 qpair failed and we were unable to recover it. 00:37:45.228 [2024-12-06 01:15:17.723337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.228 [2024-12-06 01:15:17.723393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.228 qpair failed and we were unable to recover it. 00:37:45.228 [2024-12-06 01:15:17.723520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.228 [2024-12-06 01:15:17.723555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.228 qpair failed and we were unable to recover it. 00:37:45.228 [2024-12-06 01:15:17.723662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.228 [2024-12-06 01:15:17.723696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.228 qpair failed and we were unable to recover it. 00:37:45.228 [2024-12-06 01:15:17.723838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.228 [2024-12-06 01:15:17.723910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.228 qpair failed and we were unable to recover it. 00:37:45.228 [2024-12-06 01:15:17.724072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.228 [2024-12-06 01:15:17.724130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.228 qpair failed and we were unable to recover it. 00:37:45.228 [2024-12-06 01:15:17.724341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.228 [2024-12-06 01:15:17.724399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.228 qpair failed and we were unable to recover it. 
00:37:45.228 [2024-12-06 01:15:17.724531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.228 [2024-12-06 01:15:17.724565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.228 qpair failed and we were unable to recover it. 00:37:45.228 [2024-12-06 01:15:17.724685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.228 [2024-12-06 01:15:17.724722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.228 qpair failed and we were unable to recover it. 00:37:45.228 [2024-12-06 01:15:17.724874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.228 [2024-12-06 01:15:17.724923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.228 qpair failed and we were unable to recover it. 00:37:45.228 [2024-12-06 01:15:17.725092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.228 [2024-12-06 01:15:17.725131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.228 qpair failed and we were unable to recover it. 00:37:45.228 [2024-12-06 01:15:17.725257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.228 [2024-12-06 01:15:17.725295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.228 qpair failed and we were unable to recover it. 00:37:45.228 [2024-12-06 01:15:17.725441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.228 [2024-12-06 01:15:17.725475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.228 qpair failed and we were unable to recover it. 00:37:45.228 [2024-12-06 01:15:17.725624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.228 [2024-12-06 01:15:17.725658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.228 qpair failed and we were unable to recover it. 00:37:45.228 [2024-12-06 01:15:17.725780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.228 [2024-12-06 01:15:17.725835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.228 qpair failed and we were unable to recover it. 00:37:45.228 [2024-12-06 01:15:17.726035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.228 [2024-12-06 01:15:17.726089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.228 qpair failed and we were unable to recover it. 00:37:45.228 [2024-12-06 01:15:17.726203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.228 [2024-12-06 01:15:17.726237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.228 qpair failed and we were unable to recover it. 
00:37:45.228 [2024-12-06 01:15:17.726385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.228 [2024-12-06 01:15:17.726437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.228 qpair failed and we were unable to recover it. 00:37:45.228 [2024-12-06 01:15:17.726562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.228 [2024-12-06 01:15:17.726595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.228 qpair failed and we were unable to recover it. 00:37:45.228 [2024-12-06 01:15:17.726746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.228 [2024-12-06 01:15:17.726803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.228 qpair failed and we were unable to recover it. 00:37:45.228 [2024-12-06 01:15:17.726963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.228 [2024-12-06 01:15:17.727018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.228 qpair failed and we were unable to recover it. 00:37:45.228 [2024-12-06 01:15:17.727193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.228 [2024-12-06 01:15:17.727228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.228 qpair failed and we were unable to recover it. 00:37:45.228 [2024-12-06 01:15:17.727352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.228 [2024-12-06 01:15:17.727389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.228 qpair failed and we were unable to recover it. 00:37:45.228 [2024-12-06 01:15:17.727515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.228 [2024-12-06 01:15:17.727579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.228 qpair failed and we were unable to recover it. 00:37:45.228 [2024-12-06 01:15:17.727705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.228 [2024-12-06 01:15:17.727743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.228 qpair failed and we were unable to recover it. 00:37:45.228 [2024-12-06 01:15:17.727905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.228 [2024-12-06 01:15:17.727960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.228 qpair failed and we were unable to recover it. 00:37:45.228 [2024-12-06 01:15:17.728167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.228 [2024-12-06 01:15:17.728239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.228 qpair failed and we were unable to recover it. 
00:37:45.228 [2024-12-06 01:15:17.728395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.228 [2024-12-06 01:15:17.728462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.228 qpair failed and we were unable to recover it. 00:37:45.228 [2024-12-06 01:15:17.728599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.228 [2024-12-06 01:15:17.728634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.228 qpair failed and we were unable to recover it. 00:37:45.229 [2024-12-06 01:15:17.728759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.229 [2024-12-06 01:15:17.728793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.229 qpair failed and we were unable to recover it. 00:37:45.229 [2024-12-06 01:15:17.728947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.229 [2024-12-06 01:15:17.728981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.229 qpair failed and we were unable to recover it. 00:37:45.229 [2024-12-06 01:15:17.729090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.229 [2024-12-06 01:15:17.729153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.229 qpair failed and we were unable to recover it. 00:37:45.229 [2024-12-06 01:15:17.729297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.229 [2024-12-06 01:15:17.729335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.229 qpair failed and we were unable to recover it. 00:37:45.229 [2024-12-06 01:15:17.729457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.229 [2024-12-06 01:15:17.729494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.229 qpair failed and we were unable to recover it. 00:37:45.229 [2024-12-06 01:15:17.729625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.229 [2024-12-06 01:15:17.729661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.229 qpair failed and we were unable to recover it. 00:37:45.229 [2024-12-06 01:15:17.729779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.229 [2024-12-06 01:15:17.729821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.229 qpair failed and we were unable to recover it. 00:37:45.229 [2024-12-06 01:15:17.729976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.229 [2024-12-06 01:15:17.730031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.229 qpair failed and we were unable to recover it. 
00:37:45.229 [2024-12-06 01:15:17.730181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.229 [2024-12-06 01:15:17.730215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.229 qpair failed and we were unable to recover it. 00:37:45.229 [2024-12-06 01:15:17.730373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.229 [2024-12-06 01:15:17.730426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.229 qpair failed and we were unable to recover it. 00:37:45.229 [2024-12-06 01:15:17.730583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.229 [2024-12-06 01:15:17.730618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.229 qpair failed and we were unable to recover it. 00:37:45.229 [2024-12-06 01:15:17.730763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.229 [2024-12-06 01:15:17.730805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.229 qpair failed and we were unable to recover it. 00:37:45.229 [2024-12-06 01:15:17.730960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.229 [2024-12-06 01:15:17.731013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.229 qpair failed and we were unable to recover it. 00:37:45.229 [2024-12-06 01:15:17.731151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.229 [2024-12-06 01:15:17.731192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.229 qpair failed and we were unable to recover it. 00:37:45.229 [2024-12-06 01:15:17.731321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.229 [2024-12-06 01:15:17.731356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.229 qpair failed and we were unable to recover it. 00:37:45.229 [2024-12-06 01:15:17.731485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.229 [2024-12-06 01:15:17.731518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.229 qpair failed and we were unable to recover it. 00:37:45.229 [2024-12-06 01:15:17.731647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.229 [2024-12-06 01:15:17.731681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.229 qpair failed and we were unable to recover it. 00:37:45.229 [2024-12-06 01:15:17.731824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.229 [2024-12-06 01:15:17.731864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.229 qpair failed and we were unable to recover it. 
00:37:45.229 [2024-12-06 01:15:17.732007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.229 [2024-12-06 01:15:17.732045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.229 qpair failed and we were unable to recover it. 00:37:45.229 [2024-12-06 01:15:17.732180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.229 [2024-12-06 01:15:17.732213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.229 qpair failed and we were unable to recover it. 00:37:45.229 [2024-12-06 01:15:17.732343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.229 [2024-12-06 01:15:17.732377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.229 qpair failed and we were unable to recover it. 00:37:45.229 [2024-12-06 01:15:17.732510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.229 [2024-12-06 01:15:17.732544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.229 qpair failed and we were unable to recover it. 00:37:45.229 [2024-12-06 01:15:17.732660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.229 [2024-12-06 01:15:17.732697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.229 qpair failed and we were unable to recover it. 00:37:45.229 [2024-12-06 01:15:17.732833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.229 [2024-12-06 01:15:17.732868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.229 qpair failed and we were unable to recover it. 00:37:45.229 [2024-12-06 01:15:17.733020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.229 [2024-12-06 01:15:17.733069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.229 qpair failed and we were unable to recover it. 00:37:45.229 [2024-12-06 01:15:17.733219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.229 [2024-12-06 01:15:17.733255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.229 qpair failed and we were unable to recover it. 00:37:45.229 [2024-12-06 01:15:17.733371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.229 [2024-12-06 01:15:17.733407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.229 qpair failed and we were unable to recover it. 00:37:45.229 [2024-12-06 01:15:17.733517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.229 [2024-12-06 01:15:17.733553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.229 qpair failed and we were unable to recover it. 
00:37:45.229 [2024-12-06 01:15:17.733696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.229 [2024-12-06 01:15:17.733730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.229 qpair failed and we were unable to recover it. 00:37:45.229 [2024-12-06 01:15:17.733899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.229 [2024-12-06 01:15:17.733938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.229 qpair failed and we were unable to recover it. 00:37:45.229 [2024-12-06 01:15:17.734087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.229 [2024-12-06 01:15:17.734151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.229 qpair failed and we were unable to recover it. 00:37:45.229 [2024-12-06 01:15:17.734283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.229 [2024-12-06 01:15:17.734334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.229 qpair failed and we were unable to recover it. 00:37:45.229 [2024-12-06 01:15:17.734476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.229 [2024-12-06 01:15:17.734513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.229 qpair failed and we were unable to recover it. 00:37:45.229 [2024-12-06 01:15:17.734696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.229 [2024-12-06 01:15:17.734730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.229 qpair failed and we were unable to recover it. 00:37:45.229 [2024-12-06 01:15:17.734852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.229 [2024-12-06 01:15:17.734887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.229 qpair failed and we were unable to recover it. 00:37:45.229 [2024-12-06 01:15:17.735044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.229 [2024-12-06 01:15:17.735101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.229 qpair failed and we were unable to recover it. 00:37:45.229 [2024-12-06 01:15:17.735229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.229 [2024-12-06 01:15:17.735298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.229 qpair failed and we were unable to recover it. 00:37:45.229 [2024-12-06 01:15:17.735491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.229 [2024-12-06 01:15:17.735559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.229 qpair failed and we were unable to recover it. 
00:37:45.229 [2024-12-06 01:15:17.735674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.229 [2024-12-06 01:15:17.735708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.229 qpair failed and we were unable to recover it. 00:37:45.229 [2024-12-06 01:15:17.735828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.229 [2024-12-06 01:15:17.735862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.229 qpair failed and we were unable to recover it. 00:37:45.229 [2024-12-06 01:15:17.736017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.229 [2024-12-06 01:15:17.736054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.229 qpair failed and we were unable to recover it. 00:37:45.229 [2024-12-06 01:15:17.736178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.229 [2024-12-06 01:15:17.736217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.229 qpair failed and we were unable to recover it. 00:37:45.229 [2024-12-06 01:15:17.736409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.229 [2024-12-06 01:15:17.736464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.229 qpair failed and we were unable to recover it. 00:37:45.229 [2024-12-06 01:15:17.736579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.229 [2024-12-06 01:15:17.736624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.229 qpair failed and we were unable to recover it. 00:37:45.229 [2024-12-06 01:15:17.736730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.229 [2024-12-06 01:15:17.736767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.229 qpair failed and we were unable to recover it. 00:37:45.229 [2024-12-06 01:15:17.736892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.229 [2024-12-06 01:15:17.736928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.229 qpair failed and we were unable to recover it. 00:37:45.229 [2024-12-06 01:15:17.737072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.229 [2024-12-06 01:15:17.737106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.229 qpair failed and we were unable to recover it. 00:37:45.229 [2024-12-06 01:15:17.737230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.229 [2024-12-06 01:15:17.737278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.229 qpair failed and we were unable to recover it. 
00:37:45.229 [2024-12-06 01:15:17.737440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.229 [2024-12-06 01:15:17.737501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.229 qpair failed and we were unable to recover it. 00:37:45.229 [2024-12-06 01:15:17.737601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.229 [2024-12-06 01:15:17.737635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.229 qpair failed and we were unable to recover it. 00:37:45.229 [2024-12-06 01:15:17.737765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.229 [2024-12-06 01:15:17.737799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.229 qpair failed and we were unable to recover it. 00:37:45.229 [2024-12-06 01:15:17.738052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.229 [2024-12-06 01:15:17.738107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.229 qpair failed and we were unable to recover it. 00:37:45.229 [2024-12-06 01:15:17.738259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.229 [2024-12-06 01:15:17.738318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.229 qpair failed and we were unable to recover it. 00:37:45.229 [2024-12-06 01:15:17.738462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.229 [2024-12-06 01:15:17.738514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.229 qpair failed and we were unable to recover it. 00:37:45.229 [2024-12-06 01:15:17.738629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.229 [2024-12-06 01:15:17.738663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.229 qpair failed and we were unable to recover it. 00:37:45.229 [2024-12-06 01:15:17.738795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.229 [2024-12-06 01:15:17.738836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.229 qpair failed and we were unable to recover it. 00:37:45.229 [2024-12-06 01:15:17.738941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.230 [2024-12-06 01:15:17.738974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.230 qpair failed and we were unable to recover it. 00:37:45.230 [2024-12-06 01:15:17.739076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.230 [2024-12-06 01:15:17.739110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.230 qpair failed and we were unable to recover it. 
00:37:45.230 [2024-12-06 01:15:17.739220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.230 [2024-12-06 01:15:17.739253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.230 qpair failed and we were unable to recover it. 00:37:45.230 [2024-12-06 01:15:17.739387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.230 [2024-12-06 01:15:17.739420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.230 qpair failed and we were unable to recover it. 00:37:45.230 [2024-12-06 01:15:17.739554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.230 [2024-12-06 01:15:17.739587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.230 qpair failed and we were unable to recover it. 00:37:45.230 [2024-12-06 01:15:17.739724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.230 [2024-12-06 01:15:17.739756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.230 qpair failed and we were unable to recover it. 00:37:45.230 [2024-12-06 01:15:17.739887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.230 [2024-12-06 01:15:17.739937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.230 qpair failed and we were unable to recover it. 00:37:45.230 [2024-12-06 01:15:17.740070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.230 [2024-12-06 01:15:17.740117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.230 qpair failed and we were unable to recover it. 00:37:45.230 [2024-12-06 01:15:17.740247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.230 [2024-12-06 01:15:17.740296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.230 qpair failed and we were unable to recover it. 00:37:45.230 [2024-12-06 01:15:17.740422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.230 [2024-12-06 01:15:17.740459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.230 qpair failed and we were unable to recover it. 00:37:45.230 [2024-12-06 01:15:17.740598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.230 [2024-12-06 01:15:17.740634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.230 qpair failed and we were unable to recover it. 00:37:45.230 [2024-12-06 01:15:17.740743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.230 [2024-12-06 01:15:17.740777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.230 qpair failed and we were unable to recover it. 
00:37:45.230 [2024-12-06 01:15:17.740955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.230 [2024-12-06 01:15:17.740994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.230 qpair failed and we were unable to recover it. 00:37:45.230 [2024-12-06 01:15:17.741103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.230 [2024-12-06 01:15:17.741136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.230 qpair failed and we were unable to recover it. 00:37:45.230 [2024-12-06 01:15:17.741247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.230 [2024-12-06 01:15:17.741281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.230 qpair failed and we were unable to recover it. 00:37:45.230 [2024-12-06 01:15:17.741401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.230 [2024-12-06 01:15:17.741435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.230 qpair failed and we were unable to recover it. 00:37:45.230 [2024-12-06 01:15:17.741554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.230 [2024-12-06 01:15:17.741592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.230 qpair failed and we were unable to recover it. 00:37:45.230 [2024-12-06 01:15:17.741732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.230 [2024-12-06 01:15:17.741780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.230 qpair failed and we were unable to recover it. 00:37:45.230 [2024-12-06 01:15:17.741932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.230 [2024-12-06 01:15:17.741973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.230 qpair failed and we were unable to recover it. 00:37:45.230 [2024-12-06 01:15:17.742122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.230 [2024-12-06 01:15:17.742174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.230 qpair failed and we were unable to recover it. 00:37:45.230 [2024-12-06 01:15:17.742345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.230 [2024-12-06 01:15:17.742377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.230 qpair failed and we were unable to recover it. 00:37:45.230 [2024-12-06 01:15:17.742553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.230 [2024-12-06 01:15:17.742596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.230 qpair failed and we were unable to recover it. 
00:37:45.230 [2024-12-06 01:15:17.742753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.230 [2024-12-06 01:15:17.742788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.230 qpair failed and we were unable to recover it. 00:37:45.230 [2024-12-06 01:15:17.742959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.230 [2024-12-06 01:15:17.743012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.230 qpair failed and we were unable to recover it. 00:37:45.230 [2024-12-06 01:15:17.743206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.230 [2024-12-06 01:15:17.743274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.230 qpair failed and we were unable to recover it. 00:37:45.230 [2024-12-06 01:15:17.743416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.230 [2024-12-06 01:15:17.743458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.230 qpair failed and we were unable to recover it. 00:37:45.230 [2024-12-06 01:15:17.743591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.230 [2024-12-06 01:15:17.743627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.230 qpair failed and we were unable to recover it. 00:37:45.230 [2024-12-06 01:15:17.743772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.230 [2024-12-06 01:15:17.743807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.230 qpair failed and we were unable to recover it. 00:37:45.230 [2024-12-06 01:15:17.743969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.230 [2024-12-06 01:15:17.744023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.230 qpair failed and we were unable to recover it. 00:37:45.230 [2024-12-06 01:15:17.744182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.230 [2024-12-06 01:15:17.744236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.230 qpair failed and we were unable to recover it. 00:37:45.230 [2024-12-06 01:15:17.744394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.230 [2024-12-06 01:15:17.744452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.230 qpair failed and we were unable to recover it. 00:37:45.230 [2024-12-06 01:15:17.744574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.230 [2024-12-06 01:15:17.744606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.230 qpair failed and we were unable to recover it. 
00:37:45.230 [2024-12-06 01:15:17.744747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.230 [2024-12-06 01:15:17.744780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.230 qpair failed and we were unable to recover it. 00:37:45.230 [2024-12-06 01:15:17.744900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.230 [2024-12-06 01:15:17.744934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.230 qpair failed and we were unable to recover it. 00:37:45.230 [2024-12-06 01:15:17.745102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.230 [2024-12-06 01:15:17.745135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.230 qpair failed and we were unable to recover it. 00:37:45.230 [2024-12-06 01:15:17.745252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.230 [2024-12-06 01:15:17.745286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.230 qpair failed and we were unable to recover it. 00:37:45.230 [2024-12-06 01:15:17.745412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.230 [2024-12-06 01:15:17.745448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.230 qpair failed and we were unable to recover it. 00:37:45.230 [2024-12-06 01:15:17.745595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.230 [2024-12-06 01:15:17.745644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.230 qpair failed and we were unable to recover it. 00:37:45.230 [2024-12-06 01:15:17.745834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.230 [2024-12-06 01:15:17.745872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.230 qpair failed and we were unable to recover it. 00:37:45.230 [2024-12-06 01:15:17.746051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.230 [2024-12-06 01:15:17.746115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.230 qpair failed and we were unable to recover it. 00:37:45.230 [2024-12-06 01:15:17.746282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.230 [2024-12-06 01:15:17.746347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.230 qpair failed and we were unable to recover it. 00:37:45.230 [2024-12-06 01:15:17.746548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.230 [2024-12-06 01:15:17.746608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.230 qpair failed and we were unable to recover it. 
00:37:45.230 [2024-12-06 01:15:17.746760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.230 [2024-12-06 01:15:17.746795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.230 qpair failed and we were unable to recover it. 00:37:45.230 [2024-12-06 01:15:17.746962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.230 [2024-12-06 01:15:17.747012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.230 qpair failed and we were unable to recover it. 00:37:45.230 [2024-12-06 01:15:17.747155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.230 [2024-12-06 01:15:17.747208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.230 qpair failed and we were unable to recover it. 00:37:45.230 [2024-12-06 01:15:17.747366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.230 [2024-12-06 01:15:17.747428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.230 qpair failed and we were unable to recover it. 00:37:45.230 [2024-12-06 01:15:17.747605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.230 [2024-12-06 01:15:17.747663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.230 qpair failed and we were unable to recover it. 00:37:45.230 [2024-12-06 01:15:17.747809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.230 [2024-12-06 01:15:17.747869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.230 qpair failed and we were unable to recover it. 00:37:45.230 [2024-12-06 01:15:17.748048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.230 [2024-12-06 01:15:17.748114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.230 qpair failed and we were unable to recover it. 00:37:45.230 [2024-12-06 01:15:17.748275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.230 [2024-12-06 01:15:17.748316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.230 qpair failed and we were unable to recover it. 00:37:45.230 [2024-12-06 01:15:17.748459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.230 [2024-12-06 01:15:17.748511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.230 qpair failed and we were unable to recover it. 00:37:45.230 [2024-12-06 01:15:17.748640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.230 [2024-12-06 01:15:17.748679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.230 qpair failed and we were unable to recover it. 
00:37:45.230 [2024-12-06 01:15:17.748811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.230 [2024-12-06 01:15:17.748851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.230 qpair failed and we were unable to recover it. 00:37:45.230 [2024-12-06 01:15:17.748969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.230 [2024-12-06 01:15:17.749004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.230 qpair failed and we were unable to recover it. 00:37:45.230 [2024-12-06 01:15:17.749187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.230 [2024-12-06 01:15:17.749256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.230 qpair failed and we were unable to recover it. 00:37:45.230 [2024-12-06 01:15:17.749483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.230 [2024-12-06 01:15:17.749522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.230 qpair failed and we were unable to recover it. 00:37:45.230 [2024-12-06 01:15:17.749705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.230 [2024-12-06 01:15:17.749743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.230 qpair failed and we were unable to recover it. 00:37:45.230 [2024-12-06 01:15:17.749958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.230 [2024-12-06 01:15:17.749994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.230 qpair failed and we were unable to recover it. 00:37:45.230 [2024-12-06 01:15:17.750104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.231 [2024-12-06 01:15:17.750138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.231 qpair failed and we were unable to recover it. 00:37:45.231 [2024-12-06 01:15:17.750317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.231 [2024-12-06 01:15:17.750354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.231 qpair failed and we were unable to recover it. 00:37:45.231 [2024-12-06 01:15:17.750529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.231 [2024-12-06 01:15:17.750593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.231 qpair failed and we were unable to recover it. 00:37:45.231 [2024-12-06 01:15:17.750729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.231 [2024-12-06 01:15:17.750769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.231 qpair failed and we were unable to recover it. 
00:37:45.231 [2024-12-06 01:15:17.750920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.231 [2024-12-06 01:15:17.750957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.231 qpair failed and we were unable to recover it. 00:37:45.231 [2024-12-06 01:15:17.751068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.231 [2024-12-06 01:15:17.751109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.231 qpair failed and we were unable to recover it. 00:37:45.231 [2024-12-06 01:15:17.751273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.231 [2024-12-06 01:15:17.751311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.231 qpair failed and we were unable to recover it. 00:37:45.231 [2024-12-06 01:15:17.751456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.231 [2024-12-06 01:15:17.751494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.231 qpair failed and we were unable to recover it. 00:37:45.231 [2024-12-06 01:15:17.751731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.231 [2024-12-06 01:15:17.751805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.231 qpair failed and we were unable to recover it. 00:37:45.231 [2024-12-06 01:15:17.751970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.231 [2024-12-06 01:15:17.752018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.231 qpair failed and we were unable to recover it. 00:37:45.231 [2024-12-06 01:15:17.752177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.231 [2024-12-06 01:15:17.752227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.231 qpair failed and we were unable to recover it. 00:37:45.231 [2024-12-06 01:15:17.752458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.231 [2024-12-06 01:15:17.752519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.231 qpair failed and we were unable to recover it. 00:37:45.231 [2024-12-06 01:15:17.752645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.231 [2024-12-06 01:15:17.752682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.231 qpair failed and we were unable to recover it. 00:37:45.231 [2024-12-06 01:15:17.752861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.231 [2024-12-06 01:15:17.752895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.231 qpair failed and we were unable to recover it. 
00:37:45.231 [2024-12-06 01:15:17.752999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.231 [2024-12-06 01:15:17.753032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.231 qpair failed and we were unable to recover it. 00:37:45.231 [2024-12-06 01:15:17.753188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.231 [2024-12-06 01:15:17.753225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.231 qpair failed and we were unable to recover it. 00:37:45.231 [2024-12-06 01:15:17.753365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.231 [2024-12-06 01:15:17.753417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.231 qpair failed and we were unable to recover it. 00:37:45.231 [2024-12-06 01:15:17.753587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.231 [2024-12-06 01:15:17.753642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.231 qpair failed and we were unable to recover it. 00:37:45.231 [2024-12-06 01:15:17.753831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.231 [2024-12-06 01:15:17.753887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.231 qpair failed and we were unable to recover it. 00:37:45.231 [2024-12-06 01:15:17.754031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.231 [2024-12-06 01:15:17.754066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.231 qpair failed and we were unable to recover it. 00:37:45.231 [2024-12-06 01:15:17.754225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.231 [2024-12-06 01:15:17.754264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.231 qpair failed and we were unable to recover it. 00:37:45.231 [2024-12-06 01:15:17.754430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.231 [2024-12-06 01:15:17.754488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.231 qpair failed and we were unable to recover it. 00:37:45.231 [2024-12-06 01:15:17.754687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.231 [2024-12-06 01:15:17.754738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.231 qpair failed and we were unable to recover it. 00:37:45.231 [2024-12-06 01:15:17.754875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.231 [2024-12-06 01:15:17.754910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.231 qpair failed and we were unable to recover it. 
00:37:45.231 [2024-12-06 01:15:17.755013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.231 [2024-12-06 01:15:17.755047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.231 qpair failed and we were unable to recover it. 00:37:45.231 [2024-12-06 01:15:17.755220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.231 [2024-12-06 01:15:17.755254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.231 qpair failed and we were unable to recover it. 00:37:45.231 [2024-12-06 01:15:17.755366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.231 [2024-12-06 01:15:17.755401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.231 qpair failed and we were unable to recover it. 00:37:45.231 [2024-12-06 01:15:17.755560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.231 [2024-12-06 01:15:17.755593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.231 qpair failed and we were unable to recover it. 00:37:45.231 [2024-12-06 01:15:17.755731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.231 [2024-12-06 01:15:17.755766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.231 qpair failed and we were unable to recover it. 00:37:45.231 [2024-12-06 01:15:17.755896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.231 [2024-12-06 01:15:17.755944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.231 qpair failed and we were unable to recover it. 00:37:45.231 [2024-12-06 01:15:17.756132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.231 [2024-12-06 01:15:17.756173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.231 qpair failed and we were unable to recover it. 00:37:45.231 [2024-12-06 01:15:17.756317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.231 [2024-12-06 01:15:17.756356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.231 qpair failed and we were unable to recover it. 00:37:45.231 [2024-12-06 01:15:17.756499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.231 [2024-12-06 01:15:17.756537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.231 qpair failed and we were unable to recover it. 00:37:45.231 [2024-12-06 01:15:17.756709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.231 [2024-12-06 01:15:17.756747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.231 qpair failed and we were unable to recover it. 
00:37:45.231 [2024-12-06 01:15:17.756925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.231 [2024-12-06 01:15:17.756974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.231 qpair failed and we were unable to recover it. 00:37:45.231 [2024-12-06 01:15:17.757092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.231 [2024-12-06 01:15:17.757127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.231 qpair failed and we were unable to recover it. 00:37:45.231 [2024-12-06 01:15:17.757256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.231 [2024-12-06 01:15:17.757290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.231 qpair failed and we were unable to recover it. 00:37:45.231 [2024-12-06 01:15:17.757394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.231 [2024-12-06 01:15:17.757427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.231 qpair failed and we were unable to recover it. 00:37:45.231 [2024-12-06 01:15:17.757581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.231 [2024-12-06 01:15:17.757618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.231 qpair failed and we were unable to recover it. 00:37:45.231 [2024-12-06 01:15:17.757765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.231 [2024-12-06 01:15:17.757813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.231 qpair failed and we were unable to recover it. 00:37:45.231 [2024-12-06 01:15:17.757964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.231 [2024-12-06 01:15:17.758001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.231 qpair failed and we were unable to recover it. 00:37:45.231 [2024-12-06 01:15:17.758117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.231 [2024-12-06 01:15:17.758152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.231 qpair failed and we were unable to recover it. 00:37:45.231 [2024-12-06 01:15:17.758268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.231 [2024-12-06 01:15:17.758317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.231 qpair failed and we were unable to recover it. 00:37:45.231 [2024-12-06 01:15:17.758513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.231 [2024-12-06 01:15:17.758580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.231 qpair failed and we were unable to recover it. 
00:37:45.231 [2024-12-06 01:15:17.758706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.231 [2024-12-06 01:15:17.758759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.231 qpair failed and we were unable to recover it. 00:37:45.231 [2024-12-06 01:15:17.758956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.231 [2024-12-06 01:15:17.758991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.231 qpair failed and we were unable to recover it. 00:37:45.231 [2024-12-06 01:15:17.759166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.231 [2024-12-06 01:15:17.759224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.231 qpair failed and we were unable to recover it. 00:37:45.231 [2024-12-06 01:15:17.759488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.231 [2024-12-06 01:15:17.759547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.232 qpair failed and we were unable to recover it. 00:37:45.232 [2024-12-06 01:15:17.759688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.232 [2024-12-06 01:15:17.759725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.232 qpair failed and we were unable to recover it. 00:37:45.232 [2024-12-06 01:15:17.759880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.232 [2024-12-06 01:15:17.759915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.232 qpair failed and we were unable to recover it. 00:37:45.232 [2024-12-06 01:15:17.760036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.232 [2024-12-06 01:15:17.760071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.232 qpair failed and we were unable to recover it. 00:37:45.232 [2024-12-06 01:15:17.760263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.232 [2024-12-06 01:15:17.760322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.232 qpair failed and we were unable to recover it. 00:37:45.232 [2024-12-06 01:15:17.760458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.232 [2024-12-06 01:15:17.760516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.232 qpair failed and we were unable to recover it. 00:37:45.232 [2024-12-06 01:15:17.760721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.232 [2024-12-06 01:15:17.760758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.232 qpair failed and we were unable to recover it. 
00:37:45.232 [2024-12-06 01:15:17.760913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.232 [2024-12-06 01:15:17.760949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.232 qpair failed and we were unable to recover it. 00:37:45.232 [2024-12-06 01:15:17.761126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.232 [2024-12-06 01:15:17.761180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.232 qpair failed and we were unable to recover it. 00:37:45.232 [2024-12-06 01:15:17.761389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.232 [2024-12-06 01:15:17.761425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.232 qpair failed and we were unable to recover it. 00:37:45.232 [2024-12-06 01:15:17.761614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.232 [2024-12-06 01:15:17.761671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.232 qpair failed and we were unable to recover it. 00:37:45.232 [2024-12-06 01:15:17.761870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.232 [2024-12-06 01:15:17.761905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.232 qpair failed and we were unable to recover it. 00:37:45.232 [2024-12-06 01:15:17.762018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.232 [2024-12-06 01:15:17.762052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.232 qpair failed and we were unable to recover it. 00:37:45.232 [2024-12-06 01:15:17.762199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.232 [2024-12-06 01:15:17.762233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.232 qpair failed and we were unable to recover it. 00:37:45.232 [2024-12-06 01:15:17.762366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.232 [2024-12-06 01:15:17.762399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.232 qpair failed and we were unable to recover it. 00:37:45.232 [2024-12-06 01:15:17.762544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.232 [2024-12-06 01:15:17.762596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.232 qpair failed and we were unable to recover it. 00:37:45.232 [2024-12-06 01:15:17.762734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.232 [2024-12-06 01:15:17.762788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.232 qpair failed and we were unable to recover it. 
00:37:45.232 [2024-12-06 01:15:17.762956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.232 [2024-12-06 01:15:17.763005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.232 qpair failed and we were unable to recover it. 00:37:45.232 [2024-12-06 01:15:17.763176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.232 [2024-12-06 01:15:17.763224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.232 qpair failed and we were unable to recover it. 00:37:45.232 [2024-12-06 01:15:17.763392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.232 [2024-12-06 01:15:17.763442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.232 qpair failed and we were unable to recover it. 00:37:45.232 [2024-12-06 01:15:17.763592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.232 [2024-12-06 01:15:17.763629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.232 qpair failed and we were unable to recover it. 00:37:45.232 [2024-12-06 01:15:17.763805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.232 [2024-12-06 01:15:17.763866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.232 qpair failed and we were unable to recover it. 00:37:45.232 [2024-12-06 01:15:17.763993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.232 [2024-12-06 01:15:17.764026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.232 qpair failed and we were unable to recover it. 00:37:45.232 [2024-12-06 01:15:17.764140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.232 [2024-12-06 01:15:17.764197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.232 qpair failed and we were unable to recover it. 00:37:45.232 [2024-12-06 01:15:17.764378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.232 [2024-12-06 01:15:17.764416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.232 qpair failed and we were unable to recover it. 00:37:45.232 [2024-12-06 01:15:17.764546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.232 [2024-12-06 01:15:17.764598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.232 qpair failed and we were unable to recover it. 00:37:45.232 [2024-12-06 01:15:17.764749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.232 [2024-12-06 01:15:17.764786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.232 qpair failed and we were unable to recover it. 
00:37:45.232 [2024-12-06 01:15:17.764966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.232 [2024-12-06 01:15:17.765015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.232 qpair failed and we were unable to recover it. 00:37:45.232 [2024-12-06 01:15:17.765186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.232 [2024-12-06 01:15:17.765235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.232 qpair failed and we were unable to recover it. 00:37:45.232 [2024-12-06 01:15:17.765428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.232 [2024-12-06 01:15:17.765483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.232 qpair failed and we were unable to recover it. 00:37:45.232 [2024-12-06 01:15:17.765701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.232 [2024-12-06 01:15:17.765754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.232 qpair failed and we were unable to recover it. 00:37:45.232 [2024-12-06 01:15:17.765885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.232 [2024-12-06 01:15:17.765919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.232 qpair failed and we were unable to recover it. 00:37:45.232 [2024-12-06 01:15:17.766105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.232 [2024-12-06 01:15:17.766158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.232 qpair failed and we were unable to recover it. 00:37:45.232 [2024-12-06 01:15:17.766304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.232 [2024-12-06 01:15:17.766389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.232 qpair failed and we were unable to recover it. 00:37:45.232 [2024-12-06 01:15:17.766627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.232 [2024-12-06 01:15:17.766689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.232 qpair failed and we were unable to recover it. 00:37:45.232 [2024-12-06 01:15:17.766876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.232 [2024-12-06 01:15:17.766912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.232 qpair failed and we were unable to recover it. 00:37:45.232 [2024-12-06 01:15:17.767019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.232 [2024-12-06 01:15:17.767053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.232 qpair failed and we were unable to recover it. 
00:37:45.232 [2024-12-06 01:15:17.767220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.232 [2024-12-06 01:15:17.767271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.232 qpair failed and we were unable to recover it. 00:37:45.232 [2024-12-06 01:15:17.767484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.232 [2024-12-06 01:15:17.767539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.232 qpair failed and we were unable to recover it. 00:37:45.232 [2024-12-06 01:15:17.767688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.232 [2024-12-06 01:15:17.767721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.232 qpair failed and we were unable to recover it. 00:37:45.232 [2024-12-06 01:15:17.767905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.232 [2024-12-06 01:15:17.767962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.232 qpair failed and we were unable to recover it. 00:37:45.232 [2024-12-06 01:15:17.768068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.232 [2024-12-06 01:15:17.768102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.232 qpair failed and we were unable to recover it. 00:37:45.232 [2024-12-06 01:15:17.768242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.232 [2024-12-06 01:15:17.768276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.232 qpair failed and we were unable to recover it. 00:37:45.232 [2024-12-06 01:15:17.768438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.232 [2024-12-06 01:15:17.768472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.232 qpair failed and we were unable to recover it. 00:37:45.232 [2024-12-06 01:15:17.768600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.232 [2024-12-06 01:15:17.768633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.232 qpair failed and we were unable to recover it. 00:37:45.232 [2024-12-06 01:15:17.768753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.232 [2024-12-06 01:15:17.768802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.232 qpair failed and we were unable to recover it. 00:37:45.232 [2024-12-06 01:15:17.768972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.232 [2024-12-06 01:15:17.769021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.232 qpair failed and we were unable to recover it. 
00:37:45.232 [2024-12-06 01:15:17.769177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.232 [2024-12-06 01:15:17.769225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.232 qpair failed and we were unable to recover it. 00:37:45.232 [2024-12-06 01:15:17.769427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.232 [2024-12-06 01:15:17.769494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.232 qpair failed and we were unable to recover it. 00:37:45.232 [2024-12-06 01:15:17.769710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.232 [2024-12-06 01:15:17.769767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.232 qpair failed and we were unable to recover it. 00:37:45.232 [2024-12-06 01:15:17.769931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.232 [2024-12-06 01:15:17.769965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.232 qpair failed and we were unable to recover it. 00:37:45.232 [2024-12-06 01:15:17.770175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.232 [2024-12-06 01:15:17.770233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.232 qpair failed and we were unable to recover it. 00:37:45.232 [2024-12-06 01:15:17.770393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.232 [2024-12-06 01:15:17.770448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.232 qpair failed and we were unable to recover it. 00:37:45.232 [2024-12-06 01:15:17.770620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.232 [2024-12-06 01:15:17.770680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.232 qpair failed and we were unable to recover it. 00:37:45.232 [2024-12-06 01:15:17.770834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.232 [2024-12-06 01:15:17.770868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.232 qpair failed and we were unable to recover it. 00:37:45.232 [2024-12-06 01:15:17.770974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.233 [2024-12-06 01:15:17.771007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.233 qpair failed and we were unable to recover it. 00:37:45.233 [2024-12-06 01:15:17.771168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.233 [2024-12-06 01:15:17.771217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.233 qpair failed and we were unable to recover it. 
00:37:45.233 [2024-12-06 01:15:17.771388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.233 [2024-12-06 01:15:17.771430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.233 qpair failed and we were unable to recover it. 00:37:45.233 [2024-12-06 01:15:17.771623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.233 [2024-12-06 01:15:17.771701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.233 qpair failed and we were unable to recover it. 00:37:45.233 [2024-12-06 01:15:17.771831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.233 [2024-12-06 01:15:17.771882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.233 qpair failed and we were unable to recover it. 00:37:45.233 [2024-12-06 01:15:17.771989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.233 [2024-12-06 01:15:17.772022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.233 qpair failed and we were unable to recover it. 00:37:45.233 [2024-12-06 01:15:17.772155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.233 [2024-12-06 01:15:17.772188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.233 qpair failed and we were unable to recover it. 00:37:45.233 [2024-12-06 01:15:17.772293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.233 [2024-12-06 01:15:17.772328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.233 qpair failed and we were unable to recover it. 00:37:45.233 [2024-12-06 01:15:17.772479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.233 [2024-12-06 01:15:17.772522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.233 qpair failed and we were unable to recover it. 00:37:45.233 [2024-12-06 01:15:17.772703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.233 [2024-12-06 01:15:17.772744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.233 qpair failed and we were unable to recover it. 00:37:45.233 [2024-12-06 01:15:17.772922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.233 [2024-12-06 01:15:17.772958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.233 qpair failed and we were unable to recover it. 00:37:45.233 [2024-12-06 01:15:17.773107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.233 [2024-12-06 01:15:17.773142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.233 qpair failed and we were unable to recover it. 
00:37:45.233 [2024-12-06 01:15:17.773242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.233 [2024-12-06 01:15:17.773296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.233 qpair failed and we were unable to recover it. 00:37:45.233 [2024-12-06 01:15:17.773472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.233 [2024-12-06 01:15:17.773526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.233 qpair failed and we were unable to recover it. 00:37:45.233 [2024-12-06 01:15:17.773680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.233 [2024-12-06 01:15:17.773720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.233 qpair failed and we were unable to recover it. 00:37:45.233 [2024-12-06 01:15:17.773852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.233 [2024-12-06 01:15:17.773903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.233 qpair failed and we were unable to recover it. 00:37:45.233 [2024-12-06 01:15:17.774041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.233 [2024-12-06 01:15:17.774074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.233 qpair failed and we were unable to recover it. 00:37:45.233 [2024-12-06 01:15:17.774217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.233 [2024-12-06 01:15:17.774252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.233 qpair failed and we were unable to recover it. 00:37:45.233 [2024-12-06 01:15:17.774384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.233 [2024-12-06 01:15:17.774417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.233 qpair failed and we were unable to recover it. 00:37:45.233 [2024-12-06 01:15:17.774529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.233 [2024-12-06 01:15:17.774565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.233 qpair failed and we were unable to recover it. 00:37:45.233 [2024-12-06 01:15:17.774721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.233 [2024-12-06 01:15:17.774767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.233 qpair failed and we were unable to recover it. 00:37:45.233 [2024-12-06 01:15:17.774956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.233 [2024-12-06 01:15:17.775004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.233 qpair failed and we were unable to recover it. 
00:37:45.233 [2024-12-06 01:15:17.775127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.233 [2024-12-06 01:15:17.775162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.233 qpair failed and we were unable to recover it. 00:37:45.233 [2024-12-06 01:15:17.775297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.233 [2024-12-06 01:15:17.775330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.233 qpair failed and we were unable to recover it. 00:37:45.233 [2024-12-06 01:15:17.775433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.233 [2024-12-06 01:15:17.775467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.233 qpair failed and we were unable to recover it. 00:37:45.233 [2024-12-06 01:15:17.775583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.233 [2024-12-06 01:15:17.775617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.233 qpair failed and we were unable to recover it. 00:37:45.233 [2024-12-06 01:15:17.775761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.233 [2024-12-06 01:15:17.775803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.233 qpair failed and we were unable to recover it. 00:37:45.233 [2024-12-06 01:15:17.775967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.233 [2024-12-06 01:15:17.776000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.233 qpair failed and we were unable to recover it. 00:37:45.233 [2024-12-06 01:15:17.776138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.233 [2024-12-06 01:15:17.776171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.233 qpair failed and we were unable to recover it. 00:37:45.233 [2024-12-06 01:15:17.776274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.233 [2024-12-06 01:15:17.776308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.233 qpair failed and we were unable to recover it. 00:37:45.233 [2024-12-06 01:15:17.776465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.233 [2024-12-06 01:15:17.776498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.233 qpair failed and we were unable to recover it. 00:37:45.233 [2024-12-06 01:15:17.776629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.233 [2024-12-06 01:15:17.776681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.233 qpair failed and we were unable to recover it. 
00:37:45.233 [2024-12-06 01:15:17.776852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.233 [2024-12-06 01:15:17.776886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.233 qpair failed and we were unable to recover it. 00:37:45.233 [2024-12-06 01:15:17.777014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.233 [2024-12-06 01:15:17.777048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.233 qpair failed and we were unable to recover it. 00:37:45.233 [2024-12-06 01:15:17.777178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.233 [2024-12-06 01:15:17.777212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.233 qpair failed and we were unable to recover it. 00:37:45.233 [2024-12-06 01:15:17.777366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.233 [2024-12-06 01:15:17.777403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.233 qpair failed and we were unable to recover it. 00:37:45.233 [2024-12-06 01:15:17.777581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.233 [2024-12-06 01:15:17.777619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.233 qpair failed and we were unable to recover it. 00:37:45.233 [2024-12-06 01:15:17.777741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.233 [2024-12-06 01:15:17.777779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.233 qpair failed and we were unable to recover it. 00:37:45.233 [2024-12-06 01:15:17.777918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.233 [2024-12-06 01:15:17.777952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.233 qpair failed and we were unable to recover it. 00:37:45.233 [2024-12-06 01:15:17.778052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.233 [2024-12-06 01:15:17.778086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.233 qpair failed and we were unable to recover it. 00:37:45.233 [2024-12-06 01:15:17.778246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.233 [2024-12-06 01:15:17.778280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.233 qpair failed and we were unable to recover it. 00:37:45.233 [2024-12-06 01:15:17.778442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.233 [2024-12-06 01:15:17.778475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.233 qpair failed and we were unable to recover it. 
00:37:45.233 [2024-12-06 01:15:17.778629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.233 [2024-12-06 01:15:17.778665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.233 qpair failed and we were unable to recover it. 00:37:45.233 [2024-12-06 01:15:17.778823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.233 [2024-12-06 01:15:17.778876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.233 qpair failed and we were unable to recover it. 00:37:45.233 [2024-12-06 01:15:17.778982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.233 [2024-12-06 01:15:17.779015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.233 qpair failed and we were unable to recover it. 00:37:45.233 [2024-12-06 01:15:17.779179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.233 [2024-12-06 01:15:17.779212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.233 qpair failed and we were unable to recover it. 00:37:45.233 [2024-12-06 01:15:17.779319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.233 [2024-12-06 01:15:17.779353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.233 qpair failed and we were unable to recover it. 00:37:45.233 [2024-12-06 01:15:17.779497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.233 [2024-12-06 01:15:17.779533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.233 qpair failed and we were unable to recover it. 00:37:45.233 [2024-12-06 01:15:17.779672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.233 [2024-12-06 01:15:17.779714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.233 qpair failed and we were unable to recover it. 00:37:45.233 [2024-12-06 01:15:17.779897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.233 [2024-12-06 01:15:17.779932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.233 qpair failed and we were unable to recover it. 00:37:45.233 [2024-12-06 01:15:17.780070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.233 [2024-12-06 01:15:17.780112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.233 qpair failed and we were unable to recover it. 00:37:45.233 [2024-12-06 01:15:17.780268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.233 [2024-12-06 01:15:17.780302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.233 qpair failed and we were unable to recover it. 
00:37:45.233 [2024-12-06 01:15:17.780406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.233 [2024-12-06 01:15:17.780439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.233 qpair failed and we were unable to recover it. 00:37:45.233 [2024-12-06 01:15:17.780570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.233 [2024-12-06 01:15:17.780603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.233 qpair failed and we were unable to recover it. 00:37:45.233 [2024-12-06 01:15:17.780787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.233 [2024-12-06 01:15:17.780848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.233 qpair failed and we were unable to recover it. 00:37:45.233 [2024-12-06 01:15:17.781022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.234 [2024-12-06 01:15:17.781059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.234 qpair failed and we were unable to recover it. 00:37:45.234 [2024-12-06 01:15:17.781277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.234 [2024-12-06 01:15:17.781312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.234 qpair failed and we were unable to recover it. 00:37:45.234 [2024-12-06 01:15:17.781423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.234 [2024-12-06 01:15:17.781459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.234 qpair failed and we were unable to recover it. 00:37:45.234 [2024-12-06 01:15:17.781611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.234 [2024-12-06 01:15:17.781646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.234 qpair failed and we were unable to recover it. 00:37:45.234 [2024-12-06 01:15:17.781782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.234 [2024-12-06 01:15:17.781824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.234 qpair failed and we were unable to recover it. 00:37:45.234 [2024-12-06 01:15:17.781990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.234 [2024-12-06 01:15:17.782025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.234 qpair failed and we were unable to recover it. 00:37:45.234 [2024-12-06 01:15:17.782181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.234 [2024-12-06 01:15:17.782229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.234 qpair failed and we were unable to recover it. 
00:37:45.234 [2024-12-06 01:15:17.782383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.234 [2024-12-06 01:15:17.782419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.234 qpair failed and we were unable to recover it. 00:37:45.234 [2024-12-06 01:15:17.782537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.234 [2024-12-06 01:15:17.782571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.234 qpair failed and we were unable to recover it. 00:37:45.234 [2024-12-06 01:15:17.782710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.234 [2024-12-06 01:15:17.782744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.234 qpair failed and we were unable to recover it. 00:37:45.234 [2024-12-06 01:15:17.782871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.234 [2024-12-06 01:15:17.782906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.234 qpair failed and we were unable to recover it. 00:37:45.234 [2024-12-06 01:15:17.783038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.234 [2024-12-06 01:15:17.783072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.234 qpair failed and we were unable to recover it. 00:37:45.234 [2024-12-06 01:15:17.783218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.234 [2024-12-06 01:15:17.783253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.234 qpair failed and we were unable to recover it. 00:37:45.234 [2024-12-06 01:15:17.783385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.234 [2024-12-06 01:15:17.783419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.234 qpair failed and we were unable to recover it. 00:37:45.234 [2024-12-06 01:15:17.783521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.234 [2024-12-06 01:15:17.783555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.234 qpair failed and we were unable to recover it. 00:37:45.234 [2024-12-06 01:15:17.783658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.234 [2024-12-06 01:15:17.783691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.234 qpair failed and we were unable to recover it. 00:37:45.234 [2024-12-06 01:15:17.783834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.234 [2024-12-06 01:15:17.783869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.234 qpair failed and we were unable to recover it. 
00:37:45.234 [2024-12-06 01:15:17.784017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.234 [2024-12-06 01:15:17.784065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.234 qpair failed and we were unable to recover it. 00:37:45.234 [2024-12-06 01:15:17.784205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.234 [2024-12-06 01:15:17.784241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.234 qpair failed and we were unable to recover it. 00:37:45.234 [2024-12-06 01:15:17.784401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.234 [2024-12-06 01:15:17.784435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.234 qpair failed and we were unable to recover it. 00:37:45.234 [2024-12-06 01:15:17.784585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.234 [2024-12-06 01:15:17.784620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.234 qpair failed and we were unable to recover it. 00:37:45.234 [2024-12-06 01:15:17.784756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.234 [2024-12-06 01:15:17.784789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.234 qpair failed and we were unable to recover it. 00:37:45.234 [2024-12-06 01:15:17.784942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.234 [2024-12-06 01:15:17.784975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.234 qpair failed and we were unable to recover it. 00:37:45.234 [2024-12-06 01:15:17.785124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.234 [2024-12-06 01:15:17.785159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.234 qpair failed and we were unable to recover it. 00:37:45.234 [2024-12-06 01:15:17.785271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.234 [2024-12-06 01:15:17.785305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.234 qpair failed and we were unable to recover it. 00:37:45.234 [2024-12-06 01:15:17.785441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.234 [2024-12-06 01:15:17.785475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.234 qpair failed and we were unable to recover it. 00:37:45.234 [2024-12-06 01:15:17.785608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.234 [2024-12-06 01:15:17.785642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.234 qpair failed and we were unable to recover it. 
00:37:45.234 [2024-12-06 01:15:17.785754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.234 [2024-12-06 01:15:17.785787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.234 qpair failed and we were unable to recover it. 00:37:45.234 [2024-12-06 01:15:17.785898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.234 [2024-12-06 01:15:17.785932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.234 qpair failed and we were unable to recover it. 00:37:45.234 [2024-12-06 01:15:17.786066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.234 [2024-12-06 01:15:17.786104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.234 qpair failed and we were unable to recover it. 00:37:45.234 [2024-12-06 01:15:17.786209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.234 [2024-12-06 01:15:17.786244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.234 qpair failed and we were unable to recover it. 00:37:45.234 [2024-12-06 01:15:17.786377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.234 [2024-12-06 01:15:17.786411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.234 qpair failed and we were unable to recover it. 00:37:45.234 [2024-12-06 01:15:17.786568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.234 [2024-12-06 01:15:17.786602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.234 qpair failed and we were unable to recover it. 00:37:45.234 [2024-12-06 01:15:17.786712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.234 [2024-12-06 01:15:17.786751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.234 qpair failed and we were unable to recover it. 00:37:45.234 [2024-12-06 01:15:17.786915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.234 [2024-12-06 01:15:17.786949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.234 qpair failed and we were unable to recover it. 00:37:45.234 [2024-12-06 01:15:17.787136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.234 [2024-12-06 01:15:17.787184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.234 qpair failed and we were unable to recover it. 00:37:45.234 [2024-12-06 01:15:17.787324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.234 [2024-12-06 01:15:17.787360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.234 qpair failed and we were unable to recover it. 
00:37:45.234 [2024-12-06 01:15:17.787530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.234 [2024-12-06 01:15:17.787564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.234 qpair failed and we were unable to recover it. 00:37:45.234 [2024-12-06 01:15:17.787682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.234 [2024-12-06 01:15:17.787724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.234 qpair failed and we were unable to recover it. 00:37:45.234 [2024-12-06 01:15:17.787865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.234 [2024-12-06 01:15:17.787900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.234 qpair failed and we were unable to recover it. 00:37:45.234 [2024-12-06 01:15:17.788016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.234 [2024-12-06 01:15:17.788050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.234 qpair failed and we were unable to recover it. 00:37:45.234 [2024-12-06 01:15:17.788153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.234 [2024-12-06 01:15:17.788188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.234 qpair failed and we were unable to recover it. 00:37:45.234 [2024-12-06 01:15:17.788316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.234 [2024-12-06 01:15:17.788350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.234 qpair failed and we were unable to recover it. 00:37:45.234 [2024-12-06 01:15:17.788485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.234 [2024-12-06 01:15:17.788518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.234 qpair failed and we were unable to recover it. 00:37:45.234 [2024-12-06 01:15:17.788653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.234 [2024-12-06 01:15:17.788688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.234 qpair failed and we were unable to recover it. 00:37:45.234 [2024-12-06 01:15:17.788857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.234 [2024-12-06 01:15:17.788907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.234 qpair failed and we were unable to recover it. 00:37:45.234 [2024-12-06 01:15:17.789063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.234 [2024-12-06 01:15:17.789108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.234 qpair failed and we were unable to recover it. 
00:37:45.234 [2024-12-06 01:15:17.789253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.234 [2024-12-06 01:15:17.789288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.234 qpair failed and we were unable to recover it. 00:37:45.234 [2024-12-06 01:15:17.789425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.234 [2024-12-06 01:15:17.789460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.234 qpair failed and we were unable to recover it. 00:37:45.234 [2024-12-06 01:15:17.789620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.235 [2024-12-06 01:15:17.789655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.235 qpair failed and we were unable to recover it. 00:37:45.235 [2024-12-06 01:15:17.789767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.235 [2024-12-06 01:15:17.789806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.235 qpair failed and we were unable to recover it. 00:37:45.235 [2024-12-06 01:15:17.789952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.235 [2024-12-06 01:15:17.789987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.235 qpair failed and we were unable to recover it. 00:37:45.235 [2024-12-06 01:15:17.790090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.235 [2024-12-06 01:15:17.790126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.235 qpair failed and we were unable to recover it. 00:37:45.235 [2024-12-06 01:15:17.790262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.235 [2024-12-06 01:15:17.790295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.235 qpair failed and we were unable to recover it. 00:37:45.235 [2024-12-06 01:15:17.790404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.235 [2024-12-06 01:15:17.790438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.235 qpair failed and we were unable to recover it. 00:37:45.235 [2024-12-06 01:15:17.790544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.235 [2024-12-06 01:15:17.790580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.235 qpair failed and we were unable to recover it. 00:37:45.235 [2024-12-06 01:15:17.790740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.235 [2024-12-06 01:15:17.790776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.235 qpair failed and we were unable to recover it. 
00:37:45.235 [2024-12-06 01:15:17.790918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.235 [2024-12-06 01:15:17.790966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.235 qpair failed and we were unable to recover it. 00:37:45.235 [2024-12-06 01:15:17.791077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.235 [2024-12-06 01:15:17.791113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.235 qpair failed and we were unable to recover it. 00:37:45.235 [2024-12-06 01:15:17.791250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.235 [2024-12-06 01:15:17.791284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.235 qpair failed and we were unable to recover it. 00:37:45.235 [2024-12-06 01:15:17.791399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.235 [2024-12-06 01:15:17.791445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.235 qpair failed and we were unable to recover it. 00:37:45.235 [2024-12-06 01:15:17.791548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.235 [2024-12-06 01:15:17.791583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.235 qpair failed and we were unable to recover it. 00:37:45.235 [2024-12-06 01:15:17.791679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.235 [2024-12-06 01:15:17.791713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.235 qpair failed and we were unable to recover it. 00:37:45.235 [2024-12-06 01:15:17.791930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.235 [2024-12-06 01:15:17.791965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.235 qpair failed and we were unable to recover it. 00:37:45.235 [2024-12-06 01:15:17.792126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.235 [2024-12-06 01:15:17.792161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.235 qpair failed and we were unable to recover it. 00:37:45.235 [2024-12-06 01:15:17.792295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.235 [2024-12-06 01:15:17.792330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.235 qpair failed and we were unable to recover it. 00:37:45.235 [2024-12-06 01:15:17.792490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.235 [2024-12-06 01:15:17.792525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.235 qpair failed and we were unable to recover it. 
00:37:45.235 [2024-12-06 01:15:17.792658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.235 [2024-12-06 01:15:17.792692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.235 qpair failed and we were unable to recover it. 00:37:45.235 [2024-12-06 01:15:17.792847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.235 [2024-12-06 01:15:17.792884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.235 qpair failed and we were unable to recover it. 00:37:45.235 [2024-12-06 01:15:17.793010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.235 [2024-12-06 01:15:17.793058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.235 qpair failed and we were unable to recover it. 00:37:45.235 [2024-12-06 01:15:17.793253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.235 [2024-12-06 01:15:17.793288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.235 qpair failed and we were unable to recover it. 00:37:45.235 [2024-12-06 01:15:17.793427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.235 [2024-12-06 01:15:17.793461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.235 qpair failed and we were unable to recover it. 00:37:45.235 [2024-12-06 01:15:17.793621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.235 [2024-12-06 01:15:17.793654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.235 qpair failed and we were unable to recover it. 00:37:45.235 [2024-12-06 01:15:17.793757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.235 [2024-12-06 01:15:17.793805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.235 qpair failed and we were unable to recover it. 00:37:45.235 [2024-12-06 01:15:17.793976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.235 [2024-12-06 01:15:17.794011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.235 qpair failed and we were unable to recover it. 00:37:45.235 [2024-12-06 01:15:17.794145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.235 [2024-12-06 01:15:17.794180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.235 qpair failed and we were unable to recover it. 00:37:45.235 [2024-12-06 01:15:17.794312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.235 [2024-12-06 01:15:17.794346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.235 qpair failed and we were unable to recover it. 
00:37:45.235 [2024-12-06 01:15:17.794513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.235 [2024-12-06 01:15:17.794548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.235 qpair failed and we were unable to recover it. 00:37:45.235 [2024-12-06 01:15:17.794712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.235 [2024-12-06 01:15:17.794759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.235 qpair failed and we were unable to recover it. 00:37:45.235 [2024-12-06 01:15:17.794886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.235 [2024-12-06 01:15:17.794921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.235 qpair failed and we were unable to recover it. 00:37:45.235 [2024-12-06 01:15:17.795078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.235 [2024-12-06 01:15:17.795112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.235 qpair failed and we were unable to recover it. 00:37:45.235 [2024-12-06 01:15:17.795214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.235 [2024-12-06 01:15:17.795248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.235 qpair failed and we were unable to recover it. 00:37:45.235 [2024-12-06 01:15:17.795412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.235 [2024-12-06 01:15:17.795445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.235 qpair failed and we were unable to recover it. 00:37:45.235 [2024-12-06 01:15:17.795556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.235 [2024-12-06 01:15:17.795590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.235 qpair failed and we were unable to recover it. 00:37:45.235 [2024-12-06 01:15:17.795722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.235 [2024-12-06 01:15:17.795756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.235 qpair failed and we were unable to recover it. 00:37:45.235 [2024-12-06 01:15:17.795935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.235 [2024-12-06 01:15:17.795971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.235 qpair failed and we were unable to recover it. 00:37:45.235 [2024-12-06 01:15:17.796160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.235 [2024-12-06 01:15:17.796208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.235 qpair failed and we were unable to recover it. 
00:37:45.235 [2024-12-06 01:15:17.796363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.235 [2024-12-06 01:15:17.796399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.235 qpair failed and we were unable to recover it. 00:37:45.235 [2024-12-06 01:15:17.796500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.235 [2024-12-06 01:15:17.796534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.235 qpair failed and we were unable to recover it. 00:37:45.235 [2024-12-06 01:15:17.796668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.235 [2024-12-06 01:15:17.796702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.235 qpair failed and we were unable to recover it. 00:37:45.235 [2024-12-06 01:15:17.796879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.235 [2024-12-06 01:15:17.796929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.235 qpair failed and we were unable to recover it. 00:37:45.235 [2024-12-06 01:15:17.797072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.235 [2024-12-06 01:15:17.797108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.235 qpair failed and we were unable to recover it. 00:37:45.235 [2024-12-06 01:15:17.797219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.235 [2024-12-06 01:15:17.797254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.235 qpair failed and we were unable to recover it. 00:37:45.235 [2024-12-06 01:15:17.797399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.235 [2024-12-06 01:15:17.797433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.235 qpair failed and we were unable to recover it. 00:37:45.235 [2024-12-06 01:15:17.797537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.235 [2024-12-06 01:15:17.797572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.235 qpair failed and we were unable to recover it. 00:37:45.235 [2024-12-06 01:15:17.797736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.235 [2024-12-06 01:15:17.797771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.235 qpair failed and we were unable to recover it. 00:37:45.235 [2024-12-06 01:15:17.797926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.235 [2024-12-06 01:15:17.797961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.235 qpair failed and we were unable to recover it. 
00:37:45.235 [2024-12-06 01:15:17.798090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.235 [2024-12-06 01:15:17.798123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.235 qpair failed and we were unable to recover it. 00:37:45.235 [2024-12-06 01:15:17.798254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.235 [2024-12-06 01:15:17.798288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.235 qpair failed and we were unable to recover it. 00:37:45.235 [2024-12-06 01:15:17.798422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.235 [2024-12-06 01:15:17.798455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.235 qpair failed and we were unable to recover it. 00:37:45.235 [2024-12-06 01:15:17.798565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.235 [2024-12-06 01:15:17.798598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.235 qpair failed and we were unable to recover it. 00:37:45.235 [2024-12-06 01:15:17.798702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.235 [2024-12-06 01:15:17.798737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.235 qpair failed and we were unable to recover it. 00:37:45.235 [2024-12-06 01:15:17.798884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.235 [2024-12-06 01:15:17.798919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.235 qpair failed and we were unable to recover it. 00:37:45.235 [2024-12-06 01:15:17.799067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.235 [2024-12-06 01:15:17.799102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.236 qpair failed and we were unable to recover it. 00:37:45.236 [2024-12-06 01:15:17.799250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.236 [2024-12-06 01:15:17.799286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.236 qpair failed and we were unable to recover it. 00:37:45.236 [2024-12-06 01:15:17.799421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.236 [2024-12-06 01:15:17.799457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.236 qpair failed and we were unable to recover it. 00:37:45.236 [2024-12-06 01:15:17.799574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.236 [2024-12-06 01:15:17.799606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.236 qpair failed and we were unable to recover it. 
00:37:45.236 [2024-12-06 01:15:17.799745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.236 [2024-12-06 01:15:17.799780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.236 qpair failed and we were unable to recover it. 00:37:45.236 [2024-12-06 01:15:17.799927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.236 [2024-12-06 01:15:17.799963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.236 qpair failed and we were unable to recover it. 00:37:45.236 [2024-12-06 01:15:17.800101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.236 [2024-12-06 01:15:17.800135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.236 qpair failed and we were unable to recover it. 00:37:45.236 [2024-12-06 01:15:17.800274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.236 [2024-12-06 01:15:17.800309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.236 qpair failed and we were unable to recover it. 00:37:45.236 [2024-12-06 01:15:17.800441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.236 [2024-12-06 01:15:17.800475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.236 qpair failed and we were unable to recover it. 00:37:45.236 [2024-12-06 01:15:17.800638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.236 [2024-12-06 01:15:17.800672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.236 qpair failed and we were unable to recover it. 00:37:45.236 [2024-12-06 01:15:17.800840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.236 [2024-12-06 01:15:17.800881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.236 qpair failed and we were unable to recover it. 00:37:45.236 [2024-12-06 01:15:17.801001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.236 [2024-12-06 01:15:17.801050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.236 qpair failed and we were unable to recover it. 00:37:45.236 [2024-12-06 01:15:17.801197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.236 [2024-12-06 01:15:17.801233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.236 qpair failed and we were unable to recover it. 00:37:45.236 [2024-12-06 01:15:17.801395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.236 [2024-12-06 01:15:17.801429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.236 qpair failed and we were unable to recover it. 
00:37:45.236 [2024-12-06 01:15:17.801562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.236 [2024-12-06 01:15:17.801597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.236 qpair failed and we were unable to recover it. 00:37:45.236 [2024-12-06 01:15:17.801768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.236 [2024-12-06 01:15:17.801806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.236 qpair failed and we were unable to recover it. 00:37:45.236 [2024-12-06 01:15:17.801946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.236 [2024-12-06 01:15:17.801982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.236 qpair failed and we were unable to recover it. 00:37:45.236 [2024-12-06 01:15:17.802085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.236 [2024-12-06 01:15:17.802124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.236 qpair failed and we were unable to recover it. 00:37:45.236 [2024-12-06 01:15:17.802262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.236 [2024-12-06 01:15:17.802295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.236 qpair failed and we were unable to recover it. 00:37:45.236 [2024-12-06 01:15:17.802466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.236 [2024-12-06 01:15:17.802499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.236 qpair failed and we were unable to recover it. 00:37:45.236 [2024-12-06 01:15:17.802631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.236 [2024-12-06 01:15:17.802665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.236 qpair failed and we were unable to recover it. 00:37:45.236 [2024-12-06 01:15:17.802794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.236 [2024-12-06 01:15:17.802833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.236 qpair failed and we were unable to recover it. 00:37:45.236 [2024-12-06 01:15:17.802962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.236 [2024-12-06 01:15:17.803010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.236 qpair failed and we were unable to recover it. 00:37:45.236 [2024-12-06 01:15:17.803148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.236 [2024-12-06 01:15:17.803184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.236 qpair failed and we were unable to recover it. 
00:37:45.236 [2024-12-06 01:15:17.803321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.236 [2024-12-06 01:15:17.803356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.236 qpair failed and we were unable to recover it. 00:37:45.236 [2024-12-06 01:15:17.803487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.236 [2024-12-06 01:15:17.803522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.236 qpair failed and we were unable to recover it. 00:37:45.236 [2024-12-06 01:15:17.803656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.236 [2024-12-06 01:15:17.803691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.236 qpair failed and we were unable to recover it. 00:37:45.236 [2024-12-06 01:15:17.803857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.236 [2024-12-06 01:15:17.803892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.236 qpair failed and we were unable to recover it. 00:37:45.236 [2024-12-06 01:15:17.804003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.236 [2024-12-06 01:15:17.804039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.236 qpair failed and we were unable to recover it. 00:37:45.236 [2024-12-06 01:15:17.804195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.236 [2024-12-06 01:15:17.804229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.236 qpair failed and we were unable to recover it. 00:37:45.236 [2024-12-06 01:15:17.804357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.236 [2024-12-06 01:15:17.804391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.236 qpair failed and we were unable to recover it. 00:37:45.236 [2024-12-06 01:15:17.804556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.236 [2024-12-06 01:15:17.804592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.236 qpair failed and we were unable to recover it. 00:37:45.236 [2024-12-06 01:15:17.804711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.236 [2024-12-06 01:15:17.804749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.236 qpair failed and we were unable to recover it. 00:37:45.236 [2024-12-06 01:15:17.804912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.236 [2024-12-06 01:15:17.804947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.236 qpair failed and we were unable to recover it. 
00:37:45.236 [2024-12-06 01:15:17.805054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.236 [2024-12-06 01:15:17.805098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.236 qpair failed and we were unable to recover it. 00:37:45.236 [2024-12-06 01:15:17.805265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.236 [2024-12-06 01:15:17.805298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.236 qpair failed and we were unable to recover it. 00:37:45.236 [2024-12-06 01:15:17.805427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.236 [2024-12-06 01:15:17.805460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.236 qpair failed and we were unable to recover it. 00:37:45.236 [2024-12-06 01:15:17.805598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.236 [2024-12-06 01:15:17.805631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.236 qpair failed and we were unable to recover it. 00:37:45.236 [2024-12-06 01:15:17.805771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.236 [2024-12-06 01:15:17.805806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.236 qpair failed and we were unable to recover it. 00:37:45.236 [2024-12-06 01:15:17.805973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.236 [2024-12-06 01:15:17.806007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.236 qpair failed and we were unable to recover it. 00:37:45.236 [2024-12-06 01:15:17.806183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.236 [2024-12-06 01:15:17.806217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.236 qpair failed and we were unable to recover it. 00:37:45.236 [2024-12-06 01:15:17.806351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.236 [2024-12-06 01:15:17.806385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.236 qpair failed and we were unable to recover it. 00:37:45.236 [2024-12-06 01:15:17.806517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.236 [2024-12-06 01:15:17.806550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.236 qpair failed and we were unable to recover it. 00:37:45.236 [2024-12-06 01:15:17.806685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.236 [2024-12-06 01:15:17.806719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.236 qpair failed and we were unable to recover it. 
00:37:45.236 [2024-12-06 01:15:17.806921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.236 [2024-12-06 01:15:17.806957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.236 qpair failed and we were unable to recover it. 00:37:45.236 [2024-12-06 01:15:17.807103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.236 [2024-12-06 01:15:17.807136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.236 qpair failed and we were unable to recover it. 00:37:45.236 [2024-12-06 01:15:17.807259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.236 [2024-12-06 01:15:17.807293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.236 qpair failed and we were unable to recover it. 00:37:45.236 [2024-12-06 01:15:17.807424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.236 [2024-12-06 01:15:17.807457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.236 qpair failed and we were unable to recover it. 00:37:45.236 [2024-12-06 01:15:17.807566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.236 [2024-12-06 01:15:17.807599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.236 qpair failed and we were unable to recover it. 00:37:45.236 [2024-12-06 01:15:17.807733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.236 [2024-12-06 01:15:17.807767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.236 qpair failed and we were unable to recover it. 00:37:45.236 [2024-12-06 01:15:17.807889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.236 [2024-12-06 01:15:17.807929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.236 qpair failed and we were unable to recover it. 00:37:45.236 [2024-12-06 01:15:17.808061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.236 [2024-12-06 01:15:17.808099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.236 qpair failed and we were unable to recover it. 00:37:45.236 [2024-12-06 01:15:17.808215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.236 [2024-12-06 01:15:17.808249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.236 qpair failed and we were unable to recover it. 00:37:45.236 [2024-12-06 01:15:17.808351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.236 [2024-12-06 01:15:17.808385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.236 qpair failed and we were unable to recover it. 
00:37:45.236 [2024-12-06 01:15:17.808518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.236 [2024-12-06 01:15:17.808551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.236 qpair failed and we were unable to recover it. 00:37:45.236 [2024-12-06 01:15:17.808653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.236 [2024-12-06 01:15:17.808687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.236 qpair failed and we were unable to recover it. 00:37:45.236 [2024-12-06 01:15:17.808827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.236 [2024-12-06 01:15:17.808862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.236 qpair failed and we were unable to recover it. 00:37:45.236 [2024-12-06 01:15:17.808992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.237 [2024-12-06 01:15:17.809025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.237 qpair failed and we were unable to recover it. 00:37:45.237 [2024-12-06 01:15:17.809170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.237 [2024-12-06 01:15:17.809203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.237 qpair failed and we were unable to recover it. 00:37:45.237 [2024-12-06 01:15:17.809311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.237 [2024-12-06 01:15:17.809344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.237 qpair failed and we were unable to recover it. 00:37:45.237 [2024-12-06 01:15:17.809477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.237 [2024-12-06 01:15:17.809510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.237 qpair failed and we were unable to recover it. 00:37:45.237 [2024-12-06 01:15:17.809669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.237 [2024-12-06 01:15:17.809718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.237 qpair failed and we were unable to recover it. 00:37:45.237 [2024-12-06 01:15:17.809871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.237 [2024-12-06 01:15:17.809906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.237 qpair failed and we were unable to recover it. 00:37:45.237 [2024-12-06 01:15:17.810039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.237 [2024-12-06 01:15:17.810074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.237 qpair failed and we were unable to recover it. 
00:37:45.237 [2024-12-06 01:15:17.810217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.237 [2024-12-06 01:15:17.810251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.237 qpair failed and we were unable to recover it. 00:37:45.237 [2024-12-06 01:15:17.810376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.237 [2024-12-06 01:15:17.810409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.237 qpair failed and we were unable to recover it. 00:37:45.237 [2024-12-06 01:15:17.810521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.237 [2024-12-06 01:15:17.810554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.237 qpair failed and we were unable to recover it. 00:37:45.237 [2024-12-06 01:15:17.810688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.237 [2024-12-06 01:15:17.810722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.237 qpair failed and we were unable to recover it. 00:37:45.237 [2024-12-06 01:15:17.810849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.237 [2024-12-06 01:15:17.810899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.237 qpair failed and we were unable to recover it. 00:37:45.237 [2024-12-06 01:15:17.811054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.237 [2024-12-06 01:15:17.811101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.237 qpair failed and we were unable to recover it. 00:37:45.237 [2024-12-06 01:15:17.811209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.237 [2024-12-06 01:15:17.811243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.237 qpair failed and we were unable to recover it. 00:37:45.237 [2024-12-06 01:15:17.811387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.237 [2024-12-06 01:15:17.811422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.237 qpair failed and we were unable to recover it. 00:37:45.237 [2024-12-06 01:15:17.811525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.237 [2024-12-06 01:15:17.811559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.237 qpair failed and we were unable to recover it. 00:37:45.237 [2024-12-06 01:15:17.811711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.237 [2024-12-06 01:15:17.811760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.237 qpair failed and we were unable to recover it. 
00:37:45.237 [2024-12-06 01:15:17.811928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.237 [2024-12-06 01:15:17.811963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.237 qpair failed and we were unable to recover it. 00:37:45.237 [2024-12-06 01:15:17.812107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.237 [2024-12-06 01:15:17.812141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.237 qpair failed and we were unable to recover it. 00:37:45.237 [2024-12-06 01:15:17.812304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.237 [2024-12-06 01:15:17.812338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.237 qpair failed and we were unable to recover it. 00:37:45.237 [2024-12-06 01:15:17.812443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.237 [2024-12-06 01:15:17.812476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.237 qpair failed and we were unable to recover it. 00:37:45.237 [2024-12-06 01:15:17.812578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.237 [2024-12-06 01:15:17.812612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.237 qpair failed and we were unable to recover it. 00:37:45.237 [2024-12-06 01:15:17.812783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.237 [2024-12-06 01:15:17.812828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.237 qpair failed and we were unable to recover it. 00:37:45.237 [2024-12-06 01:15:17.812948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.237 [2024-12-06 01:15:17.812997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.237 qpair failed and we were unable to recover it. 00:37:45.237 [2024-12-06 01:15:17.813152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.237 [2024-12-06 01:15:17.813188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.237 qpair failed and we were unable to recover it. 00:37:45.237 [2024-12-06 01:15:17.813293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.237 [2024-12-06 01:15:17.813327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.237 qpair failed and we were unable to recover it. 00:37:45.237 [2024-12-06 01:15:17.813464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.237 [2024-12-06 01:15:17.813497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.237 qpair failed and we were unable to recover it. 
00:37:45.237 [2024-12-06 01:15:17.813626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.237 [2024-12-06 01:15:17.813660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.237 qpair failed and we were unable to recover it. 00:37:45.237 [2024-12-06 01:15:17.813803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.237 [2024-12-06 01:15:17.813847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.237 qpair failed and we were unable to recover it. 00:37:45.237 [2024-12-06 01:15:17.813977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.237 [2024-12-06 01:15:17.814026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.237 qpair failed and we were unable to recover it. 00:37:45.237 [2024-12-06 01:15:17.814178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.237 [2024-12-06 01:15:17.814215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.237 qpair failed and we were unable to recover it. 00:37:45.237 [2024-12-06 01:15:17.814326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.237 [2024-12-06 01:15:17.814362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.237 qpair failed and we were unable to recover it. 00:37:45.237 [2024-12-06 01:15:17.814509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.237 [2024-12-06 01:15:17.814543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.237 qpair failed and we were unable to recover it. 00:37:45.237 [2024-12-06 01:15:17.814639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.237 [2024-12-06 01:15:17.814678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.237 qpair failed and we were unable to recover it. 00:37:45.237 [2024-12-06 01:15:17.814822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.237 [2024-12-06 01:15:17.814856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.237 qpair failed and we were unable to recover it. 00:37:45.237 [2024-12-06 01:15:17.814997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.237 [2024-12-06 01:15:17.815032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.237 qpair failed and we were unable to recover it. 00:37:45.237 [2024-12-06 01:15:17.815194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.237 [2024-12-06 01:15:17.815228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.237 qpair failed and we were unable to recover it. 
00:37:45.237 [2024-12-06 01:15:17.815365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.237 [2024-12-06 01:15:17.815399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.237 qpair failed and we were unable to recover it. 00:37:45.237 [2024-12-06 01:15:17.815511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.237 [2024-12-06 01:15:17.815545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.237 qpair failed and we were unable to recover it. 00:37:45.237 [2024-12-06 01:15:17.815649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.237 [2024-12-06 01:15:17.815683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.237 qpair failed and we were unable to recover it. 00:37:45.237 [2024-12-06 01:15:17.815833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.237 [2024-12-06 01:15:17.815868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.237 qpair failed and we were unable to recover it. 00:37:45.237 [2024-12-06 01:15:17.815972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.237 [2024-12-06 01:15:17.816006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.237 qpair failed and we were unable to recover it. 00:37:45.237 [2024-12-06 01:15:17.816144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.237 [2024-12-06 01:15:17.816178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.237 qpair failed and we were unable to recover it. 00:37:45.237 [2024-12-06 01:15:17.816338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.237 [2024-12-06 01:15:17.816372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.237 qpair failed and we were unable to recover it. 00:37:45.237 [2024-12-06 01:15:17.816507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.237 [2024-12-06 01:15:17.816541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.237 qpair failed and we were unable to recover it. 00:37:45.237 [2024-12-06 01:15:17.816674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.237 [2024-12-06 01:15:17.816707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.237 qpair failed and we were unable to recover it. 00:37:45.237 [2024-12-06 01:15:17.816878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.237 [2024-12-06 01:15:17.816928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.237 qpair failed and we were unable to recover it. 
00:37:45.237 [2024-12-06 01:15:17.817075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.237 [2024-12-06 01:15:17.817122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.237 qpair failed and we were unable to recover it. 00:37:45.237 [2024-12-06 01:15:17.817277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.237 [2024-12-06 01:15:17.817312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.237 qpair failed and we were unable to recover it. 00:37:45.237 [2024-12-06 01:15:17.817446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.237 [2024-12-06 01:15:17.817480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.237 qpair failed and we were unable to recover it. 00:37:45.237 [2024-12-06 01:15:17.817591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.237 [2024-12-06 01:15:17.817625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.237 qpair failed and we were unable to recover it. 00:37:45.238 [2024-12-06 01:15:17.817760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.238 [2024-12-06 01:15:17.817793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.238 qpair failed and we were unable to recover it. 00:37:45.238 [2024-12-06 01:15:17.817937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.238 [2024-12-06 01:15:17.817972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.238 qpair failed and we were unable to recover it. 00:37:45.238 [2024-12-06 01:15:17.818118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.238 [2024-12-06 01:15:17.818167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.238 qpair failed and we were unable to recover it. 00:37:45.238 [2024-12-06 01:15:17.818337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.238 [2024-12-06 01:15:17.818374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.238 qpair failed and we were unable to recover it. 00:37:45.238 [2024-12-06 01:15:17.818493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.238 [2024-12-06 01:15:17.818529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.238 qpair failed and we were unable to recover it. 00:37:45.238 [2024-12-06 01:15:17.818691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.238 [2024-12-06 01:15:17.818725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.238 qpair failed and we were unable to recover it. 
00:37:45.238 [2024-12-06 01:15:17.818861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.238 [2024-12-06 01:15:17.818896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.238 qpair failed and we were unable to recover it. 00:37:45.238 [2024-12-06 01:15:17.819063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.238 [2024-12-06 01:15:17.819098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.238 qpair failed and we were unable to recover it. 00:37:45.238 [2024-12-06 01:15:17.819240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.238 [2024-12-06 01:15:17.819274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.238 qpair failed and we were unable to recover it. 00:37:45.238 [2024-12-06 01:15:17.819380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.238 [2024-12-06 01:15:17.819414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.238 qpair failed and we were unable to recover it. 00:37:45.238 [2024-12-06 01:15:17.819545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.238 [2024-12-06 01:15:17.819579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.238 qpair failed and we were unable to recover it. 00:37:45.238 [2024-12-06 01:15:17.819741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.238 [2024-12-06 01:15:17.819774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.238 qpair failed and we were unable to recover it. 00:37:45.238 [2024-12-06 01:15:17.819923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.238 [2024-12-06 01:15:17.819956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.238 qpair failed and we were unable to recover it. 00:37:45.238 [2024-12-06 01:15:17.820087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.238 [2024-12-06 01:15:17.820121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.238 qpair failed and we were unable to recover it. 00:37:45.238 [2024-12-06 01:15:17.820216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.238 [2024-12-06 01:15:17.820250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.238 qpair failed and we were unable to recover it. 00:37:45.238 [2024-12-06 01:15:17.820378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.238 [2024-12-06 01:15:17.820411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.238 qpair failed and we were unable to recover it. 
00:37:45.238 [2024-12-06 01:15:17.820543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.238 [2024-12-06 01:15:17.820576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.238 qpair failed and we were unable to recover it. 00:37:45.238 [2024-12-06 01:15:17.820704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.238 [2024-12-06 01:15:17.820740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.238 qpair failed and we were unable to recover it. 00:37:45.238 [2024-12-06 01:15:17.820875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.238 [2024-12-06 01:15:17.820908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.238 qpair failed and we were unable to recover it. 00:37:45.238 [2024-12-06 01:15:17.821060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.238 [2024-12-06 01:15:17.821097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.238 qpair failed and we were unable to recover it. 00:37:45.238 [2024-12-06 01:15:17.821240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.238 [2024-12-06 01:15:17.821275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.238 qpair failed and we were unable to recover it. 00:37:45.238 [2024-12-06 01:15:17.821410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.238 [2024-12-06 01:15:17.821445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.238 qpair failed and we were unable to recover it. 00:37:45.238 [2024-12-06 01:15:17.821610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.238 [2024-12-06 01:15:17.821650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.238 qpair failed and we were unable to recover it. 00:37:45.238 [2024-12-06 01:15:17.821798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.238 [2024-12-06 01:15:17.821847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.238 qpair failed and we were unable to recover it. 00:37:45.238 [2024-12-06 01:15:17.821985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.238 [2024-12-06 01:15:17.822029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.238 qpair failed and we were unable to recover it. 00:37:45.238 [2024-12-06 01:15:17.822131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.238 [2024-12-06 01:15:17.822164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.238 qpair failed and we were unable to recover it. 
00:37:45.238 [2024-12-06 01:15:17.822266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.238 [2024-12-06 01:15:17.822300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.238 qpair failed and we were unable to recover it.
[... the same three-line failure sequence (posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111; nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error; "qpair failed and we were unable to recover it.") repeats continuously from 01:15:17.822 through 01:15:17.847, cycling over tqpairs 0x6150001ffe80, 0x6150001f2f00, 0x61500021ff00 and 0x615000210000, all with addr=10.0.0.2, port=4420 ...]
00:37:45.827 [2024-12-06 01:15:18.229973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.827 [2024-12-06 01:15:18.230059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.827 qpair failed and we were unable to recover it.
[... the same failure sequence then repeats from 01:15:18.230 through 01:15:18.242, alternating over tqpairs 0x6150001f2f00, 0x61500021ff00 and 0x6150001ffe80, still with addr=10.0.0.2, port=4420 ...]
00:37:45.829 [2024-12-06 01:15:18.242566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.829 [2024-12-06 01:15:18.242600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.829 qpair failed and we were unable to recover it. 00:37:45.829 [2024-12-06 01:15:18.242741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.829 [2024-12-06 01:15:18.242774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.829 qpair failed and we were unable to recover it. 00:37:45.829 [2024-12-06 01:15:18.242895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.829 [2024-12-06 01:15:18.242929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.829 qpair failed and we were unable to recover it. 00:37:45.829 [2024-12-06 01:15:18.243086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.829 [2024-12-06 01:15:18.243123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.829 qpair failed and we were unable to recover it. 00:37:45.829 [2024-12-06 01:15:18.243265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.829 [2024-12-06 01:15:18.243300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.829 qpair failed and we were unable to recover it. 00:37:45.829 [2024-12-06 01:15:18.243501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.829 [2024-12-06 01:15:18.243536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.829 qpair failed and we were unable to recover it. 00:37:45.829 [2024-12-06 01:15:18.243677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.829 [2024-12-06 01:15:18.243712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.829 qpair failed and we were unable to recover it. 00:37:45.829 [2024-12-06 01:15:18.243887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.829 [2024-12-06 01:15:18.243936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.829 qpair failed and we were unable to recover it. 00:37:45.829 [2024-12-06 01:15:18.244053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.829 [2024-12-06 01:15:18.244099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.829 qpair failed and we were unable to recover it. 00:37:45.829 [2024-12-06 01:15:18.244210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.829 [2024-12-06 01:15:18.244245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.829 qpair failed and we were unable to recover it. 
00:37:45.829 [2024-12-06 01:15:18.244422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.829 [2024-12-06 01:15:18.244456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.829 qpair failed and we were unable to recover it. 00:37:45.829 [2024-12-06 01:15:18.244594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.829 [2024-12-06 01:15:18.244627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.829 qpair failed and we were unable to recover it. 00:37:45.829 [2024-12-06 01:15:18.244739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.829 [2024-12-06 01:15:18.244773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.829 qpair failed and we were unable to recover it. 00:37:45.829 [2024-12-06 01:15:18.244941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.829 [2024-12-06 01:15:18.244980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.829 qpair failed and we were unable to recover it. 00:37:45.829 [2024-12-06 01:15:18.245117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.829 [2024-12-06 01:15:18.245152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.829 qpair failed and we were unable to recover it. 00:37:45.829 [2024-12-06 01:15:18.245312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.829 [2024-12-06 01:15:18.245347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.829 qpair failed and we were unable to recover it. 00:37:45.829 [2024-12-06 01:15:18.245485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.829 [2024-12-06 01:15:18.245520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.829 qpair failed and we were unable to recover it. 00:37:45.829 [2024-12-06 01:15:18.245655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.829 [2024-12-06 01:15:18.245690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.829 qpair failed and we were unable to recover it. 00:37:45.829 [2024-12-06 01:15:18.245828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.829 [2024-12-06 01:15:18.245866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.829 qpair failed and we were unable to recover it. 00:37:45.830 [2024-12-06 01:15:18.246005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.830 [2024-12-06 01:15:18.246040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.830 qpair failed and we were unable to recover it. 
00:37:45.830 [2024-12-06 01:15:18.246203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.830 [2024-12-06 01:15:18.246237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.830 qpair failed and we were unable to recover it. 00:37:45.830 [2024-12-06 01:15:18.246373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.830 [2024-12-06 01:15:18.246407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.830 qpair failed and we were unable to recover it. 00:37:45.830 [2024-12-06 01:15:18.246547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.830 [2024-12-06 01:15:18.246581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.830 qpair failed and we were unable to recover it. 00:37:45.830 [2024-12-06 01:15:18.246775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.830 [2024-12-06 01:15:18.246810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.830 qpair failed and we were unable to recover it. 00:37:45.830 [2024-12-06 01:15:18.246986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.830 [2024-12-06 01:15:18.247035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.830 qpair failed and we were unable to recover it. 00:37:45.830 [2024-12-06 01:15:18.247154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.830 [2024-12-06 01:15:18.247191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.830 qpair failed and we were unable to recover it. 00:37:45.830 [2024-12-06 01:15:18.247355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.830 [2024-12-06 01:15:18.247391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.830 qpair failed and we were unable to recover it. 00:37:45.830 [2024-12-06 01:15:18.247531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.830 [2024-12-06 01:15:18.247565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.830 qpair failed and we were unable to recover it. 00:37:45.830 [2024-12-06 01:15:18.247688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.830 [2024-12-06 01:15:18.247739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.830 qpair failed and we were unable to recover it. 00:37:45.830 [2024-12-06 01:15:18.247921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.830 [2024-12-06 01:15:18.247960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.830 qpair failed and we were unable to recover it. 
00:37:45.830 [2024-12-06 01:15:18.248104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.830 [2024-12-06 01:15:18.248141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.830 qpair failed and we were unable to recover it. 00:37:45.830 [2024-12-06 01:15:18.248293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.830 [2024-12-06 01:15:18.248329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.830 qpair failed and we were unable to recover it. 00:37:45.830 [2024-12-06 01:15:18.248466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.830 [2024-12-06 01:15:18.248501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.830 qpair failed and we were unable to recover it. 00:37:45.830 [2024-12-06 01:15:18.248684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.830 [2024-12-06 01:15:18.248721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.830 qpair failed and we were unable to recover it. 00:37:45.830 [2024-12-06 01:15:18.248824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.830 [2024-12-06 01:15:18.248866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.830 qpair failed and we were unable to recover it. 00:37:45.830 [2024-12-06 01:15:18.249002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.830 [2024-12-06 01:15:18.249037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.830 qpair failed and we were unable to recover it. 00:37:45.830 [2024-12-06 01:15:18.249207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.830 [2024-12-06 01:15:18.249240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.830 qpair failed and we were unable to recover it. 00:37:45.830 [2024-12-06 01:15:18.249400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.830 [2024-12-06 01:15:18.249433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.830 qpair failed and we were unable to recover it. 00:37:45.830 [2024-12-06 01:15:18.249542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.830 [2024-12-06 01:15:18.249576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.830 qpair failed and we were unable to recover it. 00:37:45.830 [2024-12-06 01:15:18.249742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.830 [2024-12-06 01:15:18.249777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.830 qpair failed and we were unable to recover it. 
00:37:45.830 [2024-12-06 01:15:18.249951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.830 [2024-12-06 01:15:18.250000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.830 qpair failed and we were unable to recover it. 00:37:45.830 [2024-12-06 01:15:18.250151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.830 [2024-12-06 01:15:18.250187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.830 qpair failed and we were unable to recover it. 00:37:45.830 [2024-12-06 01:15:18.250351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.830 [2024-12-06 01:15:18.250386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.830 qpair failed and we were unable to recover it. 00:37:45.830 [2024-12-06 01:15:18.250492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.830 [2024-12-06 01:15:18.250526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.830 qpair failed and we were unable to recover it. 00:37:45.830 [2024-12-06 01:15:18.250639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.830 [2024-12-06 01:15:18.250674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.830 qpair failed and we were unable to recover it. 00:37:45.830 [2024-12-06 01:15:18.250803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.830 [2024-12-06 01:15:18.250857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.830 qpair failed and we were unable to recover it. 00:37:45.830 [2024-12-06 01:15:18.250991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.830 [2024-12-06 01:15:18.251028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.830 qpair failed and we were unable to recover it. 00:37:45.830 [2024-12-06 01:15:18.251188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.830 [2024-12-06 01:15:18.251223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.830 qpair failed and we were unable to recover it. 00:37:45.830 [2024-12-06 01:15:18.251361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.830 [2024-12-06 01:15:18.251395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.830 qpair failed and we were unable to recover it. 00:37:45.830 [2024-12-06 01:15:18.251557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.830 [2024-12-06 01:15:18.251592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.830 qpair failed and we were unable to recover it. 
00:37:45.830 [2024-12-06 01:15:18.251705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.830 [2024-12-06 01:15:18.251739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.830 qpair failed and we were unable to recover it. 00:37:45.830 [2024-12-06 01:15:18.251893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.830 [2024-12-06 01:15:18.251928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.830 qpair failed and we were unable to recover it. 00:37:45.830 [2024-12-06 01:15:18.252027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.830 [2024-12-06 01:15:18.252061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.830 qpair failed and we were unable to recover it. 00:37:45.830 [2024-12-06 01:15:18.252206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.830 [2024-12-06 01:15:18.252245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.830 qpair failed and we were unable to recover it. 00:37:45.830 [2024-12-06 01:15:18.252381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.830 [2024-12-06 01:15:18.252415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.830 qpair failed and we were unable to recover it. 00:37:45.830 [2024-12-06 01:15:18.252559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.830 [2024-12-06 01:15:18.252592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.830 qpair failed and we were unable to recover it. 00:37:45.830 [2024-12-06 01:15:18.252702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.830 [2024-12-06 01:15:18.252736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.830 qpair failed and we were unable to recover it. 00:37:45.831 [2024-12-06 01:15:18.252875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.831 [2024-12-06 01:15:18.252910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.831 qpair failed and we were unable to recover it. 00:37:45.831 [2024-12-06 01:15:18.253092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.831 [2024-12-06 01:15:18.253143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.831 qpair failed and we were unable to recover it. 00:37:45.831 [2024-12-06 01:15:18.253276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.831 [2024-12-06 01:15:18.253315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.831 qpair failed and we were unable to recover it. 
00:37:45.831 [2024-12-06 01:15:18.253455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.831 [2024-12-06 01:15:18.253491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.831 qpair failed and we were unable to recover it. 00:37:45.831 [2024-12-06 01:15:18.253631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.831 [2024-12-06 01:15:18.253668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.831 qpair failed and we were unable to recover it. 00:37:45.831 [2024-12-06 01:15:18.253833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.831 [2024-12-06 01:15:18.253876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.831 qpair failed and we were unable to recover it. 00:37:45.831 [2024-12-06 01:15:18.254024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.831 [2024-12-06 01:15:18.254059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.831 qpair failed and we were unable to recover it. 00:37:45.831 [2024-12-06 01:15:18.254196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.831 [2024-12-06 01:15:18.254231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.831 qpair failed and we were unable to recover it. 00:37:45.831 [2024-12-06 01:15:18.254396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.831 [2024-12-06 01:15:18.254430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.831 qpair failed and we were unable to recover it. 00:37:45.831 [2024-12-06 01:15:18.254582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.831 [2024-12-06 01:15:18.254630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.831 qpair failed and we were unable to recover it. 00:37:45.831 [2024-12-06 01:15:18.254758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.831 [2024-12-06 01:15:18.254793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.831 qpair failed and we were unable to recover it. 00:37:45.831 [2024-12-06 01:15:18.254971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.831 [2024-12-06 01:15:18.255006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.831 qpair failed and we were unable to recover it. 00:37:45.831 [2024-12-06 01:15:18.255147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.831 [2024-12-06 01:15:18.255181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.831 qpair failed and we were unable to recover it. 
00:37:45.831 [2024-12-06 01:15:18.255314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.831 [2024-12-06 01:15:18.255348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.831 qpair failed and we were unable to recover it. 00:37:45.831 [2024-12-06 01:15:18.255485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.831 [2024-12-06 01:15:18.255520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.831 qpair failed and we were unable to recover it. 00:37:45.831 [2024-12-06 01:15:18.255662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.831 [2024-12-06 01:15:18.255698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.831 qpair failed and we were unable to recover it. 00:37:45.831 [2024-12-06 01:15:18.255847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.831 [2024-12-06 01:15:18.255883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.831 qpair failed and we were unable to recover it. 00:37:45.831 [2024-12-06 01:15:18.255990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.831 [2024-12-06 01:15:18.256025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.831 qpair failed and we were unable to recover it. 00:37:45.831 [2024-12-06 01:15:18.256192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.831 [2024-12-06 01:15:18.256226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.831 qpair failed and we were unable to recover it. 00:37:45.831 [2024-12-06 01:15:18.256384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.831 [2024-12-06 01:15:18.256418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.831 qpair failed and we were unable to recover it. 00:37:45.831 [2024-12-06 01:15:18.256585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.831 [2024-12-06 01:15:18.256620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.831 qpair failed and we were unable to recover it. 00:37:45.831 [2024-12-06 01:15:18.256759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.831 [2024-12-06 01:15:18.256794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.831 qpair failed and we were unable to recover it. 00:37:45.831 [2024-12-06 01:15:18.256988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.831 [2024-12-06 01:15:18.257036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.831 qpair failed and we were unable to recover it. 
00:37:45.831 [2024-12-06 01:15:18.257179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.831 [2024-12-06 01:15:18.257216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.831 qpair failed and we were unable to recover it. 00:37:45.831 [2024-12-06 01:15:18.257356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.831 [2024-12-06 01:15:18.257391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.831 qpair failed and we were unable to recover it. 00:37:45.831 [2024-12-06 01:15:18.257534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.831 [2024-12-06 01:15:18.257568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.831 qpair failed and we were unable to recover it. 00:37:45.831 [2024-12-06 01:15:18.257752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.831 [2024-12-06 01:15:18.257791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.831 qpair failed and we were unable to recover it. 00:37:45.831 [2024-12-06 01:15:18.257970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.831 [2024-12-06 01:15:18.258004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.831 qpair failed and we were unable to recover it. 00:37:45.831 [2024-12-06 01:15:18.258187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.831 [2024-12-06 01:15:18.258236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.831 qpair failed and we were unable to recover it. 00:37:45.831 [2024-12-06 01:15:18.258410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.831 [2024-12-06 01:15:18.258446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.831 qpair failed and we were unable to recover it. 00:37:45.831 [2024-12-06 01:15:18.258576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.831 [2024-12-06 01:15:18.258611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.831 qpair failed and we were unable to recover it. 00:37:45.831 [2024-12-06 01:15:18.258723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.831 [2024-12-06 01:15:18.258759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.831 qpair failed and we were unable to recover it. 00:37:45.831 [2024-12-06 01:15:18.258893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.831 [2024-12-06 01:15:18.258936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.831 qpair failed and we were unable to recover it. 
00:37:45.831 [2024-12-06 01:15:18.259051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.831 [2024-12-06 01:15:18.259085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.831 qpair failed and we were unable to recover it. 00:37:45.831 [2024-12-06 01:15:18.259248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.831 [2024-12-06 01:15:18.259283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.831 qpair failed and we were unable to recover it. 00:37:45.831 [2024-12-06 01:15:18.259410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.831 [2024-12-06 01:15:18.259444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.831 qpair failed and we were unable to recover it. 00:37:45.831 [2024-12-06 01:15:18.259554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.831 [2024-12-06 01:15:18.259595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.831 qpair failed and we were unable to recover it. 00:37:45.831 [2024-12-06 01:15:18.259747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.831 [2024-12-06 01:15:18.259796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.831 qpair failed and we were unable to recover it. 00:37:45.832 [2024-12-06 01:15:18.259938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.832 [2024-12-06 01:15:18.259974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.832 qpair failed and we were unable to recover it. 00:37:45.832 [2024-12-06 01:15:18.260114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.832 [2024-12-06 01:15:18.260149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.832 qpair failed and we were unable to recover it. 00:37:45.832 [2024-12-06 01:15:18.260279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.832 [2024-12-06 01:15:18.260314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.832 qpair failed and we were unable to recover it. 00:37:45.832 [2024-12-06 01:15:18.260414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.832 [2024-12-06 01:15:18.260447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.832 qpair failed and we were unable to recover it. 00:37:45.832 [2024-12-06 01:15:18.260603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.832 [2024-12-06 01:15:18.260654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.832 qpair failed and we were unable to recover it. 
00:37:45.832 [2024-12-06 01:15:18.260824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.832 [2024-12-06 01:15:18.260874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.832 qpair failed and we were unable to recover it. 00:37:45.832 [2024-12-06 01:15:18.261026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.832 [2024-12-06 01:15:18.261064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.832 qpair failed and we were unable to recover it. 00:37:45.832 [2024-12-06 01:15:18.261201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.832 [2024-12-06 01:15:18.261237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.832 qpair failed and we were unable to recover it. 00:37:45.832 [2024-12-06 01:15:18.261353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.832 [2024-12-06 01:15:18.261388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.832 qpair failed and we were unable to recover it. 00:37:45.832 [2024-12-06 01:15:18.261550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.832 [2024-12-06 01:15:18.261626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.832 qpair failed and we were unable to recover it. 00:37:45.832 [2024-12-06 01:15:18.261753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.832 [2024-12-06 01:15:18.261801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.832 qpair failed and we were unable to recover it. 00:37:45.832 [2024-12-06 01:15:18.261998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.832 [2024-12-06 01:15:18.262047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.832 qpair failed and we were unable to recover it. 00:37:45.832 [2024-12-06 01:15:18.262229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.832 [2024-12-06 01:15:18.262266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.832 qpair failed and we were unable to recover it. 00:37:45.832 [2024-12-06 01:15:18.262415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.832 [2024-12-06 01:15:18.262450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.832 qpair failed and we were unable to recover it. 00:37:45.832 [2024-12-06 01:15:18.262587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.832 [2024-12-06 01:15:18.262624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.832 qpair failed and we were unable to recover it. 
00:37:45.832 [2024-12-06 01:15:18.262756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.832 [2024-12-06 01:15:18.262798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.832 qpair failed and we were unable to recover it. 00:37:45.832 [2024-12-06 01:15:18.262921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.832 [2024-12-06 01:15:18.262958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.832 qpair failed and we were unable to recover it. 00:37:45.832 [2024-12-06 01:15:18.263136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.832 [2024-12-06 01:15:18.263184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.832 qpair failed and we were unable to recover it. 00:37:45.832 [2024-12-06 01:15:18.263352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.832 [2024-12-06 01:15:18.263388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.832 qpair failed and we were unable to recover it. 00:37:45.832 [2024-12-06 01:15:18.263533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.832 [2024-12-06 01:15:18.263568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.832 qpair failed and we were unable to recover it. 00:37:45.832 [2024-12-06 01:15:18.263695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.832 [2024-12-06 01:15:18.263729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.832 qpair failed and we were unable to recover it. 00:37:45.832 [2024-12-06 01:15:18.263893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.832 [2024-12-06 01:15:18.263928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.832 qpair failed and we were unable to recover it. 00:37:45.832 [2024-12-06 01:15:18.264035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.832 [2024-12-06 01:15:18.264069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.832 qpair failed and we were unable to recover it. 00:37:45.832 [2024-12-06 01:15:18.264203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.832 [2024-12-06 01:15:18.264238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.832 qpair failed and we were unable to recover it. 00:37:45.832 [2024-12-06 01:15:18.264372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.832 [2024-12-06 01:15:18.264407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.832 qpair failed and we were unable to recover it. 
00:37:45.832 [2024-12-06 01:15:18.264568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.832 [2024-12-06 01:15:18.264618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.832 qpair failed and we were unable to recover it. 00:37:45.832 [2024-12-06 01:15:18.264764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.832 [2024-12-06 01:15:18.264800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.832 qpair failed and we were unable to recover it. 00:37:45.832 [2024-12-06 01:15:18.264946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.832 [2024-12-06 01:15:18.264980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.832 qpair failed and we were unable to recover it. 00:37:45.832 [2024-12-06 01:15:18.265093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.832 [2024-12-06 01:15:18.265127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.832 qpair failed and we were unable to recover it. 00:37:45.832 [2024-12-06 01:15:18.265296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.832 [2024-12-06 01:15:18.265330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.832 qpair failed and we were unable to recover it. 00:37:45.832 [2024-12-06 01:15:18.265492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.832 [2024-12-06 01:15:18.265526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.832 qpair failed and we were unable to recover it. 00:37:45.832 [2024-12-06 01:15:18.265708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.832 [2024-12-06 01:15:18.265744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.832 qpair failed and we were unable to recover it. 00:37:45.832 [2024-12-06 01:15:18.265918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.833 [2024-12-06 01:15:18.265967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.833 qpair failed and we were unable to recover it. 00:37:45.833 [2024-12-06 01:15:18.266115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.833 [2024-12-06 01:15:18.266153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.833 qpair failed and we were unable to recover it. 00:37:45.833 [2024-12-06 01:15:18.266306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.833 [2024-12-06 01:15:18.266342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.833 qpair failed and we were unable to recover it. 
00:37:45.833 [2024-12-06 01:15:18.266485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.833 [2024-12-06 01:15:18.266521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.833 qpair failed and we were unable to recover it. 00:37:45.833 [2024-12-06 01:15:18.266692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.833 [2024-12-06 01:15:18.266727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.833 qpair failed and we were unable to recover it. 00:37:45.833 [2024-12-06 01:15:18.266869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.833 [2024-12-06 01:15:18.266905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.833 qpair failed and we were unable to recover it. 00:37:45.833 [2024-12-06 01:15:18.267094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.833 [2024-12-06 01:15:18.267148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.833 qpair failed and we were unable to recover it. 00:37:45.833 [2024-12-06 01:15:18.267273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.833 [2024-12-06 01:15:18.267311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.833 qpair failed and we were unable to recover it. 00:37:45.833 [2024-12-06 01:15:18.267426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.833 [2024-12-06 01:15:18.267460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.833 qpair failed and we were unable to recover it. 00:37:45.833 [2024-12-06 01:15:18.267598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.833 [2024-12-06 01:15:18.267632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.833 qpair failed and we were unable to recover it. 00:37:45.833 [2024-12-06 01:15:18.267739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.833 [2024-12-06 01:15:18.267773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.833 qpair failed and we were unable to recover it. 00:37:45.833 [2024-12-06 01:15:18.267919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.833 [2024-12-06 01:15:18.267953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.833 qpair failed and we were unable to recover it. 00:37:45.833 [2024-12-06 01:15:18.268124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.833 [2024-12-06 01:15:18.268173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.833 qpair failed and we were unable to recover it. 
00:37:45.833 [2024-12-06 01:15:18.268317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.833 [2024-12-06 01:15:18.268354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.833 qpair failed and we were unable to recover it. 00:37:45.833 [2024-12-06 01:15:18.268497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.833 [2024-12-06 01:15:18.268533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.833 qpair failed and we were unable to recover it. 00:37:45.833 [2024-12-06 01:15:18.268675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.833 [2024-12-06 01:15:18.268711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.833 qpair failed and we were unable to recover it. 00:37:45.833 [2024-12-06 01:15:18.268851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.833 [2024-12-06 01:15:18.268886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.833 qpair failed and we were unable to recover it. 00:37:45.833 [2024-12-06 01:15:18.269044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.833 [2024-12-06 01:15:18.269094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.833 qpair failed and we were unable to recover it. 00:37:45.833 [2024-12-06 01:15:18.269262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.833 [2024-12-06 01:15:18.269297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.833 qpair failed and we were unable to recover it. 00:37:45.833 [2024-12-06 01:15:18.269407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.833 [2024-12-06 01:15:18.269442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.833 qpair failed and we were unable to recover it. 00:37:45.833 [2024-12-06 01:15:18.269552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.833 [2024-12-06 01:15:18.269586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.833 qpair failed and we were unable to recover it. 00:37:45.833 [2024-12-06 01:15:18.269752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.833 [2024-12-06 01:15:18.269786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.833 qpair failed and we were unable to recover it. 00:37:45.833 [2024-12-06 01:15:18.269926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.833 [2024-12-06 01:15:18.269960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.833 qpair failed and we were unable to recover it. 
00:37:45.833 [2024-12-06 01:15:18.270083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.833 [2024-12-06 01:15:18.270117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.833 qpair failed and we were unable to recover it. 00:37:45.833 [2024-12-06 01:15:18.270251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.833 [2024-12-06 01:15:18.270285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.833 qpair failed and we were unable to recover it. 00:37:45.833 [2024-12-06 01:15:18.270454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.833 [2024-12-06 01:15:18.270488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.833 qpair failed and we were unable to recover it. 00:37:45.833 [2024-12-06 01:15:18.270594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.833 [2024-12-06 01:15:18.270649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.833 qpair failed and we were unable to recover it. 00:37:45.833 [2024-12-06 01:15:18.270794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.833 [2024-12-06 01:15:18.270879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.833 qpair failed and we were unable to recover it. 00:37:45.833 [2024-12-06 01:15:18.271053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.833 [2024-12-06 01:15:18.271091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.833 qpair failed and we were unable to recover it. 00:37:45.833 [2024-12-06 01:15:18.271241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.833 [2024-12-06 01:15:18.271277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.833 qpair failed and we were unable to recover it. 00:37:45.833 [2024-12-06 01:15:18.271440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.833 [2024-12-06 01:15:18.271475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.833 qpair failed and we were unable to recover it. 00:37:45.833 [2024-12-06 01:15:18.271639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.833 [2024-12-06 01:15:18.271673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.833 qpair failed and we were unable to recover it. 00:37:45.833 [2024-12-06 01:15:18.271796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.833 [2024-12-06 01:15:18.271853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.833 qpair failed and we were unable to recover it. 
00:37:45.833 [2024-12-06 01:15:18.272040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.833 [2024-12-06 01:15:18.272075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.833 qpair failed and we were unable to recover it. 00:37:45.834 [2024-12-06 01:15:18.272187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.834 [2024-12-06 01:15:18.272222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.834 qpair failed and we were unable to recover it. 00:37:45.834 [2024-12-06 01:15:18.272335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.834 [2024-12-06 01:15:18.272369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.834 qpair failed and we were unable to recover it. 00:37:45.834 [2024-12-06 01:15:18.272511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.834 [2024-12-06 01:15:18.272545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.834 qpair failed and we were unable to recover it. 00:37:45.834 [2024-12-06 01:15:18.272648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.834 [2024-12-06 01:15:18.272682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.834 qpair failed and we were unable to recover it. 00:37:45.834 [2024-12-06 01:15:18.272829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.834 [2024-12-06 01:15:18.272864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.834 qpair failed and we were unable to recover it. 00:37:45.834 [2024-12-06 01:15:18.273032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.834 [2024-12-06 01:15:18.273066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.834 qpair failed and we were unable to recover it. 00:37:45.834 [2024-12-06 01:15:18.273235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.834 [2024-12-06 01:15:18.273269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.834 qpair failed and we were unable to recover it. 00:37:45.834 [2024-12-06 01:15:18.273405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.834 [2024-12-06 01:15:18.273440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.834 qpair failed and we were unable to recover it. 00:37:45.834 [2024-12-06 01:15:18.273559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.834 [2024-12-06 01:15:18.273593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.834 qpair failed and we were unable to recover it. 
00:37:45.834 [2024-12-06 01:15:18.273753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.834 [2024-12-06 01:15:18.273788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.834 qpair failed and we were unable to recover it. 00:37:45.834 [2024-12-06 01:15:18.273906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.834 [2024-12-06 01:15:18.273955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.834 qpair failed and we were unable to recover it. 00:37:45.834 [2024-12-06 01:15:18.274137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.834 [2024-12-06 01:15:18.274185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.834 qpair failed and we were unable to recover it. 00:37:45.834 [2024-12-06 01:15:18.274327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.834 [2024-12-06 01:15:18.274368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.834 qpair failed and we were unable to recover it. 00:37:45.834 [2024-12-06 01:15:18.274503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.834 [2024-12-06 01:15:18.274538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.834 qpair failed and we were unable to recover it. 00:37:45.834 [2024-12-06 01:15:18.274674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.834 [2024-12-06 01:15:18.274710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.834 qpair failed and we were unable to recover it. 00:37:45.834 [2024-12-06 01:15:18.274819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.834 [2024-12-06 01:15:18.274854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.834 qpair failed and we were unable to recover it. 00:37:45.834 [2024-12-06 01:15:18.274991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.834 [2024-12-06 01:15:18.275027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.834 qpair failed and we were unable to recover it. 00:37:45.834 [2024-12-06 01:15:18.275225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.834 [2024-12-06 01:15:18.275274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.834 qpair failed and we were unable to recover it. 00:37:45.834 [2024-12-06 01:15:18.275447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.834 [2024-12-06 01:15:18.275496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.834 qpair failed and we were unable to recover it. 
00:37:45.834 [2024-12-06 01:15:18.275608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.834 [2024-12-06 01:15:18.275643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.834 qpair failed and we were unable to recover it. 00:37:45.834 [2024-12-06 01:15:18.275783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.834 [2024-12-06 01:15:18.275826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.834 qpair failed and we were unable to recover it. 00:37:45.834 [2024-12-06 01:15:18.275973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.834 [2024-12-06 01:15:18.276008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.834 qpair failed and we were unable to recover it. 00:37:45.834 [2024-12-06 01:15:18.276221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.834 [2024-12-06 01:15:18.276256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.834 qpair failed and we were unable to recover it. 00:37:45.834 [2024-12-06 01:15:18.276417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.834 [2024-12-06 01:15:18.276452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.834 qpair failed and we were unable to recover it. 00:37:45.834 [2024-12-06 01:15:18.276613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.834 [2024-12-06 01:15:18.276648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.834 qpair failed and we were unable to recover it. 00:37:45.834 [2024-12-06 01:15:18.276826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.834 [2024-12-06 01:15:18.276862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.834 qpair failed and we were unable to recover it. 00:37:45.834 [2024-12-06 01:15:18.277015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.834 [2024-12-06 01:15:18.277051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.834 qpair failed and we were unable to recover it. 00:37:45.834 [2024-12-06 01:15:18.277188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.834 [2024-12-06 01:15:18.277223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.834 qpair failed and we were unable to recover it. 00:37:45.834 [2024-12-06 01:15:18.277360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.834 [2024-12-06 01:15:18.277393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.834 qpair failed and we were unable to recover it. 
00:37:45.834 [2024-12-06 01:15:18.277506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.834 [2024-12-06 01:15:18.277540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.834 qpair failed and we were unable to recover it. 00:37:45.834 [2024-12-06 01:15:18.277646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.834 [2024-12-06 01:15:18.277681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.834 qpair failed and we were unable to recover it. 00:37:45.834 [2024-12-06 01:15:18.277847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.834 [2024-12-06 01:15:18.277896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.834 qpair failed and we were unable to recover it. 00:37:45.834 [2024-12-06 01:15:18.278039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.834 [2024-12-06 01:15:18.278075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.834 qpair failed and we were unable to recover it. 00:37:45.834 [2024-12-06 01:15:18.278213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.834 [2024-12-06 01:15:18.278247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.834 qpair failed and we were unable to recover it. 00:37:45.834 [2024-12-06 01:15:18.278353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.834 [2024-12-06 01:15:18.278387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.834 qpair failed and we were unable to recover it. 00:37:45.834 [2024-12-06 01:15:18.278554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.834 [2024-12-06 01:15:18.278590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.834 qpair failed and we were unable to recover it. 00:37:45.834 [2024-12-06 01:15:18.278727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.834 [2024-12-06 01:15:18.278762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.834 qpair failed and we were unable to recover it. 00:37:45.834 [2024-12-06 01:15:18.278941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.834 [2024-12-06 01:15:18.278978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.834 qpair failed and we were unable to recover it. 00:37:45.835 [2024-12-06 01:15:18.279121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.835 [2024-12-06 01:15:18.279155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.835 qpair failed and we were unable to recover it. 
00:37:45.835 [2024-12-06 01:15:18.279298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.835 [2024-12-06 01:15:18.279332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.835 qpair failed and we were unable to recover it. 00:37:45.835 [2024-12-06 01:15:18.279443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.835 [2024-12-06 01:15:18.279481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.835 qpair failed and we were unable to recover it. 00:37:45.835 [2024-12-06 01:15:18.279626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.835 [2024-12-06 01:15:18.279662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.835 qpair failed and we were unable to recover it. 00:37:45.835 [2024-12-06 01:15:18.279775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.835 [2024-12-06 01:15:18.279809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.835 qpair failed and we were unable to recover it. 00:37:45.835 [2024-12-06 01:15:18.279953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.835 [2024-12-06 01:15:18.279987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.835 qpair failed and we were unable to recover it. 00:37:45.835 [2024-12-06 01:15:18.280123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.835 [2024-12-06 01:15:18.280157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.835 qpair failed and we were unable to recover it. 00:37:45.835 [2024-12-06 01:15:18.280293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.835 [2024-12-06 01:15:18.280326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.835 qpair failed and we were unable to recover it. 00:37:45.835 [2024-12-06 01:15:18.280507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.835 [2024-12-06 01:15:18.280542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.835 qpair failed and we were unable to recover it. 00:37:45.835 [2024-12-06 01:15:18.280718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.835 [2024-12-06 01:15:18.280755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.835 qpair failed and we were unable to recover it. 00:37:45.835 [2024-12-06 01:15:18.280898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.835 [2024-12-06 01:15:18.280933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.835 qpair failed and we were unable to recover it. 
00:37:45.835 [2024-12-06 01:15:18.281071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.835 [2024-12-06 01:15:18.281106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.835 qpair failed and we were unable to recover it. 00:37:45.835 [2024-12-06 01:15:18.281242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.835 [2024-12-06 01:15:18.281277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.835 qpair failed and we were unable to recover it. 00:37:45.835 [2024-12-06 01:15:18.281422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.835 [2024-12-06 01:15:18.281457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.835 qpair failed and we were unable to recover it. 00:37:45.835 [2024-12-06 01:15:18.281597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.835 [2024-12-06 01:15:18.281637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.835 qpair failed and we were unable to recover it. 00:37:45.835 [2024-12-06 01:15:18.281779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.835 [2024-12-06 01:15:18.281821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.835 qpair failed and we were unable to recover it. 00:37:45.835 [2024-12-06 01:15:18.281963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.835 [2024-12-06 01:15:18.281999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.835 qpair failed and we were unable to recover it. 00:37:45.835 [2024-12-06 01:15:18.282108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.835 [2024-12-06 01:15:18.282142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.835 qpair failed and we were unable to recover it. 00:37:45.835 [2024-12-06 01:15:18.282307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.835 [2024-12-06 01:15:18.282341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.835 qpair failed and we were unable to recover it. 00:37:45.835 [2024-12-06 01:15:18.282483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.835 [2024-12-06 01:15:18.282517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.835 qpair failed and we were unable to recover it. 00:37:45.835 [2024-12-06 01:15:18.282695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.835 [2024-12-06 01:15:18.282731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.835 qpair failed and we were unable to recover it. 
00:37:45.835 [2024-12-06 01:15:18.282873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.835 [2024-12-06 01:15:18.282909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.835 qpair failed and we were unable to recover it. 00:37:45.835 [2024-12-06 01:15:18.283014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.835 [2024-12-06 01:15:18.283047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.835 qpair failed and we were unable to recover it. 00:37:45.835 [2024-12-06 01:15:18.283163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.835 [2024-12-06 01:15:18.283196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.835 qpair failed and we were unable to recover it. 00:37:45.835 [2024-12-06 01:15:18.283325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.835 [2024-12-06 01:15:18.283359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.835 qpair failed and we were unable to recover it. 00:37:45.835 [2024-12-06 01:15:18.283493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.835 [2024-12-06 01:15:18.283526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.835 qpair failed and we were unable to recover it. 00:37:45.835 [2024-12-06 01:15:18.283626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.835 [2024-12-06 01:15:18.283661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.835 qpair failed and we were unable to recover it. 00:37:45.835 [2024-12-06 01:15:18.283836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.835 [2024-12-06 01:15:18.283872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.835 qpair failed and we were unable to recover it. 00:37:45.835 [2024-12-06 01:15:18.284047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.835 [2024-12-06 01:15:18.284083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.835 qpair failed and we were unable to recover it. 00:37:45.835 [2024-12-06 01:15:18.284224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.835 [2024-12-06 01:15:18.284259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.835 qpair failed and we were unable to recover it. 00:37:45.835 [2024-12-06 01:15:18.284402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.835 [2024-12-06 01:15:18.284437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.835 qpair failed and we were unable to recover it. 
00:37:45.835 [2024-12-06 01:15:18.284578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.835 [2024-12-06 01:15:18.284613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.835 qpair failed and we were unable to recover it. 00:37:45.835 [2024-12-06 01:15:18.284775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.835 [2024-12-06 01:15:18.284810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.835 qpair failed and we were unable to recover it. 00:37:45.835 [2024-12-06 01:15:18.284924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.835 [2024-12-06 01:15:18.284959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.835 qpair failed and we were unable to recover it. 00:37:45.835 [2024-12-06 01:15:18.285109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.835 [2024-12-06 01:15:18.285144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.835 qpair failed and we were unable to recover it. 00:37:45.835 [2024-12-06 01:15:18.285254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.835 [2024-12-06 01:15:18.285288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.835 qpair failed and we were unable to recover it. 00:37:45.835 [2024-12-06 01:15:18.285430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.835 [2024-12-06 01:15:18.285464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.835 qpair failed and we were unable to recover it. 00:37:45.835 [2024-12-06 01:15:18.285600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.836 [2024-12-06 01:15:18.285634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.836 qpair failed and we were unable to recover it. 00:37:45.836 [2024-12-06 01:15:18.285774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.836 [2024-12-06 01:15:18.285811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.836 qpair failed and we were unable to recover it. 00:37:45.836 [2024-12-06 01:15:18.285978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.836 [2024-12-06 01:15:18.286026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.836 qpair failed and we were unable to recover it. 00:37:45.836 [2024-12-06 01:15:18.286190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.836 [2024-12-06 01:15:18.286226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.836 qpair failed and we were unable to recover it. 
00:37:45.836 [2024-12-06 01:15:18.286369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.836 [2024-12-06 01:15:18.286403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.836 qpair failed and we were unable to recover it. 00:37:45.836 [2024-12-06 01:15:18.286563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.836 [2024-12-06 01:15:18.286597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.836 qpair failed and we were unable to recover it. 00:37:45.836 [2024-12-06 01:15:18.286734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.836 [2024-12-06 01:15:18.286768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.836 qpair failed and we were unable to recover it. 00:37:45.836 [2024-12-06 01:15:18.286959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.836 [2024-12-06 01:15:18.286994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.836 qpair failed and we were unable to recover it. 00:37:45.836 [2024-12-06 01:15:18.287106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.836 [2024-12-06 01:15:18.287140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.836 qpair failed and we were unable to recover it. 00:37:45.836 [2024-12-06 01:15:18.287285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.836 [2024-12-06 01:15:18.287320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.836 qpair failed and we were unable to recover it. 00:37:45.836 [2024-12-06 01:15:18.287420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.836 [2024-12-06 01:15:18.287454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.836 qpair failed and we were unable to recover it. 00:37:45.836 [2024-12-06 01:15:18.287596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.836 [2024-12-06 01:15:18.287629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.836 qpair failed and we were unable to recover it. 00:37:45.836 [2024-12-06 01:15:18.287742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.836 [2024-12-06 01:15:18.287776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.836 qpair failed and we were unable to recover it. 00:37:45.836 [2024-12-06 01:15:18.287907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.836 [2024-12-06 01:15:18.287956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.836 qpair failed and we were unable to recover it. 
00:37:45.836 [2024-12-06 01:15:18.288103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.836 [2024-12-06 01:15:18.288138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.836 qpair failed and we were unable to recover it. 00:37:45.836 [2024-12-06 01:15:18.288277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.836 [2024-12-06 01:15:18.288311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.836 qpair failed and we were unable to recover it. 00:37:45.836 [2024-12-06 01:15:18.288450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.836 [2024-12-06 01:15:18.288484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.836 qpair failed and we were unable to recover it. 00:37:45.836 [2024-12-06 01:15:18.288621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.836 [2024-12-06 01:15:18.288661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.836 qpair failed and we were unable to recover it. 00:37:45.836 [2024-12-06 01:15:18.288829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.836 [2024-12-06 01:15:18.288881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.836 qpair failed and we were unable to recover it. 00:37:45.836 [2024-12-06 01:15:18.289017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.836 [2024-12-06 01:15:18.289051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.836 qpair failed and we were unable to recover it. 00:37:45.836 [2024-12-06 01:15:18.289154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.836 [2024-12-06 01:15:18.289189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.836 qpair failed and we were unable to recover it. 00:37:45.836 [2024-12-06 01:15:18.289330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.836 [2024-12-06 01:15:18.289363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.836 qpair failed and we were unable to recover it. 00:37:45.836 [2024-12-06 01:15:18.289498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.836 [2024-12-06 01:15:18.289532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.836 qpair failed and we were unable to recover it. 00:37:45.836 [2024-12-06 01:15:18.289632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.836 [2024-12-06 01:15:18.289665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.836 qpair failed and we were unable to recover it. 
00:37:45.836 [2024-12-06 01:15:18.289832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.836 [2024-12-06 01:15:18.289866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.836 qpair failed and we were unable to recover it. 00:37:45.836 [2024-12-06 01:15:18.289975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.836 [2024-12-06 01:15:18.290009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.836 qpair failed and we were unable to recover it. 00:37:45.836 [2024-12-06 01:15:18.290142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.836 [2024-12-06 01:15:18.290177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.836 qpair failed and we were unable to recover it. 00:37:45.836 [2024-12-06 01:15:18.290339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.836 [2024-12-06 01:15:18.290373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.836 qpair failed and we were unable to recover it. 00:37:45.836 [2024-12-06 01:15:18.290516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.836 [2024-12-06 01:15:18.290552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.836 qpair failed and we were unable to recover it. 00:37:45.836 [2024-12-06 01:15:18.290717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.836 [2024-12-06 01:15:18.290751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.836 qpair failed and we were unable to recover it. 00:37:45.836 [2024-12-06 01:15:18.290861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.836 [2024-12-06 01:15:18.290896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.836 qpair failed and we were unable to recover it. 00:37:45.836 [2024-12-06 01:15:18.291034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.836 [2024-12-06 01:15:18.291068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.836 qpair failed and we were unable to recover it. 00:37:45.836 [2024-12-06 01:15:18.291173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.836 [2024-12-06 01:15:18.291207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.836 qpair failed and we were unable to recover it. 00:37:45.836 [2024-12-06 01:15:18.291322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.836 [2024-12-06 01:15:18.291356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.836 qpair failed and we were unable to recover it. 
00:37:45.836 [2024-12-06 01:15:18.291477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.837 [2024-12-06 01:15:18.291511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.837 qpair failed and we were unable to recover it. 00:37:45.837 [2024-12-06 01:15:18.291650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.837 [2024-12-06 01:15:18.291684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.837 qpair failed and we were unable to recover it. 00:37:45.837 [2024-12-06 01:15:18.291823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.837 [2024-12-06 01:15:18.291858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.837 qpair failed and we were unable to recover it. 00:37:45.837 [2024-12-06 01:15:18.291963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.837 [2024-12-06 01:15:18.291997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.837 qpair failed and we were unable to recover it. 00:37:45.837 [2024-12-06 01:15:18.292123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.837 [2024-12-06 01:15:18.292156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.837 qpair failed and we were unable to recover it. 00:37:45.837 [2024-12-06 01:15:18.292299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.837 [2024-12-06 01:15:18.292333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.837 qpair failed and we were unable to recover it. 00:37:45.837 [2024-12-06 01:15:18.292491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.837 [2024-12-06 01:15:18.292524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.837 qpair failed and we were unable to recover it. 00:37:45.837 [2024-12-06 01:15:18.292677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.837 [2024-12-06 01:15:18.292711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.837 qpair failed and we were unable to recover it. 00:37:45.837 [2024-12-06 01:15:18.292847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.837 [2024-12-06 01:15:18.292882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.837 qpair failed and we were unable to recover it. 00:37:45.837 [2024-12-06 01:15:18.293041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.837 [2024-12-06 01:15:18.293074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.837 qpair failed and we were unable to recover it. 
00:37:45.837 [2024-12-06 01:15:18.293212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.837 [2024-12-06 01:15:18.293245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.837 qpair failed and we were unable to recover it. 00:37:45.837 [2024-12-06 01:15:18.293419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.837 [2024-12-06 01:15:18.293453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.837 qpair failed and we were unable to recover it. 00:37:45.837 [2024-12-06 01:15:18.293595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.837 [2024-12-06 01:15:18.293632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.837 qpair failed and we were unable to recover it. 00:37:45.837 [2024-12-06 01:15:18.293769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.837 [2024-12-06 01:15:18.293803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.837 qpair failed and we were unable to recover it. 00:37:45.837 [2024-12-06 01:15:18.293918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.837 [2024-12-06 01:15:18.293952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.837 qpair failed and we were unable to recover it. 00:37:45.837 [2024-12-06 01:15:18.294088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.837 [2024-12-06 01:15:18.294122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.837 qpair failed and we were unable to recover it. 00:37:45.837 [2024-12-06 01:15:18.294261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.837 [2024-12-06 01:15:18.294295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.837 qpair failed and we were unable to recover it. 00:37:45.837 [2024-12-06 01:15:18.294417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.837 [2024-12-06 01:15:18.294452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.837 qpair failed and we were unable to recover it. 00:37:45.837 [2024-12-06 01:15:18.294624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.837 [2024-12-06 01:15:18.294669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.837 qpair failed and we were unable to recover it. 00:37:45.837 [2024-12-06 01:15:18.294810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.837 [2024-12-06 01:15:18.294851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.837 qpair failed and we were unable to recover it. 
00:37:45.837 [2024-12-06 01:15:18.294991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.837 [2024-12-06 01:15:18.295025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.837 qpair failed and we were unable to recover it. 00:37:45.837 [2024-12-06 01:15:18.295196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.837 [2024-12-06 01:15:18.295231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.837 qpair failed and we were unable to recover it. 00:37:45.837 [2024-12-06 01:15:18.295365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.837 [2024-12-06 01:15:18.295398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.837 qpair failed and we were unable to recover it. 00:37:45.837 [2024-12-06 01:15:18.295558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.837 [2024-12-06 01:15:18.295596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.837 qpair failed and we were unable to recover it. 00:37:45.837 [2024-12-06 01:15:18.295732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.837 [2024-12-06 01:15:18.295766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.837 qpair failed and we were unable to recover it. 00:37:45.837 [2024-12-06 01:15:18.295878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.837 [2024-12-06 01:15:18.295912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.837 qpair failed and we were unable to recover it. 00:37:45.837 [2024-12-06 01:15:18.296051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.837 [2024-12-06 01:15:18.296084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.837 qpair failed and we were unable to recover it. 00:37:45.837 [2024-12-06 01:15:18.296233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.837 [2024-12-06 01:15:18.296269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.837 qpair failed and we were unable to recover it. 00:37:45.837 [2024-12-06 01:15:18.296407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.837 [2024-12-06 01:15:18.296441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.837 qpair failed and we were unable to recover it. 00:37:45.837 [2024-12-06 01:15:18.296578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.837 [2024-12-06 01:15:18.296613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.837 qpair failed and we were unable to recover it. 
00:37:45.837 [2024-12-06 01:15:18.296717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.837 [2024-12-06 01:15:18.296751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.837 qpair failed and we were unable to recover it. 00:37:45.837 [2024-12-06 01:15:18.296944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.837 [2024-12-06 01:15:18.296993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.837 qpair failed and we were unable to recover it. 00:37:45.837 [2024-12-06 01:15:18.297115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.837 [2024-12-06 01:15:18.297153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.837 qpair failed and we were unable to recover it. 00:37:45.837 [2024-12-06 01:15:18.297303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.837 [2024-12-06 01:15:18.297337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.837 qpair failed and we were unable to recover it. 00:37:45.837 [2024-12-06 01:15:18.297474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.837 [2024-12-06 01:15:18.297508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.837 qpair failed and we were unable to recover it. 00:37:45.838 [2024-12-06 01:15:18.297673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.838 [2024-12-06 01:15:18.297706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.838 qpair failed and we were unable to recover it. 00:37:45.838 [2024-12-06 01:15:18.297845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.838 [2024-12-06 01:15:18.297879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.838 qpair failed and we were unable to recover it. 00:37:45.838 [2024-12-06 01:15:18.297993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.838 [2024-12-06 01:15:18.298028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.838 qpair failed and we were unable to recover it. 00:37:45.838 [2024-12-06 01:15:18.298205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.838 [2024-12-06 01:15:18.298239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.838 qpair failed and we were unable to recover it. 00:37:45.838 [2024-12-06 01:15:18.298348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.838 [2024-12-06 01:15:18.298382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.838 qpair failed and we were unable to recover it. 
00:37:45.838 [2024-12-06 01:15:18.298543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.838 [2024-12-06 01:15:18.298577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.838 qpair failed and we were unable to recover it. 00:37:45.838 [2024-12-06 01:15:18.298679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.838 [2024-12-06 01:15:18.298713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.838 qpair failed and we were unable to recover it. 00:37:45.838 [2024-12-06 01:15:18.298843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.838 [2024-12-06 01:15:18.298893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.838 qpair failed and we were unable to recover it. 00:37:45.838 [2024-12-06 01:15:18.299046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.838 [2024-12-06 01:15:18.299081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.838 qpair failed and we were unable to recover it. 00:37:45.838 [2024-12-06 01:15:18.299218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.838 [2024-12-06 01:15:18.299252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.838 qpair failed and we were unable to recover it. 00:37:45.838 [2024-12-06 01:15:18.299361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.838 [2024-12-06 01:15:18.299395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.838 qpair failed and we were unable to recover it. 00:37:45.838 [2024-12-06 01:15:18.299560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.838 [2024-12-06 01:15:18.299593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.838 qpair failed and we were unable to recover it. 00:37:45.838 [2024-12-06 01:15:18.299697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.838 [2024-12-06 01:15:18.299731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.838 qpair failed and we were unable to recover it. 00:37:45.838 [2024-12-06 01:15:18.299866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.838 [2024-12-06 01:15:18.299902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.838 qpair failed and we were unable to recover it. 00:37:45.838 [2024-12-06 01:15:18.300052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.838 [2024-12-06 01:15:18.300102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.838 qpair failed and we were unable to recover it. 
00:37:45.838 [2024-12-06 01:15:18.300284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.838 [2024-12-06 01:15:18.300334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.838 qpair failed and we were unable to recover it. 00:37:45.838 [2024-12-06 01:15:18.300504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.838 [2024-12-06 01:15:18.300557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.838 qpair failed and we were unable to recover it. 00:37:45.838 [2024-12-06 01:15:18.300744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.838 [2024-12-06 01:15:18.300785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.838 qpair failed and we were unable to recover it. 00:37:45.838 [2024-12-06 01:15:18.300971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.838 [2024-12-06 01:15:18.301010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.838 qpair failed and we were unable to recover it. 00:37:45.838 [2024-12-06 01:15:18.301173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.838 [2024-12-06 01:15:18.301214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.838 qpair failed and we were unable to recover it. 00:37:45.838 [2024-12-06 01:15:18.301399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.838 [2024-12-06 01:15:18.301463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.838 qpair failed and we were unable to recover it. 00:37:45.838 [2024-12-06 01:15:18.301663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.838 [2024-12-06 01:15:18.301721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.838 qpair failed and we were unable to recover it. 00:37:45.838 [2024-12-06 01:15:18.301864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.838 [2024-12-06 01:15:18.301898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.838 qpair failed and we were unable to recover it. 00:37:45.838 [2024-12-06 01:15:18.302046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.838 [2024-12-06 01:15:18.302081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.838 qpair failed and we were unable to recover it. 00:37:45.838 [2024-12-06 01:15:18.302194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.838 [2024-12-06 01:15:18.302248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.838 qpair failed and we were unable to recover it. 
00:37:45.838 [2024-12-06 01:15:18.302446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.838 [2024-12-06 01:15:18.302505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.838 qpair failed and we were unable to recover it. 00:37:45.838 [2024-12-06 01:15:18.302643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.838 [2024-12-06 01:15:18.302680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.838 qpair failed and we were unable to recover it. 00:37:45.838 [2024-12-06 01:15:18.302868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.838 [2024-12-06 01:15:18.302918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.838 qpair failed and we were unable to recover it. 00:37:45.838 [2024-12-06 01:15:18.303070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.838 [2024-12-06 01:15:18.303113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.838 qpair failed and we were unable to recover it. 00:37:45.838 [2024-12-06 01:15:18.303257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.838 [2024-12-06 01:15:18.303293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.838 qpair failed and we were unable to recover it. 00:37:45.838 [2024-12-06 01:15:18.303462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.838 [2024-12-06 01:15:18.303522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.838 qpair failed and we were unable to recover it. 00:37:45.838 [2024-12-06 01:15:18.303739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.838 [2024-12-06 01:15:18.303773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.838 qpair failed and we were unable to recover it. 00:37:45.838 [2024-12-06 01:15:18.303941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.838 [2024-12-06 01:15:18.303990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.838 qpair failed and we were unable to recover it. 00:37:45.838 [2024-12-06 01:15:18.304118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.838 [2024-12-06 01:15:18.304155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.838 qpair failed and we were unable to recover it. 00:37:45.838 [2024-12-06 01:15:18.304310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.838 [2024-12-06 01:15:18.304378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.838 qpair failed and we were unable to recover it. 
00:37:45.838 [2024-12-06 01:15:18.304591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.838 [2024-12-06 01:15:18.304663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.838 qpair failed and we were unable to recover it. 00:37:45.838 [2024-12-06 01:15:18.304831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.838 [2024-12-06 01:15:18.304883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.838 qpair failed and we were unable to recover it. 00:37:45.839 [2024-12-06 01:15:18.304997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.839 [2024-12-06 01:15:18.305031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.839 qpair failed and we were unable to recover it. 00:37:45.839 [2024-12-06 01:15:18.305203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.839 [2024-12-06 01:15:18.305239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.839 qpair failed and we were unable to recover it. 00:37:45.839 [2024-12-06 01:15:18.305455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.839 [2024-12-06 01:15:18.305491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.839 qpair failed and we were unable to recover it. 00:37:45.839 [2024-12-06 01:15:18.305658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.839 [2024-12-06 01:15:18.305696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.839 qpair failed and we were unable to recover it. 00:37:45.839 [2024-12-06 01:15:18.305848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.839 [2024-12-06 01:15:18.305900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.839 qpair failed and we were unable to recover it. 00:37:45.839 [2024-12-06 01:15:18.306043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.839 [2024-12-06 01:15:18.306079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.839 qpair failed and we were unable to recover it. 00:37:45.839 [2024-12-06 01:15:18.306240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.839 [2024-12-06 01:15:18.306275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.839 qpair failed and we were unable to recover it. 00:37:45.839 [2024-12-06 01:15:18.306417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.839 [2024-12-06 01:15:18.306453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.839 qpair failed and we were unable to recover it. 
00:37:45.839 [2024-12-06 01:15:18.306623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.839 [2024-12-06 01:15:18.306657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.839 qpair failed and we were unable to recover it. 00:37:45.839 [2024-12-06 01:15:18.306825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.839 [2024-12-06 01:15:18.306860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.839 qpair failed and we were unable to recover it. 00:37:45.839 [2024-12-06 01:15:18.306963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.839 [2024-12-06 01:15:18.306997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.839 qpair failed and we were unable to recover it. 00:37:45.839 [2024-12-06 01:15:18.307156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.839 [2024-12-06 01:15:18.307189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.839 qpair failed and we were unable to recover it. 00:37:45.839 [2024-12-06 01:15:18.307311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.839 [2024-12-06 01:15:18.307349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.839 qpair failed and we were unable to recover it. 00:37:45.839 [2024-12-06 01:15:18.307530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.839 [2024-12-06 01:15:18.307593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.839 qpair failed and we were unable to recover it. 00:37:45.839 [2024-12-06 01:15:18.307773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.839 [2024-12-06 01:15:18.307811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.839 qpair failed and we were unable to recover it. 00:37:45.839 [2024-12-06 01:15:18.308006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.839 [2024-12-06 01:15:18.308041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.839 qpair failed and we were unable to recover it. 00:37:45.839 [2024-12-06 01:15:18.308176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.839 [2024-12-06 01:15:18.308211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.839 qpair failed and we were unable to recover it. 00:37:45.839 [2024-12-06 01:15:18.308320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.839 [2024-12-06 01:15:18.308355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.839 qpair failed and we were unable to recover it. 
00:37:45.839 [2024-12-06 01:15:18.308517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.839 [2024-12-06 01:15:18.308564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.839 qpair failed and we were unable to recover it. 00:37:45.839 [2024-12-06 01:15:18.308690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.839 [2024-12-06 01:15:18.308725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.839 qpair failed and we were unable to recover it. 00:37:45.839 [2024-12-06 01:15:18.308864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.839 [2024-12-06 01:15:18.308899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.839 qpair failed and we were unable to recover it. 00:37:45.839 [2024-12-06 01:15:18.309036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.839 [2024-12-06 01:15:18.309071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.839 qpair failed and we were unable to recover it. 00:37:45.839 [2024-12-06 01:15:18.309299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.839 [2024-12-06 01:15:18.309362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.839 qpair failed and we were unable to recover it. 00:37:45.839 [2024-12-06 01:15:18.309591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.839 [2024-12-06 01:15:18.309649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.839 qpair failed and we were unable to recover it. 00:37:45.839 [2024-12-06 01:15:18.309770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.839 [2024-12-06 01:15:18.309826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.839 qpair failed and we were unable to recover it. 00:37:45.839 [2024-12-06 01:15:18.310019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.839 [2024-12-06 01:15:18.310054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.839 qpair failed and we were unable to recover it. 00:37:45.839 [2024-12-06 01:15:18.310190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.839 [2024-12-06 01:15:18.310225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.839 qpair failed and we were unable to recover it. 00:37:45.839 [2024-12-06 01:15:18.310360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.839 [2024-12-06 01:15:18.310395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.839 qpair failed and we were unable to recover it. 
00:37:45.839 [2024-12-06 01:15:18.310580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.839 [2024-12-06 01:15:18.310633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.839 qpair failed and we were unable to recover it. 00:37:45.839 [2024-12-06 01:15:18.310826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.839 [2024-12-06 01:15:18.310879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.839 qpair failed and we were unable to recover it. 00:37:45.839 [2024-12-06 01:15:18.311013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.839 [2024-12-06 01:15:18.311049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.839 qpair failed and we were unable to recover it. 00:37:45.839 [2024-12-06 01:15:18.311185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.839 [2024-12-06 01:15:18.311224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.839 qpair failed and we were unable to recover it. 00:37:45.839 [2024-12-06 01:15:18.311387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.839 [2024-12-06 01:15:18.311422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.839 qpair failed and we were unable to recover it. 00:37:45.839 [2024-12-06 01:15:18.311568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.839 [2024-12-06 01:15:18.311603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.839 qpair failed and we were unable to recover it. 00:37:45.839 [2024-12-06 01:15:18.311799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.839 [2024-12-06 01:15:18.311864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.839 qpair failed and we were unable to recover it. 00:37:45.839 [2024-12-06 01:15:18.312006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.839 [2024-12-06 01:15:18.312042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.839 qpair failed and we were unable to recover it. 00:37:45.839 [2024-12-06 01:15:18.312194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.839 [2024-12-06 01:15:18.312230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.839 qpair failed and we were unable to recover it. 00:37:45.839 [2024-12-06 01:15:18.312385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.839 [2024-12-06 01:15:18.312423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.840 qpair failed and we were unable to recover it. 
00:37:45.840 [2024-12-06 01:15:18.312565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.840 [2024-12-06 01:15:18.312602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.840 qpair failed and we were unable to recover it. 00:37:45.840 [2024-12-06 01:15:18.312725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.840 [2024-12-06 01:15:18.312775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.840 qpair failed and we were unable to recover it. 00:37:45.840 [2024-12-06 01:15:18.312956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.840 [2024-12-06 01:15:18.312991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.840 qpair failed and we were unable to recover it. 00:37:45.840 [2024-12-06 01:15:18.313124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.840 [2024-12-06 01:15:18.313171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.840 qpair failed and we were unable to recover it. 00:37:45.840 [2024-12-06 01:15:18.313320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.840 [2024-12-06 01:15:18.313357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.840 qpair failed and we were unable to recover it. 00:37:45.840 [2024-12-06 01:15:18.313517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.840 [2024-12-06 01:15:18.313556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.840 qpair failed and we were unable to recover it. 00:37:45.840 [2024-12-06 01:15:18.313701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.840 [2024-12-06 01:15:18.313738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.840 qpair failed and we were unable to recover it. 00:37:45.840 [2024-12-06 01:15:18.313940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.840 [2024-12-06 01:15:18.313990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.840 qpair failed and we were unable to recover it. 00:37:45.840 [2024-12-06 01:15:18.314170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.840 [2024-12-06 01:15:18.314208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.840 qpair failed and we were unable to recover it. 00:37:45.840 [2024-12-06 01:15:18.314453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.840 [2024-12-06 01:15:18.314513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.840 qpair failed and we were unable to recover it. 
00:37:45.840 [2024-12-06 01:15:18.314701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.840 [2024-12-06 01:15:18.314737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.840 qpair failed and we were unable to recover it. 00:37:45.840 [2024-12-06 01:15:18.314841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.840 [2024-12-06 01:15:18.314877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.840 qpair failed and we were unable to recover it. 00:37:45.840 [2024-12-06 01:15:18.315068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.840 [2024-12-06 01:15:18.315103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.840 qpair failed and we were unable to recover it. 00:37:45.840 [2024-12-06 01:15:18.315246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.840 [2024-12-06 01:15:18.315293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.840 qpair failed and we were unable to recover it. 00:37:45.840 [2024-12-06 01:15:18.315422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.840 [2024-12-06 01:15:18.315461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.840 qpair failed and we were unable to recover it. 00:37:45.840 [2024-12-06 01:15:18.315615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.840 [2024-12-06 01:15:18.315654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.840 qpair failed and we were unable to recover it. 00:37:45.840 [2024-12-06 01:15:18.315830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.840 [2024-12-06 01:15:18.315882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.840 qpair failed and we were unable to recover it. 00:37:45.840 [2024-12-06 01:15:18.316016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.840 [2024-12-06 01:15:18.316051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.840 qpair failed and we were unable to recover it. 00:37:45.840 [2024-12-06 01:15:18.316158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.840 [2024-12-06 01:15:18.316192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.840 qpair failed and we were unable to recover it. 00:37:45.840 [2024-12-06 01:15:18.316303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.840 [2024-12-06 01:15:18.316337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.840 qpair failed and we were unable to recover it. 
00:37:45.840 [2024-12-06 01:15:18.316551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.840 [2024-12-06 01:15:18.316619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.840 qpair failed and we were unable to recover it. 00:37:45.840 [2024-12-06 01:15:18.316796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.840 [2024-12-06 01:15:18.316861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.840 qpair failed and we were unable to recover it. 00:37:45.840 [2024-12-06 01:15:18.317051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.840 [2024-12-06 01:15:18.317098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.840 qpair failed and we were unable to recover it. 00:37:45.840 [2024-12-06 01:15:18.317243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.840 [2024-12-06 01:15:18.317279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.840 qpair failed and we were unable to recover it. 00:37:45.840 [2024-12-06 01:15:18.317463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.840 [2024-12-06 01:15:18.317526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.840 qpair failed and we were unable to recover it. 00:37:45.840 [2024-12-06 01:15:18.317750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.840 [2024-12-06 01:15:18.317809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.840 qpair failed and we were unable to recover it. 00:37:45.840 [2024-12-06 01:15:18.317996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.840 [2024-12-06 01:15:18.318045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.840 qpair failed and we were unable to recover it. 00:37:45.840 [2024-12-06 01:15:18.318218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.840 [2024-12-06 01:15:18.318254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.840 qpair failed and we were unable to recover it. 00:37:45.840 [2024-12-06 01:15:18.318395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.840 [2024-12-06 01:15:18.318431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.840 qpair failed and we were unable to recover it. 00:37:45.840 [2024-12-06 01:15:18.318631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.840 [2024-12-06 01:15:18.318665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.840 qpair failed and we were unable to recover it. 
00:37:45.840 [2024-12-06 01:15:18.318840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.840 [2024-12-06 01:15:18.318875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.840 qpair failed and we were unable to recover it. 00:37:45.840 [2024-12-06 01:15:18.319008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.840 [2024-12-06 01:15:18.319058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.840 qpair failed and we were unable to recover it. 00:37:45.840 [2024-12-06 01:15:18.319208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.840 [2024-12-06 01:15:18.319245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.840 qpair failed and we were unable to recover it. 00:37:45.840 [2024-12-06 01:15:18.319409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.840 [2024-12-06 01:15:18.319455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.840 qpair failed and we were unable to recover it. 00:37:45.840 [2024-12-06 01:15:18.319659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.840 [2024-12-06 01:15:18.319695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.840 qpair failed and we were unable to recover it. 00:37:45.840 [2024-12-06 01:15:18.319881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.840 [2024-12-06 01:15:18.319917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.840 qpair failed and we were unable to recover it. 00:37:45.840 [2024-12-06 01:15:18.320080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.840 [2024-12-06 01:15:18.320114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.840 qpair failed and we were unable to recover it. 00:37:45.840 [2024-12-06 01:15:18.320260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.841 [2024-12-06 01:15:18.320295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.841 qpair failed and we were unable to recover it. 00:37:45.841 [2024-12-06 01:15:18.320433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.841 [2024-12-06 01:15:18.320469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.841 qpair failed and we were unable to recover it. 00:37:45.841 [2024-12-06 01:15:18.320666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.841 [2024-12-06 01:15:18.320727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.841 qpair failed and we were unable to recover it. 
00:37:45.841 [2024-12-06 01:15:18.320872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.841 [2024-12-06 01:15:18.320907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.841 qpair failed and we were unable to recover it. 00:37:45.841 [2024-12-06 01:15:18.321045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.841 [2024-12-06 01:15:18.321080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.841 qpair failed and we were unable to recover it. 00:37:45.841 [2024-12-06 01:15:18.321244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.841 [2024-12-06 01:15:18.321278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.841 qpair failed and we were unable to recover it. 00:37:45.841 [2024-12-06 01:15:18.321461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.841 [2024-12-06 01:15:18.321525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.841 qpair failed and we were unable to recover it. 00:37:45.841 [2024-12-06 01:15:18.321708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.841 [2024-12-06 01:15:18.321762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.841 qpair failed and we were unable to recover it. 00:37:45.841 [2024-12-06 01:15:18.321924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.841 [2024-12-06 01:15:18.321959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.841 qpair failed and we were unable to recover it. 00:37:45.841 [2024-12-06 01:15:18.322089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.841 [2024-12-06 01:15:18.322137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.841 qpair failed and we were unable to recover it. 00:37:45.841 [2024-12-06 01:15:18.322295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.841 [2024-12-06 01:15:18.322333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.841 qpair failed and we were unable to recover it. 00:37:45.841 [2024-12-06 01:15:18.322523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.841 [2024-12-06 01:15:18.322589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.841 qpair failed and we were unable to recover it. 00:37:45.841 [2024-12-06 01:15:18.322749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.841 [2024-12-06 01:15:18.322788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.841 qpair failed and we were unable to recover it. 
00:37:45.841 [2024-12-06 01:15:18.322994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.841 [2024-12-06 01:15:18.323029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.841 qpair failed and we were unable to recover it. 00:37:45.841 [2024-12-06 01:15:18.323189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.841 [2024-12-06 01:15:18.323224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.841 qpair failed and we were unable to recover it. 00:37:45.841 [2024-12-06 01:15:18.323338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.841 [2024-12-06 01:15:18.323374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.841 qpair failed and we were unable to recover it. 00:37:45.841 [2024-12-06 01:15:18.323510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.841 [2024-12-06 01:15:18.323544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.841 qpair failed and we were unable to recover it. 00:37:45.841 [2024-12-06 01:15:18.323693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.841 [2024-12-06 01:15:18.323730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.841 qpair failed and we were unable to recover it. 00:37:45.841 [2024-12-06 01:15:18.323888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.841 [2024-12-06 01:15:18.323924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.841 qpair failed and we were unable to recover it. 00:37:45.841 [2024-12-06 01:15:18.324060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.841 [2024-12-06 01:15:18.324095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.841 qpair failed and we were unable to recover it. 00:37:45.841 [2024-12-06 01:15:18.324204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.841 [2024-12-06 01:15:18.324238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.841 qpair failed and we were unable to recover it. 00:37:45.841 [2024-12-06 01:15:18.324340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.841 [2024-12-06 01:15:18.324375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.841 qpair failed and we were unable to recover it. 00:37:45.841 [2024-12-06 01:15:18.324539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.841 [2024-12-06 01:15:18.324574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.841 qpair failed and we were unable to recover it. 
00:37:45.841 [2024-12-06 01:15:18.324739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.841 [2024-12-06 01:15:18.324785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.841 qpair failed and we were unable to recover it. 00:37:45.841 [2024-12-06 01:15:18.324975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.841 [2024-12-06 01:15:18.325024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.841 qpair failed and we were unable to recover it. 00:37:45.841 [2024-12-06 01:15:18.325188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.841 [2024-12-06 01:15:18.325236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.841 qpair failed and we were unable to recover it. 00:37:45.841 [2024-12-06 01:15:18.325377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.841 [2024-12-06 01:15:18.325413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.841 qpair failed and we were unable to recover it. 00:37:45.841 [2024-12-06 01:15:18.325665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.841 [2024-12-06 01:15:18.325723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.841 qpair failed and we were unable to recover it. 00:37:45.841 [2024-12-06 01:15:18.325890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.841 [2024-12-06 01:15:18.325925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.841 qpair failed and we were unable to recover it. 00:37:45.841 [2024-12-06 01:15:18.326060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.841 [2024-12-06 01:15:18.326094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.841 qpair failed and we were unable to recover it. 00:37:45.841 [2024-12-06 01:15:18.326230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.841 [2024-12-06 01:15:18.326264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.841 qpair failed and we were unable to recover it. 00:37:45.841 [2024-12-06 01:15:18.326400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.841 [2024-12-06 01:15:18.326433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.841 qpair failed and we were unable to recover it. 00:37:45.841 [2024-12-06 01:15:18.326561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.841 [2024-12-06 01:15:18.326595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.841 qpair failed and we were unable to recover it. 
00:37:45.841 [2024-12-06 01:15:18.326748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.841 [2024-12-06 01:15:18.326786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.841 qpair failed and we were unable to recover it. 00:37:45.841 [2024-12-06 01:15:18.326974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.842 [2024-12-06 01:15:18.327023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.842 qpair failed and we were unable to recover it. 00:37:45.842 [2024-12-06 01:15:18.327254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.842 [2024-12-06 01:15:18.327291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.842 qpair failed and we were unable to recover it. 00:37:45.842 [2024-12-06 01:15:18.327452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.842 [2024-12-06 01:15:18.327499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.842 qpair failed and we were unable to recover it. 00:37:45.842 [2024-12-06 01:15:18.327685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.842 [2024-12-06 01:15:18.327725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.842 qpair failed and we were unable to recover it. 00:37:45.842 [2024-12-06 01:15:18.327893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.842 [2024-12-06 01:15:18.327929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.842 qpair failed and we were unable to recover it. 00:37:45.842 [2024-12-06 01:15:18.328144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.842 [2024-12-06 01:15:18.328179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.842 qpair failed and we were unable to recover it. 00:37:45.842 [2024-12-06 01:15:18.328343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.842 [2024-12-06 01:15:18.328378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.842 qpair failed and we were unable to recover it. 00:37:45.842 [2024-12-06 01:15:18.328516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.842 [2024-12-06 01:15:18.328549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.842 qpair failed and we were unable to recover it. 00:37:45.842 [2024-12-06 01:15:18.328721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.842 [2024-12-06 01:15:18.328755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.842 qpair failed and we were unable to recover it. 
00:37:45.842 [2024-12-06 01:15:18.328924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.842 [2024-12-06 01:15:18.328958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.842 qpair failed and we were unable to recover it. 00:37:45.842 [2024-12-06 01:15:18.329066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.842 [2024-12-06 01:15:18.329100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.842 qpair failed and we were unable to recover it. 00:37:45.842 [2024-12-06 01:15:18.329234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.842 [2024-12-06 01:15:18.329267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.842 qpair failed and we were unable to recover it. 00:37:45.842 [2024-12-06 01:15:18.329375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.842 [2024-12-06 01:15:18.329408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.842 qpair failed and we were unable to recover it. 00:37:45.842 [2024-12-06 01:15:18.329542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.842 [2024-12-06 01:15:18.329575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.842 qpair failed and we were unable to recover it. 00:37:45.842 [2024-12-06 01:15:18.329737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.842 [2024-12-06 01:15:18.329769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.842 qpair failed and we were unable to recover it. 00:37:45.842 [2024-12-06 01:15:18.329888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.842 [2024-12-06 01:15:18.329922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.842 qpair failed and we were unable to recover it. 00:37:45.842 [2024-12-06 01:15:18.330065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.842 [2024-12-06 01:15:18.330100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.842 qpair failed and we were unable to recover it. 00:37:45.842 [2024-12-06 01:15:18.330236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.842 [2024-12-06 01:15:18.330270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.842 qpair failed and we were unable to recover it. 00:37:45.842 [2024-12-06 01:15:18.330406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.842 [2024-12-06 01:15:18.330440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.842 qpair failed and we were unable to recover it. 
00:37:45.842 [2024-12-06 01:15:18.330550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.842 [2024-12-06 01:15:18.330584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.842 qpair failed and we were unable to recover it. 00:37:45.842 [2024-12-06 01:15:18.330707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.842 [2024-12-06 01:15:18.330744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.842 qpair failed and we were unable to recover it. 00:37:45.842 [2024-12-06 01:15:18.330930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.842 [2024-12-06 01:15:18.330964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.842 qpair failed and we were unable to recover it. 00:37:45.842 [2024-12-06 01:15:18.331073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.842 [2024-12-06 01:15:18.331107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.842 qpair failed and we were unable to recover it. 00:37:45.842 [2024-12-06 01:15:18.331259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.842 [2024-12-06 01:15:18.331296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.842 qpair failed and we were unable to recover it. 00:37:45.842 [2024-12-06 01:15:18.331411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.842 [2024-12-06 01:15:18.331447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.842 qpair failed and we were unable to recover it. 00:37:45.842 [2024-12-06 01:15:18.331634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.842 [2024-12-06 01:15:18.331671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.842 qpair failed and we were unable to recover it. 00:37:45.842 [2024-12-06 01:15:18.331860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.842 [2024-12-06 01:15:18.331894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.842 qpair failed and we were unable to recover it. 00:37:45.842 [2024-12-06 01:15:18.332034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.842 [2024-12-06 01:15:18.332068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.842 qpair failed and we were unable to recover it. 00:37:45.842 [2024-12-06 01:15:18.332199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.842 [2024-12-06 01:15:18.332232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.842 qpair failed and we were unable to recover it. 
00:37:45.842 [2024-12-06 01:15:18.332409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.842 [2024-12-06 01:15:18.332476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.842 qpair failed and we were unable to recover it. 00:37:45.842 [2024-12-06 01:15:18.332697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.842 [2024-12-06 01:15:18.332755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.842 qpair failed and we were unable to recover it. 00:37:45.842 [2024-12-06 01:15:18.332956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.842 [2024-12-06 01:15:18.332998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.842 qpair failed and we were unable to recover it. 00:37:45.842 [2024-12-06 01:15:18.333180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.842 [2024-12-06 01:15:18.333214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.842 qpair failed and we were unable to recover it. 00:37:45.842 [2024-12-06 01:15:18.333324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.842 [2024-12-06 01:15:18.333358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.842 qpair failed and we were unable to recover it. 00:37:45.842 [2024-12-06 01:15:18.333491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.842 [2024-12-06 01:15:18.333525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.842 qpair failed and we were unable to recover it. 00:37:45.842 [2024-12-06 01:15:18.333680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.842 [2024-12-06 01:15:18.333718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.842 qpair failed and we were unable to recover it. 00:37:45.842 [2024-12-06 01:15:18.333875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.842 [2024-12-06 01:15:18.333909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.842 qpair failed and we were unable to recover it. 00:37:45.842 [2024-12-06 01:15:18.334037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.842 [2024-12-06 01:15:18.334086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.842 qpair failed and we were unable to recover it. 00:37:45.842 [2024-12-06 01:15:18.334265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.843 [2024-12-06 01:15:18.334302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.843 qpair failed and we were unable to recover it. 
00:37:45.843 [2024-12-06 01:15:18.334415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.843 [2024-12-06 01:15:18.334461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.843 qpair failed and we were unable to recover it. 00:37:45.843 [2024-12-06 01:15:18.334573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.843 [2024-12-06 01:15:18.334609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.843 qpair failed and we were unable to recover it. 00:37:45.843 [2024-12-06 01:15:18.334764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.843 [2024-12-06 01:15:18.334802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.843 qpair failed and we were unable to recover it. 00:37:45.843 [2024-12-06 01:15:18.334972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.843 [2024-12-06 01:15:18.335011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.843 qpair failed and we were unable to recover it. 00:37:45.843 [2024-12-06 01:15:18.335146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.843 [2024-12-06 01:15:18.335182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.843 qpair failed and we were unable to recover it. 00:37:45.843 [2024-12-06 01:15:18.335355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.843 [2024-12-06 01:15:18.335388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.843 qpair failed and we were unable to recover it. 00:37:45.843 [2024-12-06 01:15:18.335502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.843 [2024-12-06 01:15:18.335562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.843 qpair failed and we were unable to recover it. 00:37:45.843 [2024-12-06 01:15:18.335709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.843 [2024-12-06 01:15:18.335746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.843 qpair failed and we were unable to recover it. 00:37:45.843 [2024-12-06 01:15:18.335914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.843 [2024-12-06 01:15:18.335948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.843 qpair failed and we were unable to recover it. 00:37:45.843 [2024-12-06 01:15:18.336076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.843 [2024-12-06 01:15:18.336109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.843 qpair failed and we were unable to recover it. 
00:37:45.843 [2024-12-06 01:15:18.336216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.843 [2024-12-06 01:15:18.336249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.843 qpair failed and we were unable to recover it. 00:37:45.843 [2024-12-06 01:15:18.336427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.843 [2024-12-06 01:15:18.336465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.843 qpair failed and we were unable to recover it. 00:37:45.843 [2024-12-06 01:15:18.336577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.843 [2024-12-06 01:15:18.336613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.843 qpair failed and we were unable to recover it. 00:37:45.843 [2024-12-06 01:15:18.336765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.843 [2024-12-06 01:15:18.336802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.843 qpair failed and we were unable to recover it. 00:37:45.843 [2024-12-06 01:15:18.336978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.843 [2024-12-06 01:15:18.337013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.843 qpair failed and we were unable to recover it. 00:37:45.843 [2024-12-06 01:15:18.337123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.843 [2024-12-06 01:15:18.337156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.843 qpair failed and we were unable to recover it. 00:37:45.843 [2024-12-06 01:15:18.337286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.843 [2024-12-06 01:15:18.337319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.843 qpair failed and we were unable to recover it. 00:37:45.843 [2024-12-06 01:15:18.337473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.843 [2024-12-06 01:15:18.337510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.843 qpair failed and we were unable to recover it. 00:37:45.843 [2024-12-06 01:15:18.337632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.843 [2024-12-06 01:15:18.337670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.843 qpair failed and we were unable to recover it. 00:37:45.843 [2024-12-06 01:15:18.337824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.843 [2024-12-06 01:15:18.337878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.843 qpair failed and we were unable to recover it. 
00:37:45.843 [2024-12-06 01:15:18.338010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.843 [2024-12-06 01:15:18.338043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.843 qpair failed and we were unable to recover it. 00:37:45.843 [2024-12-06 01:15:18.338176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.843 [2024-12-06 01:15:18.338210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.843 qpair failed and we were unable to recover it. 00:37:45.843 [2024-12-06 01:15:18.338340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.843 [2024-12-06 01:15:18.338389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.843 qpair failed and we were unable to recover it. 00:37:45.843 [2024-12-06 01:15:18.338523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.843 [2024-12-06 01:15:18.338563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.843 qpair failed and we were unable to recover it. 00:37:45.843 [2024-12-06 01:15:18.338753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.843 [2024-12-06 01:15:18.338792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.843 qpair failed and we were unable to recover it. 00:37:45.843 [2024-12-06 01:15:18.338996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.843 [2024-12-06 01:15:18.339030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.843 qpair failed and we were unable to recover it. 00:37:45.843 [2024-12-06 01:15:18.339168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.843 [2024-12-06 01:15:18.339202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.843 qpair failed and we were unable to recover it. 00:37:45.843 [2024-12-06 01:15:18.339321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.843 [2024-12-06 01:15:18.339355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.843 qpair failed and we were unable to recover it. 00:37:45.843 [2024-12-06 01:15:18.339495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.843 [2024-12-06 01:15:18.339530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.843 qpair failed and we were unable to recover it. 00:37:45.843 [2024-12-06 01:15:18.339681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.843 [2024-12-06 01:15:18.339718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.843 qpair failed and we were unable to recover it. 
00:37:45.843 [2024-12-06 01:15:18.339878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.843 [2024-12-06 01:15:18.339916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.843 qpair failed and we were unable to recover it. 00:37:45.843 [2024-12-06 01:15:18.340053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.843 [2024-12-06 01:15:18.340087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.843 qpair failed and we were unable to recover it. 00:37:45.843 [2024-12-06 01:15:18.340215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.843 [2024-12-06 01:15:18.340249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.843 qpair failed and we were unable to recover it. 00:37:45.843 [2024-12-06 01:15:18.340350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.843 [2024-12-06 01:15:18.340384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.843 qpair failed and we were unable to recover it. 00:37:45.843 [2024-12-06 01:15:18.340498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.843 [2024-12-06 01:15:18.340533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.843 qpair failed and we were unable to recover it. 00:37:45.843 [2024-12-06 01:15:18.340693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.843 [2024-12-06 01:15:18.340732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.843 qpair failed and we were unable to recover it. 00:37:45.843 [2024-12-06 01:15:18.340896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.843 [2024-12-06 01:15:18.340934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.843 qpair failed and we were unable to recover it. 00:37:45.844 [2024-12-06 01:15:18.341085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.844 [2024-12-06 01:15:18.341119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.844 qpair failed and we were unable to recover it. 00:37:45.844 [2024-12-06 01:15:18.341281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.844 [2024-12-06 01:15:18.341315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.844 qpair failed and we were unable to recover it. 00:37:45.844 [2024-12-06 01:15:18.341449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.844 [2024-12-06 01:15:18.341484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.844 qpair failed and we were unable to recover it. 
00:37:45.844 [2024-12-06 01:15:18.341616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.844 [2024-12-06 01:15:18.341669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.844 qpair failed and we were unable to recover it. 00:37:45.844 [2024-12-06 01:15:18.341898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.844 [2024-12-06 01:15:18.341932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.844 qpair failed and we were unable to recover it. 00:37:45.844 [2024-12-06 01:15:18.342094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.844 [2024-12-06 01:15:18.342128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.844 qpair failed and we were unable to recover it. 00:37:45.844 [2024-12-06 01:15:18.342264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.844 [2024-12-06 01:15:18.342297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.844 qpair failed and we were unable to recover it. 00:37:45.844 [2024-12-06 01:15:18.342442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.844 [2024-12-06 01:15:18.342476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.844 qpair failed and we were unable to recover it. 00:37:45.844 [2024-12-06 01:15:18.342655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.844 [2024-12-06 01:15:18.342693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.844 qpair failed and we were unable to recover it. 00:37:45.844 [2024-12-06 01:15:18.342841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.844 [2024-12-06 01:15:18.342892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.844 qpair failed and we were unable to recover it. 00:37:45.844 [2024-12-06 01:15:18.343027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.844 [2024-12-06 01:15:18.343060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.844 qpair failed and we were unable to recover it. 00:37:45.844 [2024-12-06 01:15:18.343193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.844 [2024-12-06 01:15:18.343227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.844 qpair failed and we were unable to recover it. 00:37:45.844 [2024-12-06 01:15:18.343364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.844 [2024-12-06 01:15:18.343415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.844 qpair failed and we were unable to recover it. 
00:37:45.844 [2024-12-06 01:15:18.343630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.844 [2024-12-06 01:15:18.343667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.844 qpair failed and we were unable to recover it. 00:37:45.844 [2024-12-06 01:15:18.343810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.844 [2024-12-06 01:15:18.343854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.844 qpair failed and we were unable to recover it. 00:37:45.844 [2024-12-06 01:15:18.344002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.844 [2024-12-06 01:15:18.344035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.844 qpair failed and we were unable to recover it. 00:37:45.844 [2024-12-06 01:15:18.344169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.844 [2024-12-06 01:15:18.344202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.844 qpair failed and we were unable to recover it. 00:37:45.844 [2024-12-06 01:15:18.344335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.844 [2024-12-06 01:15:18.344368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.844 qpair failed and we were unable to recover it. 00:37:45.844 [2024-12-06 01:15:18.344531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.844 [2024-12-06 01:15:18.344564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.844 qpair failed and we were unable to recover it. 00:37:45.844 [2024-12-06 01:15:18.344684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.844 [2024-12-06 01:15:18.344721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.844 qpair failed and we were unable to recover it. 00:37:45.844 [2024-12-06 01:15:18.344881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.844 [2024-12-06 01:15:18.344915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.844 qpair failed and we were unable to recover it. 00:37:45.844 [2024-12-06 01:15:18.345050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.844 [2024-12-06 01:15:18.345083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.844 qpair failed and we were unable to recover it. 00:37:45.844 [2024-12-06 01:15:18.345218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.844 [2024-12-06 01:15:18.345251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.844 qpair failed and we were unable to recover it. 
00:37:45.844 [2024-12-06 01:15:18.345353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.844 [2024-12-06 01:15:18.345386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.844 qpair failed and we were unable to recover it. 00:37:45.844 [2024-12-06 01:15:18.345543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.844 [2024-12-06 01:15:18.345612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.844 qpair failed and we were unable to recover it. 00:37:45.844 [2024-12-06 01:15:18.345806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.844 [2024-12-06 01:15:18.345851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.844 qpair failed and we were unable to recover it. 00:37:45.844 [2024-12-06 01:15:18.345993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.844 [2024-12-06 01:15:18.346028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.844 qpair failed and we were unable to recover it. 00:37:45.844 [2024-12-06 01:15:18.346173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.844 [2024-12-06 01:15:18.346209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.844 qpair failed and we were unable to recover it. 00:37:45.844 [2024-12-06 01:15:18.346398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.844 [2024-12-06 01:15:18.346437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.844 qpair failed and we were unable to recover it. 00:37:45.844 [2024-12-06 01:15:18.346606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.844 [2024-12-06 01:15:18.346644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.844 qpair failed and we were unable to recover it. 00:37:45.844 [2024-12-06 01:15:18.346755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.844 [2024-12-06 01:15:18.346793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.844 qpair failed and we were unable to recover it. 00:37:45.844 [2024-12-06 01:15:18.346953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.844 [2024-12-06 01:15:18.346987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.844 qpair failed and we were unable to recover it. 00:37:45.844 [2024-12-06 01:15:18.347146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.844 [2024-12-06 01:15:18.347179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.844 qpair failed and we were unable to recover it. 
00:37:45.844 [2024-12-06 01:15:18.347335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.844 [2024-12-06 01:15:18.347373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.844 qpair failed and we were unable to recover it. 00:37:45.844 [2024-12-06 01:15:18.347534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.844 [2024-12-06 01:15:18.347567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.844 qpair failed and we were unable to recover it. 00:37:45.844 [2024-12-06 01:15:18.347676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.844 [2024-12-06 01:15:18.347709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.844 qpair failed and we were unable to recover it. 00:37:45.844 [2024-12-06 01:15:18.347829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.844 [2024-12-06 01:15:18.347866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.844 qpair failed and we were unable to recover it. 00:37:45.844 [2024-12-06 01:15:18.347971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.844 [2024-12-06 01:15:18.348005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.844 qpair failed and we were unable to recover it. 00:37:45.845 [2024-12-06 01:15:18.348141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.845 [2024-12-06 01:15:18.348175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.845 qpair failed and we were unable to recover it. 00:37:45.845 [2024-12-06 01:15:18.348281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.845 [2024-12-06 01:15:18.348315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.845 qpair failed and we were unable to recover it. 00:37:45.845 [2024-12-06 01:15:18.348452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.845 [2024-12-06 01:15:18.348486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.845 qpair failed and we were unable to recover it. 00:37:45.845 [2024-12-06 01:15:18.348646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.845 [2024-12-06 01:15:18.348680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.845 qpair failed and we were unable to recover it. 00:37:45.845 [2024-12-06 01:15:18.348787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.845 [2024-12-06 01:15:18.348830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.845 qpair failed and we were unable to recover it. 
00:37:45.845 [2024-12-06 01:15:18.348994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.845 [2024-12-06 01:15:18.349028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.845 qpair failed and we were unable to recover it. 00:37:45.845 [2024-12-06 01:15:18.349126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.845 [2024-12-06 01:15:18.349160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.845 qpair failed and we were unable to recover it. 00:37:45.845 [2024-12-06 01:15:18.349293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.845 [2024-12-06 01:15:18.349327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.845 qpair failed and we were unable to recover it. 00:37:45.845 [2024-12-06 01:15:18.349438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.845 [2024-12-06 01:15:18.349473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.845 qpair failed and we were unable to recover it. 00:37:45.845 [2024-12-06 01:15:18.349617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.845 [2024-12-06 01:15:18.349651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.845 qpair failed and we were unable to recover it. 00:37:45.845 [2024-12-06 01:15:18.349788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.845 [2024-12-06 01:15:18.349826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.845 qpair failed and we were unable to recover it. 00:37:45.845 [2024-12-06 01:15:18.349934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.845 [2024-12-06 01:15:18.349968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.845 qpair failed and we were unable to recover it. 00:37:45.845 [2024-12-06 01:15:18.350080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.845 [2024-12-06 01:15:18.350113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.845 qpair failed and we were unable to recover it. 00:37:45.845 [2024-12-06 01:15:18.350218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.845 [2024-12-06 01:15:18.350252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.845 qpair failed and we were unable to recover it. 00:37:45.845 [2024-12-06 01:15:18.350411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.845 [2024-12-06 01:15:18.350445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.845 qpair failed and we were unable to recover it. 
00:37:45.845 [2024-12-06 01:15:18.350576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.845 [2024-12-06 01:15:18.350610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.845 qpair failed and we were unable to recover it. 00:37:45.845 [2024-12-06 01:15:18.350775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.845 [2024-12-06 01:15:18.350811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.845 qpair failed and we were unable to recover it. 00:37:45.845 [2024-12-06 01:15:18.350960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.845 [2024-12-06 01:15:18.350995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.845 qpair failed and we were unable to recover it. 00:37:45.845 [2024-12-06 01:15:18.351135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.845 [2024-12-06 01:15:18.351169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.845 qpair failed and we were unable to recover it. 00:37:45.845 [2024-12-06 01:15:18.351305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.845 [2024-12-06 01:15:18.351338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.845 qpair failed and we were unable to recover it. 00:37:45.845 [2024-12-06 01:15:18.351481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.845 [2024-12-06 01:15:18.351515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.845 qpair failed and we were unable to recover it. 00:37:45.845 [2024-12-06 01:15:18.351657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.845 [2024-12-06 01:15:18.351691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.845 qpair failed and we were unable to recover it. 00:37:45.845 [2024-12-06 01:15:18.351805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.845 [2024-12-06 01:15:18.351847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.845 qpair failed and we were unable to recover it. 00:37:45.845 [2024-12-06 01:15:18.351984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.845 [2024-12-06 01:15:18.352019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.845 qpair failed and we were unable to recover it. 00:37:45.845 [2024-12-06 01:15:18.352152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.845 [2024-12-06 01:15:18.352186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.845 qpair failed and we were unable to recover it. 
00:37:45.845 [2024-12-06 01:15:18.352294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.845 [2024-12-06 01:15:18.352328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.845 qpair failed and we were unable to recover it. 00:37:45.845 [2024-12-06 01:15:18.352461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.845 [2024-12-06 01:15:18.352495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.845 qpair failed and we were unable to recover it. 00:37:45.845 [2024-12-06 01:15:18.352653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.845 [2024-12-06 01:15:18.352686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.845 qpair failed and we were unable to recover it. 00:37:45.845 [2024-12-06 01:15:18.352822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.845 [2024-12-06 01:15:18.352856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.845 qpair failed and we were unable to recover it. 00:37:45.845 [2024-12-06 01:15:18.352964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.845 [2024-12-06 01:15:18.352998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.845 qpair failed and we were unable to recover it. 00:37:45.845 [2024-12-06 01:15:18.353154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.845 [2024-12-06 01:15:18.353188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.846 qpair failed and we were unable to recover it. 00:37:45.846 [2024-12-06 01:15:18.353320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.846 [2024-12-06 01:15:18.353355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.846 qpair failed and we were unable to recover it. 00:37:45.846 [2024-12-06 01:15:18.353494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.846 [2024-12-06 01:15:18.353528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.846 qpair failed and we were unable to recover it. 00:37:45.846 [2024-12-06 01:15:18.353662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.846 [2024-12-06 01:15:18.353697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.846 qpair failed and we were unable to recover it. 00:37:45.846 [2024-12-06 01:15:18.353846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.846 [2024-12-06 01:15:18.353881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.846 qpair failed and we were unable to recover it. 
00:37:45.846 [2024-12-06 01:15:18.354010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.846 [2024-12-06 01:15:18.354049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.846 qpair failed and we were unable to recover it. 00:37:45.846 [2024-12-06 01:15:18.354192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.846 [2024-12-06 01:15:18.354226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.846 qpair failed and we were unable to recover it. 00:37:45.846 [2024-12-06 01:15:18.354366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.846 [2024-12-06 01:15:18.354401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.846 qpair failed and we were unable to recover it. 00:37:45.846 [2024-12-06 01:15:18.354562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.846 [2024-12-06 01:15:18.354596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.846 qpair failed and we were unable to recover it. 00:37:45.846 [2024-12-06 01:15:18.354701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.846 [2024-12-06 01:15:18.354734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.846 qpair failed and we were unable to recover it. 00:37:45.846 [2024-12-06 01:15:18.354870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.846 [2024-12-06 01:15:18.354905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.846 qpair failed and we were unable to recover it. 00:37:45.846 [2024-12-06 01:15:18.355046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.846 [2024-12-06 01:15:18.355079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.846 qpair failed and we were unable to recover it. 00:37:45.846 [2024-12-06 01:15:18.355214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.846 [2024-12-06 01:15:18.355247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.846 qpair failed and we were unable to recover it. 00:37:45.846 [2024-12-06 01:15:18.355388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.846 [2024-12-06 01:15:18.355423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.846 qpair failed and we were unable to recover it. 00:37:45.846 [2024-12-06 01:15:18.355574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.846 [2024-12-06 01:15:18.355608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.846 qpair failed and we were unable to recover it. 
00:37:45.846 [2024-12-06 01:15:18.355775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.846 [2024-12-06 01:15:18.355808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.846 qpair failed and we were unable to recover it. 00:37:45.846 [2024-12-06 01:15:18.355933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.846 [2024-12-06 01:15:18.355967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.846 qpair failed and we were unable to recover it. 00:37:45.846 [2024-12-06 01:15:18.356102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.846 [2024-12-06 01:15:18.356135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.846 qpair failed and we were unable to recover it. 00:37:45.846 [2024-12-06 01:15:18.356235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.846 [2024-12-06 01:15:18.356275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.846 qpair failed and we were unable to recover it. 00:37:45.846 [2024-12-06 01:15:18.356441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.846 [2024-12-06 01:15:18.356477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.846 qpair failed and we were unable to recover it. 00:37:45.846 [2024-12-06 01:15:18.356586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.846 [2024-12-06 01:15:18.356619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.847 qpair failed and we were unable to recover it. 00:37:45.847 [2024-12-06 01:15:18.356753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.847 [2024-12-06 01:15:18.356787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.847 qpair failed and we were unable to recover it. 00:37:45.847 [2024-12-06 01:15:18.356952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.847 [2024-12-06 01:15:18.356986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.847 qpair failed and we were unable to recover it. 00:37:45.847 [2024-12-06 01:15:18.357091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.847 [2024-12-06 01:15:18.357125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.847 qpair failed and we were unable to recover it. 00:37:45.847 [2024-12-06 01:15:18.357268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.847 [2024-12-06 01:15:18.357302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.847 qpair failed and we were unable to recover it. 
00:37:45.847 [2024-12-06 01:15:18.357411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.847 [2024-12-06 01:15:18.357443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.847 qpair failed and we were unable to recover it. 00:37:45.847 [2024-12-06 01:15:18.357610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.847 [2024-12-06 01:15:18.357643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.847 qpair failed and we were unable to recover it. 00:37:45.847 [2024-12-06 01:15:18.357751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.847 [2024-12-06 01:15:18.357784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.847 qpair failed and we were unable to recover it. 00:37:45.847 [2024-12-06 01:15:18.357954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.847 [2024-12-06 01:15:18.357991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.847 qpair failed and we were unable to recover it. 00:37:45.847 [2024-12-06 01:15:18.358128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.847 [2024-12-06 01:15:18.358162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.847 qpair failed and we were unable to recover it. 00:37:45.847 [2024-12-06 01:15:18.358309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.847 [2024-12-06 01:15:18.358343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.847 qpair failed and we were unable to recover it. 00:37:45.847 [2024-12-06 01:15:18.358483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.847 [2024-12-06 01:15:18.358517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.847 qpair failed and we were unable to recover it. 00:37:45.847 [2024-12-06 01:15:18.358657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.847 [2024-12-06 01:15:18.358692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.847 qpair failed and we were unable to recover it. 00:37:45.847 [2024-12-06 01:15:18.358804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.847 [2024-12-06 01:15:18.358848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.847 qpair failed and we were unable to recover it. 00:37:45.847 [2024-12-06 01:15:18.358985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.847 [2024-12-06 01:15:18.359019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.847 qpair failed and we were unable to recover it. 
00:37:45.847 [2024-12-06 01:15:18.359158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.847 [2024-12-06 01:15:18.359192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.847 qpair failed and we were unable to recover it. 00:37:45.847 [2024-12-06 01:15:18.359307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.847 [2024-12-06 01:15:18.359341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.847 qpair failed and we were unable to recover it. 00:37:45.847 [2024-12-06 01:15:18.359455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.847 [2024-12-06 01:15:18.359488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.847 qpair failed and we were unable to recover it. 00:37:45.847 [2024-12-06 01:15:18.359661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.847 [2024-12-06 01:15:18.359699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.847 qpair failed and we were unable to recover it. 00:37:45.847 [2024-12-06 01:15:18.359842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.847 [2024-12-06 01:15:18.359892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.847 qpair failed and we were unable to recover it. 00:37:45.847 [2024-12-06 01:15:18.360041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.847 [2024-12-06 01:15:18.360076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.847 qpair failed and we were unable to recover it. 00:37:45.847 [2024-12-06 01:15:18.360212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.847 [2024-12-06 01:15:18.360246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.847 qpair failed and we were unable to recover it. 00:37:45.847 [2024-12-06 01:15:18.360405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.847 [2024-12-06 01:15:18.360438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.847 qpair failed and we were unable to recover it. 00:37:45.847 [2024-12-06 01:15:18.360572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.847 [2024-12-06 01:15:18.360607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.847 qpair failed and we were unable to recover it. 00:37:45.847 [2024-12-06 01:15:18.360719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.847 [2024-12-06 01:15:18.360753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.847 qpair failed and we were unable to recover it. 
00:37:45.847 [2024-12-06 01:15:18.360880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.847 [2024-12-06 01:15:18.360919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.847 qpair failed and we were unable to recover it. 00:37:45.847 [2024-12-06 01:15:18.361028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.847 [2024-12-06 01:15:18.361063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.847 qpair failed and we were unable to recover it. 00:37:45.847 [2024-12-06 01:15:18.361201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.847 [2024-12-06 01:15:18.361234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.847 qpair failed and we were unable to recover it. 00:37:45.847 [2024-12-06 01:15:18.361372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.847 [2024-12-06 01:15:18.361405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.847 qpair failed and we were unable to recover it. 00:37:45.847 [2024-12-06 01:15:18.361541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.847 [2024-12-06 01:15:18.361574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.847 qpair failed and we were unable to recover it. 00:37:45.847 [2024-12-06 01:15:18.361721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.847 [2024-12-06 01:15:18.361755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.847 qpair failed and we were unable to recover it. 00:37:45.847 [2024-12-06 01:15:18.361896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.847 [2024-12-06 01:15:18.361930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.847 qpair failed and we were unable to recover it. 00:37:45.847 [2024-12-06 01:15:18.362041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.847 [2024-12-06 01:15:18.362075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.847 qpair failed and we were unable to recover it. 00:37:45.847 [2024-12-06 01:15:18.362182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.847 [2024-12-06 01:15:18.362217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.847 qpair failed and we were unable to recover it. 00:37:45.847 [2024-12-06 01:15:18.362373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.847 [2024-12-06 01:15:18.362407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.847 qpair failed and we were unable to recover it. 
00:37:45.847 [2024-12-06 01:15:18.362546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.847 [2024-12-06 01:15:18.362580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.847 qpair failed and we were unable to recover it. 00:37:45.847 [2024-12-06 01:15:18.362754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.847 [2024-12-06 01:15:18.362788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.847 qpair failed and we were unable to recover it. 00:37:45.847 [2024-12-06 01:15:18.362956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.847 [2024-12-06 01:15:18.363005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.847 qpair failed and we were unable to recover it. 00:37:45.847 [2024-12-06 01:15:18.363163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.847 [2024-12-06 01:15:18.363202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.847 qpair failed and we were unable to recover it. 00:37:45.847 [2024-12-06 01:15:18.363351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.847 [2024-12-06 01:15:18.363387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.847 qpair failed and we were unable to recover it. 00:37:45.847 [2024-12-06 01:15:18.363518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.847 [2024-12-06 01:15:18.363552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.847 qpair failed and we were unable to recover it. 00:37:45.847 [2024-12-06 01:15:18.363717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.847 [2024-12-06 01:15:18.363752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.847 qpair failed and we were unable to recover it. 00:37:45.847 [2024-12-06 01:15:18.363902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.847 [2024-12-06 01:15:18.363939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.847 qpair failed and we were unable to recover it. 00:37:45.847 [2024-12-06 01:15:18.364091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.847 [2024-12-06 01:15:18.364127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.847 qpair failed and we were unable to recover it. 00:37:45.847 [2024-12-06 01:15:18.364233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.847 [2024-12-06 01:15:18.364267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.847 qpair failed and we were unable to recover it. 
00:37:45.847 [2024-12-06 01:15:18.364382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.847 [2024-12-06 01:15:18.364415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.847 qpair failed and we were unable to recover it. 00:37:45.847 [2024-12-06 01:15:18.364550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.847 [2024-12-06 01:15:18.364583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.847 qpair failed and we were unable to recover it. 00:37:45.847 [2024-12-06 01:15:18.364720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.847 [2024-12-06 01:15:18.364754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.847 qpair failed and we were unable to recover it. 00:37:45.847 [2024-12-06 01:15:18.364922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.848 [2024-12-06 01:15:18.364955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.848 qpair failed and we were unable to recover it. 00:37:45.848 [2024-12-06 01:15:18.365107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.848 [2024-12-06 01:15:18.365144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.848 qpair failed and we were unable to recover it. 00:37:45.848 [2024-12-06 01:15:18.365250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.848 [2024-12-06 01:15:18.365285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.848 qpair failed and we were unable to recover it. 00:37:45.848 [2024-12-06 01:15:18.365421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.848 [2024-12-06 01:15:18.365457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.848 qpair failed and we were unable to recover it. 00:37:45.848 [2024-12-06 01:15:18.365601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.848 [2024-12-06 01:15:18.365635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.848 qpair failed and we were unable to recover it. 00:37:45.848 [2024-12-06 01:15:18.365786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.848 [2024-12-06 01:15:18.365841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.848 qpair failed and we were unable to recover it. 00:37:45.848 [2024-12-06 01:15:18.365988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.848 [2024-12-06 01:15:18.366025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.848 qpair failed and we were unable to recover it. 
00:37:45.848 [2024-12-06 01:15:18.366174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.848 [2024-12-06 01:15:18.366210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.848 qpair failed and we were unable to recover it. 00:37:45.848 [2024-12-06 01:15:18.366350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.848 [2024-12-06 01:15:18.366398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.848 qpair failed and we were unable to recover it. 00:37:45.848 [2024-12-06 01:15:18.366509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.848 [2024-12-06 01:15:18.366544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.848 qpair failed and we were unable to recover it. 00:37:45.848 [2024-12-06 01:15:18.366651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.848 [2024-12-06 01:15:18.366685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.848 qpair failed and we were unable to recover it. 00:37:45.848 [2024-12-06 01:15:18.366836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.848 [2024-12-06 01:15:18.366884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.848 qpair failed and we were unable to recover it. 00:37:45.848 [2024-12-06 01:15:18.367027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.848 [2024-12-06 01:15:18.367062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.848 qpair failed and we were unable to recover it. 00:37:45.848 [2024-12-06 01:15:18.367226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.848 [2024-12-06 01:15:18.367261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.848 qpair failed and we were unable to recover it. 00:37:45.848 [2024-12-06 01:15:18.367375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.848 [2024-12-06 01:15:18.367408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.848 qpair failed and we were unable to recover it. 00:37:45.848 [2024-12-06 01:15:18.367568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.848 [2024-12-06 01:15:18.367602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.848 qpair failed and we were unable to recover it. 00:37:45.848 [2024-12-06 01:15:18.367767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.848 [2024-12-06 01:15:18.367801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.848 qpair failed and we were unable to recover it. 
00:37:45.848 [2024-12-06 01:15:18.367913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.848 [2024-12-06 01:15:18.367953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.848 qpair failed and we were unable to recover it. 00:37:45.848 [2024-12-06 01:15:18.368091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.848 [2024-12-06 01:15:18.368124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.848 qpair failed and we were unable to recover it. 00:37:45.848 [2024-12-06 01:15:18.368258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.848 [2024-12-06 01:15:18.368292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.848 qpair failed and we were unable to recover it. 00:37:45.848 [2024-12-06 01:15:18.368430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.848 [2024-12-06 01:15:18.368465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.848 qpair failed and we were unable to recover it. 00:37:45.848 [2024-12-06 01:15:18.368576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.848 [2024-12-06 01:15:18.368610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.848 qpair failed and we were unable to recover it. 00:37:45.848 [2024-12-06 01:15:18.368773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.848 [2024-12-06 01:15:18.368832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.848 qpair failed and we were unable to recover it. 00:37:45.848 [2024-12-06 01:15:18.369026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.848 [2024-12-06 01:15:18.369061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.848 qpair failed and we were unable to recover it. 00:37:45.848 [2024-12-06 01:15:18.369198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.848 [2024-12-06 01:15:18.369232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.848 qpair failed and we were unable to recover it. 00:37:45.848 [2024-12-06 01:15:18.369379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.848 [2024-12-06 01:15:18.369414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.848 qpair failed and we were unable to recover it. 00:37:45.848 [2024-12-06 01:15:18.369526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.848 [2024-12-06 01:15:18.369561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.848 qpair failed and we were unable to recover it. 
00:37:45.848 [2024-12-06 01:15:18.369737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.848 [2024-12-06 01:15:18.369774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.848 qpair failed and we were unable to recover it. 00:37:45.848 [2024-12-06 01:15:18.369985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.848 [2024-12-06 01:15:18.370033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.848 qpair failed and we were unable to recover it. 00:37:45.848 [2024-12-06 01:15:18.370178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.848 [2024-12-06 01:15:18.370214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.848 qpair failed and we were unable to recover it. 00:37:45.848 [2024-12-06 01:15:18.370379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.848 [2024-12-06 01:15:18.370413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.848 qpair failed and we were unable to recover it. 00:37:45.848 [2024-12-06 01:15:18.370559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.848 [2024-12-06 01:15:18.370593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.848 qpair failed and we were unable to recover it. 00:37:45.848 [2024-12-06 01:15:18.370706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.848 [2024-12-06 01:15:18.370740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.848 qpair failed and we were unable to recover it. 00:37:45.848 [2024-12-06 01:15:18.370905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.848 [2024-12-06 01:15:18.370940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.848 qpair failed and we were unable to recover it. 00:37:45.848 [2024-12-06 01:15:18.371084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.848 [2024-12-06 01:15:18.371118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.848 qpair failed and we were unable to recover it. 00:37:45.848 [2024-12-06 01:15:18.371232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.848 [2024-12-06 01:15:18.371266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.848 qpair failed and we were unable to recover it. 00:37:45.848 [2024-12-06 01:15:18.371408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.848 [2024-12-06 01:15:18.371442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.848 qpair failed and we were unable to recover it. 
00:37:45.848 [2024-12-06 01:15:18.371556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.848 [2024-12-06 01:15:18.371589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.848 qpair failed and we were unable to recover it. 00:37:45.848 [2024-12-06 01:15:18.371749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.848 [2024-12-06 01:15:18.371783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.848 qpair failed and we were unable to recover it. 00:37:45.848 [2024-12-06 01:15:18.371926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.848 [2024-12-06 01:15:18.371959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.848 qpair failed and we were unable to recover it. 00:37:45.848 [2024-12-06 01:15:18.372068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.848 [2024-12-06 01:15:18.372102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.848 qpair failed and we were unable to recover it. 00:37:45.848 [2024-12-06 01:15:18.372234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.848 [2024-12-06 01:15:18.372268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.848 qpair failed and we were unable to recover it. 00:37:45.848 [2024-12-06 01:15:18.372432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.848 [2024-12-06 01:15:18.372472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.848 qpair failed and we were unable to recover it. 00:37:45.848 [2024-12-06 01:15:18.372610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.848 [2024-12-06 01:15:18.372646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.848 qpair failed and we were unable to recover it. 00:37:45.848 [2024-12-06 01:15:18.372782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.848 [2024-12-06 01:15:18.372824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.848 qpair failed and we were unable to recover it. 00:37:45.848 [2024-12-06 01:15:18.372991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.848 [2024-12-06 01:15:18.373025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.848 qpair failed and we were unable to recover it. 00:37:45.848 [2024-12-06 01:15:18.373161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.848 [2024-12-06 01:15:18.373195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.848 qpair failed and we were unable to recover it. 
00:37:45.848 [2024-12-06 01:15:18.373356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.848 [2024-12-06 01:15:18.373390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.848 qpair failed and we were unable to recover it. 00:37:45.848 [2024-12-06 01:15:18.373502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.848 [2024-12-06 01:15:18.373536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.848 qpair failed and we were unable to recover it. 00:37:45.848 [2024-12-06 01:15:18.373683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.848 [2024-12-06 01:15:18.373718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.848 qpair failed and we were unable to recover it. 00:37:45.848 [2024-12-06 01:15:18.373902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.848 [2024-12-06 01:15:18.373953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.848 qpair failed and we were unable to recover it. 00:37:45.848 [2024-12-06 01:15:18.374102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.848 [2024-12-06 01:15:18.374140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.848 qpair failed and we were unable to recover it. 00:37:45.848 [2024-12-06 01:15:18.374304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.848 [2024-12-06 01:15:18.374340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.848 qpair failed and we were unable to recover it. 00:37:45.848 [2024-12-06 01:15:18.374513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.848 [2024-12-06 01:15:18.374549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.848 qpair failed and we were unable to recover it. 00:37:45.848 [2024-12-06 01:15:18.374687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.848 [2024-12-06 01:15:18.374722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.848 qpair failed and we were unable to recover it. 00:37:45.848 [2024-12-06 01:15:18.374857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.848 [2024-12-06 01:15:18.374893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.848 qpair failed and we were unable to recover it. 00:37:45.848 [2024-12-06 01:15:18.375039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.848 [2024-12-06 01:15:18.375088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.848 qpair failed and we were unable to recover it. 
00:37:45.848 [2024-12-06 01:15:18.375258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.848 [2024-12-06 01:15:18.375300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.848 qpair failed and we were unable to recover it. 00:37:45.848 [2024-12-06 01:15:18.375440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.848 [2024-12-06 01:15:18.375474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.848 qpair failed and we were unable to recover it. 00:37:45.848 [2024-12-06 01:15:18.375610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.848 [2024-12-06 01:15:18.375644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.848 qpair failed and we were unable to recover it. 00:37:45.848 [2024-12-06 01:15:18.375780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.848 [2024-12-06 01:15:18.375820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.848 qpair failed and we were unable to recover it. 00:37:45.848 [2024-12-06 01:15:18.376005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.848 [2024-12-06 01:15:18.376055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.848 qpair failed and we were unable to recover it. 00:37:45.848 [2024-12-06 01:15:18.376169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.848 [2024-12-06 01:15:18.376205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.848 qpair failed and we were unable to recover it. 00:37:45.848 [2024-12-06 01:15:18.376348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.848 [2024-12-06 01:15:18.376382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.848 qpair failed and we were unable to recover it. 00:37:45.848 [2024-12-06 01:15:18.376548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.848 [2024-12-06 01:15:18.376582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.848 qpair failed and we were unable to recover it. 00:37:45.848 [2024-12-06 01:15:18.376746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.848 [2024-12-06 01:15:18.376780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.848 qpair failed and we were unable to recover it. 00:37:45.848 [2024-12-06 01:15:18.376918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.848 [2024-12-06 01:15:18.376953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.848 qpair failed and we were unable to recover it. 
00:37:45.848 [2024-12-06 01:15:18.377090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.848 [2024-12-06 01:15:18.377125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.848 qpair failed and we were unable to recover it. 00:37:45.848 [2024-12-06 01:15:18.377223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.848 [2024-12-06 01:15:18.377257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.848 qpair failed and we were unable to recover it. 00:37:45.848 [2024-12-06 01:15:18.377360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.848 [2024-12-06 01:15:18.377393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.848 qpair failed and we were unable to recover it. 00:37:45.848 [2024-12-06 01:15:18.377504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.848 [2024-12-06 01:15:18.377538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.848 qpair failed and we were unable to recover it. 00:37:45.848 [2024-12-06 01:15:18.377676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.848 [2024-12-06 01:15:18.377709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.848 qpair failed and we were unable to recover it. 00:37:45.848 [2024-12-06 01:15:18.377872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.848 [2024-12-06 01:15:18.377907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.848 qpair failed and we were unable to recover it. 00:37:45.848 [2024-12-06 01:15:18.378008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.849 [2024-12-06 01:15:18.378042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.849 qpair failed and we were unable to recover it. 00:37:45.849 [2024-12-06 01:15:18.378181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.849 [2024-12-06 01:15:18.378215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.849 qpair failed and we were unable to recover it. 00:37:45.849 [2024-12-06 01:15:18.378352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.849 [2024-12-06 01:15:18.378385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.849 qpair failed and we were unable to recover it. 00:37:45.849 [2024-12-06 01:15:18.378547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.849 [2024-12-06 01:15:18.378581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.849 qpair failed and we were unable to recover it. 
00:37:45.849 [2024-12-06 01:15:18.378727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.849 [2024-12-06 01:15:18.378767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.849 qpair failed and we were unable to recover it. 00:37:45.849 [2024-12-06 01:15:18.378924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.849 [2024-12-06 01:15:18.378960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.849 qpair failed and we were unable to recover it. 00:37:45.849 [2024-12-06 01:15:18.379091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.849 [2024-12-06 01:15:18.379126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.849 qpair failed and we were unable to recover it. 00:37:45.849 [2024-12-06 01:15:18.379265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.849 [2024-12-06 01:15:18.379300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.849 qpair failed and we were unable to recover it. 00:37:45.849 [2024-12-06 01:15:18.379436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.849 [2024-12-06 01:15:18.379471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.849 qpair failed and we were unable to recover it. 00:37:45.849 [2024-12-06 01:15:18.379627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.849 [2024-12-06 01:15:18.379676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.849 qpair failed and we were unable to recover it. 00:37:45.849 [2024-12-06 01:15:18.379824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.849 [2024-12-06 01:15:18.379858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.849 qpair failed and we were unable to recover it. 00:37:45.849 [2024-12-06 01:15:18.379980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.849 [2024-12-06 01:15:18.380014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.849 qpair failed and we were unable to recover it. 00:37:45.849 [2024-12-06 01:15:18.380149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.849 [2024-12-06 01:15:18.380183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.849 qpair failed and we were unable to recover it. 00:37:45.849 [2024-12-06 01:15:18.380293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.849 [2024-12-06 01:15:18.380327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.849 qpair failed and we were unable to recover it. 
00:37:45.849 [2024-12-06 01:15:18.380454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.849 [2024-12-06 01:15:18.380487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.849 qpair failed and we were unable to recover it. 00:37:45.849 [2024-12-06 01:15:18.380596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.849 [2024-12-06 01:15:18.380633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.849 qpair failed and we were unable to recover it. 00:37:45.849 [2024-12-06 01:15:18.380784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.849 [2024-12-06 01:15:18.380847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.849 qpair failed and we were unable to recover it. 00:37:45.849 [2024-12-06 01:15:18.380997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.849 [2024-12-06 01:15:18.381033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.849 qpair failed and we were unable to recover it. 00:37:45.849 [2024-12-06 01:15:18.381174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.849 [2024-12-06 01:15:18.381208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.849 qpair failed and we were unable to recover it. 00:37:45.849 [2024-12-06 01:15:18.381370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.849 [2024-12-06 01:15:18.381404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.849 qpair failed and we were unable to recover it. 00:37:45.849 [2024-12-06 01:15:18.381535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.849 [2024-12-06 01:15:18.381569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.849 qpair failed and we were unable to recover it. 00:37:45.849 [2024-12-06 01:15:18.381732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.849 [2024-12-06 01:15:18.381767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.849 qpair failed and we were unable to recover it. 00:37:45.849 [2024-12-06 01:15:18.381886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.849 [2024-12-06 01:15:18.381923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.849 qpair failed and we were unable to recover it. 00:37:45.849 [2024-12-06 01:15:18.382061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.849 [2024-12-06 01:15:18.382096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.849 qpair failed and we were unable to recover it. 
00:37:45.849 [2024-12-06 01:15:18.382225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.849 [2024-12-06 01:15:18.382266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.849 qpair failed and we were unable to recover it. 00:37:45.849 [2024-12-06 01:15:18.382415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.849 [2024-12-06 01:15:18.382450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.849 qpair failed and we were unable to recover it. 00:37:45.849 [2024-12-06 01:15:18.382557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.849 [2024-12-06 01:15:18.382592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.849 qpair failed and we were unable to recover it. 00:37:45.849 [2024-12-06 01:15:18.382755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.849 [2024-12-06 01:15:18.382796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.849 qpair failed and we were unable to recover it. 00:37:45.849 [2024-12-06 01:15:18.382925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.849 [2024-12-06 01:15:18.382973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.849 qpair failed and we were unable to recover it. 00:37:45.849 [2024-12-06 01:15:18.383117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.849 [2024-12-06 01:15:18.383153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.849 qpair failed and we were unable to recover it. 00:37:45.849 [2024-12-06 01:15:18.383292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.849 [2024-12-06 01:15:18.383326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.849 qpair failed and we were unable to recover it. 00:37:45.849 [2024-12-06 01:15:18.383438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.849 [2024-12-06 01:15:18.383474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.849 qpair failed and we were unable to recover it. 00:37:45.849 [2024-12-06 01:15:18.383645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.849 [2024-12-06 01:15:18.383696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.849 qpair failed and we were unable to recover it. 00:37:45.849 [2024-12-06 01:15:18.383907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.849 [2024-12-06 01:15:18.383943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.849 qpair failed and we were unable to recover it. 
00:37:45.849 [2024-12-06 01:15:18.384113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.849 [2024-12-06 01:15:18.384149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.849 qpair failed and we were unable to recover it. 00:37:45.849 [2024-12-06 01:15:18.384259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.849 [2024-12-06 01:15:18.384293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.849 qpair failed and we were unable to recover it. 00:37:45.849 [2024-12-06 01:15:18.384469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.849 [2024-12-06 01:15:18.384504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.849 qpair failed and we were unable to recover it. 00:37:45.849 [2024-12-06 01:15:18.384614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.849 [2024-12-06 01:15:18.384648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.849 qpair failed and we were unable to recover it. 00:37:45.849 [2024-12-06 01:15:18.384789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.849 [2024-12-06 01:15:18.384829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.849 qpair failed and we were unable to recover it. 00:37:45.849 [2024-12-06 01:15:18.384948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.849 [2024-12-06 01:15:18.384983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.849 qpair failed and we were unable to recover it. 00:37:45.849 [2024-12-06 01:15:18.385120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.849 [2024-12-06 01:15:18.385155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.849 qpair failed and we were unable to recover it. 00:37:45.849 [2024-12-06 01:15:18.385286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.849 [2024-12-06 01:15:18.385320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.849 qpair failed and we were unable to recover it. 00:37:45.849 [2024-12-06 01:15:18.385452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.849 [2024-12-06 01:15:18.385485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.849 qpair failed and we were unable to recover it. 00:37:45.849 [2024-12-06 01:15:18.385646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.849 [2024-12-06 01:15:18.385680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.849 qpair failed and we were unable to recover it. 
00:37:45.849 [2024-12-06 01:15:18.385841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.849 [2024-12-06 01:15:18.385875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.849 qpair failed and we were unable to recover it. 00:37:45.849 [2024-12-06 01:15:18.386037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.849 [2024-12-06 01:15:18.386073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.849 qpair failed and we were unable to recover it. 00:37:45.849 [2024-12-06 01:15:18.386236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.849 [2024-12-06 01:15:18.386270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.849 qpair failed and we were unable to recover it. 00:37:45.849 [2024-12-06 01:15:18.386416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.849 [2024-12-06 01:15:18.386450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.849 qpair failed and we were unable to recover it. 00:37:45.849 [2024-12-06 01:15:18.386578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.849 [2024-12-06 01:15:18.386612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.849 qpair failed and we were unable to recover it. 00:37:45.849 [2024-12-06 01:15:18.386721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.849 [2024-12-06 01:15:18.386756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.849 qpair failed and we were unable to recover it. 00:37:45.849 [2024-12-06 01:15:18.386922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.849 [2024-12-06 01:15:18.386956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.849 qpair failed and we were unable to recover it. 00:37:45.849 [2024-12-06 01:15:18.387065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.849 [2024-12-06 01:15:18.387101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.849 qpair failed and we were unable to recover it. 00:37:45.849 [2024-12-06 01:15:18.387234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.849 [2024-12-06 01:15:18.387268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.849 qpair failed and we were unable to recover it. 00:37:45.849 [2024-12-06 01:15:18.387398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.849 [2024-12-06 01:15:18.387431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.849 qpair failed and we were unable to recover it. 
00:37:45.849 [2024-12-06 01:15:18.387567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.849 [2024-12-06 01:15:18.387601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.849 qpair failed and we were unable to recover it. 00:37:45.849 [2024-12-06 01:15:18.387731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.849 [2024-12-06 01:15:18.387765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.849 qpair failed and we were unable to recover it. 00:37:45.849 [2024-12-06 01:15:18.387902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.849 [2024-12-06 01:15:18.387936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.849 qpair failed and we were unable to recover it. 00:37:45.849 [2024-12-06 01:15:18.388031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.849 [2024-12-06 01:15:18.388065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.849 qpair failed and we were unable to recover it. 00:37:45.849 [2024-12-06 01:15:18.388158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.849 [2024-12-06 01:15:18.388191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.849 qpair failed and we were unable to recover it. 00:37:45.849 [2024-12-06 01:15:18.388325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.849 [2024-12-06 01:15:18.388358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.849 qpair failed and we were unable to recover it. 00:37:45.849 [2024-12-06 01:15:18.388502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.849 [2024-12-06 01:15:18.388538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.849 qpair failed and we were unable to recover it. 00:37:45.849 [2024-12-06 01:15:18.388675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.849 [2024-12-06 01:15:18.388709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.849 qpair failed and we were unable to recover it. 00:37:45.849 [2024-12-06 01:15:18.388864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.849 [2024-12-06 01:15:18.388900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.849 qpair failed and we were unable to recover it. 00:37:45.849 [2024-12-06 01:15:18.389034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.849 [2024-12-06 01:15:18.389082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.849 qpair failed and we were unable to recover it. 
00:37:45.849 [2024-12-06 01:15:18.389221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.849 [2024-12-06 01:15:18.389261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.849 qpair failed and we were unable to recover it. 00:37:45.849 [2024-12-06 01:15:18.389400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.849 [2024-12-06 01:15:18.389435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.849 qpair failed and we were unable to recover it. 00:37:45.849 [2024-12-06 01:15:18.389601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.849 [2024-12-06 01:15:18.389636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.849 qpair failed and we were unable to recover it. 00:37:45.849 [2024-12-06 01:15:18.389741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.850 [2024-12-06 01:15:18.389774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.850 qpair failed and we were unable to recover it. 00:37:45.850 [2024-12-06 01:15:18.389917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.850 [2024-12-06 01:15:18.389952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.850 qpair failed and we were unable to recover it. 00:37:45.850 [2024-12-06 01:15:18.390089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.850 [2024-12-06 01:15:18.390123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.850 qpair failed and we were unable to recover it. 00:37:45.850 [2024-12-06 01:15:18.390256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.850 [2024-12-06 01:15:18.390290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.850 qpair failed and we were unable to recover it. 00:37:45.850 [2024-12-06 01:15:18.390400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.850 [2024-12-06 01:15:18.390433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.850 qpair failed and we were unable to recover it. 00:37:45.850 [2024-12-06 01:15:18.390567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.850 [2024-12-06 01:15:18.390600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.850 qpair failed and we were unable to recover it. 00:37:45.850 [2024-12-06 01:15:18.390711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.850 [2024-12-06 01:15:18.390744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.850 qpair failed and we were unable to recover it. 
00:37:45.850 [2024-12-06 01:15:18.390875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.850 [2024-12-06 01:15:18.390909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.850 qpair failed and we were unable to recover it. 00:37:45.850 [2024-12-06 01:15:18.391056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.850 [2024-12-06 01:15:18.391092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.850 qpair failed and we were unable to recover it. 00:37:45.850 [2024-12-06 01:15:18.391242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.850 [2024-12-06 01:15:18.391290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.850 qpair failed and we were unable to recover it. 00:37:45.850 [2024-12-06 01:15:18.391441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.850 [2024-12-06 01:15:18.391478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.850 qpair failed and we were unable to recover it. 00:37:45.850 [2024-12-06 01:15:18.391593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.850 [2024-12-06 01:15:18.391627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.850 qpair failed and we were unable to recover it. 00:37:45.850 [2024-12-06 01:15:18.391758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.850 [2024-12-06 01:15:18.391793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.850 qpair failed and we were unable to recover it. 00:37:45.850 [2024-12-06 01:15:18.391939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.850 [2024-12-06 01:15:18.391974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.850 qpair failed and we were unable to recover it. 00:37:45.850 [2024-12-06 01:15:18.392093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.850 [2024-12-06 01:15:18.392127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.850 qpair failed and we were unable to recover it. 00:37:45.850 [2024-12-06 01:15:18.392324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.850 [2024-12-06 01:15:18.392359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.850 qpair failed and we were unable to recover it. 00:37:45.850 [2024-12-06 01:15:18.392465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.850 [2024-12-06 01:15:18.392498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.850 qpair failed and we were unable to recover it. 
00:37:45.850 [2024-12-06 01:15:18.392629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.850 [2024-12-06 01:15:18.392663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.850 qpair failed and we were unable to recover it. 00:37:45.850 [2024-12-06 01:15:18.392803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.850 [2024-12-06 01:15:18.392844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.850 qpair failed and we were unable to recover it. 00:37:45.850 [2024-12-06 01:15:18.392992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.850 [2024-12-06 01:15:18.393026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.850 qpair failed and we were unable to recover it. 00:37:45.850 [2024-12-06 01:15:18.393189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.850 [2024-12-06 01:15:18.393225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.850 qpair failed and we were unable to recover it. 00:37:45.850 [2024-12-06 01:15:18.393360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.850 [2024-12-06 01:15:18.393394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.850 qpair failed and we were unable to recover it. 00:37:45.850 [2024-12-06 01:15:18.393529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.850 [2024-12-06 01:15:18.393563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.850 qpair failed and we were unable to recover it. 00:37:45.850 [2024-12-06 01:15:18.393695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.850 [2024-12-06 01:15:18.393729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.850 qpair failed and we were unable to recover it. 00:37:45.850 [2024-12-06 01:15:18.393887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.850 [2024-12-06 01:15:18.393936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.850 qpair failed and we were unable to recover it. 00:37:45.850 [2024-12-06 01:15:18.394110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.850 [2024-12-06 01:15:18.394146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.850 qpair failed and we were unable to recover it. 00:37:45.850 [2024-12-06 01:15:18.394287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.850 [2024-12-06 01:15:18.394324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.850 qpair failed and we were unable to recover it. 
00:37:45.850 [2024-12-06 01:15:18.394467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.850 [2024-12-06 01:15:18.394503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.850 qpair failed and we were unable to recover it. 00:37:45.850 [2024-12-06 01:15:18.394644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.850 [2024-12-06 01:15:18.394679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.850 qpair failed and we were unable to recover it. 00:37:45.850 [2024-12-06 01:15:18.394820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.850 [2024-12-06 01:15:18.394855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.850 qpair failed and we were unable to recover it. 00:37:45.850 [2024-12-06 01:15:18.394958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.850 [2024-12-06 01:15:18.394993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.850 qpair failed and we were unable to recover it. 00:37:45.850 [2024-12-06 01:15:18.395135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.850 [2024-12-06 01:15:18.395170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.850 qpair failed and we were unable to recover it. 00:37:45.850 [2024-12-06 01:15:18.395307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.850 [2024-12-06 01:15:18.395342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.850 qpair failed and we were unable to recover it. 00:37:45.850 [2024-12-06 01:15:18.395482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.850 [2024-12-06 01:15:18.395518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.850 qpair failed and we were unable to recover it. 00:37:45.850 [2024-12-06 01:15:18.395676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.850 [2024-12-06 01:15:18.395714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.850 qpair failed and we were unable to recover it. 00:37:45.850 [2024-12-06 01:15:18.395889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.850 [2024-12-06 01:15:18.395924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.850 qpair failed and we were unable to recover it. 00:37:45.850 [2024-12-06 01:15:18.396059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.850 [2024-12-06 01:15:18.396093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.850 qpair failed and we were unable to recover it. 
00:37:45.850 [2024-12-06 01:15:18.396194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.850 [2024-12-06 01:15:18.396234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.850 qpair failed and we were unable to recover it. 00:37:45.850 [2024-12-06 01:15:18.396345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.850 [2024-12-06 01:15:18.396378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.850 qpair failed and we were unable to recover it. 00:37:45.850 [2024-12-06 01:15:18.396507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.850 [2024-12-06 01:15:18.396541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.850 qpair failed and we were unable to recover it. 00:37:45.850 [2024-12-06 01:15:18.396676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.850 [2024-12-06 01:15:18.396710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.850 qpair failed and we were unable to recover it. 00:37:45.850 [2024-12-06 01:15:18.396869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.850 [2024-12-06 01:15:18.396903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.850 qpair failed and we were unable to recover it. 00:37:45.850 [2024-12-06 01:15:18.397041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.850 [2024-12-06 01:15:18.397083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.850 qpair failed and we were unable to recover it. 00:37:45.850 [2024-12-06 01:15:18.397223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.850 [2024-12-06 01:15:18.397257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.850 qpair failed and we were unable to recover it. 00:37:45.850 [2024-12-06 01:15:18.397423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.850 [2024-12-06 01:15:18.397457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.850 qpair failed and we were unable to recover it. 00:37:45.850 [2024-12-06 01:15:18.397595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.850 [2024-12-06 01:15:18.397631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.850 qpair failed and we were unable to recover it. 00:37:45.850 [2024-12-06 01:15:18.397773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.850 [2024-12-06 01:15:18.397808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.850 qpair failed and we were unable to recover it. 
00:37:45.850 [2024-12-06 01:15:18.397992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.850 [2024-12-06 01:15:18.398041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.850 qpair failed and we were unable to recover it. 00:37:45.850 [2024-12-06 01:15:18.398176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.850 [2024-12-06 01:15:18.398212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.850 qpair failed and we were unable to recover it. 00:37:45.850 [2024-12-06 01:15:18.398346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.850 [2024-12-06 01:15:18.398380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.850 qpair failed and we were unable to recover it. 00:37:45.850 [2024-12-06 01:15:18.398516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.850 [2024-12-06 01:15:18.398550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.850 qpair failed and we were unable to recover it. 00:37:45.850 [2024-12-06 01:15:18.398695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.850 [2024-12-06 01:15:18.398731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.850 qpair failed and we were unable to recover it. 00:37:45.850 [2024-12-06 01:15:18.398874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.850 [2024-12-06 01:15:18.398909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.850 qpair failed and we were unable to recover it. 00:37:45.850 [2024-12-06 01:15:18.399046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.850 [2024-12-06 01:15:18.399081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.850 qpair failed and we were unable to recover it. 00:37:45.850 [2024-12-06 01:15:18.399197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.850 [2024-12-06 01:15:18.399231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.850 qpair failed and we were unable to recover it. 00:37:45.850 [2024-12-06 01:15:18.399368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.850 [2024-12-06 01:15:18.399402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.850 qpair failed and we were unable to recover it. 00:37:45.850 [2024-12-06 01:15:18.399538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.850 [2024-12-06 01:15:18.399573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.850 qpair failed and we were unable to recover it. 
00:37:45.850 [2024-12-06 01:15:18.399711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.850 [2024-12-06 01:15:18.399746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.850 qpair failed and we were unable to recover it. 00:37:45.850 [2024-12-06 01:15:18.399852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.850 [2024-12-06 01:15:18.399887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.850 qpair failed and we were unable to recover it. 00:37:45.850 [2024-12-06 01:15:18.400029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.850 [2024-12-06 01:15:18.400077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.850 qpair failed and we were unable to recover it. 00:37:45.850 [2024-12-06 01:15:18.400221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.850 [2024-12-06 01:15:18.400258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.850 qpair failed and we were unable to recover it. 00:37:45.850 [2024-12-06 01:15:18.400423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.850 [2024-12-06 01:15:18.400458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.850 qpair failed and we were unable to recover it. 00:37:45.850 [2024-12-06 01:15:18.400625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.850 [2024-12-06 01:15:18.400660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.850 qpair failed and we were unable to recover it. 00:37:45.850 [2024-12-06 01:15:18.400791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.850 [2024-12-06 01:15:18.400835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.850 qpair failed and we were unable to recover it. 00:37:45.850 [2024-12-06 01:15:18.401004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.850 [2024-12-06 01:15:18.401050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.850 qpair failed and we were unable to recover it. 00:37:45.850 [2024-12-06 01:15:18.401197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.850 [2024-12-06 01:15:18.401244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.850 qpair failed and we were unable to recover it. 00:37:45.850 [2024-12-06 01:15:18.401415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.850 [2024-12-06 01:15:18.401451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.850 qpair failed and we were unable to recover it. 
00:37:45.850 [2024-12-06 01:15:18.401617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.850 [2024-12-06 01:15:18.401651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.850 qpair failed and we were unable to recover it. 00:37:45.850 [2024-12-06 01:15:18.401789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.850 [2024-12-06 01:15:18.401832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.850 qpair failed and we were unable to recover it. 00:37:45.850 [2024-12-06 01:15:18.402022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.850 [2024-12-06 01:15:18.402071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.850 qpair failed and we were unable to recover it. 00:37:45.850 [2024-12-06 01:15:18.402191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.850 [2024-12-06 01:15:18.402227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.850 qpair failed and we were unable to recover it. 00:37:45.850 [2024-12-06 01:15:18.402452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.850 [2024-12-06 01:15:18.402515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.850 qpair failed and we were unable to recover it. 00:37:45.850 [2024-12-06 01:15:18.402641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.850 [2024-12-06 01:15:18.402679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.850 qpair failed and we were unable to recover it. 00:37:45.850 [2024-12-06 01:15:18.402829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.851 [2024-12-06 01:15:18.402881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.851 qpair failed and we were unable to recover it. 00:37:45.851 [2024-12-06 01:15:18.403040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.851 [2024-12-06 01:15:18.403074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.851 qpair failed and we were unable to recover it. 00:37:45.851 [2024-12-06 01:15:18.403222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.851 [2024-12-06 01:15:18.403257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.851 qpair failed and we were unable to recover it. 00:37:45.851 [2024-12-06 01:15:18.403387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.851 [2024-12-06 01:15:18.403432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.851 qpair failed and we were unable to recover it. 
00:37:45.851 [2024-12-06 01:15:18.403614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.851 [2024-12-06 01:15:18.403656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.851 qpair failed and we were unable to recover it. 00:37:45.851 [2024-12-06 01:15:18.403805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.851 [2024-12-06 01:15:18.403866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.851 qpair failed and we were unable to recover it. 00:37:45.851 [2024-12-06 01:15:18.404003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.851 [2024-12-06 01:15:18.404052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.851 qpair failed and we were unable to recover it. 00:37:45.851 [2024-12-06 01:15:18.404179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.851 [2024-12-06 01:15:18.404218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.851 qpair failed and we were unable to recover it. 00:37:45.851 [2024-12-06 01:15:18.404353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.851 [2024-12-06 01:15:18.404389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.851 qpair failed and we were unable to recover it. 00:37:45.851 [2024-12-06 01:15:18.404534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.851 [2024-12-06 01:15:18.404569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.851 qpair failed and we were unable to recover it. 00:37:45.851 [2024-12-06 01:15:18.404698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.851 [2024-12-06 01:15:18.404737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.851 qpair failed and we were unable to recover it. 00:37:45.851 [2024-12-06 01:15:18.404899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.851 [2024-12-06 01:15:18.404947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.851 qpair failed and we were unable to recover it. 00:37:45.851 [2024-12-06 01:15:18.405116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.851 [2024-12-06 01:15:18.405151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.851 qpair failed and we were unable to recover it. 00:37:45.851 [2024-12-06 01:15:18.405296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.851 [2024-12-06 01:15:18.405330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.851 qpair failed and we were unable to recover it. 
00:37:45.851 [2024-12-06 01:15:18.405440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.851 [2024-12-06 01:15:18.405492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.851 qpair failed and we were unable to recover it. 00:37:45.851 [2024-12-06 01:15:18.405667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.851 [2024-12-06 01:15:18.405705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.851 qpair failed and we were unable to recover it. 00:37:45.851 [2024-12-06 01:15:18.405833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.851 [2024-12-06 01:15:18.405867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.851 qpair failed and we were unable to recover it. 00:37:45.851 [2024-12-06 01:15:18.406036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.851 [2024-12-06 01:15:18.406069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.851 qpair failed and we were unable to recover it. 00:37:45.851 [2024-12-06 01:15:18.406237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.851 [2024-12-06 01:15:18.406270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.851 qpair failed and we were unable to recover it. 00:37:45.851 [2024-12-06 01:15:18.406425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.851 [2024-12-06 01:15:18.406463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.851 qpair failed and we were unable to recover it. 00:37:45.851 [2024-12-06 01:15:18.406643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.851 [2024-12-06 01:15:18.406683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.851 qpair failed and we were unable to recover it. 00:37:45.851 [2024-12-06 01:15:18.406875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.851 [2024-12-06 01:15:18.406911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.851 qpair failed and we were unable to recover it. 00:37:45.851 [2024-12-06 01:15:18.407055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.851 [2024-12-06 01:15:18.407090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.851 qpair failed and we were unable to recover it. 00:37:45.851 [2024-12-06 01:15:18.407226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.851 [2024-12-06 01:15:18.407262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.851 qpair failed and we were unable to recover it. 
00:37:45.851 [2024-12-06 01:15:18.407404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.851 [2024-12-06 01:15:18.407438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.851 qpair failed and we were unable to recover it. 00:37:45.851 [2024-12-06 01:15:18.407653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.851 [2024-12-06 01:15:18.407691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.851 qpair failed and we were unable to recover it. 00:37:45.851 [2024-12-06 01:15:18.407823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.851 [2024-12-06 01:15:18.407876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.851 qpair failed and we were unable to recover it. 00:37:45.851 [2024-12-06 01:15:18.408014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.851 [2024-12-06 01:15:18.408048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.851 qpair failed and we were unable to recover it. 00:37:45.851 [2024-12-06 01:15:18.408209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.851 [2024-12-06 01:15:18.408243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.851 qpair failed and we were unable to recover it. 00:37:45.851 [2024-12-06 01:15:18.408455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.851 [2024-12-06 01:15:18.408490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.851 qpair failed and we were unable to recover it. 00:37:45.851 [2024-12-06 01:15:18.408704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.851 [2024-12-06 01:15:18.408738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.851 qpair failed and we were unable to recover it. 00:37:45.851 [2024-12-06 01:15:18.408939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.851 [2024-12-06 01:15:18.408985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.851 qpair failed and we were unable to recover it. 00:37:45.851 [2024-12-06 01:15:18.409170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.851 [2024-12-06 01:15:18.409206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.851 qpair failed and we were unable to recover it. 00:37:45.851 [2024-12-06 01:15:18.409343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.851 [2024-12-06 01:15:18.409378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.851 qpair failed and we were unable to recover it. 
00:37:45.851 [2024-12-06 01:15:18.409540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.851 [2024-12-06 01:15:18.409613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.851 qpair failed and we were unable to recover it. 00:37:45.851 [2024-12-06 01:15:18.409866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.851 [2024-12-06 01:15:18.409901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.851 qpair failed and we were unable to recover it. 00:37:45.851 [2024-12-06 01:15:18.410053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.851 [2024-12-06 01:15:18.410101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.851 qpair failed and we were unable to recover it. 00:37:45.851 [2024-12-06 01:15:18.410250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.851 [2024-12-06 01:15:18.410286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.851 qpair failed and we were unable to recover it. 00:37:45.851 [2024-12-06 01:15:18.410415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.851 [2024-12-06 01:15:18.410455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.851 qpair failed and we were unable to recover it. 00:37:45.851 [2024-12-06 01:15:18.410624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.851 [2024-12-06 01:15:18.410682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.851 qpair failed and we were unable to recover it. 00:37:45.851 [2024-12-06 01:15:18.410804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.851 [2024-12-06 01:15:18.410866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.851 qpair failed and we were unable to recover it. 00:37:45.851 [2024-12-06 01:15:18.411009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.851 [2024-12-06 01:15:18.411044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.851 qpair failed and we were unable to recover it. 00:37:45.851 [2024-12-06 01:15:18.411180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.851 [2024-12-06 01:15:18.411213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.851 qpair failed and we were unable to recover it. 00:37:45.851 [2024-12-06 01:15:18.411345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.851 [2024-12-06 01:15:18.411379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.851 qpair failed and we were unable to recover it. 
00:37:45.851 [2024-12-06 01:15:18.411520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.851 [2024-12-06 01:15:18.411563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.851 qpair failed and we were unable to recover it. 00:37:45.851 [2024-12-06 01:15:18.411693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.851 [2024-12-06 01:15:18.411748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.851 qpair failed and we were unable to recover it. 00:37:45.851 [2024-12-06 01:15:18.411908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.852 [2024-12-06 01:15:18.411956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.852 qpair failed and we were unable to recover it. 00:37:45.852 [2024-12-06 01:15:18.412066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.852 [2024-12-06 01:15:18.412104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.852 qpair failed and we were unable to recover it. 00:37:45.852 [2024-12-06 01:15:18.412269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.852 [2024-12-06 01:15:18.412305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.852 qpair failed and we were unable to recover it. 00:37:45.852 [2024-12-06 01:15:18.412474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.852 [2024-12-06 01:15:18.412509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.852 qpair failed and we were unable to recover it. 00:37:45.852 [2024-12-06 01:15:18.412659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.852 [2024-12-06 01:15:18.412698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.852 qpair failed and we were unable to recover it. 00:37:45.852 [2024-12-06 01:15:18.412866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.852 [2024-12-06 01:15:18.412900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.852 qpair failed and we were unable to recover it. 00:37:45.852 [2024-12-06 01:15:18.413008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.852 [2024-12-06 01:15:18.413042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.852 qpair failed and we were unable to recover it. 00:37:45.852 [2024-12-06 01:15:18.413148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.852 [2024-12-06 01:15:18.413182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.852 qpair failed and we were unable to recover it. 
00:37:45.852 [2024-12-06 01:15:18.413380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.852 [2024-12-06 01:15:18.413439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.852 qpair failed and we were unable to recover it. 00:37:45.852 [2024-12-06 01:15:18.413657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.852 [2024-12-06 01:15:18.413716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.852 qpair failed and we were unable to recover it. 00:37:45.852 [2024-12-06 01:15:18.413863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.852 [2024-12-06 01:15:18.413897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.852 qpair failed and we were unable to recover it. 00:37:45.852 [2024-12-06 01:15:18.414008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.852 [2024-12-06 01:15:18.414042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.852 qpair failed and we were unable to recover it. 00:37:45.852 [2024-12-06 01:15:18.414188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.852 [2024-12-06 01:15:18.414222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.852 qpair failed and we were unable to recover it. 00:37:45.852 [2024-12-06 01:15:18.414370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.852 [2024-12-06 01:15:18.414409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.852 qpair failed and we were unable to recover it. 00:37:45.852 [2024-12-06 01:15:18.414605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.852 [2024-12-06 01:15:18.414646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.852 qpair failed and we were unable to recover it. 00:37:45.852 [2024-12-06 01:15:18.414830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.852 [2024-12-06 01:15:18.414888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.852 qpair failed and we were unable to recover it. 00:37:45.852 [2024-12-06 01:15:18.414999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.852 [2024-12-06 01:15:18.415035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.852 qpair failed and we were unable to recover it. 00:37:45.852 [2024-12-06 01:15:18.415209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.852 [2024-12-06 01:15:18.415243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.852 qpair failed and we were unable to recover it. 
00:37:45.852 [2024-12-06 01:15:18.415383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.852 [2024-12-06 01:15:18.415417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.852 qpair failed and we were unable to recover it. 00:37:45.852 [2024-12-06 01:15:18.415574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.852 [2024-12-06 01:15:18.415612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.852 qpair failed and we were unable to recover it. 00:37:45.852 [2024-12-06 01:15:18.415763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.852 [2024-12-06 01:15:18.415801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.852 qpair failed and we were unable to recover it. 00:37:45.852 [2024-12-06 01:15:18.415967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.852 [2024-12-06 01:15:18.416001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.852 qpair failed and we were unable to recover it. 00:37:45.852 [2024-12-06 01:15:18.416110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.852 [2024-12-06 01:15:18.416143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.852 qpair failed and we were unable to recover it. 00:37:45.852 [2024-12-06 01:15:18.416280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.852 [2024-12-06 01:15:18.416315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.852 qpair failed and we were unable to recover it. 00:37:45.852 [2024-12-06 01:15:18.416431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.852 [2024-12-06 01:15:18.416464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.852 qpair failed and we were unable to recover it. 00:37:45.852 [2024-12-06 01:15:18.416622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.852 [2024-12-06 01:15:18.416659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.852 qpair failed and we were unable to recover it. 00:37:45.852 [2024-12-06 01:15:18.416808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.852 [2024-12-06 01:15:18.416872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.852 qpair failed and we were unable to recover it. 00:37:45.852 [2024-12-06 01:15:18.417045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.852 [2024-12-06 01:15:18.417080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.852 qpair failed and we were unable to recover it. 
00:37:45.852 [2024-12-06 01:15:18.417242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.852 [2024-12-06 01:15:18.417277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.852 qpair failed and we were unable to recover it. 00:37:45.852 [2024-12-06 01:15:18.417418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.852 [2024-12-06 01:15:18.417452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.852 qpair failed and we were unable to recover it. 00:37:45.852 [2024-12-06 01:15:18.417608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.852 [2024-12-06 01:15:18.417646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.852 qpair failed and we were unable to recover it. 00:37:45.852 [2024-12-06 01:15:18.417809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.852 [2024-12-06 01:15:18.417848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.852 qpair failed and we were unable to recover it. 00:37:45.852 [2024-12-06 01:15:18.417960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.852 [2024-12-06 01:15:18.417995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.852 qpair failed and we were unable to recover it. 00:37:45.852 [2024-12-06 01:15:18.418129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.853 [2024-12-06 01:15:18.418163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.853 qpair failed and we were unable to recover it. 00:37:45.853 [2024-12-06 01:15:18.418294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.853 [2024-12-06 01:15:18.418328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.853 qpair failed and we were unable to recover it. 00:37:45.853 [2024-12-06 01:15:18.418466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.853 [2024-12-06 01:15:18.418517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.853 qpair failed and we were unable to recover it. 00:37:45.853 [2024-12-06 01:15:18.418664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.853 [2024-12-06 01:15:18.418703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.853 qpair failed and we were unable to recover it. 00:37:45.853 [2024-12-06 01:15:18.418885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.853 [2024-12-06 01:15:18.418919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.853 qpair failed and we were unable to recover it. 
00:37:45.853 [2024-12-06 01:15:18.419053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.853 [2024-12-06 01:15:18.419092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.853 qpair failed and we were unable to recover it. 00:37:45.853 [2024-12-06 01:15:18.419269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.853 [2024-12-06 01:15:18.419302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.853 qpair failed and we were unable to recover it. 00:37:45.853 [2024-12-06 01:15:18.419425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.853 [2024-12-06 01:15:18.419475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.853 qpair failed and we were unable to recover it. 00:37:45.853 [2024-12-06 01:15:18.419662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.853 [2024-12-06 01:15:18.419699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.853 qpair failed and we were unable to recover it. 00:37:45.853 [2024-12-06 01:15:18.419849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.853 [2024-12-06 01:15:18.419902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.853 qpair failed and we were unable to recover it. 00:37:45.853 [2024-12-06 01:15:18.420004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.853 [2024-12-06 01:15:18.420038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.853 qpair failed and we were unable to recover it. 00:37:45.853 [2024-12-06 01:15:18.420182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.853 [2024-12-06 01:15:18.420216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.853 qpair failed and we were unable to recover it. 00:37:45.853 [2024-12-06 01:15:18.420382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.853 [2024-12-06 01:15:18.420416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.853 qpair failed and we were unable to recover it. 00:37:45.853 [2024-12-06 01:15:18.420561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.853 [2024-12-06 01:15:18.420599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.853 qpair failed and we were unable to recover it. 00:37:45.853 [2024-12-06 01:15:18.420775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.853 [2024-12-06 01:15:18.420812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.853 qpair failed and we were unable to recover it. 
00:37:45.853 [2024-12-06 01:15:18.420997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.853 [2024-12-06 01:15:18.421045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.853 qpair failed and we were unable to recover it. 00:37:45.853 [2024-12-06 01:15:18.421160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.853 [2024-12-06 01:15:18.421197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.853 qpair failed and we were unable to recover it. 00:37:45.853 [2024-12-06 01:15:18.421343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.853 [2024-12-06 01:15:18.421377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.853 qpair failed and we were unable to recover it. 00:37:45.853 [2024-12-06 01:15:18.421493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.853 [2024-12-06 01:15:18.421529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.853 qpair failed and we were unable to recover it. 00:37:45.853 [2024-12-06 01:15:18.421686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.853 [2024-12-06 01:15:18.421724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.853 qpair failed and we were unable to recover it. 00:37:45.853 [2024-12-06 01:15:18.421850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.853 [2024-12-06 01:15:18.421901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.853 qpair failed and we were unable to recover it. 00:37:45.853 [2024-12-06 01:15:18.422043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.853 [2024-12-06 01:15:18.422076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.853 qpair failed and we were unable to recover it. 00:37:45.853 [2024-12-06 01:15:18.422178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.853 [2024-12-06 01:15:18.422212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.853 qpair failed and we were unable to recover it. 00:37:45.853 [2024-12-06 01:15:18.422339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.853 [2024-12-06 01:15:18.422373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.853 qpair failed and we were unable to recover it. 00:37:45.853 [2024-12-06 01:15:18.422513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.853 [2024-12-06 01:15:18.422547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.853 qpair failed and we were unable to recover it. 
00:37:45.853 [2024-12-06 01:15:18.422684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.853 [2024-12-06 01:15:18.422717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.853 qpair failed and we were unable to recover it. 00:37:45.853 [2024-12-06 01:15:18.422833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.853 [2024-12-06 01:15:18.422868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.853 qpair failed and we were unable to recover it. 00:37:45.853 [2024-12-06 01:15:18.423002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.853 [2024-12-06 01:15:18.423035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.853 qpair failed and we were unable to recover it. 00:37:45.853 [2024-12-06 01:15:18.423198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.853 [2024-12-06 01:15:18.423232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.853 qpair failed and we were unable to recover it. 00:37:45.853 [2024-12-06 01:15:18.423366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.853 [2024-12-06 01:15:18.423399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.853 qpair failed and we were unable to recover it. 00:37:45.853 [2024-12-06 01:15:18.423534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.853 [2024-12-06 01:15:18.423568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.853 qpair failed and we were unable to recover it. 00:37:45.853 [2024-12-06 01:15:18.423728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.853 [2024-12-06 01:15:18.423761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.853 qpair failed and we were unable to recover it. 00:37:45.853 [2024-12-06 01:15:18.423879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.853 [2024-12-06 01:15:18.423928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.853 qpair failed and we were unable to recover it. 00:37:45.853 [2024-12-06 01:15:18.424053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.853 [2024-12-06 01:15:18.424102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.854 qpair failed and we were unable to recover it. 00:37:45.854 [2024-12-06 01:15:18.424231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.854 [2024-12-06 01:15:18.424267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.854 qpair failed and we were unable to recover it. 
00:37:45.854 [2024-12-06 01:15:18.424413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.854 [2024-12-06 01:15:18.424448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.854 qpair failed and we were unable to recover it. 00:37:45.854 [2024-12-06 01:15:18.424561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.854 [2024-12-06 01:15:18.424597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.854 qpair failed and we were unable to recover it. 00:37:45.854 [2024-12-06 01:15:18.424735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.854 [2024-12-06 01:15:18.424769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.854 qpair failed and we were unable to recover it. 00:37:45.854 [2024-12-06 01:15:18.424910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.854 [2024-12-06 01:15:18.424946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.854 qpair failed and we were unable to recover it. 00:37:45.854 [2024-12-06 01:15:18.425121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.854 [2024-12-06 01:15:18.425160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.854 qpair failed and we were unable to recover it. 00:37:45.854 [2024-12-06 01:15:18.425293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.854 [2024-12-06 01:15:18.425329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.854 qpair failed and we were unable to recover it. 00:37:45.854 [2024-12-06 01:15:18.425494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.854 [2024-12-06 01:15:18.425528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.854 qpair failed and we were unable to recover it. 00:37:45.854 [2024-12-06 01:15:18.425694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.854 [2024-12-06 01:15:18.425734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.854 qpair failed and we were unable to recover it. 00:37:45.854 [2024-12-06 01:15:18.425841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.854 [2024-12-06 01:15:18.425876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.854 qpair failed and we were unable to recover it. 00:37:45.854 [2024-12-06 01:15:18.425975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.854 [2024-12-06 01:15:18.426009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.854 qpair failed and we were unable to recover it. 
00:37:45.854 [2024-12-06 01:15:18.426173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.854 [2024-12-06 01:15:18.426212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.854 qpair failed and we were unable to recover it. 00:37:45.854 [2024-12-06 01:15:18.426350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.854 [2024-12-06 01:15:18.426384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.854 qpair failed and we were unable to recover it. 00:37:45.854 [2024-12-06 01:15:18.426485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.854 [2024-12-06 01:15:18.426519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.854 qpair failed and we were unable to recover it. 00:37:45.854 [2024-12-06 01:15:18.426621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.854 [2024-12-06 01:15:18.426654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.854 qpair failed and we were unable to recover it. 00:37:45.854 [2024-12-06 01:15:18.426812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.854 [2024-12-06 01:15:18.426854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.854 qpair failed and we were unable to recover it. 00:37:45.854 [2024-12-06 01:15:18.426966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.854 [2024-12-06 01:15:18.426999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.854 qpair failed and we were unable to recover it. 00:37:45.854 [2024-12-06 01:15:18.427135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.854 [2024-12-06 01:15:18.427170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.854 qpair failed and we were unable to recover it. 00:37:45.854 [2024-12-06 01:15:18.427308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.854 [2024-12-06 01:15:18.427341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.854 qpair failed and we were unable to recover it. 00:37:45.854 [2024-12-06 01:15:18.427509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.854 [2024-12-06 01:15:18.427543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.854 qpair failed and we were unable to recover it. 00:37:45.854 [2024-12-06 01:15:18.427670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.854 [2024-12-06 01:15:18.427711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.854 qpair failed and we were unable to recover it. 
00:37:45.854 [2024-12-06 01:15:18.427847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.854 [2024-12-06 01:15:18.427884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.854 qpair failed and we were unable to recover it. 00:37:45.854 [2024-12-06 01:15:18.428021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.854 [2024-12-06 01:15:18.428056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.854 qpair failed and we were unable to recover it. 00:37:45.854 [2024-12-06 01:15:18.428215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.854 [2024-12-06 01:15:18.428249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.854 qpair failed and we were unable to recover it. 00:37:45.854 [2024-12-06 01:15:18.428385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.854 [2024-12-06 01:15:18.428420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.854 qpair failed and we were unable to recover it. 00:37:45.854 [2024-12-06 01:15:18.428573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.854 [2024-12-06 01:15:18.428622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.854 qpair failed and we were unable to recover it. 00:37:45.854 [2024-12-06 01:15:18.428764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.854 [2024-12-06 01:15:18.428797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.854 qpair failed and we were unable to recover it. 00:37:45.854 [2024-12-06 01:15:18.428920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.854 [2024-12-06 01:15:18.428955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.854 qpair failed and we were unable to recover it. 00:37:45.854 [2024-12-06 01:15:18.429119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.854 [2024-12-06 01:15:18.429153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.854 qpair failed and we were unable to recover it. 00:37:45.854 [2024-12-06 01:15:18.429284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.854 [2024-12-06 01:15:18.429318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.854 qpair failed and we were unable to recover it. 00:37:45.854 [2024-12-06 01:15:18.429419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.854 [2024-12-06 01:15:18.429453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.854 qpair failed and we were unable to recover it. 
00:37:45.854 [2024-12-06 01:15:18.429582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.854 [2024-12-06 01:15:18.429616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.854 qpair failed and we were unable to recover it. 00:37:45.854 [2024-12-06 01:15:18.429765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.854 [2024-12-06 01:15:18.429822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.854 qpair failed and we were unable to recover it. 00:37:45.854 [2024-12-06 01:15:18.429937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.854 [2024-12-06 01:15:18.429975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.854 qpair failed and we were unable to recover it. 00:37:45.854 [2024-12-06 01:15:18.430136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.855 [2024-12-06 01:15:18.430170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.855 qpair failed and we were unable to recover it. 00:37:45.855 [2024-12-06 01:15:18.430283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.855 [2024-12-06 01:15:18.430318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.855 qpair failed and we were unable to recover it. 00:37:45.855 [2024-12-06 01:15:18.430460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.855 [2024-12-06 01:15:18.430495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.855 qpair failed and we were unable to recover it. 00:37:45.855 [2024-12-06 01:15:18.430638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.855 [2024-12-06 01:15:18.430676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.855 qpair failed and we were unable to recover it. 00:37:45.855 [2024-12-06 01:15:18.430832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.855 [2024-12-06 01:15:18.430887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.855 qpair failed and we were unable to recover it. 00:37:45.855 [2024-12-06 01:15:18.431031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.855 [2024-12-06 01:15:18.431067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.855 qpair failed and we were unable to recover it. 00:37:45.855 [2024-12-06 01:15:18.431208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.855 [2024-12-06 01:15:18.431242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.855 qpair failed and we were unable to recover it. 
00:37:45.855 [2024-12-06 01:15:18.431353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.855 [2024-12-06 01:15:18.431387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.855 qpair failed and we were unable to recover it. 00:37:45.855 [2024-12-06 01:15:18.431522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.855 [2024-12-06 01:15:18.431555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.855 qpair failed and we were unable to recover it. 00:37:45.855 [2024-12-06 01:15:18.431718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.855 [2024-12-06 01:15:18.431752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.855 qpair failed and we were unable to recover it. 00:37:45.855 [2024-12-06 01:15:18.431901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.855 [2024-12-06 01:15:18.431937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.855 qpair failed and we were unable to recover it. 00:37:45.855 [2024-12-06 01:15:18.432078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.855 [2024-12-06 01:15:18.432113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.855 qpair failed and we were unable to recover it. 00:37:45.855 [2024-12-06 01:15:18.432247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.855 [2024-12-06 01:15:18.432282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.855 qpair failed and we were unable to recover it. 00:37:45.855 [2024-12-06 01:15:18.432435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.855 [2024-12-06 01:15:18.432471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.855 qpair failed and we were unable to recover it. 00:37:45.855 [2024-12-06 01:15:18.432609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.855 [2024-12-06 01:15:18.432654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.855 qpair failed and we were unable to recover it. 00:37:45.855 [2024-12-06 01:15:18.432755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.855 [2024-12-06 01:15:18.432789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.855 qpair failed and we were unable to recover it. 00:37:45.855 [2024-12-06 01:15:18.432907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.855 [2024-12-06 01:15:18.432942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.855 qpair failed and we were unable to recover it. 
00:37:45.855 [2024-12-06 01:15:18.433082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.855 [2024-12-06 01:15:18.433116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.855 qpair failed and we were unable to recover it. 00:37:45.855 [2024-12-06 01:15:18.433247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.855 [2024-12-06 01:15:18.433280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.855 qpair failed and we were unable to recover it. 00:37:45.855 [2024-12-06 01:15:18.433417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.855 [2024-12-06 01:15:18.433452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.855 qpair failed and we were unable to recover it. 00:37:45.855 [2024-12-06 01:15:18.433565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.855 [2024-12-06 01:15:18.433600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.855 qpair failed and we were unable to recover it. 00:37:45.855 [2024-12-06 01:15:18.433711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.855 [2024-12-06 01:15:18.433748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.855 qpair failed and we were unable to recover it. 00:37:45.855 [2024-12-06 01:15:18.433886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.855 [2024-12-06 01:15:18.433923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.855 qpair failed and we were unable to recover it. 00:37:45.855 [2024-12-06 01:15:18.434093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.855 [2024-12-06 01:15:18.434127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.855 qpair failed and we were unable to recover it. 00:37:45.855 [2024-12-06 01:15:18.434237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.855 [2024-12-06 01:15:18.434271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.855 qpair failed and we were unable to recover it. 00:37:45.855 [2024-12-06 01:15:18.434487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.855 [2024-12-06 01:15:18.434523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.855 qpair failed and we were unable to recover it. 00:37:45.855 [2024-12-06 01:15:18.434683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.855 [2024-12-06 01:15:18.434717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.855 qpair failed and we were unable to recover it. 
00:37:45.855 [2024-12-06 01:15:18.434833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.855 [2024-12-06 01:15:18.434869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.855 qpair failed and we were unable to recover it. 00:37:45.855 [2024-12-06 01:15:18.434986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.855 [2024-12-06 01:15:18.435020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.855 qpair failed and we were unable to recover it. 00:37:45.855 [2024-12-06 01:15:18.435183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.855 [2024-12-06 01:15:18.435217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.855 qpair failed and we were unable to recover it. 00:37:45.855 [2024-12-06 01:15:18.435326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.855 [2024-12-06 01:15:18.435360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.855 qpair failed and we were unable to recover it. 00:37:45.855 [2024-12-06 01:15:18.435504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.855 [2024-12-06 01:15:18.435538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.855 qpair failed and we were unable to recover it. 00:37:45.855 [2024-12-06 01:15:18.435672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.855 [2024-12-06 01:15:18.435706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.855 qpair failed and we were unable to recover it. 00:37:45.855 [2024-12-06 01:15:18.435813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.855 [2024-12-06 01:15:18.435853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.855 qpair failed and we were unable to recover it. 00:37:45.855 [2024-12-06 01:15:18.435958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.855 [2024-12-06 01:15:18.435993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.855 qpair failed and we were unable to recover it. 00:37:45.855 [2024-12-06 01:15:18.436156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.855 [2024-12-06 01:15:18.436190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.855 qpair failed and we were unable to recover it. 00:37:45.855 [2024-12-06 01:15:18.436352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.855 [2024-12-06 01:15:18.436386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.855 qpair failed and we were unable to recover it. 
00:37:45.855 [2024-12-06 01:15:18.436527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.855 [2024-12-06 01:15:18.436562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.855 qpair failed and we were unable to recover it. 00:37:45.856 [2024-12-06 01:15:18.436668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.856 [2024-12-06 01:15:18.436701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.856 qpair failed and we were unable to recover it. 00:37:45.856 [2024-12-06 01:15:18.436866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.856 [2024-12-06 01:15:18.436900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.856 qpair failed and we were unable to recover it. 00:37:45.856 [2024-12-06 01:15:18.437035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.856 [2024-12-06 01:15:18.437068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.856 qpair failed and we were unable to recover it. 00:37:45.856 [2024-12-06 01:15:18.437174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.856 [2024-12-06 01:15:18.437208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.856 qpair failed and we were unable to recover it. 00:37:45.856 [2024-12-06 01:15:18.437322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.856 [2024-12-06 01:15:18.437356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.856 qpair failed and we were unable to recover it. 00:37:45.856 [2024-12-06 01:15:18.437493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.856 [2024-12-06 01:15:18.437527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.856 qpair failed and we were unable to recover it. 00:37:45.856 [2024-12-06 01:15:18.437652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.856 [2024-12-06 01:15:18.437695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.856 qpair failed and we were unable to recover it. 00:37:45.856 [2024-12-06 01:15:18.437891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.856 [2024-12-06 01:15:18.437925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.856 qpair failed and we were unable to recover it. 00:37:45.856 [2024-12-06 01:15:18.438028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.856 [2024-12-06 01:15:18.438070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.856 qpair failed and we were unable to recover it. 
00:37:45.856 [2024-12-06 01:15:18.438207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.856 [2024-12-06 01:15:18.438241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.856 qpair failed and we were unable to recover it. 00:37:45.856 [2024-12-06 01:15:18.438368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.856 [2024-12-06 01:15:18.438410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.856 qpair failed and we were unable to recover it. 00:37:45.856 [2024-12-06 01:15:18.438586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.856 [2024-12-06 01:15:18.438621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.856 qpair failed and we were unable to recover it. 00:37:45.856 [2024-12-06 01:15:18.438759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.856 [2024-12-06 01:15:18.438793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.856 qpair failed and we were unable to recover it. 00:37:45.856 [2024-12-06 01:15:18.438948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.856 [2024-12-06 01:15:18.438982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.856 qpair failed and we were unable to recover it. 00:37:45.856 [2024-12-06 01:15:18.439149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.856 [2024-12-06 01:15:18.439200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.856 qpair failed and we were unable to recover it. 00:37:45.856 [2024-12-06 01:15:18.439347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.856 [2024-12-06 01:15:18.439384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.856 qpair failed and we were unable to recover it. 00:37:45.856 [2024-12-06 01:15:18.439550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.856 [2024-12-06 01:15:18.439586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.856 qpair failed and we were unable to recover it. 00:37:45.856 [2024-12-06 01:15:18.439763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.856 [2024-12-06 01:15:18.439798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.856 qpair failed and we were unable to recover it. 00:37:45.856 [2024-12-06 01:15:18.439988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.856 [2024-12-06 01:15:18.440022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.856 qpair failed and we were unable to recover it. 
00:37:45.856 [2024-12-06 01:15:18.440165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.856 [2024-12-06 01:15:18.440200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.856 qpair failed and we were unable to recover it. 00:37:45.856 [2024-12-06 01:15:18.440343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.856 [2024-12-06 01:15:18.440377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.856 qpair failed and we were unable to recover it. 00:37:45.856 [2024-12-06 01:15:18.440536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.856 [2024-12-06 01:15:18.440570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.856 qpair failed and we were unable to recover it. 00:37:45.856 [2024-12-06 01:15:18.440673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.856 [2024-12-06 01:15:18.440707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.856 qpair failed and we were unable to recover it. 00:37:45.856 [2024-12-06 01:15:18.440854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.856 [2024-12-06 01:15:18.440899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.856 qpair failed and we were unable to recover it. 00:37:45.856 [2024-12-06 01:15:18.441007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.856 [2024-12-06 01:15:18.441041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.856 qpair failed and we were unable to recover it. 00:37:45.856 [2024-12-06 01:15:18.441149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.856 [2024-12-06 01:15:18.441184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.856 qpair failed and we were unable to recover it. 00:37:45.856 [2024-12-06 01:15:18.441297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.856 [2024-12-06 01:15:18.441331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.856 qpair failed and we were unable to recover it. 00:37:45.856 [2024-12-06 01:15:18.441493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.856 [2024-12-06 01:15:18.441527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.856 qpair failed and we were unable to recover it. 00:37:45.856 [2024-12-06 01:15:18.441683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.856 [2024-12-06 01:15:18.441717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.856 qpair failed and we were unable to recover it. 
00:37:45.856 [2024-12-06 01:15:18.441856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.856 [2024-12-06 01:15:18.441891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.856 qpair failed and we were unable to recover it. 00:37:45.856 [2024-12-06 01:15:18.442024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.856 [2024-12-06 01:15:18.442067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.856 qpair failed and we were unable to recover it. 00:37:45.856 [2024-12-06 01:15:18.442199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.856 [2024-12-06 01:15:18.442232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.856 qpair failed and we were unable to recover it. 00:37:45.856 [2024-12-06 01:15:18.442402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.856 [2024-12-06 01:15:18.442436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.856 qpair failed and we were unable to recover it. 00:37:45.856 [2024-12-06 01:15:18.442604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.856 [2024-12-06 01:15:18.442637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.856 qpair failed and we were unable to recover it. 00:37:45.856 [2024-12-06 01:15:18.442766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.856 [2024-12-06 01:15:18.442799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.856 qpair failed and we were unable to recover it. 00:37:45.856 [2024-12-06 01:15:18.442941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.856 [2024-12-06 01:15:18.442975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.857 qpair failed and we were unable to recover it. 00:37:45.857 [2024-12-06 01:15:18.443083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.857 [2024-12-06 01:15:18.443118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.857 qpair failed and we were unable to recover it. 00:37:45.857 [2024-12-06 01:15:18.443278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.857 [2024-12-06 01:15:18.443311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.857 qpair failed and we were unable to recover it. 00:37:45.857 [2024-12-06 01:15:18.443473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.857 [2024-12-06 01:15:18.443507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.857 qpair failed and we were unable to recover it. 
00:37:45.857 [2024-12-06 01:15:18.443612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.857 [2024-12-06 01:15:18.443647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.857 qpair failed and we were unable to recover it. 00:37:45.857 [2024-12-06 01:15:18.443804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.857 [2024-12-06 01:15:18.443869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.857 qpair failed and we were unable to recover it. 00:37:45.857 [2024-12-06 01:15:18.444063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.857 [2024-12-06 01:15:18.444119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.857 qpair failed and we were unable to recover it. 00:37:45.857 [2024-12-06 01:15:18.444290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.857 [2024-12-06 01:15:18.444326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.857 qpair failed and we were unable to recover it. 00:37:45.857 [2024-12-06 01:15:18.444438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.857 [2024-12-06 01:15:18.444474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.857 qpair failed and we were unable to recover it. 00:37:45.857 [2024-12-06 01:15:18.444610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.857 [2024-12-06 01:15:18.444645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.857 qpair failed and we were unable to recover it. 00:37:45.857 [2024-12-06 01:15:18.444792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.857 [2024-12-06 01:15:18.444836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.857 qpair failed and we were unable to recover it. 00:37:45.857 [2024-12-06 01:15:18.445008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.857 [2024-12-06 01:15:18.445048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.857 qpair failed and we were unable to recover it. 00:37:45.857 [2024-12-06 01:15:18.445182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.857 [2024-12-06 01:15:18.445217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.857 qpair failed and we were unable to recover it. 00:37:45.857 [2024-12-06 01:15:18.445377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.857 [2024-12-06 01:15:18.445413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.857 qpair failed and we were unable to recover it. 
00:37:45.857 [2024-12-06 01:15:18.445526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.857 [2024-12-06 01:15:18.445561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.857 qpair failed and we were unable to recover it. 00:37:45.857 [2024-12-06 01:15:18.445728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.857 [2024-12-06 01:15:18.445763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.857 qpair failed and we were unable to recover it. 00:37:45.857 [2024-12-06 01:15:18.445904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.857 [2024-12-06 01:15:18.445939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.857 qpair failed and we were unable to recover it. 00:37:45.857 [2024-12-06 01:15:18.446074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.857 [2024-12-06 01:15:18.446108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.857 qpair failed and we were unable to recover it. 00:37:45.857 [2024-12-06 01:15:18.446248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.857 [2024-12-06 01:15:18.446282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.857 qpair failed and we were unable to recover it. 00:37:45.857 [2024-12-06 01:15:18.446419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.857 [2024-12-06 01:15:18.446454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.857 qpair failed and we were unable to recover it. 00:37:45.857 [2024-12-06 01:15:18.446586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.857 [2024-12-06 01:15:18.446639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.857 qpair failed and we were unable to recover it. 00:37:45.857 [2024-12-06 01:15:18.446798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.857 [2024-12-06 01:15:18.446877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.857 qpair failed and we were unable to recover it. 00:37:45.857 [2024-12-06 01:15:18.447025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.857 [2024-12-06 01:15:18.447060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.857 qpair failed and we were unable to recover it. 00:37:45.857 [2024-12-06 01:15:18.447190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.857 [2024-12-06 01:15:18.447224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.857 qpair failed and we were unable to recover it. 
00:37:45.857 [2024-12-06 01:15:18.447365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.857 [2024-12-06 01:15:18.447399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.857 qpair failed and we were unable to recover it. 00:37:45.857 [2024-12-06 01:15:18.447518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.857 [2024-12-06 01:15:18.447552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.857 qpair failed and we were unable to recover it. 00:37:45.857 [2024-12-06 01:15:18.447669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.857 [2024-12-06 01:15:18.447703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.857 qpair failed and we were unable to recover it. 00:37:45.857 [2024-12-06 01:15:18.447875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.857 [2024-12-06 01:15:18.447910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.857 qpair failed and we were unable to recover it. 00:37:45.857 [2024-12-06 01:15:18.448059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.857 [2024-12-06 01:15:18.448095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.857 qpair failed and we were unable to recover it. 00:37:45.857 [2024-12-06 01:15:18.448260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.857 [2024-12-06 01:15:18.448294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.857 qpair failed and we were unable to recover it. 00:37:45.857 [2024-12-06 01:15:18.448429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.857 [2024-12-06 01:15:18.448463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.857 qpair failed and we were unable to recover it. 00:37:45.857 [2024-12-06 01:15:18.448675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.857 [2024-12-06 01:15:18.448709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.857 qpair failed and we were unable to recover it. 00:37:45.857 [2024-12-06 01:15:18.448827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.857 [2024-12-06 01:15:18.448874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.857 qpair failed and we were unable to recover it. 00:37:45.857 [2024-12-06 01:15:18.449024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.858 [2024-12-06 01:15:18.449059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.858 qpair failed and we were unable to recover it. 
00:37:45.858 [2024-12-06 01:15:18.449194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.858 [2024-12-06 01:15:18.449229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.858 qpair failed and we were unable to recover it. 00:37:45.858 [2024-12-06 01:15:18.449375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.858 [2024-12-06 01:15:18.449410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.858 qpair failed and we were unable to recover it. 00:37:45.858 [2024-12-06 01:15:18.449556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.858 [2024-12-06 01:15:18.449592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.858 qpair failed and we were unable to recover it. 00:37:45.858 [2024-12-06 01:15:18.449732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.858 [2024-12-06 01:15:18.449766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.858 qpair failed and we were unable to recover it. 00:37:45.858 [2024-12-06 01:15:18.449913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.858 [2024-12-06 01:15:18.449949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.858 qpair failed and we were unable to recover it. 00:37:45.858 [2024-12-06 01:15:18.450121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.858 [2024-12-06 01:15:18.450155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.858 qpair failed and we were unable to recover it. 00:37:45.858 [2024-12-06 01:15:18.450291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.858 [2024-12-06 01:15:18.450326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.858 qpair failed and we were unable to recover it. 00:37:45.858 [2024-12-06 01:15:18.450490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.858 [2024-12-06 01:15:18.450523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.858 qpair failed and we were unable to recover it. 00:37:45.858 [2024-12-06 01:15:18.450641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.858 [2024-12-06 01:15:18.450677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.858 qpair failed and we were unable to recover it. 00:37:45.858 [2024-12-06 01:15:18.450821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.858 [2024-12-06 01:15:18.450857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.858 qpair failed and we were unable to recover it. 
00:37:45.858 [2024-12-06 01:15:18.451014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.858 [2024-12-06 01:15:18.451063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.858 qpair failed and we were unable to recover it. 00:37:45.858 [2024-12-06 01:15:18.451236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.858 [2024-12-06 01:15:18.451272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.858 qpair failed and we were unable to recover it. 00:37:45.858 [2024-12-06 01:15:18.451409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.858 [2024-12-06 01:15:18.451443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.858 qpair failed and we were unable to recover it. 00:37:45.858 [2024-12-06 01:15:18.451580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.858 [2024-12-06 01:15:18.451614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.858 qpair failed and we were unable to recover it. 00:37:45.858 [2024-12-06 01:15:18.451725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.858 [2024-12-06 01:15:18.451761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.858 qpair failed and we were unable to recover it. 00:37:45.858 [2024-12-06 01:15:18.451903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.858 [2024-12-06 01:15:18.451937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.858 qpair failed and we were unable to recover it. 00:37:45.858 [2024-12-06 01:15:18.452047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.858 [2024-12-06 01:15:18.452088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.858 qpair failed and we were unable to recover it. 00:37:45.858 [2024-12-06 01:15:18.452255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.858 [2024-12-06 01:15:18.452294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.858 qpair failed and we were unable to recover it. 00:37:45.858 [2024-12-06 01:15:18.452428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.858 [2024-12-06 01:15:18.452462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.858 qpair failed and we were unable to recover it. 00:37:45.858 [2024-12-06 01:15:18.452601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.858 [2024-12-06 01:15:18.452634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.858 qpair failed and we were unable to recover it. 
00:37:45.858 [2024-12-06 01:15:18.452778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.858 [2024-12-06 01:15:18.452811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.858 qpair failed and we were unable to recover it. 00:37:45.858 [2024-12-06 01:15:18.452946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.858 [2024-12-06 01:15:18.452980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.858 qpair failed and we were unable to recover it. 00:37:45.858 [2024-12-06 01:15:18.453106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.858 [2024-12-06 01:15:18.453155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.858 qpair failed and we were unable to recover it. 00:37:45.858 [2024-12-06 01:15:18.453297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.858 [2024-12-06 01:15:18.453334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.858 qpair failed and we were unable to recover it. 00:37:45.858 [2024-12-06 01:15:18.453481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.858 [2024-12-06 01:15:18.453518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.858 qpair failed and we were unable to recover it. 00:37:45.858 [2024-12-06 01:15:18.453677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.858 [2024-12-06 01:15:18.453712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.858 qpair failed and we were unable to recover it. 00:37:45.858 [2024-12-06 01:15:18.453844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.858 [2024-12-06 01:15:18.453885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.858 qpair failed and we were unable to recover it. 00:37:45.858 [2024-12-06 01:15:18.454043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.858 [2024-12-06 01:15:18.454077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.858 qpair failed and we were unable to recover it. 00:37:45.858 [2024-12-06 01:15:18.454242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.858 [2024-12-06 01:15:18.454277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.858 qpair failed and we were unable to recover it. 00:37:45.858 [2024-12-06 01:15:18.454384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.858 [2024-12-06 01:15:18.454420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.858 qpair failed and we were unable to recover it. 
00:37:45.858 [2024-12-06 01:15:18.454558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.858 [2024-12-06 01:15:18.454592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.858 qpair failed and we were unable to recover it. 00:37:45.858 [2024-12-06 01:15:18.454734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.859 [2024-12-06 01:15:18.454769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.859 qpair failed and we were unable to recover it. 00:37:45.859 [2024-12-06 01:15:18.454919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.859 [2024-12-06 01:15:18.454954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.859 qpair failed and we were unable to recover it. 00:37:45.859 [2024-12-06 01:15:18.455090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.859 [2024-12-06 01:15:18.455124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.859 qpair failed and we were unable to recover it. 00:37:45.859 [2024-12-06 01:15:18.455263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.859 [2024-12-06 01:15:18.455297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.859 qpair failed and we were unable to recover it. 00:37:45.859 [2024-12-06 01:15:18.455465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.859 [2024-12-06 01:15:18.455499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.859 qpair failed and we were unable to recover it. 00:37:45.859 [2024-12-06 01:15:18.455628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.859 [2024-12-06 01:15:18.455661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.859 qpair failed and we were unable to recover it. 00:37:45.859 [2024-12-06 01:15:18.455832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.859 [2024-12-06 01:15:18.455892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.859 qpair failed and we were unable to recover it. 00:37:45.859 [2024-12-06 01:15:18.456049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.859 [2024-12-06 01:15:18.456085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.859 qpair failed and we were unable to recover it. 00:37:45.859 [2024-12-06 01:15:18.456197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.859 [2024-12-06 01:15:18.456232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.859 qpair failed and we were unable to recover it. 
00:37:45.859 [2024-12-06 01:15:18.456367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.859 [2024-12-06 01:15:18.456401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.859 qpair failed and we were unable to recover it. 00:37:45.859 [2024-12-06 01:15:18.456506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.859 [2024-12-06 01:15:18.456541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.859 qpair failed and we were unable to recover it. 00:37:45.859 [2024-12-06 01:15:18.456712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.859 [2024-12-06 01:15:18.456746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.859 qpair failed and we were unable to recover it. 00:37:45.859 [2024-12-06 01:15:18.456973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.859 [2024-12-06 01:15:18.457008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.859 qpair failed and we were unable to recover it. 00:37:45.859 [2024-12-06 01:15:18.457193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.859 [2024-12-06 01:15:18.457241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.859 qpair failed and we were unable to recover it. 00:37:45.859 [2024-12-06 01:15:18.457386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.859 [2024-12-06 01:15:18.457421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.859 qpair failed and we were unable to recover it. 00:37:45.859 [2024-12-06 01:15:18.457558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.859 [2024-12-06 01:15:18.457593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.859 qpair failed and we were unable to recover it. 00:37:45.859 [2024-12-06 01:15:18.457755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.859 [2024-12-06 01:15:18.457789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.859 qpair failed and we were unable to recover it. 00:37:45.859 [2024-12-06 01:15:18.457973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.859 [2024-12-06 01:15:18.458019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.859 qpair failed and we were unable to recover it. 00:37:45.859 [2024-12-06 01:15:18.458221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.859 [2024-12-06 01:15:18.458270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.859 qpair failed and we were unable to recover it. 
00:37:45.859 [2024-12-06 01:15:18.458414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.859 [2024-12-06 01:15:18.458451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.859 qpair failed and we were unable to recover it. 00:37:45.859 [2024-12-06 01:15:18.458556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.859 [2024-12-06 01:15:18.458591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.859 qpair failed and we were unable to recover it. 00:37:45.859 [2024-12-06 01:15:18.458839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.859 [2024-12-06 01:15:18.458904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.859 qpair failed and we were unable to recover it. 00:37:45.859 [2024-12-06 01:15:18.459033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.859 [2024-12-06 01:15:18.459082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.859 qpair failed and we were unable to recover it. 00:37:45.859 [2024-12-06 01:15:18.459263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.859 [2024-12-06 01:15:18.459301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.859 qpair failed and we were unable to recover it. 00:37:45.859 [2024-12-06 01:15:18.459420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.859 [2024-12-06 01:15:18.459460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.859 qpair failed and we were unable to recover it. 00:37:45.859 [2024-12-06 01:15:18.459604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.859 [2024-12-06 01:15:18.459643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.859 qpair failed and we were unable to recover it. 00:37:45.859 [2024-12-06 01:15:18.459799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.859 [2024-12-06 01:15:18.459853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.859 qpair failed and we were unable to recover it. 00:37:45.859 [2024-12-06 01:15:18.459973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.859 [2024-12-06 01:15:18.460008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.859 qpair failed and we were unable to recover it. 00:37:45.859 [2024-12-06 01:15:18.460150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.859 [2024-12-06 01:15:18.460216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.859 qpair failed and we were unable to recover it. 
00:37:45.859 [2024-12-06 01:15:18.460357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.859 [2024-12-06 01:15:18.460392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.859 qpair failed and we were unable to recover it. 00:37:45.859 [2024-12-06 01:15:18.460515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.859 [2024-12-06 01:15:18.460564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.859 qpair failed and we were unable to recover it. 00:37:45.859 [2024-12-06 01:15:18.460751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.859 [2024-12-06 01:15:18.460789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.859 qpair failed and we were unable to recover it. 00:37:45.859 [2024-12-06 01:15:18.460977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.859 [2024-12-06 01:15:18.461011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.859 qpair failed and we were unable to recover it. 00:37:45.859 [2024-12-06 01:15:18.461123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.859 [2024-12-06 01:15:18.461177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.859 qpair failed and we were unable to recover it. 00:37:45.859 [2024-12-06 01:15:18.461322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.859 [2024-12-06 01:15:18.461360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.859 qpair failed and we were unable to recover it. 00:37:45.859 [2024-12-06 01:15:18.461514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.860 [2024-12-06 01:15:18.461552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.860 qpair failed and we were unable to recover it. 00:37:45.860 [2024-12-06 01:15:18.461730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.860 [2024-12-06 01:15:18.461768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.860 qpair failed and we were unable to recover it. 00:37:45.860 [2024-12-06 01:15:18.461914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.860 [2024-12-06 01:15:18.461948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.860 qpair failed and we were unable to recover it. 00:37:45.860 [2024-12-06 01:15:18.462111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.860 [2024-12-06 01:15:18.462145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.860 qpair failed and we were unable to recover it. 
00:37:45.860 [2024-12-06 01:15:18.462318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.860 [2024-12-06 01:15:18.462379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.860 qpair failed and we were unable to recover it. 00:37:45.860 [2024-12-06 01:15:18.462527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.860 [2024-12-06 01:15:18.462564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.860 qpair failed and we were unable to recover it. 00:37:45.860 [2024-12-06 01:15:18.462716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.860 [2024-12-06 01:15:18.462754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.860 qpair failed and we were unable to recover it. 00:37:45.860 [2024-12-06 01:15:18.462925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.860 [2024-12-06 01:15:18.462959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.860 qpair failed and we were unable to recover it. 00:37:45.860 [2024-12-06 01:15:18.463070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.860 [2024-12-06 01:15:18.463104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.860 qpair failed and we were unable to recover it. 00:37:45.860 [2024-12-06 01:15:18.463210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.860 [2024-12-06 01:15:18.463244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.860 qpair failed and we were unable to recover it. 00:37:45.860 [2024-12-06 01:15:18.463404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.860 [2024-12-06 01:15:18.463442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.860 qpair failed and we were unable to recover it. 00:37:45.860 [2024-12-06 01:15:18.463558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.860 [2024-12-06 01:15:18.463595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.860 qpair failed and we were unable to recover it. 00:37:45.860 [2024-12-06 01:15:18.463733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.860 [2024-12-06 01:15:18.463771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.860 qpair failed and we were unable to recover it. 00:37:45.860 [2024-12-06 01:15:18.463916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.860 [2024-12-06 01:15:18.463959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.860 qpair failed and we were unable to recover it. 
00:37:45.860 [2024-12-06 01:15:18.464068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.860 [2024-12-06 01:15:18.464102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.860 qpair failed and we were unable to recover it. 00:37:45.860 [2024-12-06 01:15:18.464207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.860 [2024-12-06 01:15:18.464259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.860 qpair failed and we were unable to recover it. 00:37:45.860 [2024-12-06 01:15:18.464485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.860 [2024-12-06 01:15:18.464523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.860 qpair failed and we were unable to recover it. 00:37:45.860 [2024-12-06 01:15:18.464699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.860 [2024-12-06 01:15:18.464736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.860 qpair failed and we were unable to recover it. 00:37:45.860 [2024-12-06 01:15:18.464909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.860 [2024-12-06 01:15:18.464948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.860 qpair failed and we were unable to recover it. 00:37:45.860 [2024-12-06 01:15:18.465068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.860 [2024-12-06 01:15:18.465102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.860 qpair failed and we were unable to recover it. 00:37:45.860 [2024-12-06 01:15:18.465238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.860 [2024-12-06 01:15:18.465272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.860 qpair failed and we were unable to recover it. 00:37:45.860 [2024-12-06 01:15:18.465405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.860 [2024-12-06 01:15:18.465439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.860 qpair failed and we were unable to recover it. 00:37:45.860 [2024-12-06 01:15:18.465589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.860 [2024-12-06 01:15:18.465627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.860 qpair failed and we were unable to recover it. 00:37:45.860 [2024-12-06 01:15:18.465777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.860 [2024-12-06 01:15:18.465823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.860 qpair failed and we were unable to recover it. 
00:37:45.860 [2024-12-06 01:15:18.466027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.860 [2024-12-06 01:15:18.466075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.860 qpair failed and we were unable to recover it. 00:37:45.860 [2024-12-06 01:15:18.466245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.860 [2024-12-06 01:15:18.466281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.860 qpair failed and we were unable to recover it. 00:37:45.860 [2024-12-06 01:15:18.466388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.860 [2024-12-06 01:15:18.466425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.860 qpair failed and we were unable to recover it. 00:37:45.860 [2024-12-06 01:15:18.466572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.860 [2024-12-06 01:15:18.466605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.860 qpair failed and we were unable to recover it. 00:37:45.860 [2024-12-06 01:15:18.466727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.860 [2024-12-06 01:15:18.466765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.860 qpair failed and we were unable to recover it. 00:37:45.860 [2024-12-06 01:15:18.466923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.860 [2024-12-06 01:15:18.466966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.860 qpair failed and we were unable to recover it. 00:37:45.860 [2024-12-06 01:15:18.467099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.860 [2024-12-06 01:15:18.467135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.860 qpair failed and we were unable to recover it. 00:37:45.860 [2024-12-06 01:15:18.467297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.860 [2024-12-06 01:15:18.467335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.860 qpair failed and we were unable to recover it. 00:37:45.860 [2024-12-06 01:15:18.467470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.860 [2024-12-06 01:15:18.467523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.860 qpair failed and we were unable to recover it. 00:37:45.860 [2024-12-06 01:15:18.467698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.860 [2024-12-06 01:15:18.467735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.860 qpair failed and we were unable to recover it. 
00:37:45.860 [2024-12-06 01:15:18.467873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.860 [2024-12-06 01:15:18.467908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.861 qpair failed and we were unable to recover it. 00:37:45.861 [2024-12-06 01:15:18.468049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.861 [2024-12-06 01:15:18.468084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.861 qpair failed and we were unable to recover it. 00:37:45.861 [2024-12-06 01:15:18.468222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.861 [2024-12-06 01:15:18.468257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.861 qpair failed and we were unable to recover it. 00:37:45.861 [2024-12-06 01:15:18.468409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.861 [2024-12-06 01:15:18.468446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.861 qpair failed and we were unable to recover it. 00:37:45.861 [2024-12-06 01:15:18.468598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.861 [2024-12-06 01:15:18.468637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.861 qpair failed and we were unable to recover it. 00:37:45.861 [2024-12-06 01:15:18.468820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.861 [2024-12-06 01:15:18.468876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.861 qpair failed and we were unable to recover it. 00:37:45.861 [2024-12-06 01:15:18.468995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.861 [2024-12-06 01:15:18.469030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.861 qpair failed and we were unable to recover it. 00:37:45.861 [2024-12-06 01:15:18.469192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.861 [2024-12-06 01:15:18.469226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.861 qpair failed and we were unable to recover it. 00:37:45.861 [2024-12-06 01:15:18.469369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.861 [2024-12-06 01:15:18.469404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.861 qpair failed and we were unable to recover it. 00:37:45.861 [2024-12-06 01:15:18.469526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.861 [2024-12-06 01:15:18.469564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.861 qpair failed and we were unable to recover it. 
00:37:45.861 [2024-12-06 01:15:18.469685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.861 [2024-12-06 01:15:18.469724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.861 qpair failed and we were unable to recover it. 00:37:45.861 [2024-12-06 01:15:18.469922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.861 [2024-12-06 01:15:18.469978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.861 qpair failed and we were unable to recover it. 00:37:45.861 [2024-12-06 01:15:18.470138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.861 [2024-12-06 01:15:18.470175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.861 qpair failed and we were unable to recover it. 00:37:45.861 [2024-12-06 01:15:18.470329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.861 [2024-12-06 01:15:18.470368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.861 qpair failed and we were unable to recover it. 00:37:45.861 [2024-12-06 01:15:18.470544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.861 [2024-12-06 01:15:18.470582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.861 qpair failed and we were unable to recover it. 00:37:45.861 [2024-12-06 01:15:18.470759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.861 [2024-12-06 01:15:18.470793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.861 qpair failed and we were unable to recover it. 00:37:45.861 [2024-12-06 01:15:18.470991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.861 [2024-12-06 01:15:18.471040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.861 qpair failed and we were unable to recover it. 00:37:45.861 [2024-12-06 01:15:18.471176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.861 [2024-12-06 01:15:18.471211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.861 qpair failed and we were unable to recover it. 00:37:45.861 [2024-12-06 01:15:18.471428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.861 [2024-12-06 01:15:18.471491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.861 qpair failed and we were unable to recover it. 00:37:45.861 [2024-12-06 01:15:18.471611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.861 [2024-12-06 01:15:18.471649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.861 qpair failed and we were unable to recover it. 
00:37:45.861 [2024-12-06 01:15:18.471802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.861 [2024-12-06 01:15:18.471871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.861 qpair failed and we were unable to recover it. 00:37:45.861 [2024-12-06 01:15:18.472009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.861 [2024-12-06 01:15:18.472043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.861 qpair failed and we were unable to recover it. 00:37:45.861 [2024-12-06 01:15:18.472203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.861 [2024-12-06 01:15:18.472238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.861 qpair failed and we were unable to recover it. 00:37:45.861 [2024-12-06 01:15:18.472373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.861 [2024-12-06 01:15:18.472406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.861 qpair failed and we were unable to recover it. 00:37:45.861 [2024-12-06 01:15:18.472576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.861 [2024-12-06 01:15:18.472613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.861 qpair failed and we were unable to recover it. 00:37:45.861 [2024-12-06 01:15:18.472799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.861 [2024-12-06 01:15:18.472886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.861 qpair failed and we were unable to recover it. 00:37:45.861 [2024-12-06 01:15:18.473051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.861 [2024-12-06 01:15:18.473101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.861 qpair failed and we were unable to recover it. 00:37:45.861 [2024-12-06 01:15:18.473267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.861 [2024-12-06 01:15:18.473307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.861 qpair failed and we were unable to recover it. 00:37:45.861 [2024-12-06 01:15:18.473493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.861 [2024-12-06 01:15:18.473532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.861 qpair failed and we were unable to recover it. 00:37:45.861 [2024-12-06 01:15:18.473649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.861 [2024-12-06 01:15:18.473687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.861 qpair failed and we were unable to recover it. 
00:37:45.861 [2024-12-06 01:15:18.473845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.861 [2024-12-06 01:15:18.473888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.861 qpair failed and we were unable to recover it. 00:37:45.861 [2024-12-06 01:15:18.474054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.861 [2024-12-06 01:15:18.474088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.861 qpair failed and we were unable to recover it. 00:37:45.861 [2024-12-06 01:15:18.474243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.861 [2024-12-06 01:15:18.474282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.861 qpair failed and we were unable to recover it. 00:37:45.861 [2024-12-06 01:15:18.474434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.861 [2024-12-06 01:15:18.474469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.861 qpair failed and we were unable to recover it. 00:37:45.861 [2024-12-06 01:15:18.474637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.861 [2024-12-06 01:15:18.474675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.861 qpair failed and we were unable to recover it. 00:37:45.862 [2024-12-06 01:15:18.474805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.862 [2024-12-06 01:15:18.474846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.862 qpair failed and we were unable to recover it. 00:37:45.862 [2024-12-06 01:15:18.474993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.862 [2024-12-06 01:15:18.475027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.862 qpair failed and we were unable to recover it. 00:37:45.862 [2024-12-06 01:15:18.475173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.862 [2024-12-06 01:15:18.475213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.862 qpair failed and we were unable to recover it. 00:37:45.862 [2024-12-06 01:15:18.475378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.862 [2024-12-06 01:15:18.475416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.862 qpair failed and we were unable to recover it. 00:37:45.862 [2024-12-06 01:15:18.475528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.862 [2024-12-06 01:15:18.475566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.862 qpair failed and we were unable to recover it. 
00:37:45.862 [2024-12-06 01:15:18.475691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.862 [2024-12-06 01:15:18.475729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.862 qpair failed and we were unable to recover it. 00:37:45.862 [2024-12-06 01:15:18.475899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.862 [2024-12-06 01:15:18.475934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.862 qpair failed and we were unable to recover it. 00:37:45.862 [2024-12-06 01:15:18.476070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.862 [2024-12-06 01:15:18.476103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.862 qpair failed and we were unable to recover it. 00:37:45.862 [2024-12-06 01:15:18.476299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.862 [2024-12-06 01:15:18.476334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.862 qpair failed and we were unable to recover it. 00:37:45.862 [2024-12-06 01:15:18.476448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.862 [2024-12-06 01:15:18.476482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.862 qpair failed and we were unable to recover it. 00:37:45.862 [2024-12-06 01:15:18.476630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.862 [2024-12-06 01:15:18.476668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.862 qpair failed and we were unable to recover it. 00:37:45.862 [2024-12-06 01:15:18.476769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.862 [2024-12-06 01:15:18.476807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.862 qpair failed and we were unable to recover it. 00:37:45.862 [2024-12-06 01:15:18.476986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.862 [2024-12-06 01:15:18.477021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.862 qpair failed and we were unable to recover it. 00:37:45.862 [2024-12-06 01:15:18.477156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.862 [2024-12-06 01:15:18.477191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.862 qpair failed and we were unable to recover it. 00:37:45.862 [2024-12-06 01:15:18.477300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.862 [2024-12-06 01:15:18.477334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.862 qpair failed and we were unable to recover it. 
00:37:45.862 [2024-12-06 01:15:18.477474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.862 [2024-12-06 01:15:18.477508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.862 qpair failed and we were unable to recover it. 00:37:45.862 [2024-12-06 01:15:18.477621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.862 [2024-12-06 01:15:18.477654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.862 qpair failed and we were unable to recover it. 00:37:45.862 [2024-12-06 01:15:18.477806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.862 [2024-12-06 01:15:18.477847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.862 qpair failed and we were unable to recover it. 00:37:45.862 [2024-12-06 01:15:18.477987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.862 [2024-12-06 01:15:18.478032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.862 qpair failed and we were unable to recover it. 00:37:45.862 [2024-12-06 01:15:18.478136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.862 [2024-12-06 01:15:18.478170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.862 qpair failed and we were unable to recover it. 00:37:45.862 [2024-12-06 01:15:18.478313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.862 [2024-12-06 01:15:18.478347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.862 qpair failed and we were unable to recover it. 00:37:45.862 [2024-12-06 01:15:18.478486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.862 [2024-12-06 01:15:18.478520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.862 qpair failed and we were unable to recover it. 00:37:45.862 [2024-12-06 01:15:18.478654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.862 [2024-12-06 01:15:18.478687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.862 qpair failed and we were unable to recover it. 00:37:45.862 [2024-12-06 01:15:18.478791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.862 [2024-12-06 01:15:18.478833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.862 qpair failed and we were unable to recover it. 00:37:45.862 [2024-12-06 01:15:18.478951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.862 [2024-12-06 01:15:18.478985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.862 qpair failed and we were unable to recover it. 
00:37:45.862 [2024-12-06 01:15:18.479092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.862 [2024-12-06 01:15:18.479126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.862 qpair failed and we were unable to recover it. 00:37:45.862 [2024-12-06 01:15:18.479236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.862 [2024-12-06 01:15:18.479270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.862 qpair failed and we were unable to recover it. 00:37:45.862 [2024-12-06 01:15:18.479416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.862 [2024-12-06 01:15:18.479450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.862 qpair failed and we were unable to recover it. 00:37:45.862 [2024-12-06 01:15:18.479585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.862 [2024-12-06 01:15:18.479619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.862 qpair failed and we were unable to recover it. 00:37:45.862 [2024-12-06 01:15:18.479770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.862 [2024-12-06 01:15:18.479805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.862 qpair failed and we were unable to recover it. 00:37:45.862 [2024-12-06 01:15:18.479983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.862 [2024-12-06 01:15:18.480016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.862 qpair failed and we were unable to recover it. 00:37:45.862 [2024-12-06 01:15:18.480121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.862 [2024-12-06 01:15:18.480155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.862 qpair failed and we were unable to recover it. 00:37:45.862 [2024-12-06 01:15:18.480287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.862 [2024-12-06 01:15:18.480321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.862 qpair failed and we were unable to recover it. 00:37:45.862 [2024-12-06 01:15:18.480423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.862 [2024-12-06 01:15:18.480456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.862 qpair failed and we were unable to recover it. 00:37:45.862 [2024-12-06 01:15:18.480595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.862 [2024-12-06 01:15:18.480629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.862 qpair failed and we were unable to recover it. 
00:37:45.862 [2024-12-06 01:15:18.480746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.862 [2024-12-06 01:15:18.480779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.862 qpair failed and we were unable to recover it. 00:37:45.862 [2024-12-06 01:15:18.480904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.862 [2024-12-06 01:15:18.480938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.862 qpair failed and we were unable to recover it. 00:37:45.862 [2024-12-06 01:15:18.481068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.862 [2024-12-06 01:15:18.481103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.863 qpair failed and we were unable to recover it. 00:37:45.863 [2024-12-06 01:15:18.481248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.863 [2024-12-06 01:15:18.481282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.863 qpair failed and we were unable to recover it. 00:37:45.863 [2024-12-06 01:15:18.481419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.863 [2024-12-06 01:15:18.481454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.863 qpair failed and we were unable to recover it. 00:37:45.863 [2024-12-06 01:15:18.481559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.863 [2024-12-06 01:15:18.481593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.863 qpair failed and we were unable to recover it. 00:37:45.863 [2024-12-06 01:15:18.481702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.863 [2024-12-06 01:15:18.481736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.863 qpair failed and we were unable to recover it. 00:37:45.863 [2024-12-06 01:15:18.481847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.863 [2024-12-06 01:15:18.481897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.863 qpair failed and we were unable to recover it. 00:37:45.863 [2024-12-06 01:15:18.482034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.863 [2024-12-06 01:15:18.482068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.863 qpair failed and we were unable to recover it. 00:37:45.863 [2024-12-06 01:15:18.482231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.863 [2024-12-06 01:15:18.482265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.863 qpair failed and we were unable to recover it. 
00:37:45.863 [2024-12-06 01:15:18.482401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.863 [2024-12-06 01:15:18.482435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.863 qpair failed and we were unable to recover it. 00:37:45.863 [2024-12-06 01:15:18.482597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.863 [2024-12-06 01:15:18.482631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.863 qpair failed and we were unable to recover it. 00:37:45.863 [2024-12-06 01:15:18.482744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.863 [2024-12-06 01:15:18.482778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.863 qpair failed and we were unable to recover it. 00:37:45.863 [2024-12-06 01:15:18.482921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.863 [2024-12-06 01:15:18.482956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.863 qpair failed and we were unable to recover it. 00:37:45.863 [2024-12-06 01:15:18.483096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.863 [2024-12-06 01:15:18.483131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.863 qpair failed and we were unable to recover it. 00:37:45.863 [2024-12-06 01:15:18.483293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.863 [2024-12-06 01:15:18.483327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.863 qpair failed and we were unable to recover it. 00:37:45.863 [2024-12-06 01:15:18.483461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.863 [2024-12-06 01:15:18.483495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.863 qpair failed and we were unable to recover it. 00:37:45.863 [2024-12-06 01:15:18.483627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.863 [2024-12-06 01:15:18.483666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.863 qpair failed and we were unable to recover it. 00:37:45.863 [2024-12-06 01:15:18.483778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.863 [2024-12-06 01:15:18.483833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.863 qpair failed and we were unable to recover it. 00:37:45.863 [2024-12-06 01:15:18.483971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.863 [2024-12-06 01:15:18.484005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.863 qpair failed and we were unable to recover it. 
00:37:45.863 [2024-12-06 01:15:18.484120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.863 [2024-12-06 01:15:18.484154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.863 qpair failed and we were unable to recover it. 
00:37:46.151 [... the same three-line sequence (posix.c:1054 connect() failed, errno = 111 → nvme_tcp.c:2288 sock connection error → "qpair failed and we were unable to recover it.") repeats continuously from 01:15:18.484 through 01:15:18.520 for tqpair values 0x6150001ffe80, 0x61500021ff00, 0x6150001f2f00, and 0x615000210000, all with addr=10.0.0.2, port=4420 ...]
00:37:46.151 [2024-12-06 01:15:18.520344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.151 [2024-12-06 01:15:18.520378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.151 qpair failed and we were unable to recover it. 00:37:46.151 [2024-12-06 01:15:18.520487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.151 [2024-12-06 01:15:18.520533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.151 qpair failed and we were unable to recover it. 00:37:46.151 [2024-12-06 01:15:18.520673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.151 [2024-12-06 01:15:18.520709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.151 qpair failed and we were unable to recover it. 00:37:46.151 [2024-12-06 01:15:18.520885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.151 [2024-12-06 01:15:18.520934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.151 qpair failed and we were unable to recover it. 00:37:46.151 [2024-12-06 01:15:18.521053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.151 [2024-12-06 01:15:18.521105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.151 qpair failed and we were unable to recover it. 00:37:46.151 [2024-12-06 01:15:18.521245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.151 [2024-12-06 01:15:18.521281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.151 qpair failed and we were unable to recover it. 00:37:46.151 [2024-12-06 01:15:18.521393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.151 [2024-12-06 01:15:18.521429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.151 qpair failed and we were unable to recover it. 00:37:46.151 [2024-12-06 01:15:18.521547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.151 [2024-12-06 01:15:18.521582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.151 qpair failed and we were unable to recover it. 00:37:46.151 [2024-12-06 01:15:18.521721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.151 [2024-12-06 01:15:18.521757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.151 qpair failed and we were unable to recover it. 00:37:46.151 [2024-12-06 01:15:18.521895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.151 [2024-12-06 01:15:18.521942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.151 qpair failed and we were unable to recover it. 
00:37:46.151 [2024-12-06 01:15:18.522088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.151 [2024-12-06 01:15:18.522124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.151 qpair failed and we were unable to recover it. 00:37:46.151 [2024-12-06 01:15:18.522261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.151 [2024-12-06 01:15:18.522296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.151 qpair failed and we were unable to recover it. 00:37:46.151 [2024-12-06 01:15:18.522431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.151 [2024-12-06 01:15:18.522466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.151 qpair failed and we were unable to recover it. 00:37:46.151 [2024-12-06 01:15:18.522578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.151 [2024-12-06 01:15:18.522612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.151 qpair failed and we were unable to recover it. 00:37:46.151 [2024-12-06 01:15:18.522765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.151 [2024-12-06 01:15:18.522823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.151 qpair failed and we were unable to recover it. 00:37:46.151 [2024-12-06 01:15:18.522944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.151 [2024-12-06 01:15:18.522980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.151 qpair failed and we were unable to recover it. 00:37:46.151 [2024-12-06 01:15:18.523097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.151 [2024-12-06 01:15:18.523132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.151 qpair failed and we were unable to recover it. 00:37:46.151 [2024-12-06 01:15:18.523276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.151 [2024-12-06 01:15:18.523310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.151 qpair failed and we were unable to recover it. 00:37:46.151 [2024-12-06 01:15:18.523421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.151 [2024-12-06 01:15:18.523455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.151 qpair failed and we were unable to recover it. 00:37:46.151 [2024-12-06 01:15:18.523591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.151 [2024-12-06 01:15:18.523625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.151 qpair failed and we were unable to recover it. 
00:37:46.151 [2024-12-06 01:15:18.523778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.151 [2024-12-06 01:15:18.523838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.151 qpair failed and we were unable to recover it. 00:37:46.151 [2024-12-06 01:15:18.524016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.151 [2024-12-06 01:15:18.524065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.151 qpair failed and we were unable to recover it. 00:37:46.151 [2024-12-06 01:15:18.524181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.151 [2024-12-06 01:15:18.524217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.151 qpair failed and we were unable to recover it. 00:37:46.151 [2024-12-06 01:15:18.524369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.151 [2024-12-06 01:15:18.524407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.151 qpair failed and we were unable to recover it. 00:37:46.151 [2024-12-06 01:15:18.524557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.151 [2024-12-06 01:15:18.524593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.151 qpair failed and we were unable to recover it. 00:37:46.151 [2024-12-06 01:15:18.524703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.151 [2024-12-06 01:15:18.524738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.151 qpair failed and we were unable to recover it. 00:37:46.151 [2024-12-06 01:15:18.524848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.151 [2024-12-06 01:15:18.524884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.151 qpair failed and we were unable to recover it. 00:37:46.151 [2024-12-06 01:15:18.525038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.151 [2024-12-06 01:15:18.525086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.151 qpair failed and we were unable to recover it. 00:37:46.151 [2024-12-06 01:15:18.525209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.151 [2024-12-06 01:15:18.525247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.151 qpair failed and we were unable to recover it. 00:37:46.151 [2024-12-06 01:15:18.525395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.151 [2024-12-06 01:15:18.525431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.151 qpair failed and we were unable to recover it. 
00:37:46.151 [2024-12-06 01:15:18.525535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.151 [2024-12-06 01:15:18.525570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.151 qpair failed and we were unable to recover it. 00:37:46.151 [2024-12-06 01:15:18.525722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.151 [2024-12-06 01:15:18.525756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.151 qpair failed and we were unable to recover it. 00:37:46.151 [2024-12-06 01:15:18.525865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.151 [2024-12-06 01:15:18.525899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.151 qpair failed and we were unable to recover it. 00:37:46.151 [2024-12-06 01:15:18.526005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.152 [2024-12-06 01:15:18.526039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.152 qpair failed and we were unable to recover it. 00:37:46.152 [2024-12-06 01:15:18.526205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.152 [2024-12-06 01:15:18.526238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.152 qpair failed and we were unable to recover it. 00:37:46.152 [2024-12-06 01:15:18.526369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.152 [2024-12-06 01:15:18.526402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.152 qpair failed and we were unable to recover it. 00:37:46.152 [2024-12-06 01:15:18.526572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.152 [2024-12-06 01:15:18.526608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.152 qpair failed and we were unable to recover it. 00:37:46.152 [2024-12-06 01:15:18.526736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.152 [2024-12-06 01:15:18.526774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.152 qpair failed and we were unable to recover it. 00:37:46.152 [2024-12-06 01:15:18.526933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.152 [2024-12-06 01:15:18.526980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.152 qpair failed and we were unable to recover it. 00:37:46.152 [2024-12-06 01:15:18.527100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.152 [2024-12-06 01:15:18.527139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.152 qpair failed and we were unable to recover it. 
00:37:46.152 [2024-12-06 01:15:18.527253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.152 [2024-12-06 01:15:18.527288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.152 qpair failed and we were unable to recover it. 00:37:46.152 [2024-12-06 01:15:18.527403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.152 [2024-12-06 01:15:18.527437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.152 qpair failed and we were unable to recover it. 00:37:46.152 [2024-12-06 01:15:18.527546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.152 [2024-12-06 01:15:18.527581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.152 qpair failed and we were unable to recover it. 00:37:46.152 [2024-12-06 01:15:18.527693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.152 [2024-12-06 01:15:18.527728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.152 qpair failed and we were unable to recover it. 00:37:46.152 [2024-12-06 01:15:18.527876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.152 [2024-12-06 01:15:18.527917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.152 qpair failed and we were unable to recover it. 00:37:46.152 [2024-12-06 01:15:18.528029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.152 [2024-12-06 01:15:18.528063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.152 qpair failed and we were unable to recover it. 00:37:46.152 [2024-12-06 01:15:18.528206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.152 [2024-12-06 01:15:18.528241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.152 qpair failed and we were unable to recover it. 00:37:46.152 [2024-12-06 01:15:18.528350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.152 [2024-12-06 01:15:18.528386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.152 qpair failed and we were unable to recover it. 00:37:46.152 [2024-12-06 01:15:18.528556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.152 [2024-12-06 01:15:18.528592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.152 qpair failed and we were unable to recover it. 00:37:46.152 [2024-12-06 01:15:18.528704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.152 [2024-12-06 01:15:18.528738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.152 qpair failed and we were unable to recover it. 
00:37:46.152 [2024-12-06 01:15:18.528849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.152 [2024-12-06 01:15:18.528884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.152 qpair failed and we were unable to recover it. 00:37:46.152 [2024-12-06 01:15:18.529001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.152 [2024-12-06 01:15:18.529035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.152 qpair failed and we were unable to recover it. 00:37:46.152 [2024-12-06 01:15:18.529147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.152 [2024-12-06 01:15:18.529181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.152 qpair failed and we were unable to recover it. 00:37:46.152 [2024-12-06 01:15:18.529287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.152 [2024-12-06 01:15:18.529322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.152 qpair failed and we were unable to recover it. 00:37:46.152 [2024-12-06 01:15:18.529489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.152 [2024-12-06 01:15:18.529525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.152 qpair failed and we were unable to recover it. 00:37:46.152 [2024-12-06 01:15:18.529681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.152 [2024-12-06 01:15:18.529721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.152 qpair failed and we were unable to recover it. 00:37:46.152 [2024-12-06 01:15:18.529862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.152 [2024-12-06 01:15:18.529899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.152 qpair failed and we were unable to recover it. 00:37:46.152 [2024-12-06 01:15:18.530066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.152 [2024-12-06 01:15:18.530100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.152 qpair failed and we were unable to recover it. 00:37:46.152 [2024-12-06 01:15:18.530212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.152 [2024-12-06 01:15:18.530246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.152 qpair failed and we were unable to recover it. 00:37:46.152 [2024-12-06 01:15:18.530352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.152 [2024-12-06 01:15:18.530386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.152 qpair failed and we were unable to recover it. 
00:37:46.152 [2024-12-06 01:15:18.530505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.152 [2024-12-06 01:15:18.530540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.152 qpair failed and we were unable to recover it. 00:37:46.152 [2024-12-06 01:15:18.530675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.152 [2024-12-06 01:15:18.530710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.152 qpair failed and we were unable to recover it. 00:37:46.152 [2024-12-06 01:15:18.530829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.152 [2024-12-06 01:15:18.530864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.152 qpair failed and we were unable to recover it. 00:37:46.152 [2024-12-06 01:15:18.531027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.152 [2024-12-06 01:15:18.531061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.152 qpair failed and we were unable to recover it. 00:37:46.152 [2024-12-06 01:15:18.531171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.152 [2024-12-06 01:15:18.531204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.152 qpair failed and we were unable to recover it. 00:37:46.152 [2024-12-06 01:15:18.531311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.152 [2024-12-06 01:15:18.531345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.152 qpair failed and we were unable to recover it. 00:37:46.152 [2024-12-06 01:15:18.531508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.152 [2024-12-06 01:15:18.531544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.152 qpair failed and we were unable to recover it. 00:37:46.152 [2024-12-06 01:15:18.531650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.152 [2024-12-06 01:15:18.531685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.152 qpair failed and we were unable to recover it. 00:37:46.152 [2024-12-06 01:15:18.531793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.153 [2024-12-06 01:15:18.531835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.153 qpair failed and we were unable to recover it. 00:37:46.153 [2024-12-06 01:15:18.531951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.153 [2024-12-06 01:15:18.531985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.153 qpair failed and we were unable to recover it. 
00:37:46.153 [2024-12-06 01:15:18.532120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.153 [2024-12-06 01:15:18.532153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.153 qpair failed and we were unable to recover it. 00:37:46.153 [2024-12-06 01:15:18.532264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.153 [2024-12-06 01:15:18.532298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.153 qpair failed and we were unable to recover it. 00:37:46.153 [2024-12-06 01:15:18.532408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.153 [2024-12-06 01:15:18.532442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.153 qpair failed and we were unable to recover it. 00:37:46.153 [2024-12-06 01:15:18.532573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.153 [2024-12-06 01:15:18.532608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.153 qpair failed and we were unable to recover it. 00:37:46.153 [2024-12-06 01:15:18.532739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.153 [2024-12-06 01:15:18.532788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.153 qpair failed and we were unable to recover it. 00:37:46.153 [2024-12-06 01:15:18.532912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.153 [2024-12-06 01:15:18.532948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.153 qpair failed and we were unable to recover it. 00:37:46.153 [2024-12-06 01:15:18.533096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.153 [2024-12-06 01:15:18.533130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.153 qpair failed and we were unable to recover it. 00:37:46.153 [2024-12-06 01:15:18.533233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.153 [2024-12-06 01:15:18.533267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.153 qpair failed and we were unable to recover it. 00:37:46.153 [2024-12-06 01:15:18.533401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.153 [2024-12-06 01:15:18.533435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.153 qpair failed and we were unable to recover it. 00:37:46.153 [2024-12-06 01:15:18.533583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.153 [2024-12-06 01:15:18.533620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.153 qpair failed and we were unable to recover it. 
00:37:46.153 [2024-12-06 01:15:18.533768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.153 [2024-12-06 01:15:18.533805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.153 qpair failed and we were unable to recover it. 00:37:46.153 [2024-12-06 01:15:18.533974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.153 [2024-12-06 01:15:18.534023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.153 qpair failed and we were unable to recover it. 00:37:46.153 [2024-12-06 01:15:18.534147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.153 [2024-12-06 01:15:18.534186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.153 qpair failed and we were unable to recover it. 00:37:46.153 [2024-12-06 01:15:18.534296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.153 [2024-12-06 01:15:18.534332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.153 qpair failed and we were unable to recover it. 00:37:46.153 [2024-12-06 01:15:18.534473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.153 [2024-12-06 01:15:18.534513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.153 qpair failed and we were unable to recover it. 00:37:46.153 [2024-12-06 01:15:18.534654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.153 [2024-12-06 01:15:18.534689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.153 qpair failed and we were unable to recover it. 00:37:46.153 [2024-12-06 01:15:18.534836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.153 [2024-12-06 01:15:18.534884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.153 qpair failed and we were unable to recover it. 00:37:46.153 [2024-12-06 01:15:18.534997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.153 [2024-12-06 01:15:18.535033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.153 qpair failed and we were unable to recover it. 00:37:46.153 [2024-12-06 01:15:18.535144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.153 [2024-12-06 01:15:18.535178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.153 qpair failed and we were unable to recover it. 00:37:46.153 [2024-12-06 01:15:18.535339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.153 [2024-12-06 01:15:18.535373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.153 qpair failed and we were unable to recover it. 
00:37:46.153 [2024-12-06 01:15:18.535477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.153 [2024-12-06 01:15:18.535510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.153 qpair failed and we were unable to recover it. 00:37:46.153 [2024-12-06 01:15:18.535622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.153 [2024-12-06 01:15:18.535657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.153 qpair failed and we were unable to recover it. 00:37:46.153 [2024-12-06 01:15:18.535774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.153 [2024-12-06 01:15:18.535810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.153 qpair failed and we were unable to recover it. 00:37:46.153 [2024-12-06 01:15:18.535995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.153 [2024-12-06 01:15:18.536043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.153 qpair failed and we were unable to recover it. 00:37:46.153 [2024-12-06 01:15:18.536189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.153 [2024-12-06 01:15:18.536226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.153 qpair failed and we were unable to recover it. 00:37:46.153 [2024-12-06 01:15:18.536347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.153 [2024-12-06 01:15:18.536383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.153 qpair failed and we were unable to recover it. 00:37:46.153 [2024-12-06 01:15:18.536516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.153 [2024-12-06 01:15:18.536550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.153 qpair failed and we were unable to recover it. 00:37:46.153 [2024-12-06 01:15:18.536698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.153 [2024-12-06 01:15:18.536733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.153 qpair failed and we were unable to recover it. 00:37:46.153 [2024-12-06 01:15:18.536855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.153 [2024-12-06 01:15:18.536891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.153 qpair failed and we were unable to recover it. 00:37:46.153 [2024-12-06 01:15:18.537023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.153 [2024-12-06 01:15:18.537072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.153 qpair failed and we were unable to recover it. 
00:37:46.153 [2024-12-06 01:15:18.537201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.153 [2024-12-06 01:15:18.537238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.153 qpair failed and we were unable to recover it. 00:37:46.153 [2024-12-06 01:15:18.537376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.153 [2024-12-06 01:15:18.537411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.153 qpair failed and we were unable to recover it. 00:37:46.153 [2024-12-06 01:15:18.537524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.153 [2024-12-06 01:15:18.537559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.153 qpair failed and we were unable to recover it. 00:37:46.153 [2024-12-06 01:15:18.537677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.154 [2024-12-06 01:15:18.537712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.154 qpair failed and we were unable to recover it. 00:37:46.154 [2024-12-06 01:15:18.537833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.154 [2024-12-06 01:15:18.537869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.154 qpair failed and we were unable to recover it. 00:37:46.154 [2024-12-06 01:15:18.537978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.154 [2024-12-06 01:15:18.538012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.154 qpair failed and we were unable to recover it. 00:37:46.154 [2024-12-06 01:15:18.538122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.154 [2024-12-06 01:15:18.538155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.154 qpair failed and we were unable to recover it. 00:37:46.154 [2024-12-06 01:15:18.538265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.154 [2024-12-06 01:15:18.538300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.154 qpair failed and we were unable to recover it. 00:37:46.154 [2024-12-06 01:15:18.538435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.154 [2024-12-06 01:15:18.538469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.154 qpair failed and we were unable to recover it. 00:37:46.154 [2024-12-06 01:15:18.538586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.154 [2024-12-06 01:15:18.538620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.154 qpair failed and we were unable to recover it. 
00:37:46.154 [2024-12-06 01:15:18.538747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.154 [2024-12-06 01:15:18.538781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.154 qpair failed and we were unable to recover it. 00:37:46.154 [2024-12-06 01:15:18.538924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.154 [2024-12-06 01:15:18.538973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.154 qpair failed and we were unable to recover it. 00:37:46.154 [2024-12-06 01:15:18.539118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.154 [2024-12-06 01:15:18.539155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.154 qpair failed and we were unable to recover it. 00:37:46.154 [2024-12-06 01:15:18.539298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.154 [2024-12-06 01:15:18.539332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.154 qpair failed and we were unable to recover it. 00:37:46.154 [2024-12-06 01:15:18.539481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.154 [2024-12-06 01:15:18.539516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.154 qpair failed and we were unable to recover it. 00:37:46.154 [2024-12-06 01:15:18.539668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.154 [2024-12-06 01:15:18.539704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.154 qpair failed and we were unable to recover it. 00:37:46.154 [2024-12-06 01:15:18.539831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.154 [2024-12-06 01:15:18.539884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.154 qpair failed and we were unable to recover it. 00:37:46.154 [2024-12-06 01:15:18.539991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.154 [2024-12-06 01:15:18.540025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.154 qpair failed and we were unable to recover it. 00:37:46.154 [2024-12-06 01:15:18.540232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.154 [2024-12-06 01:15:18.540267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.154 qpair failed and we were unable to recover it. 00:37:46.154 [2024-12-06 01:15:18.540379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.154 [2024-12-06 01:15:18.540413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.154 qpair failed and we were unable to recover it. 
00:37:46.154 [2024-12-06 01:15:18.540526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.154 [2024-12-06 01:15:18.540561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.154 qpair failed and we were unable to recover it. 00:37:46.154 [2024-12-06 01:15:18.540701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.154 [2024-12-06 01:15:18.540734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.154 qpair failed and we were unable to recover it. 00:37:46.154 [2024-12-06 01:15:18.540880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.154 [2024-12-06 01:15:18.540929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.154 qpair failed and we were unable to recover it. 00:37:46.154 [2024-12-06 01:15:18.541100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.154 [2024-12-06 01:15:18.541136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.154 qpair failed and we were unable to recover it. 00:37:46.154 [2024-12-06 01:15:18.541245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.154 [2024-12-06 01:15:18.541286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.154 qpair failed and we were unable to recover it. 00:37:46.154 [2024-12-06 01:15:18.541394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.154 [2024-12-06 01:15:18.541429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.154 qpair failed and we were unable to recover it. 00:37:46.154 [2024-12-06 01:15:18.541606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.154 [2024-12-06 01:15:18.541641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.154 qpair failed and we were unable to recover it. 00:37:46.154 [2024-12-06 01:15:18.541752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.154 [2024-12-06 01:15:18.541788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.154 qpair failed and we were unable to recover it. 00:37:46.154 [2024-12-06 01:15:18.541931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.154 [2024-12-06 01:15:18.541967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.154 qpair failed and we were unable to recover it. 00:37:46.154 [2024-12-06 01:15:18.542117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.154 [2024-12-06 01:15:18.542165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.154 qpair failed and we were unable to recover it. 
00:37:46.154 [2024-12-06 01:15:18.542318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.154 [2024-12-06 01:15:18.542355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.154 qpair failed and we were unable to recover it. 00:37:46.154 [2024-12-06 01:15:18.542462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.154 [2024-12-06 01:15:18.542498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.154 qpair failed and we were unable to recover it. 00:37:46.154 [2024-12-06 01:15:18.542611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.154 [2024-12-06 01:15:18.542652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.154 qpair failed and we were unable to recover it. 00:37:46.154 [2024-12-06 01:15:18.542767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.154 [2024-12-06 01:15:18.542802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.154 qpair failed and we were unable to recover it. 00:37:46.154 [2024-12-06 01:15:18.542917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.154 [2024-12-06 01:15:18.542951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.154 qpair failed and we were unable to recover it. 00:37:46.154 [2024-12-06 01:15:18.543081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.155 [2024-12-06 01:15:18.543115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.155 qpair failed and we were unable to recover it. 00:37:46.155 [2024-12-06 01:15:18.543246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.155 [2024-12-06 01:15:18.543280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.155 qpair failed and we were unable to recover it. 00:37:46.155 [2024-12-06 01:15:18.543421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.155 [2024-12-06 01:15:18.543457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.155 qpair failed and we were unable to recover it. 00:37:46.155 [2024-12-06 01:15:18.543593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.155 [2024-12-06 01:15:18.543631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.155 qpair failed and we were unable to recover it. 00:37:46.155 [2024-12-06 01:15:18.543761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.155 [2024-12-06 01:15:18.543799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.155 qpair failed and we were unable to recover it. 
00:37:46.155 [2024-12-06 01:15:18.543928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.155 [2024-12-06 01:15:18.543962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.155 qpair failed and we were unable to recover it. 00:37:46.155 [2024-12-06 01:15:18.544122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.155 [2024-12-06 01:15:18.544170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.155 qpair failed and we were unable to recover it. 00:37:46.155 [2024-12-06 01:15:18.544320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.155 [2024-12-06 01:15:18.544356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.155 qpair failed and we were unable to recover it. 00:37:46.155 [2024-12-06 01:15:18.544474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.155 [2024-12-06 01:15:18.544508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.155 qpair failed and we were unable to recover it. 00:37:46.155 [2024-12-06 01:15:18.544618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.155 [2024-12-06 01:15:18.544654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.155 qpair failed and we were unable to recover it. 00:37:46.155 [2024-12-06 01:15:18.544794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.155 [2024-12-06 01:15:18.544835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.155 qpair failed and we were unable to recover it. 00:37:46.155 [2024-12-06 01:15:18.544957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.155 [2024-12-06 01:15:18.544992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.155 qpair failed and we were unable to recover it. 00:37:46.155 [2024-12-06 01:15:18.545130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.155 [2024-12-06 01:15:18.545165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.155 qpair failed and we were unable to recover it. 00:37:46.155 [2024-12-06 01:15:18.545331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.155 [2024-12-06 01:15:18.545365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.155 qpair failed and we were unable to recover it. 00:37:46.155 [2024-12-06 01:15:18.545476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.155 [2024-12-06 01:15:18.545510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.155 qpair failed and we were unable to recover it. 
00:37:46.155 [2024-12-06 01:15:18.545647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.155 [2024-12-06 01:15:18.545681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.155 qpair failed and we were unable to recover it. 00:37:46.155 [2024-12-06 01:15:18.545795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.155 [2024-12-06 01:15:18.545843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.155 qpair failed and we were unable to recover it. 00:37:46.155 [2024-12-06 01:15:18.545986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.155 [2024-12-06 01:15:18.546024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.155 qpair failed and we were unable to recover it. 00:37:46.155 [2024-12-06 01:15:18.546214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.155 [2024-12-06 01:15:18.546262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.155 qpair failed and we were unable to recover it. 00:37:46.155 [2024-12-06 01:15:18.546405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.155 [2024-12-06 01:15:18.546442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.155 qpair failed and we were unable to recover it. 00:37:46.155 [2024-12-06 01:15:18.546586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.155 [2024-12-06 01:15:18.546622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.155 qpair failed and we were unable to recover it. 00:37:46.155 [2024-12-06 01:15:18.546731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.155 [2024-12-06 01:15:18.546766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.155 qpair failed and we were unable to recover it. 00:37:46.155 [2024-12-06 01:15:18.546922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.155 [2024-12-06 01:15:18.546957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.155 qpair failed and we were unable to recover it. 00:37:46.155 [2024-12-06 01:15:18.547068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.155 [2024-12-06 01:15:18.547103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.155 qpair failed and we were unable to recover it. 00:37:46.155 [2024-12-06 01:15:18.547247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.155 [2024-12-06 01:15:18.547283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.155 qpair failed and we were unable to recover it. 
00:37:46.155 [2024-12-06 01:15:18.547431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.155 [2024-12-06 01:15:18.547466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.155 qpair failed and we were unable to recover it. 00:37:46.155 [2024-12-06 01:15:18.547572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.155 [2024-12-06 01:15:18.547606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.155 qpair failed and we were unable to recover it. 00:37:46.155 [2024-12-06 01:15:18.547722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.155 [2024-12-06 01:15:18.547755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.155 qpair failed and we were unable to recover it. 00:37:46.155 [2024-12-06 01:15:18.547875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.155 [2024-12-06 01:15:18.547910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.155 qpair failed and we were unable to recover it. 00:37:46.155 [2024-12-06 01:15:18.548028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.155 [2024-12-06 01:15:18.548067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.155 qpair failed and we were unable to recover it. 00:37:46.155 [2024-12-06 01:15:18.548173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.155 [2024-12-06 01:15:18.548207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.155 qpair failed and we were unable to recover it. 00:37:46.155 [2024-12-06 01:15:18.548331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.155 [2024-12-06 01:15:18.548365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.155 qpair failed and we were unable to recover it. 00:37:46.155 [2024-12-06 01:15:18.548522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.155 [2024-12-06 01:15:18.548567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.155 qpair failed and we were unable to recover it. 00:37:46.155 [2024-12-06 01:15:18.548676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.155 [2024-12-06 01:15:18.548712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.155 qpair failed and we were unable to recover it. 00:37:46.155 [2024-12-06 01:15:18.548860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.155 [2024-12-06 01:15:18.548895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.155 qpair failed and we were unable to recover it. 
00:37:46.155 [2024-12-06 01:15:18.548997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.156 [2024-12-06 01:15:18.549031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.156 qpair failed and we were unable to recover it. 00:37:46.156 [2024-12-06 01:15:18.549143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.156 [2024-12-06 01:15:18.549178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.156 qpair failed and we were unable to recover it. 00:37:46.156 [2024-12-06 01:15:18.549338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.156 [2024-12-06 01:15:18.549373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.156 qpair failed and we were unable to recover it. 00:37:46.156 [2024-12-06 01:15:18.549533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.156 [2024-12-06 01:15:18.549567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.156 qpair failed and we were unable to recover it. 00:37:46.156 [2024-12-06 01:15:18.549677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.156 [2024-12-06 01:15:18.549711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.156 qpair failed and we were unable to recover it. 00:37:46.156 [2024-12-06 01:15:18.549851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.156 [2024-12-06 01:15:18.549886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.156 qpair failed and we were unable to recover it. 00:37:46.156 [2024-12-06 01:15:18.549996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.156 [2024-12-06 01:15:18.550030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.156 qpair failed and we were unable to recover it. 00:37:46.156 [2024-12-06 01:15:18.550148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.156 [2024-12-06 01:15:18.550195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.156 qpair failed and we were unable to recover it. 00:37:46.156 [2024-12-06 01:15:18.550346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.156 [2024-12-06 01:15:18.550383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.156 qpair failed and we were unable to recover it. 00:37:46.156 [2024-12-06 01:15:18.550487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.156 [2024-12-06 01:15:18.550521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.156 qpair failed and we were unable to recover it. 
00:37:46.156 [2024-12-06 01:15:18.550683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.156 [2024-12-06 01:15:18.550719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.156 qpair failed and we were unable to recover it. 00:37:46.156 [2024-12-06 01:15:18.550855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.156 [2024-12-06 01:15:18.550890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.156 qpair failed and we were unable to recover it. 00:37:46.156 [2024-12-06 01:15:18.551006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.156 [2024-12-06 01:15:18.551040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.156 qpair failed and we were unable to recover it. 00:37:46.156 [2024-12-06 01:15:18.551180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.156 [2024-12-06 01:15:18.551215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.156 qpair failed and we were unable to recover it. 00:37:46.156 [2024-12-06 01:15:18.551324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.156 [2024-12-06 01:15:18.551358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.156 qpair failed and we were unable to recover it. 00:37:46.156 [2024-12-06 01:15:18.551468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.156 [2024-12-06 01:15:18.551503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.156 qpair failed and we were unable to recover it. 00:37:46.156 [2024-12-06 01:15:18.551638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.156 [2024-12-06 01:15:18.551672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.156 qpair failed and we were unable to recover it. 00:37:46.156 [2024-12-06 01:15:18.551843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.156 [2024-12-06 01:15:18.551879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.156 qpair failed and we were unable to recover it. 00:37:46.156 [2024-12-06 01:15:18.552030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.156 [2024-12-06 01:15:18.552077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.156 qpair failed and we were unable to recover it. 00:37:46.156 [2024-12-06 01:15:18.552208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.156 [2024-12-06 01:15:18.552244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.156 qpair failed and we were unable to recover it. 
00:37:46.156 [2024-12-06 01:15:18.552382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.156 [2024-12-06 01:15:18.552418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.156 qpair failed and we were unable to recover it. 00:37:46.156 [2024-12-06 01:15:18.552536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.156 [2024-12-06 01:15:18.552570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.156 qpair failed and we were unable to recover it. 00:37:46.156 [2024-12-06 01:15:18.552677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.156 [2024-12-06 01:15:18.552711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.156 qpair failed and we were unable to recover it. 00:37:46.156 [2024-12-06 01:15:18.552845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.156 [2024-12-06 01:15:18.552881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.156 qpair failed and we were unable to recover it. 00:37:46.156 [2024-12-06 01:15:18.552992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.156 [2024-12-06 01:15:18.553027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.156 qpair failed and we were unable to recover it. 00:37:46.156 [2024-12-06 01:15:18.553162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.156 [2024-12-06 01:15:18.553196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.156 qpair failed and we were unable to recover it. 00:37:46.156 [2024-12-06 01:15:18.553323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.156 [2024-12-06 01:15:18.553357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.156 qpair failed and we were unable to recover it. 00:37:46.156 [2024-12-06 01:15:18.553466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.156 [2024-12-06 01:15:18.553500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.156 qpair failed and we were unable to recover it. 00:37:46.156 [2024-12-06 01:15:18.553604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.156 [2024-12-06 01:15:18.553638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.156 qpair failed and we were unable to recover it. 00:37:46.156 [2024-12-06 01:15:18.553773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.156 [2024-12-06 01:15:18.553807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.156 qpair failed and we were unable to recover it. 
00:37:46.156 [2024-12-06 01:15:18.553988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.156 [2024-12-06 01:15:18.554022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.156 qpair failed and we were unable to recover it. 00:37:46.156 [2024-12-06 01:15:18.554136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.157 [2024-12-06 01:15:18.554171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.157 qpair failed and we were unable to recover it. 00:37:46.157 [2024-12-06 01:15:18.554284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.157 [2024-12-06 01:15:18.554317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.157 qpair failed and we were unable to recover it. 00:37:46.157 [2024-12-06 01:15:18.554434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.157 [2024-12-06 01:15:18.554468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.157 qpair failed and we were unable to recover it. 00:37:46.157 [2024-12-06 01:15:18.554564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.157 [2024-12-06 01:15:18.554602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.157 qpair failed and we were unable to recover it. 00:37:46.157 [2024-12-06 01:15:18.554762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.157 [2024-12-06 01:15:18.554795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.157 qpair failed and we were unable to recover it. 00:37:46.157 [2024-12-06 01:15:18.554911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.157 [2024-12-06 01:15:18.554946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.157 qpair failed and we were unable to recover it. 00:37:46.157 [2024-12-06 01:15:18.555076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.157 [2024-12-06 01:15:18.555109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.157 qpair failed and we were unable to recover it. 00:37:46.157 [2024-12-06 01:15:18.555218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.157 [2024-12-06 01:15:18.555252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.157 qpair failed and we were unable to recover it. 00:37:46.157 [2024-12-06 01:15:18.555363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.157 [2024-12-06 01:15:18.555396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.157 qpair failed and we were unable to recover it. 
00:37:46.157 [2024-12-06 01:15:18.555530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.157 [2024-12-06 01:15:18.555564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.157 qpair failed and we were unable to recover it. 00:37:46.157 [2024-12-06 01:15:18.555666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.157 [2024-12-06 01:15:18.555700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.157 qpair failed and we were unable to recover it. 00:37:46.157 [2024-12-06 01:15:18.555837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.157 [2024-12-06 01:15:18.555872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.157 qpair failed and we were unable to recover it. 00:37:46.157 [2024-12-06 01:15:18.555989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.157 [2024-12-06 01:15:18.556024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.157 qpair failed and we were unable to recover it. 00:37:46.157 [2024-12-06 01:15:18.556172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.157 [2024-12-06 01:15:18.556207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.157 qpair failed and we were unable to recover it. 00:37:46.157 [2024-12-06 01:15:18.556312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.157 [2024-12-06 01:15:18.556347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.157 qpair failed and we were unable to recover it. 00:37:46.157 [2024-12-06 01:15:18.556482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.157 [2024-12-06 01:15:18.556516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.157 qpair failed and we were unable to recover it. 00:37:46.157 [2024-12-06 01:15:18.556647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.157 [2024-12-06 01:15:18.556681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.157 qpair failed and we were unable to recover it. 00:37:46.157 [2024-12-06 01:15:18.556857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.157 [2024-12-06 01:15:18.556891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.157 qpair failed and we were unable to recover it. 00:37:46.157 [2024-12-06 01:15:18.557031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.157 [2024-12-06 01:15:18.557066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.157 qpair failed and we were unable to recover it. 
00:37:46.157 [2024-12-06 01:15:18.557173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.157 [2024-12-06 01:15:18.557207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.157 qpair failed and we were unable to recover it. 00:37:46.157 [2024-12-06 01:15:18.557373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.157 [2024-12-06 01:15:18.557407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.157 qpair failed and we were unable to recover it. 00:37:46.157 [2024-12-06 01:15:18.557522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.157 [2024-12-06 01:15:18.557556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.157 qpair failed and we were unable to recover it. 00:37:46.157 [2024-12-06 01:15:18.557688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.157 [2024-12-06 01:15:18.557722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.157 qpair failed and we were unable to recover it. 00:37:46.157 [2024-12-06 01:15:18.557828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.157 [2024-12-06 01:15:18.557862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.157 qpair failed and we were unable to recover it. 00:37:46.157 [2024-12-06 01:15:18.557980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.157 [2024-12-06 01:15:18.558014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.157 qpair failed and we were unable to recover it. 00:37:46.157 [2024-12-06 01:15:18.558151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.157 [2024-12-06 01:15:18.558186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.157 qpair failed and we were unable to recover it. 00:37:46.157 [2024-12-06 01:15:18.558323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.157 [2024-12-06 01:15:18.558357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.157 qpair failed and we were unable to recover it. 00:37:46.157 [2024-12-06 01:15:18.558497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.157 [2024-12-06 01:15:18.558531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.157 qpair failed and we were unable to recover it. 00:37:46.157 [2024-12-06 01:15:18.558640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.157 [2024-12-06 01:15:18.558675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.157 qpair failed and we were unable to recover it. 
00:37:46.157 [2024-12-06 01:15:18.558840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.157 [2024-12-06 01:15:18.558875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.157 qpair failed and we were unable to recover it. 00:37:46.157 [2024-12-06 01:15:18.559017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.157 [2024-12-06 01:15:18.559050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.157 qpair failed and we were unable to recover it. 00:37:46.157 [2024-12-06 01:15:18.559184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.157 [2024-12-06 01:15:18.559218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.157 qpair failed and we were unable to recover it. 00:37:46.157 [2024-12-06 01:15:18.559331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.157 [2024-12-06 01:15:18.559365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.157 qpair failed and we were unable to recover it. 00:37:46.157 [2024-12-06 01:15:18.559475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.157 [2024-12-06 01:15:18.559509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.157 qpair failed and we were unable to recover it. 00:37:46.157 [2024-12-06 01:15:18.559653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.157 [2024-12-06 01:15:18.559687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.157 qpair failed and we were unable to recover it. 00:37:46.157 [2024-12-06 01:15:18.559796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.157 [2024-12-06 01:15:18.559839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.157 qpair failed and we were unable to recover it. 00:37:46.157 [2024-12-06 01:15:18.559951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.157 [2024-12-06 01:15:18.559984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.157 qpair failed and we were unable to recover it. 00:37:46.157 [2024-12-06 01:15:18.560095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.157 [2024-12-06 01:15:18.560129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.157 qpair failed and we were unable to recover it. 00:37:46.157 [2024-12-06 01:15:18.560272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.157 [2024-12-06 01:15:18.560306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.157 qpair failed and we were unable to recover it. 
00:37:46.157 [2024-12-06 01:15:18.560417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.157 [2024-12-06 01:15:18.560451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.157 qpair failed and we were unable to recover it. 00:37:46.157 [2024-12-06 01:15:18.560563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.157 [2024-12-06 01:15:18.560596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.157 qpair failed and we were unable to recover it. 00:37:46.157 [2024-12-06 01:15:18.560732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.157 [2024-12-06 01:15:18.560766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.157 qpair failed and we were unable to recover it. 00:37:46.157 [2024-12-06 01:15:18.560917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.157 [2024-12-06 01:15:18.560951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.157 qpair failed and we were unable to recover it. 00:37:46.157 [2024-12-06 01:15:18.561107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.157 [2024-12-06 01:15:18.561162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.157 qpair failed and we were unable to recover it. 00:37:46.157 [2024-12-06 01:15:18.561308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.157 [2024-12-06 01:15:18.561356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.157 qpair failed and we were unable to recover it. 00:37:46.157 [2024-12-06 01:15:18.561501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.157 [2024-12-06 01:15:18.561537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.157 qpair failed and we were unable to recover it. 00:37:46.157 [2024-12-06 01:15:18.561675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.157 [2024-12-06 01:15:18.561710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.157 qpair failed and we were unable to recover it. 00:37:46.157 [2024-12-06 01:15:18.561826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.157 [2024-12-06 01:15:18.561863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.157 qpair failed and we were unable to recover it. 00:37:46.158 [2024-12-06 01:15:18.561982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.158 [2024-12-06 01:15:18.562016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.158 qpair failed and we were unable to recover it. 
00:37:46.158 [2024-12-06 01:15:18.562148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.158 [2024-12-06 01:15:18.562182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.158 qpair failed and we were unable to recover it. 00:37:46.158 [2024-12-06 01:15:18.562346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.158 [2024-12-06 01:15:18.562380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.158 qpair failed and we were unable to recover it. 00:37:46.158 [2024-12-06 01:15:18.562521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.158 [2024-12-06 01:15:18.562555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.158 qpair failed and we were unable to recover it. 00:37:46.158 [2024-12-06 01:15:18.562693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.158 [2024-12-06 01:15:18.562747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.158 qpair failed and we were unable to recover it. 00:37:46.158 [2024-12-06 01:15:18.562912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.158 [2024-12-06 01:15:18.562947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.158 qpair failed and we were unable to recover it. 00:37:46.158 [2024-12-06 01:15:18.563077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.158 [2024-12-06 01:15:18.563111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.158 qpair failed and we were unable to recover it. 00:37:46.158 [2024-12-06 01:15:18.563244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.158 [2024-12-06 01:15:18.563278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.158 qpair failed and we were unable to recover it. 00:37:46.158 [2024-12-06 01:15:18.563417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.158 [2024-12-06 01:15:18.563450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.158 qpair failed and we were unable to recover it. 00:37:46.158 [2024-12-06 01:15:18.563578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.158 [2024-12-06 01:15:18.563613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.158 qpair failed and we were unable to recover it. 00:37:46.158 [2024-12-06 01:15:18.563773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.158 [2024-12-06 01:15:18.563807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.158 qpair failed and we were unable to recover it. 
00:37:46.158 [2024-12-06 01:15:18.563925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.158 [2024-12-06 01:15:18.563960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.158 qpair failed and we were unable to recover it. 00:37:46.158 [2024-12-06 01:15:18.564095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.158 [2024-12-06 01:15:18.564129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.158 qpair failed and we were unable to recover it. 00:37:46.158 [2024-12-06 01:15:18.564260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.158 [2024-12-06 01:15:18.564294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.158 qpair failed and we were unable to recover it. 00:37:46.158 [2024-12-06 01:15:18.564401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.158 [2024-12-06 01:15:18.564435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.158 qpair failed and we were unable to recover it. 00:37:46.158 [2024-12-06 01:15:18.564552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.158 [2024-12-06 01:15:18.564601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.158 qpair failed and we were unable to recover it. 00:37:46.158 [2024-12-06 01:15:18.564719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.158 [2024-12-06 01:15:18.564756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.158 qpair failed and we were unable to recover it. 00:37:46.158 [2024-12-06 01:15:18.564906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.158 [2024-12-06 01:15:18.564942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.158 qpair failed and we were unable to recover it. 00:37:46.158 [2024-12-06 01:15:18.565054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.158 [2024-12-06 01:15:18.565089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.158 qpair failed and we were unable to recover it. 00:37:46.158 [2024-12-06 01:15:18.565198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.158 [2024-12-06 01:15:18.565233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.158 qpair failed and we were unable to recover it. 00:37:46.158 [2024-12-06 01:15:18.565344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.158 [2024-12-06 01:15:18.565379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.158 qpair failed and we were unable to recover it. 
00:37:46.158 [2024-12-06 01:15:18.565486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.158 [2024-12-06 01:15:18.565520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.158 qpair failed and we were unable to recover it. 00:37:46.158 [2024-12-06 01:15:18.565646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.158 [2024-12-06 01:15:18.565684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.158 qpair failed and we were unable to recover it. 00:37:46.158 [2024-12-06 01:15:18.565841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.158 [2024-12-06 01:15:18.565898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.158 qpair failed and we were unable to recover it. 00:37:46.158 [2024-12-06 01:15:18.566039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.158 [2024-12-06 01:15:18.566072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.158 qpair failed and we were unable to recover it. 00:37:46.158 [2024-12-06 01:15:18.566185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.158 [2024-12-06 01:15:18.566220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.158 qpair failed and we were unable to recover it. 00:37:46.158 [2024-12-06 01:15:18.566348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.158 [2024-12-06 01:15:18.566382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.158 qpair failed and we were unable to recover it. 00:37:46.158 [2024-12-06 01:15:18.566522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.158 [2024-12-06 01:15:18.566557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.158 qpair failed and we were unable to recover it. 00:37:46.158 [2024-12-06 01:15:18.566717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.158 [2024-12-06 01:15:18.566751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.158 qpair failed and we were unable to recover it. 00:37:46.158 [2024-12-06 01:15:18.566882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.158 [2024-12-06 01:15:18.566918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.158 qpair failed and we were unable to recover it. 00:37:46.158 [2024-12-06 01:15:18.567048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.158 [2024-12-06 01:15:18.567081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.158 qpair failed and we were unable to recover it. 
00:37:46.158 [2024-12-06 01:15:18.567219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.158 [2024-12-06 01:15:18.567252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.158 qpair failed and we were unable to recover it. 00:37:46.158 [2024-12-06 01:15:18.567363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.158 [2024-12-06 01:15:18.567396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.158 qpair failed and we were unable to recover it. 00:37:46.158 [2024-12-06 01:15:18.567528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.158 [2024-12-06 01:15:18.567562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.158 qpair failed and we were unable to recover it. 00:37:46.158 [2024-12-06 01:15:18.567665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.158 [2024-12-06 01:15:18.567698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.158 qpair failed and we were unable to recover it. 00:37:46.158 [2024-12-06 01:15:18.567809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.158 [2024-12-06 01:15:18.567853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.158 qpair failed and we were unable to recover it. 00:37:46.158 [2024-12-06 01:15:18.567998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.158 [2024-12-06 01:15:18.568047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.158 qpair failed and we were unable to recover it. 00:37:46.158 [2024-12-06 01:15:18.568225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.158 [2024-12-06 01:15:18.568263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.158 qpair failed and we were unable to recover it. 00:37:46.158 [2024-12-06 01:15:18.568417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.158 [2024-12-06 01:15:18.568453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.158 qpair failed and we were unable to recover it. 00:37:46.158 [2024-12-06 01:15:18.568620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.158 [2024-12-06 01:15:18.568671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.158 qpair failed and we were unable to recover it. 00:37:46.158 [2024-12-06 01:15:18.568799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.158 [2024-12-06 01:15:18.568863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.158 qpair failed and we were unable to recover it. 
00:37:46.158 [2024-12-06 01:15:18.568979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.158 [2024-12-06 01:15:18.569015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.158 qpair failed and we were unable to recover it. 00:37:46.158 [2024-12-06 01:15:18.569147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.158 [2024-12-06 01:15:18.569181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.158 qpair failed and we were unable to recover it. 00:37:46.158 [2024-12-06 01:15:18.569295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.158 [2024-12-06 01:15:18.569329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.158 qpair failed and we were unable to recover it. 00:37:46.158 [2024-12-06 01:15:18.569477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.158 [2024-12-06 01:15:18.569511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.158 qpair failed and we were unable to recover it. 00:37:46.158 [2024-12-06 01:15:18.569648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.158 [2024-12-06 01:15:18.569682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.158 qpair failed and we were unable to recover it. 00:37:46.158 [2024-12-06 01:15:18.569825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.158 [2024-12-06 01:15:18.569859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.158 qpair failed and we were unable to recover it. 00:37:46.158 [2024-12-06 01:15:18.569984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.158 [2024-12-06 01:15:18.570032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.158 qpair failed and we were unable to recover it. 00:37:46.158 [2024-12-06 01:15:18.570151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.158 [2024-12-06 01:15:18.570188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.158 qpair failed and we were unable to recover it. 00:37:46.158 [2024-12-06 01:15:18.570308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.158 [2024-12-06 01:15:18.570345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.158 qpair failed and we were unable to recover it. 00:37:46.158 [2024-12-06 01:15:18.570460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.158 [2024-12-06 01:15:18.570494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.158 qpair failed and we were unable to recover it. 
00:37:46.158 [2024-12-06 01:15:18.570615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.158 [2024-12-06 01:15:18.570650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.158 qpair failed and we were unable to recover it. 00:37:46.158 [2024-12-06 01:15:18.570822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.158 [2024-12-06 01:15:18.570858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.158 qpair failed and we were unable to recover it. 00:37:46.158 [2024-12-06 01:15:18.570994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.158 [2024-12-06 01:15:18.571029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.158 qpair failed and we were unable to recover it. 00:37:46.158 [2024-12-06 01:15:18.571160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.158 [2024-12-06 01:15:18.571195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.158 qpair failed and we were unable to recover it. 00:37:46.158 [2024-12-06 01:15:18.571299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.158 [2024-12-06 01:15:18.571334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.158 qpair failed and we were unable to recover it. 00:37:46.158 [2024-12-06 01:15:18.571471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.158 [2024-12-06 01:15:18.571505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.158 qpair failed and we were unable to recover it. 00:37:46.158 [2024-12-06 01:15:18.571614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.158 [2024-12-06 01:15:18.571648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.158 qpair failed and we were unable to recover it. 00:37:46.158 [2024-12-06 01:15:18.571790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.158 [2024-12-06 01:15:18.571833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.158 qpair failed and we were unable to recover it. 00:37:46.158 [2024-12-06 01:15:18.571970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.158 [2024-12-06 01:15:18.572004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.158 qpair failed and we were unable to recover it. 00:37:46.158 [2024-12-06 01:15:18.572110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.158 [2024-12-06 01:15:18.572145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.158 qpair failed and we were unable to recover it. 
00:37:46.158 [2024-12-06 01:15:18.572279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.158 [2024-12-06 01:15:18.572314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.158 qpair failed and we were unable to recover it. 00:37:46.158 [2024-12-06 01:15:18.572446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.158 [2024-12-06 01:15:18.572493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.158 qpair failed and we were unable to recover it. 00:37:46.158 [2024-12-06 01:15:18.572689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.158 [2024-12-06 01:15:18.572736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.159 qpair failed and we were unable to recover it. 00:37:46.159 [2024-12-06 01:15:18.572897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.159 [2024-12-06 01:15:18.572934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.159 qpair failed and we were unable to recover it. 00:37:46.159 [2024-12-06 01:15:18.573059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.159 [2024-12-06 01:15:18.573097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.159 qpair failed and we were unable to recover it. 00:37:46.159 [2024-12-06 01:15:18.573221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.159 [2024-12-06 01:15:18.573259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.159 qpair failed and we were unable to recover it. 00:37:46.159 [2024-12-06 01:15:18.573387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.159 [2024-12-06 01:15:18.573421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.159 qpair failed and we were unable to recover it. 00:37:46.159 [2024-12-06 01:15:18.573587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.159 [2024-12-06 01:15:18.573625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.159 qpair failed and we were unable to recover it. 00:37:46.159 [2024-12-06 01:15:18.573775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.159 [2024-12-06 01:15:18.573810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.159 qpair failed and we were unable to recover it. 00:37:46.159 [2024-12-06 01:15:18.573955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.159 [2024-12-06 01:15:18.574008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.159 qpair failed and we were unable to recover it. 
00:37:46.159 [2024-12-06 01:15:18.574142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.159 [2024-12-06 01:15:18.574181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.159 qpair failed and we were unable to recover it. 00:37:46.159 [2024-12-06 01:15:18.574370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.159 [2024-12-06 01:15:18.574408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.159 qpair failed and we were unable to recover it. 00:37:46.159 [2024-12-06 01:15:18.574536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.159 [2024-12-06 01:15:18.574574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.159 qpair failed and we were unable to recover it. 00:37:46.159 [2024-12-06 01:15:18.574716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.159 [2024-12-06 01:15:18.574754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.159 qpair failed and we were unable to recover it. 00:37:46.159 [2024-12-06 01:15:18.574897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.159 [2024-12-06 01:15:18.574938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.159 qpair failed and we were unable to recover it. 00:37:46.159 [2024-12-06 01:15:18.575071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.159 [2024-12-06 01:15:18.575109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.159 qpair failed and we were unable to recover it. 00:37:46.159 [2024-12-06 01:15:18.575264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.159 [2024-12-06 01:15:18.575303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.159 qpair failed and we were unable to recover it. 00:37:46.159 [2024-12-06 01:15:18.575428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.159 [2024-12-06 01:15:18.575466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.159 qpair failed and we were unable to recover it. 00:37:46.159 [2024-12-06 01:15:18.575620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.159 [2024-12-06 01:15:18.575658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.159 qpair failed and we were unable to recover it. 00:37:46.159 [2024-12-06 01:15:18.575824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.159 [2024-12-06 01:15:18.575859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.159 qpair failed and we were unable to recover it. 
00:37:46.159 [2024-12-06 01:15:18.575975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.159 [2024-12-06 01:15:18.576009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.159 qpair failed and we were unable to recover it. 00:37:46.159 [2024-12-06 01:15:18.576122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.159 [2024-12-06 01:15:18.576172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.159 qpair failed and we were unable to recover it. 00:37:46.159 [2024-12-06 01:15:18.576326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.159 [2024-12-06 01:15:18.576364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.159 qpair failed and we were unable to recover it. 00:37:46.159 [2024-12-06 01:15:18.576535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.159 [2024-12-06 01:15:18.576572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.159 qpair failed and we were unable to recover it. 00:37:46.159 [2024-12-06 01:15:18.576731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.159 [2024-12-06 01:15:18.576766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.159 qpair failed and we were unable to recover it. 00:37:46.159 [2024-12-06 01:15:18.576885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.159 [2024-12-06 01:15:18.576921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.159 qpair failed and we were unable to recover it. 00:37:46.159 [2024-12-06 01:15:18.577054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.159 [2024-12-06 01:15:18.577092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.159 qpair failed and we were unable to recover it. 00:37:46.159 [2024-12-06 01:15:18.577218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.159 [2024-12-06 01:15:18.577255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.159 qpair failed and we were unable to recover it. 00:37:46.159 [2024-12-06 01:15:18.577407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.159 [2024-12-06 01:15:18.577444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.159 qpair failed and we were unable to recover it. 00:37:46.159 [2024-12-06 01:15:18.577625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.159 [2024-12-06 01:15:18.577682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.159 qpair failed and we were unable to recover it. 
00:37:46.159 [2024-12-06 01:15:18.577799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.159 [2024-12-06 01:15:18.577860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.159 qpair failed and we were unable to recover it. 00:37:46.159 [2024-12-06 01:15:18.577994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.159 [2024-12-06 01:15:18.578032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.159 qpair failed and we were unable to recover it. 00:37:46.159 [2024-12-06 01:15:18.578189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.159 [2024-12-06 01:15:18.578223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.159 qpair failed and we were unable to recover it. 00:37:46.159 [2024-12-06 01:15:18.578358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.159 [2024-12-06 01:15:18.578393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.159 qpair failed and we were unable to recover it. 00:37:46.159 [2024-12-06 01:15:18.578509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.159 [2024-12-06 01:15:18.578544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.159 qpair failed and we were unable to recover it. 00:37:46.159 [2024-12-06 01:15:18.578677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.159 [2024-12-06 01:15:18.578712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.159 qpair failed and we were unable to recover it. 00:37:46.159 [2024-12-06 01:15:18.578851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.159 [2024-12-06 01:15:18.578886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.159 qpair failed and we were unable to recover it. 00:37:46.159 [2024-12-06 01:15:18.578998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.159 [2024-12-06 01:15:18.579031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.159 qpair failed and we were unable to recover it. 00:37:46.159 [2024-12-06 01:15:18.579190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.159 [2024-12-06 01:15:18.579224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.159 qpair failed and we were unable to recover it. 00:37:46.159 [2024-12-06 01:15:18.579334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.159 [2024-12-06 01:15:18.579368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.159 qpair failed and we were unable to recover it. 
00:37:46.159 [2024-12-06 01:15:18.579507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.159 [2024-12-06 01:15:18.579542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.159 qpair failed and we were unable to recover it. 00:37:46.159 [2024-12-06 01:15:18.579674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.159 [2024-12-06 01:15:18.579719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.159 qpair failed and we were unable to recover it. 00:37:46.159 [2024-12-06 01:15:18.579916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.159 [2024-12-06 01:15:18.579951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.159 qpair failed and we were unable to recover it. 00:37:46.159 [2024-12-06 01:15:18.580083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.159 [2024-12-06 01:15:18.580117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.159 qpair failed and we were unable to recover it. 00:37:46.159 [2024-12-06 01:15:18.580275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.159 [2024-12-06 01:15:18.580309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.159 qpair failed and we were unable to recover it. 00:37:46.159 [2024-12-06 01:15:18.580444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.159 [2024-12-06 01:15:18.580478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.159 qpair failed and we were unable to recover it. 00:37:46.159 [2024-12-06 01:15:18.580590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.159 [2024-12-06 01:15:18.580623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.159 qpair failed and we were unable to recover it. 00:37:46.159 [2024-12-06 01:15:18.580766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.159 [2024-12-06 01:15:18.580801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.159 qpair failed and we were unable to recover it. 00:37:46.159 [2024-12-06 01:15:18.580916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.159 [2024-12-06 01:15:18.580951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.159 qpair failed and we were unable to recover it. 00:37:46.159 [2024-12-06 01:15:18.581105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.159 [2024-12-06 01:15:18.581154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.159 qpair failed and we were unable to recover it. 
00:37:46.159 [2024-12-06 01:15:18.581268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.159 [2024-12-06 01:15:18.581305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.159 qpair failed and we were unable to recover it. 00:37:46.159 [2024-12-06 01:15:18.581470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.159 [2024-12-06 01:15:18.581505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.159 qpair failed and we were unable to recover it. 00:37:46.159 [2024-12-06 01:15:18.581619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.159 [2024-12-06 01:15:18.581655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.159 qpair failed and we were unable to recover it. 00:37:46.159 [2024-12-06 01:15:18.581824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.159 [2024-12-06 01:15:18.581859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.159 qpair failed and we were unable to recover it. 00:37:46.159 [2024-12-06 01:15:18.581992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.159 [2024-12-06 01:15:18.582033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.159 qpair failed and we were unable to recover it. 00:37:46.159 [2024-12-06 01:15:18.582164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.159 [2024-12-06 01:15:18.582199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.159 qpair failed and we were unable to recover it. 00:37:46.159 [2024-12-06 01:15:18.582313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.159 [2024-12-06 01:15:18.582348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.159 qpair failed and we were unable to recover it. 00:37:46.159 [2024-12-06 01:15:18.582453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.159 [2024-12-06 01:15:18.582489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.159 qpair failed and we were unable to recover it. 00:37:46.159 [2024-12-06 01:15:18.582596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.159 [2024-12-06 01:15:18.582630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.159 qpair failed and we were unable to recover it. 00:37:46.159 [2024-12-06 01:15:18.582760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.159 [2024-12-06 01:15:18.582795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.159 qpair failed and we were unable to recover it. 
00:37:46.159 [2024-12-06 01:15:18.582928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.159 [2024-12-06 01:15:18.582964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.159 qpair failed and we were unable to recover it. 00:37:46.159 [2024-12-06 01:15:18.583071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.159 [2024-12-06 01:15:18.583105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.159 qpair failed and we were unable to recover it. 00:37:46.159 [2024-12-06 01:15:18.583235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.159 [2024-12-06 01:15:18.583270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.159 qpair failed and we were unable to recover it. 00:37:46.159 [2024-12-06 01:15:18.583406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.159 [2024-12-06 01:15:18.583441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.159 qpair failed and we were unable to recover it. 00:37:46.159 [2024-12-06 01:15:18.583575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.159 [2024-12-06 01:15:18.583610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.159 qpair failed and we were unable to recover it. 00:37:46.159 [2024-12-06 01:15:18.583715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.159 [2024-12-06 01:15:18.583750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.159 qpair failed and we were unable to recover it. 00:37:46.159 [2024-12-06 01:15:18.583912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.159 [2024-12-06 01:15:18.583947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.159 qpair failed and we were unable to recover it. 00:37:46.159 [2024-12-06 01:15:18.584057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.159 [2024-12-06 01:15:18.584091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.159 qpair failed and we were unable to recover it. 00:37:46.159 [2024-12-06 01:15:18.584235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.159 [2024-12-06 01:15:18.584269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.159 qpair failed and we were unable to recover it. 00:37:46.159 [2024-12-06 01:15:18.584408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.159 [2024-12-06 01:15:18.584442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.159 qpair failed and we were unable to recover it. 
00:37:46.159 [2024-12-06 01:15:18.584544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.159 [2024-12-06 01:15:18.584579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.159 qpair failed and we were unable to recover it. 00:37:46.159 [2024-12-06 01:15:18.584744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.159 [2024-12-06 01:15:18.584778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.159 qpair failed and we were unable to recover it. 00:37:46.160 [2024-12-06 01:15:18.584929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.160 [2024-12-06 01:15:18.584964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.160 qpair failed and we were unable to recover it. 00:37:46.160 [2024-12-06 01:15:18.585110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.160 [2024-12-06 01:15:18.585144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.160 qpair failed and we were unable to recover it. 00:37:46.160 [2024-12-06 01:15:18.585252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.160 [2024-12-06 01:15:18.585287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.160 qpair failed and we were unable to recover it. 00:37:46.160 [2024-12-06 01:15:18.585447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.160 [2024-12-06 01:15:18.585482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.160 qpair failed and we were unable to recover it. 00:37:46.160 [2024-12-06 01:15:18.585608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.160 [2024-12-06 01:15:18.585646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.160 qpair failed and we were unable to recover it. 00:37:46.160 [2024-12-06 01:15:18.585806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.160 [2024-12-06 01:15:18.585867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.160 qpair failed and we were unable to recover it. 00:37:46.160 [2024-12-06 01:15:18.585971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.160 [2024-12-06 01:15:18.586005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.160 qpair failed and we were unable to recover it. 00:37:46.160 [2024-12-06 01:15:18.586144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.160 [2024-12-06 01:15:18.586179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.160 qpair failed and we were unable to recover it. 
00:37:46.160 [2024-12-06 01:15:18.586285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.160 [2024-12-06 01:15:18.586319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.160 qpair failed and we were unable to recover it. 00:37:46.160 [2024-12-06 01:15:18.586455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.160 [2024-12-06 01:15:18.586492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.160 qpair failed and we were unable to recover it. 00:37:46.160 [2024-12-06 01:15:18.586631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.160 [2024-12-06 01:15:18.586666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.160 qpair failed and we were unable to recover it. 00:37:46.160 [2024-12-06 01:15:18.586771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.160 [2024-12-06 01:15:18.586805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.160 qpair failed and we were unable to recover it. 00:37:46.160 [2024-12-06 01:15:18.586957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.160 [2024-12-06 01:15:18.586992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.160 qpair failed and we were unable to recover it. 00:37:46.160 [2024-12-06 01:15:18.587091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.160 [2024-12-06 01:15:18.587125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.160 qpair failed and we were unable to recover it. 00:37:46.160 [2024-12-06 01:15:18.587262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.160 [2024-12-06 01:15:18.587297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.160 qpair failed and we were unable to recover it. 00:37:46.160 [2024-12-06 01:15:18.587432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.160 [2024-12-06 01:15:18.587466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.160 qpair failed and we were unable to recover it. 00:37:46.160 [2024-12-06 01:15:18.587577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.160 [2024-12-06 01:15:18.587612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.160 qpair failed and we were unable to recover it. 00:37:46.160 [2024-12-06 01:15:18.587718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.160 [2024-12-06 01:15:18.587752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.160 qpair failed and we were unable to recover it. 
00:37:46.160 [2024-12-06 01:15:18.587889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.160 [2024-12-06 01:15:18.587923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.160 qpair failed and we were unable to recover it. 00:37:46.160 [2024-12-06 01:15:18.588054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.160 [2024-12-06 01:15:18.588088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.160 qpair failed and we were unable to recover it. 00:37:46.160 [2024-12-06 01:15:18.588228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.160 [2024-12-06 01:15:18.588263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.160 qpair failed and we were unable to recover it. 00:37:46.160 [2024-12-06 01:15:18.588382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.160 [2024-12-06 01:15:18.588417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.160 qpair failed and we were unable to recover it. 00:37:46.160 [2024-12-06 01:15:18.588525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.160 [2024-12-06 01:15:18.588564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.160 qpair failed and we were unable to recover it. 00:37:46.160 [2024-12-06 01:15:18.588672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.160 [2024-12-06 01:15:18.588706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.160 qpair failed and we were unable to recover it. 00:37:46.160 [2024-12-06 01:15:18.588823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.160 [2024-12-06 01:15:18.588858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.160 qpair failed and we were unable to recover it. 00:37:46.160 [2024-12-06 01:15:18.588993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.160 [2024-12-06 01:15:18.589028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.160 qpair failed and we were unable to recover it. 00:37:46.160 [2024-12-06 01:15:18.589161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.160 [2024-12-06 01:15:18.589194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.160 qpair failed and we were unable to recover it. 00:37:46.160 [2024-12-06 01:15:18.589297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.160 [2024-12-06 01:15:18.589331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.160 qpair failed and we were unable to recover it. 
00:37:46.160 [2024-12-06 01:15:18.589476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.160 [2024-12-06 01:15:18.589510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.160 qpair failed and we were unable to recover it. 00:37:46.160 [2024-12-06 01:15:18.589647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.160 [2024-12-06 01:15:18.589681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.160 qpair failed and we were unable to recover it. 00:37:46.160 [2024-12-06 01:15:18.589793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.160 [2024-12-06 01:15:18.589836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.160 qpair failed and we were unable to recover it. 00:37:46.160 [2024-12-06 01:15:18.589947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.160 [2024-12-06 01:15:18.589981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.160 qpair failed and we were unable to recover it. 00:37:46.160 [2024-12-06 01:15:18.590097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.160 [2024-12-06 01:15:18.590131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.160 qpair failed and we were unable to recover it. 00:37:46.160 [2024-12-06 01:15:18.590267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.160 [2024-12-06 01:15:18.590302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.160 qpair failed and we were unable to recover it. 00:37:46.160 [2024-12-06 01:15:18.590439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.160 [2024-12-06 01:15:18.590474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.160 qpair failed and we were unable to recover it. 00:37:46.160 [2024-12-06 01:15:18.590585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.160 [2024-12-06 01:15:18.590619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.160 qpair failed and we were unable to recover it. 00:37:46.160 [2024-12-06 01:15:18.590748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.160 [2024-12-06 01:15:18.590785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.160 qpair failed and we were unable to recover it. 00:37:46.160 [2024-12-06 01:15:18.590998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.160 [2024-12-06 01:15:18.591046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.160 qpair failed and we were unable to recover it. 
00:37:46.160 [2024-12-06 01:15:18.591196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.160 [2024-12-06 01:15:18.591245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.160 qpair failed and we were unable to recover it. 00:37:46.160 [2024-12-06 01:15:18.591385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.160 [2024-12-06 01:15:18.591421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.160 qpair failed and we were unable to recover it. 00:37:46.160 [2024-12-06 01:15:18.591560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.160 [2024-12-06 01:15:18.591595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.160 qpair failed and we were unable to recover it. 00:37:46.160 [2024-12-06 01:15:18.591731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.160 [2024-12-06 01:15:18.591765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.160 qpair failed and we were unable to recover it. 00:37:46.160 [2024-12-06 01:15:18.591916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.160 [2024-12-06 01:15:18.591952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.160 qpair failed and we were unable to recover it. 00:37:46.160 [2024-12-06 01:15:18.592056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.160 [2024-12-06 01:15:18.592090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.160 qpair failed and we were unable to recover it. 00:37:46.160 [2024-12-06 01:15:18.592229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.160 [2024-12-06 01:15:18.592263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.160 qpair failed and we were unable to recover it. 00:37:46.160 [2024-12-06 01:15:18.592427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.160 [2024-12-06 01:15:18.592462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.160 qpair failed and we were unable to recover it. 00:37:46.160 [2024-12-06 01:15:18.592573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.160 [2024-12-06 01:15:18.592606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.160 qpair failed and we were unable to recover it. 00:37:46.160 [2024-12-06 01:15:18.592718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.160 [2024-12-06 01:15:18.592754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.160 qpair failed and we were unable to recover it. 
00:37:46.160 [2024-12-06 01:15:18.592872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.160 [2024-12-06 01:15:18.592908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.160 qpair failed and we were unable to recover it. 00:37:46.160 [2024-12-06 01:15:18.593049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.160 [2024-12-06 01:15:18.593086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.160 qpair failed and we were unable to recover it. 00:37:46.160 [2024-12-06 01:15:18.593200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.160 [2024-12-06 01:15:18.593235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.160 qpair failed and we were unable to recover it. 00:37:46.160 [2024-12-06 01:15:18.593348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.160 [2024-12-06 01:15:18.593382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.160 qpair failed and we were unable to recover it. 00:37:46.160 [2024-12-06 01:15:18.593517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.160 [2024-12-06 01:15:18.593551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.160 qpair failed and we were unable to recover it. 00:37:46.160 [2024-12-06 01:15:18.593699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.160 [2024-12-06 01:15:18.593733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.160 qpair failed and we were unable to recover it. 00:37:46.160 [2024-12-06 01:15:18.593849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.160 [2024-12-06 01:15:18.593883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.160 qpair failed and we were unable to recover it. 00:37:46.160 [2024-12-06 01:15:18.594017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.160 [2024-12-06 01:15:18.594051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.160 qpair failed and we were unable to recover it. 00:37:46.160 [2024-12-06 01:15:18.594151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.160 [2024-12-06 01:15:18.594185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.160 qpair failed and we were unable to recover it. 00:37:46.160 [2024-12-06 01:15:18.594304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.160 [2024-12-06 01:15:18.594338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.160 qpair failed and we were unable to recover it. 
00:37:46.160 [2024-12-06 01:15:18.594449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.160 [2024-12-06 01:15:18.594484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.160 qpair failed and we were unable to recover it. 00:37:46.160 [2024-12-06 01:15:18.594622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.160 [2024-12-06 01:15:18.594656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.160 qpair failed and we were unable to recover it. 00:37:46.160 [2024-12-06 01:15:18.594794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.160 [2024-12-06 01:15:18.594833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.160 qpair failed and we were unable to recover it. 00:37:46.160 [2024-12-06 01:15:18.594943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.160 [2024-12-06 01:15:18.594978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.160 qpair failed and we were unable to recover it. 00:37:46.160 [2024-12-06 01:15:18.595114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.160 [2024-12-06 01:15:18.595153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.160 qpair failed and we were unable to recover it. 00:37:46.160 [2024-12-06 01:15:18.595303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.160 [2024-12-06 01:15:18.595336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.160 qpair failed and we were unable to recover it. 00:37:46.160 [2024-12-06 01:15:18.595474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.160 [2024-12-06 01:15:18.595508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.160 qpair failed and we were unable to recover it. 00:37:46.160 [2024-12-06 01:15:18.595621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.160 [2024-12-06 01:15:18.595655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.160 qpair failed and we were unable to recover it. 00:37:46.160 [2024-12-06 01:15:18.595822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.160 [2024-12-06 01:15:18.595856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.160 qpair failed and we were unable to recover it. 00:37:46.160 [2024-12-06 01:15:18.595966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.160 [2024-12-06 01:15:18.596000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.160 qpair failed and we were unable to recover it. 
00:37:46.160 [2024-12-06 01:15:18.596118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.160 [2024-12-06 01:15:18.596153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.160 qpair failed and we were unable to recover it. 00:37:46.160 [2024-12-06 01:15:18.596309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.160 [2024-12-06 01:15:18.596343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.160 qpair failed and we were unable to recover it. 00:37:46.160 [2024-12-06 01:15:18.596444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.160 [2024-12-06 01:15:18.596478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.160 qpair failed and we were unable to recover it. 00:37:46.160 [2024-12-06 01:15:18.596635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.160 [2024-12-06 01:15:18.596669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.160 qpair failed and we were unable to recover it. 00:37:46.161 [2024-12-06 01:15:18.596801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.161 [2024-12-06 01:15:18.596844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.161 qpair failed and we were unable to recover it. 00:37:46.161 [2024-12-06 01:15:18.596997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.161 [2024-12-06 01:15:18.597045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.161 qpair failed and we were unable to recover it. 00:37:46.161 [2024-12-06 01:15:18.597166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.161 [2024-12-06 01:15:18.597203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.161 qpair failed and we were unable to recover it. 00:37:46.161 [2024-12-06 01:15:18.597368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.161 [2024-12-06 01:15:18.597404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.161 qpair failed and we were unable to recover it. 00:37:46.161 [2024-12-06 01:15:18.597543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.161 [2024-12-06 01:15:18.597577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.161 qpair failed and we were unable to recover it. 00:37:46.161 [2024-12-06 01:15:18.597712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.161 [2024-12-06 01:15:18.597746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.161 qpair failed and we were unable to recover it. 
00:37:46.161 [2024-12-06 01:15:18.597855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.161 [2024-12-06 01:15:18.597890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.161 qpair failed and we were unable to recover it. 00:37:46.161 [2024-12-06 01:15:18.598026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.161 [2024-12-06 01:15:18.598061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.161 qpair failed and we were unable to recover it. 00:37:46.161 [2024-12-06 01:15:18.598201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.161 [2024-12-06 01:15:18.598237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.161 qpair failed and we were unable to recover it. 00:37:46.161 [2024-12-06 01:15:18.598374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.161 [2024-12-06 01:15:18.598409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.161 qpair failed and we were unable to recover it. 00:37:46.161 [2024-12-06 01:15:18.598553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.161 [2024-12-06 01:15:18.598588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.161 qpair failed and we were unable to recover it. 00:37:46.161 [2024-12-06 01:15:18.598753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.161 [2024-12-06 01:15:18.598788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.161 qpair failed and we were unable to recover it. 00:37:46.161 [2024-12-06 01:15:18.598862] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2780 (9): Bad file descriptor 00:37:46.161 [2024-12-06 01:15:18.599038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.161 [2024-12-06 01:15:18.599074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.161 qpair failed and we were unable to recover it. 00:37:46.161 [2024-12-06 01:15:18.599194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.161 [2024-12-06 01:15:18.599229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.161 qpair failed and we were unable to recover it. 00:37:46.161 [2024-12-06 01:15:18.599328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.161 [2024-12-06 01:15:18.599363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.161 qpair failed and we were unable to recover it. 
00:37:46.161 [2024-12-06 01:15:18.599512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.161 [2024-12-06 01:15:18.599547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.161 qpair failed and we were unable to recover it. 00:37:46.161 [2024-12-06 01:15:18.599692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.161 [2024-12-06 01:15:18.599736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.161 qpair failed and we were unable to recover it. 00:37:46.161 [2024-12-06 01:15:18.599885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.161 [2024-12-06 01:15:18.599921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.161 qpair failed and we were unable to recover it. 00:37:46.161 [2024-12-06 01:15:18.600028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.161 [2024-12-06 01:15:18.600063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.161 qpair failed and we were unable to recover it. 00:37:46.161 [2024-12-06 01:15:18.600208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.161 [2024-12-06 01:15:18.600242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.161 qpair failed and we were unable to recover it. 00:37:46.161 [2024-12-06 01:15:18.600375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.161 [2024-12-06 01:15:18.600409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.161 qpair failed and we were unable to recover it. 00:37:46.161 [2024-12-06 01:15:18.600569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.161 [2024-12-06 01:15:18.600603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.161 qpair failed and we were unable to recover it. 00:37:46.161 [2024-12-06 01:15:18.600704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.161 [2024-12-06 01:15:18.600738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.161 qpair failed and we were unable to recover it. 00:37:46.161 [2024-12-06 01:15:18.600872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.161 [2024-12-06 01:15:18.600906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.161 qpair failed and we were unable to recover it. 00:37:46.161 [2024-12-06 01:15:18.601011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.161 [2024-12-06 01:15:18.601045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.161 qpair failed and we were unable to recover it. 
[... the same three-line connect() failure repeats for every subsequent reconnect attempt (timestamps 2024-12-06 01:15:18.601172 through 01:15:18.634425, console time 00:37:46.161 to 00:37:46.164), cycling among tqpair=0x6150001ffe80, 0x61500021ff00 and 0x6150001f2f00, always against addr=10.0.0.2, port=4420 with errno = 111, and every attempt ends with "qpair failed and we were unable to recover it." ...]
00:37:46.164 [2024-12-06 01:15:18.634586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.164 [2024-12-06 01:15:18.634620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.164 qpair failed and we were unable to recover it.
00:37:46.164 [2024-12-06 01:15:18.634756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.164 [2024-12-06 01:15:18.634790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.164 qpair failed and we were unable to recover it. 00:37:46.164 [2024-12-06 01:15:18.634938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.164 [2024-12-06 01:15:18.634973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.164 qpair failed and we were unable to recover it. 00:37:46.164 [2024-12-06 01:15:18.635110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.164 [2024-12-06 01:15:18.635144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.164 qpair failed and we were unable to recover it. 00:37:46.164 [2024-12-06 01:15:18.635258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.164 [2024-12-06 01:15:18.635291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.164 qpair failed and we were unable to recover it. 00:37:46.164 [2024-12-06 01:15:18.635400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.164 [2024-12-06 01:15:18.635435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.164 qpair failed and we were unable to recover it. 00:37:46.164 [2024-12-06 01:15:18.635639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.164 [2024-12-06 01:15:18.635709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.164 qpair failed and we were unable to recover it. 00:37:46.164 [2024-12-06 01:15:18.635910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.164 [2024-12-06 01:15:18.635964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.164 qpair failed and we were unable to recover it. 00:37:46.164 [2024-12-06 01:15:18.636131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.164 [2024-12-06 01:15:18.636172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.164 qpair failed and we were unable to recover it. 00:37:46.164 [2024-12-06 01:15:18.636373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.164 [2024-12-06 01:15:18.636428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.164 qpair failed and we were unable to recover it. 00:37:46.164 [2024-12-06 01:15:18.636692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.164 [2024-12-06 01:15:18.636748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.164 qpair failed and we were unable to recover it. 
00:37:46.164 [2024-12-06 01:15:18.636914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.164 [2024-12-06 01:15:18.636953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.164 qpair failed and we were unable to recover it. 00:37:46.164 [2024-12-06 01:15:18.637080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.164 [2024-12-06 01:15:18.637119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.164 qpair failed and we were unable to recover it. 00:37:46.164 [2024-12-06 01:15:18.637284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.164 [2024-12-06 01:15:18.637348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.164 qpair failed and we were unable to recover it. 00:37:46.164 [2024-12-06 01:15:18.637466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.164 [2024-12-06 01:15:18.637504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.164 qpair failed and we were unable to recover it. 00:37:46.164 [2024-12-06 01:15:18.637658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.164 [2024-12-06 01:15:18.637695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.164 qpair failed and we were unable to recover it. 00:37:46.164 [2024-12-06 01:15:18.637829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.164 [2024-12-06 01:15:18.637864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.164 qpair failed and we were unable to recover it. 00:37:46.164 [2024-12-06 01:15:18.638002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.164 [2024-12-06 01:15:18.638040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.164 qpair failed and we were unable to recover it. 00:37:46.164 [2024-12-06 01:15:18.638213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.164 [2024-12-06 01:15:18.638251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.164 qpair failed and we were unable to recover it. 00:37:46.164 [2024-12-06 01:15:18.638452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.164 [2024-12-06 01:15:18.638496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.164 qpair failed and we were unable to recover it. 00:37:46.164 [2024-12-06 01:15:18.638649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.164 [2024-12-06 01:15:18.638701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.164 qpair failed and we were unable to recover it. 
00:37:46.164 [2024-12-06 01:15:18.638855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.164 [2024-12-06 01:15:18.638924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.164 qpair failed and we were unable to recover it. 00:37:46.164 [2024-12-06 01:15:18.639080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.164 [2024-12-06 01:15:18.639120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.164 qpair failed and we were unable to recover it. 00:37:46.164 [2024-12-06 01:15:18.639299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.164 [2024-12-06 01:15:18.639358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.164 qpair failed and we were unable to recover it. 00:37:46.164 [2024-12-06 01:15:18.639491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.164 [2024-12-06 01:15:18.639529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.164 qpair failed and we were unable to recover it. 00:37:46.164 [2024-12-06 01:15:18.639686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.164 [2024-12-06 01:15:18.639721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.164 qpair failed and we were unable to recover it. 00:37:46.164 [2024-12-06 01:15:18.639900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.164 [2024-12-06 01:15:18.639948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.164 qpair failed and we were unable to recover it. 00:37:46.164 [2024-12-06 01:15:18.640090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.164 [2024-12-06 01:15:18.640126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.164 qpair failed and we were unable to recover it. 00:37:46.164 [2024-12-06 01:15:18.640261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.164 [2024-12-06 01:15:18.640298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.164 qpair failed and we were unable to recover it. 00:37:46.164 [2024-12-06 01:15:18.640458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.164 [2024-12-06 01:15:18.640496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.164 qpair failed and we were unable to recover it. 00:37:46.164 [2024-12-06 01:15:18.640625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.164 [2024-12-06 01:15:18.640662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.164 qpair failed and we were unable to recover it. 
00:37:46.164 [2024-12-06 01:15:18.640853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.164 [2024-12-06 01:15:18.640888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.164 qpair failed and we were unable to recover it. 00:37:46.164 [2024-12-06 01:15:18.640990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.164 [2024-12-06 01:15:18.641024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.164 qpair failed and we were unable to recover it. 00:37:46.164 [2024-12-06 01:15:18.641133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.164 [2024-12-06 01:15:18.641184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.164 qpair failed and we were unable to recover it. 00:37:46.164 [2024-12-06 01:15:18.641304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.164 [2024-12-06 01:15:18.641342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.164 qpair failed and we were unable to recover it. 00:37:46.164 [2024-12-06 01:15:18.641492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.164 [2024-12-06 01:15:18.641530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.164 qpair failed and we were unable to recover it. 00:37:46.164 [2024-12-06 01:15:18.641672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.164 [2024-12-06 01:15:18.641709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.164 qpair failed and we were unable to recover it. 00:37:46.164 [2024-12-06 01:15:18.641837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.164 [2024-12-06 01:15:18.641872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.164 qpair failed and we were unable to recover it. 00:37:46.164 [2024-12-06 01:15:18.641974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.164 [2024-12-06 01:15:18.642008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.164 qpair failed and we were unable to recover it. 00:37:46.164 [2024-12-06 01:15:18.642178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.164 [2024-12-06 01:15:18.642232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.164 qpair failed and we were unable to recover it. 00:37:46.164 [2024-12-06 01:15:18.642391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.164 [2024-12-06 01:15:18.642433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.164 qpair failed and we were unable to recover it. 
00:37:46.164 [2024-12-06 01:15:18.642589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.164 [2024-12-06 01:15:18.642629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.164 qpair failed and we were unable to recover it. 00:37:46.164 [2024-12-06 01:15:18.642784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.164 [2024-12-06 01:15:18.642833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.164 qpair failed and we were unable to recover it. 00:37:46.164 [2024-12-06 01:15:18.643014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.164 [2024-12-06 01:15:18.643052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.164 qpair failed and we were unable to recover it. 00:37:46.164 [2024-12-06 01:15:18.643195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.164 [2024-12-06 01:15:18.643257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.164 qpair failed and we were unable to recover it. 00:37:46.164 [2024-12-06 01:15:18.643467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.164 [2024-12-06 01:15:18.643524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.164 qpair failed and we were unable to recover it. 00:37:46.164 [2024-12-06 01:15:18.643722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.164 [2024-12-06 01:15:18.643767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.164 qpair failed and we were unable to recover it. 00:37:46.164 [2024-12-06 01:15:18.643997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.164 [2024-12-06 01:15:18.644051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.164 qpair failed and we were unable to recover it. 00:37:46.164 [2024-12-06 01:15:18.644174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.164 [2024-12-06 01:15:18.644215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.164 qpair failed and we were unable to recover it. 00:37:46.164 [2024-12-06 01:15:18.644390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.164 [2024-12-06 01:15:18.644451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.164 qpair failed and we were unable to recover it. 00:37:46.164 [2024-12-06 01:15:18.644592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.164 [2024-12-06 01:15:18.644648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.164 qpair failed and we were unable to recover it. 
00:37:46.164 [2024-12-06 01:15:18.644796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.164 [2024-12-06 01:15:18.644854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.164 qpair failed and we were unable to recover it. 00:37:46.164 [2024-12-06 01:15:18.644986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.164 [2024-12-06 01:15:18.645027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.164 qpair failed and we were unable to recover it. 00:37:46.164 [2024-12-06 01:15:18.645229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.164 [2024-12-06 01:15:18.645291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.164 qpair failed and we were unable to recover it. 00:37:46.164 [2024-12-06 01:15:18.645436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.165 [2024-12-06 01:15:18.645493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.165 qpair failed and we were unable to recover it. 00:37:46.165 [2024-12-06 01:15:18.645669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.165 [2024-12-06 01:15:18.645707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.165 qpair failed and we were unable to recover it. 00:37:46.165 [2024-12-06 01:15:18.645833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.165 [2024-12-06 01:15:18.645867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.165 qpair failed and we were unable to recover it. 00:37:46.165 [2024-12-06 01:15:18.646003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.165 [2024-12-06 01:15:18.646056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.165 qpair failed and we were unable to recover it. 00:37:46.165 [2024-12-06 01:15:18.646183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.165 [2024-12-06 01:15:18.646220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.165 qpair failed and we were unable to recover it. 00:37:46.165 [2024-12-06 01:15:18.646363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.165 [2024-12-06 01:15:18.646408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.165 qpair failed and we were unable to recover it. 00:37:46.165 [2024-12-06 01:15:18.646521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.165 [2024-12-06 01:15:18.646558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.165 qpair failed and we were unable to recover it. 
00:37:46.165 [2024-12-06 01:15:18.646711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.165 [2024-12-06 01:15:18.646749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.165 qpair failed and we were unable to recover it. 00:37:46.165 [2024-12-06 01:15:18.646908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.165 [2024-12-06 01:15:18.646942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.165 qpair failed and we were unable to recover it. 00:37:46.165 [2024-12-06 01:15:18.647071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.165 [2024-12-06 01:15:18.647109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.165 qpair failed and we were unable to recover it. 00:37:46.165 [2024-12-06 01:15:18.647247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.165 [2024-12-06 01:15:18.647284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.165 qpair failed and we were unable to recover it. 00:37:46.165 [2024-12-06 01:15:18.647423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.165 [2024-12-06 01:15:18.647460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.165 qpair failed and we were unable to recover it. 00:37:46.165 [2024-12-06 01:15:18.647606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.165 [2024-12-06 01:15:18.647643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.165 qpair failed and we were unable to recover it. 00:37:46.165 [2024-12-06 01:15:18.647780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.165 [2024-12-06 01:15:18.647821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.165 qpair failed and we were unable to recover it. 00:37:46.165 [2024-12-06 01:15:18.647962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.165 [2024-12-06 01:15:18.648000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.165 qpair failed and we were unable to recover it. 00:37:46.165 [2024-12-06 01:15:18.648178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.165 [2024-12-06 01:15:18.648234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.165 qpair failed and we were unable to recover it. 00:37:46.165 [2024-12-06 01:15:18.648361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.165 [2024-12-06 01:15:18.648402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.165 qpair failed and we were unable to recover it. 
00:37:46.165 [2024-12-06 01:15:18.648546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.165 [2024-12-06 01:15:18.648581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.165 qpair failed and we were unable to recover it. 00:37:46.165 [2024-12-06 01:15:18.648693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.165 [2024-12-06 01:15:18.648728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.165 qpair failed and we were unable to recover it. 00:37:46.165 [2024-12-06 01:15:18.648924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.165 [2024-12-06 01:15:18.648963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.165 qpair failed and we were unable to recover it. 00:37:46.165 [2024-12-06 01:15:18.649146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.165 [2024-12-06 01:15:18.649186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.165 qpair failed and we were unable to recover it. 00:37:46.165 [2024-12-06 01:15:18.649337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.165 [2024-12-06 01:15:18.649377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.165 qpair failed and we were unable to recover it. 00:37:46.165 [2024-12-06 01:15:18.649529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.165 [2024-12-06 01:15:18.649567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.165 qpair failed and we were unable to recover it. 00:37:46.165 [2024-12-06 01:15:18.649700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.165 [2024-12-06 01:15:18.649734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.165 qpair failed and we were unable to recover it. 00:37:46.165 [2024-12-06 01:15:18.649893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.165 [2024-12-06 01:15:18.649927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.165 qpair failed and we were unable to recover it. 00:37:46.165 [2024-12-06 01:15:18.650032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.165 [2024-12-06 01:15:18.650067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.165 qpair failed and we were unable to recover it. 00:37:46.165 [2024-12-06 01:15:18.650249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.165 [2024-12-06 01:15:18.650286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.165 qpair failed and we were unable to recover it. 
00:37:46.165 [2024-12-06 01:15:18.650482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.165 [2024-12-06 01:15:18.650545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.165 qpair failed and we were unable to recover it. 00:37:46.165 [2024-12-06 01:15:18.650671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.165 [2024-12-06 01:15:18.650710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.165 qpair failed and we were unable to recover it. 00:37:46.165 [2024-12-06 01:15:18.650863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.165 [2024-12-06 01:15:18.650898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.165 qpair failed and we were unable to recover it. 00:37:46.165 [2024-12-06 01:15:18.651057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.165 [2024-12-06 01:15:18.651091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.165 qpair failed and we were unable to recover it. 00:37:46.165 [2024-12-06 01:15:18.651203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.165 [2024-12-06 01:15:18.651253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.165 qpair failed and we were unable to recover it. 00:37:46.165 [2024-12-06 01:15:18.651439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.165 [2024-12-06 01:15:18.651506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.165 qpair failed and we were unable to recover it. 00:37:46.165 [2024-12-06 01:15:18.651653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.165 [2024-12-06 01:15:18.651688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.165 qpair failed and we were unable to recover it. 00:37:46.165 [2024-12-06 01:15:18.651800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.165 [2024-12-06 01:15:18.651841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.165 qpair failed and we were unable to recover it. 00:37:46.165 [2024-12-06 01:15:18.651985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.165 [2024-12-06 01:15:18.652019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.165 qpair failed and we were unable to recover it. 00:37:46.165 [2024-12-06 01:15:18.652179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.165 [2024-12-06 01:15:18.652216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.165 qpair failed and we were unable to recover it. 
00:37:46.165 [2024-12-06 01:15:18.652365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.165 [2024-12-06 01:15:18.652403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.165 qpair failed and we were unable to recover it. 00:37:46.165 [2024-12-06 01:15:18.652533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.165 [2024-12-06 01:15:18.652572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.165 qpair failed and we were unable to recover it. 00:37:46.165 [2024-12-06 01:15:18.652712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.165 [2024-12-06 01:15:18.652747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.165 qpair failed and we were unable to recover it. 00:37:46.165 [2024-12-06 01:15:18.652884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.165 [2024-12-06 01:15:18.652933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.165 qpair failed and we were unable to recover it. 00:37:46.165 [2024-12-06 01:15:18.653076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.165 [2024-12-06 01:15:18.653110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.165 qpair failed and we were unable to recover it. 00:37:46.165 [2024-12-06 01:15:18.653271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.165 [2024-12-06 01:15:18.653309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.165 qpair failed and we were unable to recover it. 00:37:46.165 [2024-12-06 01:15:18.653419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.165 [2024-12-06 01:15:18.653457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.165 qpair failed and we were unable to recover it. 00:37:46.165 [2024-12-06 01:15:18.653589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.165 [2024-12-06 01:15:18.653642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.165 qpair failed and we were unable to recover it. 00:37:46.165 [2024-12-06 01:15:18.653751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.165 [2024-12-06 01:15:18.653786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.165 qpair failed and we were unable to recover it. 00:37:46.165 [2024-12-06 01:15:18.653918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.165 [2024-12-06 01:15:18.653954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.165 qpair failed and we were unable to recover it. 
00:37:46.165 [2024-12-06 01:15:18.654079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.165 [2024-12-06 01:15:18.654128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.165 qpair failed and we were unable to recover it. 00:37:46.165 [2024-12-06 01:15:18.654270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.165 [2024-12-06 01:15:18.654306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.165 qpair failed and we were unable to recover it. 00:37:46.165 [2024-12-06 01:15:18.654446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.165 [2024-12-06 01:15:18.654480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.165 qpair failed and we were unable to recover it. 00:37:46.165 [2024-12-06 01:15:18.654582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.165 [2024-12-06 01:15:18.654618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.165 qpair failed and we were unable to recover it. 00:37:46.165 [2024-12-06 01:15:18.654735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.165 [2024-12-06 01:15:18.654769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.165 qpair failed and we were unable to recover it. 00:37:46.165 [2024-12-06 01:15:18.654919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.165 [2024-12-06 01:15:18.654955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.165 qpair failed and we were unable to recover it. 00:37:46.165 [2024-12-06 01:15:18.655090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.165 [2024-12-06 01:15:18.655128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.165 qpair failed and we were unable to recover it. 00:37:46.165 [2024-12-06 01:15:18.655318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.165 [2024-12-06 01:15:18.655356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.165 qpair failed and we were unable to recover it. 00:37:46.165 [2024-12-06 01:15:18.655490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.165 [2024-12-06 01:15:18.655551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.165 qpair failed and we were unable to recover it. 00:37:46.165 [2024-12-06 01:15:18.655695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.165 [2024-12-06 01:15:18.655734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.165 qpair failed and we were unable to recover it. 
00:37:46.165 [2024-12-06 01:15:18.655880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.165 [2024-12-06 01:15:18.655914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.165 qpair failed and we were unable to recover it. 00:37:46.165 [2024-12-06 01:15:18.656071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.165 [2024-12-06 01:15:18.656110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.165 qpair failed and we were unable to recover it. 00:37:46.165 [2024-12-06 01:15:18.656286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.165 [2024-12-06 01:15:18.656347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.165 qpair failed and we were unable to recover it. 00:37:46.165 [2024-12-06 01:15:18.656470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.165 [2024-12-06 01:15:18.656508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.165 qpair failed and we were unable to recover it. 00:37:46.165 [2024-12-06 01:15:18.656689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.165 [2024-12-06 01:15:18.656726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.165 qpair failed and we were unable to recover it. 00:37:46.165 [2024-12-06 01:15:18.656904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.165 [2024-12-06 01:15:18.656957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.165 qpair failed and we were unable to recover it. 00:37:46.165 [2024-12-06 01:15:18.657070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.165 [2024-12-06 01:15:18.657107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.165 qpair failed and we were unable to recover it. 00:37:46.165 [2024-12-06 01:15:18.657272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.165 [2024-12-06 01:15:18.657328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.165 qpair failed and we were unable to recover it. 00:37:46.165 [2024-12-06 01:15:18.657488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.165 [2024-12-06 01:15:18.657526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.165 qpair failed and we were unable to recover it. 00:37:46.165 [2024-12-06 01:15:18.657682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.165 [2024-12-06 01:15:18.657716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.165 qpair failed and we were unable to recover it. 
00:37:46.165 [2024-12-06 01:15:18.657873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.165 [2024-12-06 01:15:18.657942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.165 qpair failed and we were unable to recover it. 00:37:46.165 [2024-12-06 01:15:18.658080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.166 [2024-12-06 01:15:18.658120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.166 qpair failed and we were unable to recover it. 00:37:46.166 [2024-12-06 01:15:18.658289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.166 [2024-12-06 01:15:18.658350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.166 qpair failed and we were unable to recover it. 00:37:46.166 [2024-12-06 01:15:18.658507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.166 [2024-12-06 01:15:18.658564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.166 qpair failed and we were unable to recover it. 00:37:46.166 [2024-12-06 01:15:18.658714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.166 [2024-12-06 01:15:18.658753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.166 qpair failed and we were unable to recover it. 00:37:46.166 [2024-12-06 01:15:18.658887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.166 [2024-12-06 01:15:18.658927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.166 qpair failed and we were unable to recover it. 00:37:46.166 [2024-12-06 01:15:18.659050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.166 [2024-12-06 01:15:18.659088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.166 qpair failed and we were unable to recover it. 00:37:46.166 [2024-12-06 01:15:18.659266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.166 [2024-12-06 01:15:18.659303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.166 qpair failed and we were unable to recover it. 00:37:46.166 [2024-12-06 01:15:18.659446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.166 [2024-12-06 01:15:18.659484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.166 qpair failed and we were unable to recover it. 00:37:46.166 [2024-12-06 01:15:18.659636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.166 [2024-12-06 01:15:18.659673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.166 qpair failed and we were unable to recover it. 
00:37:46.166 [2024-12-06 01:15:18.659828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.166 [2024-12-06 01:15:18.659881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.166 qpair failed and we were unable to recover it. 00:37:46.166 [2024-12-06 01:15:18.660001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.166 [2024-12-06 01:15:18.660040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.166 qpair failed and we were unable to recover it. 00:37:46.166 [2024-12-06 01:15:18.660235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.166 [2024-12-06 01:15:18.660274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.166 qpair failed and we were unable to recover it. 00:37:46.166 [2024-12-06 01:15:18.660448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.166 [2024-12-06 01:15:18.660485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.166 qpair failed and we were unable to recover it. 00:37:46.166 [2024-12-06 01:15:18.660638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.166 [2024-12-06 01:15:18.660673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.166 qpair failed and we were unable to recover it. 00:37:46.166 [2024-12-06 01:15:18.660780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.166 [2024-12-06 01:15:18.660824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.166 qpair failed and we were unable to recover it. 00:37:46.166 [2024-12-06 01:15:18.660971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.166 [2024-12-06 01:15:18.661026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.166 qpair failed and we were unable to recover it. 00:37:46.166 [2024-12-06 01:15:18.661196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.166 [2024-12-06 01:15:18.661237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.166 qpair failed and we were unable to recover it. 00:37:46.166 [2024-12-06 01:15:18.661413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.166 [2024-12-06 01:15:18.661474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.166 qpair failed and we were unable to recover it. 00:37:46.166 [2024-12-06 01:15:18.661608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.166 [2024-12-06 01:15:18.661647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.166 qpair failed and we were unable to recover it. 
00:37:46.166 [2024-12-06 01:15:18.661831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.166 [2024-12-06 01:15:18.661866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.166 qpair failed and we were unable to recover it. 00:37:46.166 [2024-12-06 01:15:18.662018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.166 [2024-12-06 01:15:18.662057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.166 qpair failed and we were unable to recover it. 00:37:46.166 [2024-12-06 01:15:18.662182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.166 [2024-12-06 01:15:18.662232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.166 qpair failed and we were unable to recover it. 00:37:46.166 [2024-12-06 01:15:18.662407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.166 [2024-12-06 01:15:18.662445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.166 qpair failed and we were unable to recover it. 00:37:46.166 [2024-12-06 01:15:18.662600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.166 [2024-12-06 01:15:18.662640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.166 qpair failed and we were unable to recover it. 00:37:46.166 [2024-12-06 01:15:18.662793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.166 [2024-12-06 01:15:18.662840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.166 qpair failed and we were unable to recover it. 00:37:46.166 [2024-12-06 01:15:18.662990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.166 [2024-12-06 01:15:18.663029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.166 qpair failed and we were unable to recover it. 00:37:46.166 [2024-12-06 01:15:18.663222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.166 [2024-12-06 01:15:18.663280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.166 qpair failed and we were unable to recover it. 00:37:46.166 [2024-12-06 01:15:18.663452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.166 [2024-12-06 01:15:18.663510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.166 qpair failed and we were unable to recover it. 00:37:46.166 [2024-12-06 01:15:18.663641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.166 [2024-12-06 01:15:18.663675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.166 qpair failed and we were unable to recover it. 
00:37:46.166 [2024-12-06 01:15:18.663796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.166 [2024-12-06 01:15:18.663840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.166 qpair failed and we were unable to recover it. 00:37:46.166 [2024-12-06 01:15:18.663976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.166 [2024-12-06 01:15:18.664030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.166 qpair failed and we were unable to recover it. 00:37:46.167 [2024-12-06 01:15:18.664177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.167 [2024-12-06 01:15:18.664215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.167 qpair failed and we were unable to recover it. 00:37:46.167 [2024-12-06 01:15:18.664387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.167 [2024-12-06 01:15:18.664442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.167 qpair failed and we were unable to recover it. 00:37:46.167 [2024-12-06 01:15:18.664567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.167 [2024-12-06 01:15:18.664606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.167 qpair failed and we were unable to recover it. 00:37:46.167 [2024-12-06 01:15:18.664748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.167 [2024-12-06 01:15:18.664782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.167 qpair failed and we were unable to recover it. 00:37:46.167 [2024-12-06 01:15:18.664894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.167 [2024-12-06 01:15:18.664929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.167 qpair failed and we were unable to recover it. 00:37:46.167 [2024-12-06 01:15:18.665055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.167 [2024-12-06 01:15:18.665104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.167 qpair failed and we were unable to recover it. 00:37:46.167 [2024-12-06 01:15:18.665250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.167 [2024-12-06 01:15:18.665291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.167 qpair failed and we were unable to recover it. 00:37:46.167 [2024-12-06 01:15:18.665458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.167 [2024-12-06 01:15:18.665496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.167 qpair failed and we were unable to recover it. 
00:37:46.167 [2024-12-06 01:15:18.665618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.167 [2024-12-06 01:15:18.665656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.167 qpair failed and we were unable to recover it. 00:37:46.167 [2024-12-06 01:15:18.665843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.167 [2024-12-06 01:15:18.665877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.167 qpair failed and we were unable to recover it. 00:37:46.167 [2024-12-06 01:15:18.665992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.167 [2024-12-06 01:15:18.666026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.167 qpair failed and we were unable to recover it. 00:37:46.167 [2024-12-06 01:15:18.666162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.167 [2024-12-06 01:15:18.666202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.167 qpair failed and we were unable to recover it. 00:37:46.167 [2024-12-06 01:15:18.666346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.167 [2024-12-06 01:15:18.666384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.167 qpair failed and we were unable to recover it. 00:37:46.167 [2024-12-06 01:15:18.666511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.167 [2024-12-06 01:15:18.666554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.167 qpair failed and we were unable to recover it. 00:37:46.167 [2024-12-06 01:15:18.666675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.167 [2024-12-06 01:15:18.666714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.167 qpair failed and we were unable to recover it. 00:37:46.167 [2024-12-06 01:15:18.666855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.167 [2024-12-06 01:15:18.666890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.167 qpair failed and we were unable to recover it. 00:37:46.167 [2024-12-06 01:15:18.667030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.167 [2024-12-06 01:15:18.667080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.167 qpair failed and we were unable to recover it. 00:37:46.167 [2024-12-06 01:15:18.667212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.167 [2024-12-06 01:15:18.667250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.167 qpair failed and we were unable to recover it. 
00:37:46.167 [2024-12-06 01:15:18.667398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.167 [2024-12-06 01:15:18.667437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.167 qpair failed and we were unable to recover it. 00:37:46.167 [2024-12-06 01:15:18.667597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.167 [2024-12-06 01:15:18.667650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.167 qpair failed and we were unable to recover it. 00:37:46.167 [2024-12-06 01:15:18.667790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.167 [2024-12-06 01:15:18.667835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.167 qpair failed and we were unable to recover it. 00:37:46.167 [2024-12-06 01:15:18.668007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.167 [2024-12-06 01:15:18.668046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.167 qpair failed and we were unable to recover it. 00:37:46.167 [2024-12-06 01:15:18.668197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.167 [2024-12-06 01:15:18.668234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.167 qpair failed and we were unable to recover it. 00:37:46.167 [2024-12-06 01:15:18.668348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.167 [2024-12-06 01:15:18.668386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.167 qpair failed and we were unable to recover it. 00:37:46.167 [2024-12-06 01:15:18.668591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.167 [2024-12-06 01:15:18.668646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.167 qpair failed and we were unable to recover it. 00:37:46.167 [2024-12-06 01:15:18.668764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.167 [2024-12-06 01:15:18.668826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.167 qpair failed and we were unable to recover it. 00:37:46.167 [2024-12-06 01:15:18.668982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.167 [2024-12-06 01:15:18.669036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.167 qpair failed and we were unable to recover it. 00:37:46.167 [2024-12-06 01:15:18.669214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.167 [2024-12-06 01:15:18.669276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.167 qpair failed and we were unable to recover it. 
00:37:46.167 [2024-12-06 01:15:18.669456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.167 [2024-12-06 01:15:18.669527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.167 qpair failed and we were unable to recover it. 00:37:46.167 [2024-12-06 01:15:18.669647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.167 [2024-12-06 01:15:18.669699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.167 qpair failed and we were unable to recover it. 00:37:46.167 [2024-12-06 01:15:18.669806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.167 [2024-12-06 01:15:18.669847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.167 qpair failed and we were unable to recover it. 00:37:46.167 [2024-12-06 01:15:18.670028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.167 [2024-12-06 01:15:18.670082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.167 qpair failed and we were unable to recover it. 00:37:46.167 [2024-12-06 01:15:18.670221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.167 [2024-12-06 01:15:18.670262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.167 qpair failed and we were unable to recover it. 00:37:46.167 [2024-12-06 01:15:18.670422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.167 [2024-12-06 01:15:18.670457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.167 qpair failed and we were unable to recover it. 00:37:46.167 [2024-12-06 01:15:18.670608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.168 [2024-12-06 01:15:18.670644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.168 qpair failed and we were unable to recover it. 00:37:46.168 [2024-12-06 01:15:18.670797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.168 [2024-12-06 01:15:18.670861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.168 qpair failed and we were unable to recover it. 00:37:46.168 [2024-12-06 01:15:18.670979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.168 [2024-12-06 01:15:18.671016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.168 qpair failed and we were unable to recover it. 00:37:46.168 [2024-12-06 01:15:18.671153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.168 [2024-12-06 01:15:18.671187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.168 qpair failed and we were unable to recover it. 
00:37:46.168 [2024-12-06 01:15:18.671299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.168 [2024-12-06 01:15:18.671332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.168 qpair failed and we were unable to recover it. 00:37:46.168 [2024-12-06 01:15:18.671447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.168 [2024-12-06 01:15:18.671480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.168 qpair failed and we were unable to recover it. 00:37:46.168 [2024-12-06 01:15:18.671620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.168 [2024-12-06 01:15:18.671657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.168 qpair failed and we were unable to recover it. 00:37:46.168 [2024-12-06 01:15:18.671841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.168 [2024-12-06 01:15:18.671890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.168 qpair failed and we were unable to recover it. 00:37:46.168 [2024-12-06 01:15:18.672007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.168 [2024-12-06 01:15:18.672045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.168 qpair failed and we were unable to recover it. 00:37:46.168 [2024-12-06 01:15:18.672189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.168 [2024-12-06 01:15:18.672225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.168 qpair failed and we were unable to recover it. 00:37:46.168 [2024-12-06 01:15:18.672361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.168 [2024-12-06 01:15:18.672396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.168 qpair failed and we were unable to recover it. 00:37:46.168 [2024-12-06 01:15:18.672561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.168 [2024-12-06 01:15:18.672595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.168 qpair failed and we were unable to recover it. 00:37:46.168 [2024-12-06 01:15:18.672754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.168 [2024-12-06 01:15:18.672802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.168 qpair failed and we were unable to recover it. 00:37:46.168 [2024-12-06 01:15:18.672931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.168 [2024-12-06 01:15:18.672966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.168 qpair failed and we were unable to recover it. 
00:37:46.168 [2024-12-06 01:15:18.673077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.168 [2024-12-06 01:15:18.673112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.168 qpair failed and we were unable to recover it. 00:37:46.168 [2024-12-06 01:15:18.673276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.168 [2024-12-06 01:15:18.673310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.168 qpair failed and we were unable to recover it. 00:37:46.168 [2024-12-06 01:15:18.673443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.168 [2024-12-06 01:15:18.673478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.168 qpair failed and we were unable to recover it. 00:37:46.168 [2024-12-06 01:15:18.673584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.168 [2024-12-06 01:15:18.673618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.168 qpair failed and we were unable to recover it. 00:37:46.168 [2024-12-06 01:15:18.673734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.168 [2024-12-06 01:15:18.673770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.168 qpair failed and we were unable to recover it. 00:37:46.168 [2024-12-06 01:15:18.673905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.168 [2024-12-06 01:15:18.673959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.168 qpair failed and we were unable to recover it. 00:37:46.168 [2024-12-06 01:15:18.674112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.168 [2024-12-06 01:15:18.674150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.168 qpair failed and we were unable to recover it. 00:37:46.168 [2024-12-06 01:15:18.674285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.168 [2024-12-06 01:15:18.674319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.168 qpair failed and we were unable to recover it. 00:37:46.168 [2024-12-06 01:15:18.674427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.168 [2024-12-06 01:15:18.674461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.168 qpair failed and we were unable to recover it. 00:37:46.168 [2024-12-06 01:15:18.674572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.168 [2024-12-06 01:15:18.674606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.168 qpair failed and we were unable to recover it. 
00:37:46.168 [2024-12-06 01:15:18.674755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.168 [2024-12-06 01:15:18.674808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.168 qpair failed and we were unable to recover it. 00:37:46.168 [2024-12-06 01:15:18.674969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.168 [2024-12-06 01:15:18.675018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.168 qpair failed and we were unable to recover it. 00:37:46.168 [2024-12-06 01:15:18.675166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.168 [2024-12-06 01:15:18.675203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.168 qpair failed and we were unable to recover it. 00:37:46.168 [2024-12-06 01:15:18.675346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.168 [2024-12-06 01:15:18.675383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.168 qpair failed and we were unable to recover it. 00:37:46.168 [2024-12-06 01:15:18.675521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.168 [2024-12-06 01:15:18.675555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.168 qpair failed and we were unable to recover it. 00:37:46.168 [2024-12-06 01:15:18.675696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.168 [2024-12-06 01:15:18.675730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.168 qpair failed and we were unable to recover it. 00:37:46.168 [2024-12-06 01:15:18.675861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.168 [2024-12-06 01:15:18.675909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.168 qpair failed and we were unable to recover it. 00:37:46.168 [2024-12-06 01:15:18.676026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.168 [2024-12-06 01:15:18.676065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.168 qpair failed and we were unable to recover it. 00:37:46.168 [2024-12-06 01:15:18.676234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.168 [2024-12-06 01:15:18.676269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.168 qpair failed and we were unable to recover it. 00:37:46.168 [2024-12-06 01:15:18.676418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.168 [2024-12-06 01:15:18.676452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.168 qpair failed and we were unable to recover it. 
00:37:46.168 [2024-12-06 01:15:18.676583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.168 [2024-12-06 01:15:18.676618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.168 qpair failed and we were unable to recover it. 00:37:46.168 [2024-12-06 01:15:18.676723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.168 [2024-12-06 01:15:18.676757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.168 qpair failed and we were unable to recover it. 00:37:46.168 [2024-12-06 01:15:18.676905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.168 [2024-12-06 01:15:18.676941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.169 qpair failed and we were unable to recover it. 00:37:46.169 [2024-12-06 01:15:18.677110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.169 [2024-12-06 01:15:18.677158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.169 qpair failed and we were unable to recover it. 00:37:46.169 [2024-12-06 01:15:18.677276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.169 [2024-12-06 01:15:18.677312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.169 qpair failed and we were unable to recover it. 00:37:46.169 [2024-12-06 01:15:18.677451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.169 [2024-12-06 01:15:18.677486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.169 qpair failed and we were unable to recover it. 00:37:46.169 [2024-12-06 01:15:18.677622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.169 [2024-12-06 01:15:18.677657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.169 qpair failed and we were unable to recover it. 00:37:46.169 [2024-12-06 01:15:18.677809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.169 [2024-12-06 01:15:18.677851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.169 qpair failed and we were unable to recover it. 00:37:46.169 [2024-12-06 01:15:18.677969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.169 [2024-12-06 01:15:18.678003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.169 qpair failed and we were unable to recover it. 00:37:46.169 [2024-12-06 01:15:18.678110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.169 [2024-12-06 01:15:18.678143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.169 qpair failed and we were unable to recover it. 
00:37:46.169 [2024-12-06 01:15:18.678246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.169 [2024-12-06 01:15:18.678281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.169 qpair failed and we were unable to recover it. 00:37:46.169 [2024-12-06 01:15:18.678445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.169 [2024-12-06 01:15:18.678478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.169 qpair failed and we were unable to recover it. 00:37:46.169 [2024-12-06 01:15:18.678616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.169 [2024-12-06 01:15:18.678651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.169 qpair failed and we were unable to recover it. 00:37:46.169 [2024-12-06 01:15:18.678781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.169 [2024-12-06 01:15:18.678827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.169 qpair failed and we were unable to recover it. 00:37:46.169 [2024-12-06 01:15:18.678941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.169 [2024-12-06 01:15:18.678977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.169 qpair failed and we were unable to recover it. 00:37:46.169 [2024-12-06 01:15:18.679085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.169 [2024-12-06 01:15:18.679119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.169 qpair failed and we were unable to recover it. 00:37:46.169 [2024-12-06 01:15:18.679251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.169 [2024-12-06 01:15:18.679285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.169 qpair failed and we were unable to recover it. 00:37:46.169 [2024-12-06 01:15:18.679418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.169 [2024-12-06 01:15:18.679452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.169 qpair failed and we were unable to recover it. 00:37:46.169 [2024-12-06 01:15:18.679592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.169 [2024-12-06 01:15:18.679629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.169 qpair failed and we were unable to recover it. 00:37:46.169 [2024-12-06 01:15:18.679770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.169 [2024-12-06 01:15:18.679805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.169 qpair failed and we were unable to recover it. 
00:37:46.169 [2024-12-06 01:15:18.679951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.169 [2024-12-06 01:15:18.679988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.169 qpair failed and we were unable to recover it. 00:37:46.169 [2024-12-06 01:15:18.680099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.169 [2024-12-06 01:15:18.680132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.169 qpair failed and we were unable to recover it. 00:37:46.169 [2024-12-06 01:15:18.680274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.169 [2024-12-06 01:15:18.680309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.169 qpair failed and we were unable to recover it. 00:37:46.169 [2024-12-06 01:15:18.680467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.169 [2024-12-06 01:15:18.680501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.169 qpair failed and we were unable to recover it. 00:37:46.169 [2024-12-06 01:15:18.680615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.169 [2024-12-06 01:15:18.680651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.169 qpair failed and we were unable to recover it. 00:37:46.169 [2024-12-06 01:15:18.680802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.169 [2024-12-06 01:15:18.680853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.169 qpair failed and we were unable to recover it. 00:37:46.169 [2024-12-06 01:15:18.681018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.169 [2024-12-06 01:15:18.681052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.169 qpair failed and we were unable to recover it. 00:37:46.169 [2024-12-06 01:15:18.681191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.169 [2024-12-06 01:15:18.681225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.169 qpair failed and we were unable to recover it. 00:37:46.169 [2024-12-06 01:15:18.681360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.169 [2024-12-06 01:15:18.681394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.169 qpair failed and we were unable to recover it. 00:37:46.169 [2024-12-06 01:15:18.681530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.169 [2024-12-06 01:15:18.681564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.169 qpair failed and we were unable to recover it. 
00:37:46.169 [2024-12-06 01:15:18.681704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.169 [2024-12-06 01:15:18.681738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.169 qpair failed and we were unable to recover it. 00:37:46.169 [2024-12-06 01:15:18.681877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.169 [2024-12-06 01:15:18.681912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.169 qpair failed and we were unable to recover it. 00:37:46.169 [2024-12-06 01:15:18.682044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.169 [2024-12-06 01:15:18.682078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.169 qpair failed and we were unable to recover it. 00:37:46.169 [2024-12-06 01:15:18.682220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.169 [2024-12-06 01:15:18.682255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.169 qpair failed and we were unable to recover it. 00:37:46.169 [2024-12-06 01:15:18.682387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.169 [2024-12-06 01:15:18.682420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.169 qpair failed and we were unable to recover it. 00:37:46.169 [2024-12-06 01:15:18.682569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.169 [2024-12-06 01:15:18.682603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.169 qpair failed and we were unable to recover it. 00:37:46.169 [2024-12-06 01:15:18.682749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.169 [2024-12-06 01:15:18.682797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.169 qpair failed and we were unable to recover it. 00:37:46.169 [2024-12-06 01:15:18.682934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.169 [2024-12-06 01:15:18.682980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.169 qpair failed and we were unable to recover it. 00:37:46.169 [2024-12-06 01:15:18.683163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.169 [2024-12-06 01:15:18.683211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.169 qpair failed and we were unable to recover it. 00:37:46.169 [2024-12-06 01:15:18.683432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.170 [2024-12-06 01:15:18.683473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.170 qpair failed and we were unable to recover it. 
00:37:46.170 [2024-12-06 01:15:18.683625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.170 [2024-12-06 01:15:18.683665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.170 qpair failed and we were unable to recover it. 00:37:46.170 [2024-12-06 01:15:18.683794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.170 [2024-12-06 01:15:18.683837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.170 qpair failed and we were unable to recover it. 00:37:46.170 [2024-12-06 01:15:18.684003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.170 [2024-12-06 01:15:18.684041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.170 qpair failed and we were unable to recover it. 00:37:46.170 [2024-12-06 01:15:18.684163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.170 [2024-12-06 01:15:18.684202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.170 qpair failed and we were unable to recover it. 00:37:46.170 [2024-12-06 01:15:18.684354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.170 [2024-12-06 01:15:18.684393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.170 qpair failed and we were unable to recover it. 00:37:46.170 [2024-12-06 01:15:18.684543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.170 [2024-12-06 01:15:18.684582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.170 qpair failed and we were unable to recover it. 00:37:46.170 [2024-12-06 01:15:18.684742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.170 [2024-12-06 01:15:18.684779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.170 qpair failed and we were unable to recover it. 00:37:46.170 [2024-12-06 01:15:18.684965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.170 [2024-12-06 01:15:18.685004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.170 qpair failed and we were unable to recover it. 00:37:46.170 [2024-12-06 01:15:18.685160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.170 [2024-12-06 01:15:18.685199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.170 qpair failed and we were unable to recover it. 00:37:46.170 [2024-12-06 01:15:18.685316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.170 [2024-12-06 01:15:18.685353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.170 qpair failed and we were unable to recover it. 
00:37:46.170 [2024-12-06 01:15:18.685496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.170 [2024-12-06 01:15:18.685533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.170 qpair failed and we were unable to recover it. 00:37:46.170 [2024-12-06 01:15:18.685678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.170 [2024-12-06 01:15:18.685716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.170 qpair failed and we were unable to recover it. 00:37:46.170 [2024-12-06 01:15:18.685854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.170 [2024-12-06 01:15:18.685919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.170 qpair failed and we were unable to recover it. 00:37:46.170 [2024-12-06 01:15:18.686064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.170 [2024-12-06 01:15:18.686105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.170 qpair failed and we were unable to recover it. 00:37:46.170 [2024-12-06 01:15:18.686219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.170 [2024-12-06 01:15:18.686256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.170 qpair failed and we were unable to recover it. 00:37:46.170 [2024-12-06 01:15:18.686431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.170 [2024-12-06 01:15:18.686489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.170 qpair failed and we were unable to recover it. 00:37:46.170 [2024-12-06 01:15:18.686640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.170 [2024-12-06 01:15:18.686673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.170 qpair failed and we were unable to recover it. 00:37:46.170 [2024-12-06 01:15:18.686820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.170 [2024-12-06 01:15:18.686857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.170 qpair failed and we were unable to recover it. 00:37:46.170 [2024-12-06 01:15:18.687009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.170 [2024-12-06 01:15:18.687045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.170 qpair failed and we were unable to recover it. 00:37:46.170 [2024-12-06 01:15:18.687207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.170 [2024-12-06 01:15:18.687244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.170 qpair failed and we were unable to recover it. 
00:37:46.170 [2024-12-06 01:15:18.687358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.170 [2024-12-06 01:15:18.687396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.170 qpair failed and we were unable to recover it. 00:37:46.170 [2024-12-06 01:15:18.687541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.170 [2024-12-06 01:15:18.687580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.170 qpair failed and we were unable to recover it. 00:37:46.170 [2024-12-06 01:15:18.687754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.170 [2024-12-06 01:15:18.687787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.170 qpair failed and we were unable to recover it. 00:37:46.170 [2024-12-06 01:15:18.687947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.170 [2024-12-06 01:15:18.688001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.170 qpair failed and we were unable to recover it. 00:37:46.170 [2024-12-06 01:15:18.688149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.170 [2024-12-06 01:15:18.688204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.170 qpair failed and we were unable to recover it. 00:37:46.170 [2024-12-06 01:15:18.688353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.170 [2024-12-06 01:15:18.688400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.170 qpair failed and we were unable to recover it. 00:37:46.170 [2024-12-06 01:15:18.688515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.170 [2024-12-06 01:15:18.688553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.170 qpair failed and we were unable to recover it. 00:37:46.170 [2024-12-06 01:15:18.688676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.170 [2024-12-06 01:15:18.688715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.170 qpair failed and we were unable to recover it. 00:37:46.170 [2024-12-06 01:15:18.688864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.170 [2024-12-06 01:15:18.688900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.170 qpair failed and we were unable to recover it. 00:37:46.170 [2024-12-06 01:15:18.689008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.170 [2024-12-06 01:15:18.689043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.170 qpair failed and we were unable to recover it. 
00:37:46.170 [2024-12-06 01:15:18.689152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.170 [2024-12-06 01:15:18.689188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.170 qpair failed and we were unable to recover it. 00:37:46.170 [2024-12-06 01:15:18.689329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.170 [2024-12-06 01:15:18.689364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.171 qpair failed and we were unable to recover it. 00:37:46.171 [2024-12-06 01:15:18.689473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.171 [2024-12-06 01:15:18.689509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.171 qpair failed and we were unable to recover it. 00:37:46.171 [2024-12-06 01:15:18.689622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.171 [2024-12-06 01:15:18.689657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.171 qpair failed and we were unable to recover it. 00:37:46.171 [2024-12-06 01:15:18.689767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.171 [2024-12-06 01:15:18.689802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.171 qpair failed and we were unable to recover it. 00:37:46.171 [2024-12-06 01:15:18.689915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.171 [2024-12-06 01:15:18.689949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.171 qpair failed and we were unable to recover it. 00:37:46.171 [2024-12-06 01:15:18.690087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.171 [2024-12-06 01:15:18.690121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.171 qpair failed and we were unable to recover it. 00:37:46.171 [2024-12-06 01:15:18.690262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.171 [2024-12-06 01:15:18.690296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.171 qpair failed and we were unable to recover it. 00:37:46.171 [2024-12-06 01:15:18.690483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.171 [2024-12-06 01:15:18.690520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.171 qpair failed and we were unable to recover it. 00:37:46.171 [2024-12-06 01:15:18.690669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.171 [2024-12-06 01:15:18.690707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.171 qpair failed and we were unable to recover it. 
00:37:46.171 [2024-12-06 01:15:18.690857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.171 [2024-12-06 01:15:18.690892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.171 qpair failed and we were unable to recover it. 00:37:46.171 [2024-12-06 01:15:18.691033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.171 [2024-12-06 01:15:18.691070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.171 qpair failed and we were unable to recover it. 00:37:46.171 [2024-12-06 01:15:18.691180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.171 [2024-12-06 01:15:18.691215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.171 qpair failed and we were unable to recover it. 00:37:46.171 [2024-12-06 01:15:18.691385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.171 [2024-12-06 01:15:18.691423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.171 qpair failed and we were unable to recover it. 00:37:46.171 [2024-12-06 01:15:18.691572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.171 [2024-12-06 01:15:18.691611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.171 qpair failed and we were unable to recover it. 00:37:46.171 [2024-12-06 01:15:18.691789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.171 [2024-12-06 01:15:18.691835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.171 qpair failed and we were unable to recover it. 00:37:46.171 [2024-12-06 01:15:18.691956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.171 [2024-12-06 01:15:18.691991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.171 qpair failed and we were unable to recover it. 00:37:46.171 [2024-12-06 01:15:18.692134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.171 [2024-12-06 01:15:18.692169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.171 qpair failed and we were unable to recover it. 00:37:46.171 [2024-12-06 01:15:18.692335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.171 [2024-12-06 01:15:18.692369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.171 qpair failed and we were unable to recover it. 00:37:46.171 [2024-12-06 01:15:18.692484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.171 [2024-12-06 01:15:18.692522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.171 qpair failed and we were unable to recover it. 
00:37:46.171 [2024-12-06 01:15:18.692692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.171 [2024-12-06 01:15:18.692729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.171 qpair failed and we were unable to recover it. 00:37:46.171 [2024-12-06 01:15:18.692841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.171 [2024-12-06 01:15:18.692892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.171 qpair failed and we were unable to recover it. 00:37:46.171 [2024-12-06 01:15:18.693010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.171 [2024-12-06 01:15:18.693045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.171 qpair failed and we were unable to recover it. 00:37:46.171 [2024-12-06 01:15:18.693155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.171 [2024-12-06 01:15:18.693189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.171 qpair failed and we were unable to recover it. 00:37:46.171 [2024-12-06 01:15:18.693324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.171 [2024-12-06 01:15:18.693357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.171 qpair failed and we were unable to recover it. 00:37:46.171 [2024-12-06 01:15:18.693535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.171 [2024-12-06 01:15:18.693588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.171 qpair failed and we were unable to recover it. 00:37:46.171 [2024-12-06 01:15:18.693735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.171 [2024-12-06 01:15:18.693772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.171 qpair failed and we were unable to recover it. 00:37:46.171 [2024-12-06 01:15:18.693942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.171 [2024-12-06 01:15:18.693977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.171 qpair failed and we were unable to recover it. 00:37:46.171 [2024-12-06 01:15:18.694155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.171 [2024-12-06 01:15:18.694190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.171 qpair failed and we were unable to recover it. 00:37:46.171 [2024-12-06 01:15:18.694323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.171 [2024-12-06 01:15:18.694367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.171 qpair failed and we were unable to recover it. 
00:37:46.171 [2024-12-06 01:15:18.694472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.171 [2024-12-06 01:15:18.694506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.171 qpair failed and we were unable to recover it. 00:37:46.171 [2024-12-06 01:15:18.694668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.171 [2024-12-06 01:15:18.694706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.171 qpair failed and we were unable to recover it. 00:37:46.171 [2024-12-06 01:15:18.694891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.171 [2024-12-06 01:15:18.694926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.171 qpair failed and we were unable to recover it. 00:37:46.171 [2024-12-06 01:15:18.695075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.171 [2024-12-06 01:15:18.695124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.171 qpair failed and we were unable to recover it. 00:37:46.171 [2024-12-06 01:15:18.695302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.171 [2024-12-06 01:15:18.695338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.171 qpair failed and we were unable to recover it. 00:37:46.171 [2024-12-06 01:15:18.695455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.171 [2024-12-06 01:15:18.695495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.171 qpair failed and we were unable to recover it. 00:37:46.172 [2024-12-06 01:15:18.695640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.172 [2024-12-06 01:15:18.695676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.172 qpair failed and we were unable to recover it. 00:37:46.172 [2024-12-06 01:15:18.695804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.172 [2024-12-06 01:15:18.695860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.172 qpair failed and we were unable to recover it. 00:37:46.172 [2024-12-06 01:15:18.695974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.172 [2024-12-06 01:15:18.696008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.172 qpair failed and we were unable to recover it. 00:37:46.172 [2024-12-06 01:15:18.696145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.172 [2024-12-06 01:15:18.696179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.172 qpair failed and we were unable to recover it. 
00:37:46.172 [2024-12-06 01:15:18.696352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.172 [2024-12-06 01:15:18.696386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.172 qpair failed and we were unable to recover it. 00:37:46.172 [2024-12-06 01:15:18.696547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.172 [2024-12-06 01:15:18.696581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.172 qpair failed and we were unable to recover it. 00:37:46.172 [2024-12-06 01:15:18.696681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.172 [2024-12-06 01:15:18.696717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.172 qpair failed and we were unable to recover it. 00:37:46.172 [2024-12-06 01:15:18.696828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.172 [2024-12-06 01:15:18.696863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.172 qpair failed and we were unable to recover it. 00:37:46.172 [2024-12-06 01:15:18.696972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.172 [2024-12-06 01:15:18.697006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.172 qpair failed and we were unable to recover it. 00:37:46.172 [2024-12-06 01:15:18.697140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.172 [2024-12-06 01:15:18.697175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.172 qpair failed and we were unable to recover it. 00:37:46.172 [2024-12-06 01:15:18.697315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.172 [2024-12-06 01:15:18.697349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.172 qpair failed and we were unable to recover it. 00:37:46.172 [2024-12-06 01:15:18.697512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.172 [2024-12-06 01:15:18.697545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.172 qpair failed and we were unable to recover it. 00:37:46.172 [2024-12-06 01:15:18.697684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.172 [2024-12-06 01:15:18.697721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.172 qpair failed and we were unable to recover it. 00:37:46.172 [2024-12-06 01:15:18.697841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.172 [2024-12-06 01:15:18.697876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.172 qpair failed and we were unable to recover it. 
00:37:46.172 [2024-12-06 01:15:18.698003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.172 [2024-12-06 01:15:18.698052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.172 qpair failed and we were unable to recover it. 00:37:46.172 [2024-12-06 01:15:18.698181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.172 [2024-12-06 01:15:18.698216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.172 qpair failed and we were unable to recover it. 00:37:46.172 [2024-12-06 01:15:18.698352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.172 [2024-12-06 01:15:18.698387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.172 qpair failed and we were unable to recover it. 00:37:46.172 [2024-12-06 01:15:18.698546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.172 [2024-12-06 01:15:18.698579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.172 qpair failed and we were unable to recover it. 00:37:46.172 [2024-12-06 01:15:18.698688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.172 [2024-12-06 01:15:18.698722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.172 qpair failed and we were unable to recover it. 00:37:46.172 [2024-12-06 01:15:18.698853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.172 [2024-12-06 01:15:18.698887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.172 qpair failed and we were unable to recover it. 00:37:46.172 [2024-12-06 01:15:18.698990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.172 [2024-12-06 01:15:18.699026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.172 qpair failed and we were unable to recover it. 00:37:46.172 [2024-12-06 01:15:18.699137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.172 [2024-12-06 01:15:18.699170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.172 qpair failed and we were unable to recover it. 00:37:46.172 [2024-12-06 01:15:18.699331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.172 [2024-12-06 01:15:18.699366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.172 qpair failed and we were unable to recover it. 00:37:46.172 [2024-12-06 01:15:18.699531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.172 [2024-12-06 01:15:18.699566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.172 qpair failed and we were unable to recover it. 
00:37:46.172 [2024-12-06 01:15:18.699711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.172 [2024-12-06 01:15:18.699745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.172 qpair failed and we were unable to recover it. 00:37:46.172 [2024-12-06 01:15:18.699863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.172 [2024-12-06 01:15:18.699898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.172 qpair failed and we were unable to recover it. 00:37:46.172 [2024-12-06 01:15:18.700041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.172 [2024-12-06 01:15:18.700075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.172 qpair failed and we were unable to recover it. 00:37:46.172 [2024-12-06 01:15:18.700188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.172 [2024-12-06 01:15:18.700223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.172 qpair failed and we were unable to recover it. 00:37:46.172 [2024-12-06 01:15:18.700355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.172 [2024-12-06 01:15:18.700388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.172 qpair failed and we were unable to recover it. 00:37:46.172 [2024-12-06 01:15:18.700519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.172 [2024-12-06 01:15:18.700553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.172 qpair failed and we were unable to recover it. 00:37:46.172 [2024-12-06 01:15:18.700668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.172 [2024-12-06 01:15:18.700702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.172 qpair failed and we were unable to recover it. 00:37:46.172 [2024-12-06 01:15:18.700839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.172 [2024-12-06 01:15:18.700873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.172 qpair failed and we were unable to recover it. 00:37:46.172 [2024-12-06 01:15:18.701009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.172 [2024-12-06 01:15:18.701044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.172 qpair failed and we were unable to recover it. 00:37:46.172 [2024-12-06 01:15:18.701180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.172 [2024-12-06 01:15:18.701213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.172 qpair failed and we were unable to recover it. 
00:37:46.172 [2024-12-06 01:15:18.701320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.172 [2024-12-06 01:15:18.701353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.172 qpair failed and we were unable to recover it. 00:37:46.172 [2024-12-06 01:15:18.701464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.172 [2024-12-06 01:15:18.701498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.172 qpair failed and we were unable to recover it. 00:37:46.172 [2024-12-06 01:15:18.701627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.173 [2024-12-06 01:15:18.701661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.173 qpair failed and we were unable to recover it. 00:37:46.173 [2024-12-06 01:15:18.701767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.173 [2024-12-06 01:15:18.701800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.173 qpair failed and we were unable to recover it. 00:37:46.173 [2024-12-06 01:15:18.701924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.173 [2024-12-06 01:15:18.701972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.173 qpair failed and we were unable to recover it. 00:37:46.173 [2024-12-06 01:15:18.702106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.173 [2024-12-06 01:15:18.702161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.173 qpair failed and we were unable to recover it. 00:37:46.173 [2024-12-06 01:15:18.702281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.173 [2024-12-06 01:15:18.702318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.173 qpair failed and we were unable to recover it. 00:37:46.173 [2024-12-06 01:15:18.702481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.173 [2024-12-06 01:15:18.702516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.173 qpair failed and we were unable to recover it. 00:37:46.173 [2024-12-06 01:15:18.702652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.173 [2024-12-06 01:15:18.702686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.173 qpair failed and we were unable to recover it. 00:37:46.173 [2024-12-06 01:15:18.702860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.173 [2024-12-06 01:15:18.702896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.173 qpair failed and we were unable to recover it. 
00:37:46.173 [2024-12-06 01:15:18.703032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.173 [2024-12-06 01:15:18.703068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.173 qpair failed and we were unable to recover it. 00:37:46.173 [2024-12-06 01:15:18.703208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.173 [2024-12-06 01:15:18.703246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.173 qpair failed and we were unable to recover it. 00:37:46.173 [2024-12-06 01:15:18.703383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.173 [2024-12-06 01:15:18.703417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.173 qpair failed and we were unable to recover it. 00:37:46.173 [2024-12-06 01:15:18.703522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.173 [2024-12-06 01:15:18.703557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.173 qpair failed and we were unable to recover it. 00:37:46.173 [2024-12-06 01:15:18.703694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.173 [2024-12-06 01:15:18.703728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.173 qpair failed and we were unable to recover it. 00:37:46.173 [2024-12-06 01:15:18.703848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.173 [2024-12-06 01:15:18.703884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.173 qpair failed and we were unable to recover it. 00:37:46.173 [2024-12-06 01:15:18.704037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.173 [2024-12-06 01:15:18.704071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.173 qpair failed and we were unable to recover it. 00:37:46.173 [2024-12-06 01:15:18.704206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.173 [2024-12-06 01:15:18.704240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.173 qpair failed and we were unable to recover it. 00:37:46.173 [2024-12-06 01:15:18.704374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.173 [2024-12-06 01:15:18.704408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.173 qpair failed and we were unable to recover it. 00:37:46.173 [2024-12-06 01:15:18.704543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.173 [2024-12-06 01:15:18.704577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.173 qpair failed and we were unable to recover it. 
00:37:46.173 [2024-12-06 01:15:18.704707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.173 [2024-12-06 01:15:18.704756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.173 qpair failed and we were unable to recover it. 00:37:46.173 [2024-12-06 01:15:18.704916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.173 [2024-12-06 01:15:18.704954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.173 qpair failed and we were unable to recover it. 00:37:46.173 [2024-12-06 01:15:18.705118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.173 [2024-12-06 01:15:18.705153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.173 qpair failed and we were unable to recover it. 00:37:46.173 [2024-12-06 01:15:18.705256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.173 [2024-12-06 01:15:18.705290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.173 qpair failed and we were unable to recover it. 00:37:46.173 [2024-12-06 01:15:18.705428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.173 [2024-12-06 01:15:18.705462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.173 qpair failed and we were unable to recover it. 00:37:46.173 [2024-12-06 01:15:18.705577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.173 [2024-12-06 01:15:18.705612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.173 qpair failed and we were unable to recover it. 00:37:46.173 [2024-12-06 01:15:18.705752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.173 [2024-12-06 01:15:18.705787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.173 qpair failed and we were unable to recover it. 00:37:46.173 [2024-12-06 01:15:18.705944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.173 [2024-12-06 01:15:18.705989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.173 qpair failed and we were unable to recover it. 00:37:46.173 [2024-12-06 01:15:18.706220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.173 [2024-12-06 01:15:18.706269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.173 qpair failed and we were unable to recover it. 00:37:46.173 [2024-12-06 01:15:18.706386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.173 [2024-12-06 01:15:18.706454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.173 qpair failed and we were unable to recover it. 
00:37:46.173 [2024-12-06 01:15:18.706655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.173 [2024-12-06 01:15:18.706716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.173 qpair failed and we were unable to recover it. 00:37:46.173 [2024-12-06 01:15:18.706903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.173 [2024-12-06 01:15:18.706939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.173 qpair failed and we were unable to recover it. 00:37:46.173 [2024-12-06 01:15:18.707107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.173 [2024-12-06 01:15:18.707144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.173 qpair failed and we were unable to recover it. 00:37:46.173 [2024-12-06 01:15:18.707280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.173 [2024-12-06 01:15:18.707314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.173 qpair failed and we were unable to recover it. 00:37:46.173 [2024-12-06 01:15:18.707469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.173 [2024-12-06 01:15:18.707507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.173 qpair failed and we were unable to recover it. 00:37:46.173 [2024-12-06 01:15:18.707652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.173 [2024-12-06 01:15:18.707689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.173 qpair failed and we were unable to recover it. 00:37:46.173 [2024-12-06 01:15:18.707833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.173 [2024-12-06 01:15:18.707898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.173 qpair failed and we were unable to recover it. 00:37:46.173 [2024-12-06 01:15:18.708017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.174 [2024-12-06 01:15:18.708052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.174 qpair failed and we were unable to recover it. 00:37:46.174 [2024-12-06 01:15:18.708216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.174 [2024-12-06 01:15:18.708251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.174 qpair failed and we were unable to recover it. 00:37:46.174 [2024-12-06 01:15:18.708389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.174 [2024-12-06 01:15:18.708423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.174 qpair failed and we were unable to recover it. 
00:37:46.174 [2024-12-06 01:15:18.708572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.174 [2024-12-06 01:15:18.708610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.174 qpair failed and we were unable to recover it. 00:37:46.174 [2024-12-06 01:15:18.708733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.174 [2024-12-06 01:15:18.708774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.174 qpair failed and we were unable to recover it. 00:37:46.174 [2024-12-06 01:15:18.708948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.174 [2024-12-06 01:15:18.708984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.174 qpair failed and we were unable to recover it. 00:37:46.174 [2024-12-06 01:15:18.709124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.174 [2024-12-06 01:15:18.709158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.174 qpair failed and we were unable to recover it. 00:37:46.174 [2024-12-06 01:15:18.709303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.174 [2024-12-06 01:15:18.709337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.174 qpair failed and we were unable to recover it. 00:37:46.174 [2024-12-06 01:15:18.709479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.174 [2024-12-06 01:15:18.709518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.174 qpair failed and we were unable to recover it. 00:37:46.174 [2024-12-06 01:15:18.709643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.174 [2024-12-06 01:15:18.709681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.174 qpair failed and we were unable to recover it. 00:37:46.174 [2024-12-06 01:15:18.709869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.174 [2024-12-06 01:15:18.709918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.174 qpair failed and we were unable to recover it. 00:37:46.174 [2024-12-06 01:15:18.710093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.174 [2024-12-06 01:15:18.710132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.174 qpair failed and we were unable to recover it. 00:37:46.174 [2024-12-06 01:15:18.710271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.174 [2024-12-06 01:15:18.710305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.174 qpair failed and we were unable to recover it. 
00:37:46.174 [2024-12-06 01:15:18.710470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.174 [2024-12-06 01:15:18.710505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.174 qpair failed and we were unable to recover it. 00:37:46.174 [2024-12-06 01:15:18.710617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.174 [2024-12-06 01:15:18.710652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.174 qpair failed and we were unable to recover it. 00:37:46.174 [2024-12-06 01:15:18.710821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.174 [2024-12-06 01:15:18.710856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.174 qpair failed and we were unable to recover it. 00:37:46.174 [2024-12-06 01:15:18.710967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.174 [2024-12-06 01:15:18.711001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.174 qpair failed and we were unable to recover it. 00:37:46.174 [2024-12-06 01:15:18.711155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.174 [2024-12-06 01:15:18.711208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.174 qpair failed and we were unable to recover it. 00:37:46.174 [2024-12-06 01:15:18.711378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.174 [2024-12-06 01:15:18.711419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.174 qpair failed and we were unable to recover it. 00:37:46.174 [2024-12-06 01:15:18.711568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.174 [2024-12-06 01:15:18.711607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.174 qpair failed and we were unable to recover it. 00:37:46.174 [2024-12-06 01:15:18.711737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.174 [2024-12-06 01:15:18.711776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.174 qpair failed and we were unable to recover it. 00:37:46.174 [2024-12-06 01:15:18.711966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.174 [2024-12-06 01:15:18.712000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.174 qpair failed and we were unable to recover it. 00:37:46.174 [2024-12-06 01:15:18.712114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.174 [2024-12-06 01:15:18.712149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.174 qpair failed and we were unable to recover it. 
00:37:46.174 [2024-12-06 01:15:18.712276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.174 [2024-12-06 01:15:18.712309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.174 qpair failed and we were unable to recover it. 00:37:46.174 [2024-12-06 01:15:18.712437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.174 [2024-12-06 01:15:18.712471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.174 qpair failed and we were unable to recover it. 00:37:46.174 [2024-12-06 01:15:18.712635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.174 [2024-12-06 01:15:18.712668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.174 qpair failed and we were unable to recover it. 00:37:46.174 [2024-12-06 01:15:18.712833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.174 [2024-12-06 01:15:18.712869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.174 qpair failed and we were unable to recover it. 00:37:46.174 [2024-12-06 01:15:18.713010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.174 [2024-12-06 01:15:18.713044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.174 qpair failed and we were unable to recover it. 00:37:46.174 [2024-12-06 01:15:18.713183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.174 [2024-12-06 01:15:18.713217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.174 qpair failed and we were unable to recover it. 00:37:46.174 [2024-12-06 01:15:18.713343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.174 [2024-12-06 01:15:18.713377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.174 qpair failed and we were unable to recover it. 00:37:46.174 [2024-12-06 01:15:18.713515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.174 [2024-12-06 01:15:18.713549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.174 qpair failed and we were unable to recover it. 00:37:46.174 [2024-12-06 01:15:18.713677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.174 [2024-12-06 01:15:18.713710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.174 qpair failed and we were unable to recover it. 00:37:46.174 [2024-12-06 01:15:18.713844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.174 [2024-12-06 01:15:18.713880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.174 qpair failed and we were unable to recover it. 
00:37:46.174 [2024-12-06 01:15:18.714016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.174 [2024-12-06 01:15:18.714049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.174 qpair failed and we were unable to recover it. 00:37:46.174 [2024-12-06 01:15:18.714155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.174 [2024-12-06 01:15:18.714189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.174 qpair failed and we were unable to recover it. 00:37:46.174 [2024-12-06 01:15:18.714353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.174 [2024-12-06 01:15:18.714387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.174 qpair failed and we were unable to recover it. 00:37:46.174 [2024-12-06 01:15:18.714548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.174 [2024-12-06 01:15:18.714582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.174 qpair failed and we were unable to recover it. 00:37:46.174 [2024-12-06 01:15:18.714694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.174 [2024-12-06 01:15:18.714727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.174 qpair failed and we were unable to recover it. 00:37:46.174 [2024-12-06 01:15:18.714863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.175 [2024-12-06 01:15:18.714899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.175 qpair failed and we were unable to recover it. 00:37:46.175 [2024-12-06 01:15:18.715036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.175 [2024-12-06 01:15:18.715069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.175 qpair failed and we were unable to recover it. 00:37:46.175 [2024-12-06 01:15:18.715207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.175 [2024-12-06 01:15:18.715241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.175 qpair failed and we were unable to recover it. 00:37:46.175 [2024-12-06 01:15:18.715352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.175 [2024-12-06 01:15:18.715387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.175 qpair failed and we were unable to recover it. 00:37:46.175 [2024-12-06 01:15:18.715527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.175 [2024-12-06 01:15:18.715562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.175 qpair failed and we were unable to recover it. 
00:37:46.175 [2024-12-06 01:15:18.715702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.175 [2024-12-06 01:15:18.715736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.175 qpair failed and we were unable to recover it. 00:37:46.175 [2024-12-06 01:15:18.715845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.175 [2024-12-06 01:15:18.715880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.175 qpair failed and we were unable to recover it. 00:37:46.175 [2024-12-06 01:15:18.716040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.175 [2024-12-06 01:15:18.716073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.175 qpair failed and we were unable to recover it. 00:37:46.175 [2024-12-06 01:15:18.716181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.175 [2024-12-06 01:15:18.716215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.175 qpair failed and we were unable to recover it. 00:37:46.175 [2024-12-06 01:15:18.716363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.175 [2024-12-06 01:15:18.716396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.175 qpair failed and we were unable to recover it. 00:37:46.175 [2024-12-06 01:15:18.716560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.175 [2024-12-06 01:15:18.716598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.175 qpair failed and we were unable to recover it. 00:37:46.175 [2024-12-06 01:15:18.716721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.175 [2024-12-06 01:15:18.716771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.175 qpair failed and we were unable to recover it. 00:37:46.175 [2024-12-06 01:15:18.716914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.175 [2024-12-06 01:15:18.716962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.175 qpair failed and we were unable to recover it. 00:37:46.175 [2024-12-06 01:15:18.717137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.175 [2024-12-06 01:15:18.717173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.175 qpair failed and we were unable to recover it. 00:37:46.175 [2024-12-06 01:15:18.717312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.175 [2024-12-06 01:15:18.717347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.175 qpair failed and we were unable to recover it. 
00:37:46.175 [2024-12-06 01:15:18.717488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.175 [2024-12-06 01:15:18.717521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.175 qpair failed and we were unable to recover it. 00:37:46.175 [2024-12-06 01:15:18.717657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.175 [2024-12-06 01:15:18.717691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.175 qpair failed and we were unable to recover it. 00:37:46.175 [2024-12-06 01:15:18.717860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.175 [2024-12-06 01:15:18.717894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.175 qpair failed and we were unable to recover it. 00:37:46.175 [2024-12-06 01:15:18.718007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.175 [2024-12-06 01:15:18.718041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.175 qpair failed and we were unable to recover it. 00:37:46.175 [2024-12-06 01:15:18.718191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.175 [2024-12-06 01:15:18.718224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.175 qpair failed and we were unable to recover it. 00:37:46.175 [2024-12-06 01:15:18.718387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.175 [2024-12-06 01:15:18.718421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.175 qpair failed and we were unable to recover it. 00:37:46.175 [2024-12-06 01:15:18.718533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.175 [2024-12-06 01:15:18.718567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.175 qpair failed and we were unable to recover it. 00:37:46.175 [2024-12-06 01:15:18.718708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.175 [2024-12-06 01:15:18.718743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.175 qpair failed and we were unable to recover it. 00:37:46.175 [2024-12-06 01:15:18.718875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.175 [2024-12-06 01:15:18.718911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.175 qpair failed and we were unable to recover it. 00:37:46.175 [2024-12-06 01:15:18.719027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.175 [2024-12-06 01:15:18.719062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.175 qpair failed and we were unable to recover it. 
00:37:46.175 [2024-12-06 01:15:18.719195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.175 [2024-12-06 01:15:18.719229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.175 qpair failed and we were unable to recover it. 00:37:46.175 [2024-12-06 01:15:18.719340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.175 [2024-12-06 01:15:18.719375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.175 qpair failed and we were unable to recover it. 00:37:46.175 [2024-12-06 01:15:18.719530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.175 [2024-12-06 01:15:18.719579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.175 qpair failed and we were unable to recover it. 00:37:46.175 [2024-12-06 01:15:18.719689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.175 [2024-12-06 01:15:18.719726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.175 qpair failed and we were unable to recover it. 00:37:46.175 [2024-12-06 01:15:18.719848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.175 [2024-12-06 01:15:18.719884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.175 qpair failed and we were unable to recover it. 00:37:46.175 [2024-12-06 01:15:18.720048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.175 [2024-12-06 01:15:18.720082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.175 qpair failed and we were unable to recover it. 00:37:46.175 [2024-12-06 01:15:18.720244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.175 [2024-12-06 01:15:18.720277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.175 qpair failed and we were unable to recover it. 00:37:46.175 [2024-12-06 01:15:18.720414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.175 [2024-12-06 01:15:18.720449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.175 qpair failed and we were unable to recover it. 00:37:46.175 [2024-12-06 01:15:18.720586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.175 [2024-12-06 01:15:18.720621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.175 qpair failed and we were unable to recover it. 00:37:46.175 [2024-12-06 01:15:18.720754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.175 [2024-12-06 01:15:18.720788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.175 qpair failed and we were unable to recover it. 
00:37:46.175 [2024-12-06 01:15:18.720977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.175 [2024-12-06 01:15:18.721025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.175 qpair failed and we were unable to recover it. 00:37:46.175 [2024-12-06 01:15:18.721182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.175 [2024-12-06 01:15:18.721217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.175 qpair failed and we were unable to recover it. 00:37:46.175 [2024-12-06 01:15:18.721348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.175 [2024-12-06 01:15:18.721393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.175 qpair failed and we were unable to recover it. 00:37:46.175 [2024-12-06 01:15:18.721631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.176 [2024-12-06 01:15:18.721666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.176 qpair failed and we were unable to recover it. 00:37:46.176 [2024-12-06 01:15:18.721831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.176 [2024-12-06 01:15:18.721883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.176 qpair failed and we were unable to recover it. 00:37:46.176 [2024-12-06 01:15:18.722016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.176 [2024-12-06 01:15:18.722051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.176 qpair failed and we were unable to recover it. 00:37:46.176 [2024-12-06 01:15:18.722155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.176 [2024-12-06 01:15:18.722190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.176 qpair failed and we were unable to recover it. 00:37:46.176 [2024-12-06 01:15:18.722353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.176 [2024-12-06 01:15:18.722386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.176 qpair failed and we were unable to recover it. 00:37:46.176 [2024-12-06 01:15:18.722496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.176 [2024-12-06 01:15:18.722531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.176 qpair failed and we were unable to recover it. 00:37:46.176 [2024-12-06 01:15:18.722676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.176 [2024-12-06 01:15:18.722725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.176 qpair failed and we were unable to recover it. 
00:37:46.176 [2024-12-06 01:15:18.722919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.176 [2024-12-06 01:15:18.722968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.176 qpair failed and we were unable to recover it. 00:37:46.176 [2024-12-06 01:15:18.723136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.176 [2024-12-06 01:15:18.723172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.176 qpair failed and we were unable to recover it. 00:37:46.176 [2024-12-06 01:15:18.723299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.176 [2024-12-06 01:15:18.723337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.176 qpair failed and we were unable to recover it. 00:37:46.176 [2024-12-06 01:15:18.723506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.176 [2024-12-06 01:15:18.723560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.176 qpair failed and we were unable to recover it. 00:37:46.176 [2024-12-06 01:15:18.723676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.176 [2024-12-06 01:15:18.723728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.176 qpair failed and we were unable to recover it. 00:37:46.176 [2024-12-06 01:15:18.723839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.176 [2024-12-06 01:15:18.723879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.176 qpair failed and we were unable to recover it. 00:37:46.176 [2024-12-06 01:15:18.723990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.176 [2024-12-06 01:15:18.724026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.176 qpair failed and we were unable to recover it. 00:37:46.176 [2024-12-06 01:15:18.724207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.176 [2024-12-06 01:15:18.724245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.176 qpair failed and we were unable to recover it. 00:37:46.176 [2024-12-06 01:15:18.724500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.176 [2024-12-06 01:15:18.724560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.176 qpair failed and we were unable to recover it. 00:37:46.176 [2024-12-06 01:15:18.724733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.176 [2024-12-06 01:15:18.724772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.176 qpair failed and we were unable to recover it. 
00:37:46.176 [2024-12-06 01:15:18.724910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.176 [2024-12-06 01:15:18.724945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.176 qpair failed and we were unable to recover it. 00:37:46.176 [2024-12-06 01:15:18.725053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.176 [2024-12-06 01:15:18.725088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.176 qpair failed and we were unable to recover it. 00:37:46.176 [2024-12-06 01:15:18.725248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.176 [2024-12-06 01:15:18.725283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.176 qpair failed and we were unable to recover it. 00:37:46.176 [2024-12-06 01:15:18.725420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.176 [2024-12-06 01:15:18.725454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.176 qpair failed and we were unable to recover it. 00:37:46.176 [2024-12-06 01:15:18.725612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.176 [2024-12-06 01:15:18.725649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.176 qpair failed and we were unable to recover it. 00:37:46.176 [2024-12-06 01:15:18.725795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.176 [2024-12-06 01:15:18.725840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.176 qpair failed and we were unable to recover it. 00:37:46.176 [2024-12-06 01:15:18.726021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.176 [2024-12-06 01:15:18.726055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.176 qpair failed and we were unable to recover it. 00:37:46.176 [2024-12-06 01:15:18.726233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.176 [2024-12-06 01:15:18.726267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.176 qpair failed and we were unable to recover it. 00:37:46.176 [2024-12-06 01:15:18.726404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.176 [2024-12-06 01:15:18.726438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.176 qpair failed and we were unable to recover it. 00:37:46.176 [2024-12-06 01:15:18.726575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.176 [2024-12-06 01:15:18.726628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.176 qpair failed and we were unable to recover it. 
00:37:46.176 [2024-12-06 01:15:18.726791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.176 [2024-12-06 01:15:18.726830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.176 qpair failed and we were unable to recover it. 00:37:46.176 [2024-12-06 01:15:18.726974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.176 [2024-12-06 01:15:18.727008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.176 qpair failed and we were unable to recover it. 00:37:46.176 [2024-12-06 01:15:18.727173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.176 [2024-12-06 01:15:18.727207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.176 qpair failed and we were unable to recover it. 00:37:46.176 [2024-12-06 01:15:18.727363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.176 [2024-12-06 01:15:18.727400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.176 qpair failed and we were unable to recover it. 00:37:46.176 [2024-12-06 01:15:18.727546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.176 [2024-12-06 01:15:18.727583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.176 qpair failed and we were unable to recover it. 00:37:46.176 [2024-12-06 01:15:18.727750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.176 [2024-12-06 01:15:18.727803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.176 qpair failed and we were unable to recover it. 00:37:46.176 [2024-12-06 01:15:18.727960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.176 [2024-12-06 01:15:18.728000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.176 qpair failed and we were unable to recover it. 00:37:46.176 [2024-12-06 01:15:18.728155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.177 [2024-12-06 01:15:18.728204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.177 qpair failed and we were unable to recover it. 00:37:46.177 [2024-12-06 01:15:18.728349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.177 [2024-12-06 01:15:18.728383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.177 qpair failed and we were unable to recover it. 00:37:46.177 [2024-12-06 01:15:18.728519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.177 [2024-12-06 01:15:18.728553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.177 qpair failed and we were unable to recover it. 
00:37:46.177 [2024-12-06 01:15:18.728662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.177 [2024-12-06 01:15:18.728697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.177 qpair failed and we were unable to recover it. 00:37:46.177 [2024-12-06 01:15:18.728804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.177 [2024-12-06 01:15:18.728847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.177 qpair failed and we were unable to recover it. 00:37:46.177 [2024-12-06 01:15:18.729035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.177 [2024-12-06 01:15:18.729079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.177 qpair failed and we were unable to recover it. 00:37:46.177 [2024-12-06 01:15:18.729244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.177 [2024-12-06 01:15:18.729284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.177 qpair failed and we were unable to recover it. 00:37:46.177 [2024-12-06 01:15:18.729457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.177 [2024-12-06 01:15:18.729514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.177 qpair failed and we were unable to recover it. 00:37:46.177 [2024-12-06 01:15:18.729670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.177 [2024-12-06 01:15:18.729705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.177 qpair failed and we were unable to recover it. 00:37:46.177 [2024-12-06 01:15:18.729870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.177 [2024-12-06 01:15:18.729905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.177 qpair failed and we were unable to recover it. 00:37:46.177 [2024-12-06 01:15:18.730044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.177 [2024-12-06 01:15:18.730092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.177 qpair failed and we were unable to recover it. 00:37:46.177 [2024-12-06 01:15:18.730264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.177 [2024-12-06 01:15:18.730301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.177 qpair failed and we were unable to recover it. 00:37:46.177 [2024-12-06 01:15:18.730477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.177 [2024-12-06 01:15:18.730530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.177 qpair failed and we were unable to recover it. 
00:37:46.177 [2024-12-06 01:15:18.730724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.177 [2024-12-06 01:15:18.730763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.177 qpair failed and we were unable to recover it. 00:37:46.177 [2024-12-06 01:15:18.730905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.177 [2024-12-06 01:15:18.730939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.177 qpair failed and we were unable to recover it. 00:37:46.177 [2024-12-06 01:15:18.731038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.177 [2024-12-06 01:15:18.731071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.177 qpair failed and we were unable to recover it. 00:37:46.177 [2024-12-06 01:15:18.731210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.177 [2024-12-06 01:15:18.731244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.177 qpair failed and we were unable to recover it. 00:37:46.177 [2024-12-06 01:15:18.731374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.177 [2024-12-06 01:15:18.731411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.177 qpair failed and we were unable to recover it. 00:37:46.177 [2024-12-06 01:15:18.731559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.177 [2024-12-06 01:15:18.731605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.177 qpair failed and we were unable to recover it. 00:37:46.177 [2024-12-06 01:15:18.731757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.177 [2024-12-06 01:15:18.731795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.177 qpair failed and we were unable to recover it. 00:37:46.177 [2024-12-06 01:15:18.731934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.177 [2024-12-06 01:15:18.731968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.177 qpair failed and we were unable to recover it. 00:37:46.177 [2024-12-06 01:15:18.732105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.177 [2024-12-06 01:15:18.732139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.177 qpair failed and we were unable to recover it. 00:37:46.177 [2024-12-06 01:15:18.732236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.177 [2024-12-06 01:15:18.732270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.177 qpair failed and we were unable to recover it. 
00:37:46.177 [2024-12-06 01:15:18.732424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.177 [2024-12-06 01:15:18.732461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.177 qpair failed and we were unable to recover it. 00:37:46.177 [2024-12-06 01:15:18.732631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.177 [2024-12-06 01:15:18.732668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.177 qpair failed and we were unable to recover it. 00:37:46.177 [2024-12-06 01:15:18.732800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.177 [2024-12-06 01:15:18.732866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.177 qpair failed and we were unable to recover it. 00:37:46.177 [2024-12-06 01:15:18.732973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.177 [2024-12-06 01:15:18.733006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.177 qpair failed and we were unable to recover it. 00:37:46.177 [2024-12-06 01:15:18.733142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.177 [2024-12-06 01:15:18.733175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.177 qpair failed and we were unable to recover it. 00:37:46.177 [2024-12-06 01:15:18.733310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.177 [2024-12-06 01:15:18.733344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.177 qpair failed and we were unable to recover it. 00:37:46.177 [2024-12-06 01:15:18.733527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.177 [2024-12-06 01:15:18.733564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.177 qpair failed and we were unable to recover it. 00:37:46.177 [2024-12-06 01:15:18.733710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.177 [2024-12-06 01:15:18.733748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.177 qpair failed and we were unable to recover it. 00:37:46.177 [2024-12-06 01:15:18.733922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.177 [2024-12-06 01:15:18.733956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.177 qpair failed and we were unable to recover it. 00:37:46.177 [2024-12-06 01:15:18.734071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.177 [2024-12-06 01:15:18.734104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.177 qpair failed and we were unable to recover it. 
00:37:46.177 [2024-12-06 01:15:18.734264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.177 [2024-12-06 01:15:18.734297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.177 qpair failed and we were unable to recover it. 00:37:46.177 [2024-12-06 01:15:18.734400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.177 [2024-12-06 01:15:18.734433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.177 qpair failed and we were unable to recover it. 00:37:46.177 [2024-12-06 01:15:18.734591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.177 [2024-12-06 01:15:18.734629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.177 qpair failed and we were unable to recover it. 00:37:46.177 [2024-12-06 01:15:18.734799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.177 [2024-12-06 01:15:18.734876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.177 qpair failed and we were unable to recover it. 00:37:46.177 [2024-12-06 01:15:18.735027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.177 [2024-12-06 01:15:18.735065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.177 qpair failed and we were unable to recover it. 00:37:46.177 [2024-12-06 01:15:18.735223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.177 [2024-12-06 01:15:18.735263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.178 qpair failed and we were unable to recover it. 00:37:46.178 [2024-12-06 01:15:18.735470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.178 [2024-12-06 01:15:18.735527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.178 qpair failed and we were unable to recover it. 00:37:46.178 [2024-12-06 01:15:18.735650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.178 [2024-12-06 01:15:18.735688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.178 qpair failed and we were unable to recover it. 00:37:46.178 [2024-12-06 01:15:18.735875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.178 [2024-12-06 01:15:18.735910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.178 qpair failed and we were unable to recover it. 00:37:46.178 [2024-12-06 01:15:18.736015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.178 [2024-12-06 01:15:18.736050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.178 qpair failed and we were unable to recover it. 
00:37:46.178 [2024-12-06 01:15:18.736189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.178 [2024-12-06 01:15:18.736222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.178 qpair failed and we were unable to recover it. 00:37:46.178 [2024-12-06 01:15:18.736378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.178 [2024-12-06 01:15:18.736416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.178 qpair failed and we were unable to recover it. 00:37:46.178 [2024-12-06 01:15:18.736580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.178 [2024-12-06 01:15:18.736634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.178 qpair failed and we were unable to recover it. 00:37:46.178 [2024-12-06 01:15:18.736760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.178 [2024-12-06 01:15:18.736798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.178 qpair failed and we were unable to recover it. 00:37:46.178 [2024-12-06 01:15:18.736965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.178 [2024-12-06 01:15:18.737001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.178 qpair failed and we were unable to recover it. 00:37:46.178 [2024-12-06 01:15:18.737113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.178 [2024-12-06 01:15:18.737148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.178 qpair failed and we were unable to recover it. 00:37:46.178 [2024-12-06 01:15:18.737284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.178 [2024-12-06 01:15:18.737318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.178 qpair failed and we were unable to recover it. 00:37:46.178 [2024-12-06 01:15:18.737477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.178 [2024-12-06 01:15:18.737515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.178 qpair failed and we were unable to recover it. 00:37:46.178 [2024-12-06 01:15:18.737679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.178 [2024-12-06 01:15:18.737718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.178 qpair failed and we were unable to recover it. 00:37:46.178 [2024-12-06 01:15:18.737903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.178 [2024-12-06 01:15:18.737952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.178 qpair failed and we were unable to recover it. 
00:37:46.178 [2024-12-06 01:15:18.738093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.178 [2024-12-06 01:15:18.738129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.178 qpair failed and we were unable to recover it. 00:37:46.178 [2024-12-06 01:15:18.738265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.178 [2024-12-06 01:15:18.738300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.178 qpair failed and we were unable to recover it. 00:37:46.178 [2024-12-06 01:15:18.738458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.178 [2024-12-06 01:15:18.738493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.178 qpair failed and we were unable to recover it. 00:37:46.178 [2024-12-06 01:15:18.738618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.178 [2024-12-06 01:15:18.738656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.178 qpair failed and we were unable to recover it. 00:37:46.178 [2024-12-06 01:15:18.738835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.178 [2024-12-06 01:15:18.738888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.178 qpair failed and we were unable to recover it. 00:37:46.178 [2024-12-06 01:15:18.739005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.178 [2024-12-06 01:15:18.739059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.178 qpair failed and we were unable to recover it. 00:37:46.178 [2024-12-06 01:15:18.739208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.178 [2024-12-06 01:15:18.739244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.178 qpair failed and we were unable to recover it. 00:37:46.178 [2024-12-06 01:15:18.739381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.178 [2024-12-06 01:15:18.739416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.178 qpair failed and we were unable to recover it. 00:37:46.178 [2024-12-06 01:15:18.739531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.178 [2024-12-06 01:15:18.739565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.178 qpair failed and we were unable to recover it. 00:37:46.178 [2024-12-06 01:15:18.739683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.178 [2024-12-06 01:15:18.739720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.178 qpair failed and we were unable to recover it. 
00:37:46.178 [2024-12-06 01:15:18.739880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.178 [2024-12-06 01:15:18.739915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.178 qpair failed and we were unable to recover it. 00:37:46.178 [2024-12-06 01:15:18.740062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.178 [2024-12-06 01:15:18.740097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.178 qpair failed and we were unable to recover it. 00:37:46.178 [2024-12-06 01:15:18.740240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.178 [2024-12-06 01:15:18.740274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.178 qpair failed and we were unable to recover it. 00:37:46.178 [2024-12-06 01:15:18.740437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.178 [2024-12-06 01:15:18.740471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.178 qpair failed and we were unable to recover it. 00:37:46.178 [2024-12-06 01:15:18.740629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.178 [2024-12-06 01:15:18.740662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.178 qpair failed and we were unable to recover it. 00:37:46.178 [2024-12-06 01:15:18.740763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.178 [2024-12-06 01:15:18.740797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.178 qpair failed and we were unable to recover it. 00:37:46.178 [2024-12-06 01:15:18.740922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.178 [2024-12-06 01:15:18.740970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.178 qpair failed and we were unable to recover it. 00:37:46.178 [2024-12-06 01:15:18.741148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.178 [2024-12-06 01:15:18.741184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.178 qpair failed and we were unable to recover it. 00:37:46.178 [2024-12-06 01:15:18.741345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.178 [2024-12-06 01:15:18.741379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.178 qpair failed and we were unable to recover it. 00:37:46.178 [2024-12-06 01:15:18.741558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.178 [2024-12-06 01:15:18.741592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.178 qpair failed and we were unable to recover it. 
00:37:46.178 [2024-12-06 01:15:18.741723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.178 [2024-12-06 01:15:18.741757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.178 qpair failed and we were unable to recover it. 00:37:46.178 [2024-12-06 01:15:18.741906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.178 [2024-12-06 01:15:18.741941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.178 qpair failed and we were unable to recover it. 00:37:46.178 [2024-12-06 01:15:18.742056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.178 [2024-12-06 01:15:18.742090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.178 qpair failed and we were unable to recover it. 00:37:46.178 [2024-12-06 01:15:18.742224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.178 [2024-12-06 01:15:18.742258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.179 qpair failed and we were unable to recover it. 00:37:46.179 [2024-12-06 01:15:18.742370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.179 [2024-12-06 01:15:18.742404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.179 qpair failed and we were unable to recover it. 00:37:46.179 [2024-12-06 01:15:18.742539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.179 [2024-12-06 01:15:18.742572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.179 qpair failed and we were unable to recover it. 00:37:46.179 [2024-12-06 01:15:18.742672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.179 [2024-12-06 01:15:18.742705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.179 qpair failed and we were unable to recover it. 00:37:46.179 [2024-12-06 01:15:18.742846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.179 [2024-12-06 01:15:18.742880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.179 qpair failed and we were unable to recover it. 00:37:46.179 [2024-12-06 01:15:18.743046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.179 [2024-12-06 01:15:18.743082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.179 qpair failed and we were unable to recover it. 00:37:46.179 [2024-12-06 01:15:18.743237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.179 [2024-12-06 01:15:18.743285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.179 qpair failed and we were unable to recover it. 
00:37:46.179 [2024-12-06 01:15:18.743402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.179 [2024-12-06 01:15:18.743439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.179 qpair failed and we were unable to recover it. 00:37:46.179 [2024-12-06 01:15:18.743589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.179 [2024-12-06 01:15:18.743624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.179 qpair failed and we were unable to recover it. 00:37:46.179 [2024-12-06 01:15:18.743765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.179 [2024-12-06 01:15:18.743801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.179 qpair failed and we were unable to recover it. 00:37:46.179 [2024-12-06 01:15:18.743945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.179 [2024-12-06 01:15:18.743980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.179 qpair failed and we were unable to recover it. 00:37:46.179 [2024-12-06 01:15:18.744090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.179 [2024-12-06 01:15:18.744124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.179 qpair failed and we were unable to recover it. 00:37:46.179 [2024-12-06 01:15:18.744258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.179 [2024-12-06 01:15:18.744293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.179 qpair failed and we were unable to recover it. 00:37:46.179 [2024-12-06 01:15:18.744435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.179 [2024-12-06 01:15:18.744469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.179 qpair failed and we were unable to recover it. 00:37:46.179 [2024-12-06 01:15:18.744586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.179 [2024-12-06 01:15:18.744621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.179 qpair failed and we were unable to recover it. 00:37:46.179 [2024-12-06 01:15:18.744758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.179 [2024-12-06 01:15:18.744791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.179 qpair failed and we were unable to recover it. 00:37:46.179 [2024-12-06 01:15:18.744959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.179 [2024-12-06 01:15:18.744993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.179 qpair failed and we were unable to recover it. 
00:37:46.179 [2024-12-06 01:15:18.745141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.179 [2024-12-06 01:15:18.745175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.179 qpair failed and we were unable to recover it. 00:37:46.179 [2024-12-06 01:15:18.745312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.179 [2024-12-06 01:15:18.745345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.179 qpair failed and we were unable to recover it. 00:37:46.179 [2024-12-06 01:15:18.745507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.179 [2024-12-06 01:15:18.745540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.179 qpair failed and we were unable to recover it. 00:37:46.179 [2024-12-06 01:15:18.745707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.179 [2024-12-06 01:15:18.745742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.179 qpair failed and we were unable to recover it. 00:37:46.179 [2024-12-06 01:15:18.745905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.179 [2024-12-06 01:15:18.745940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.179 qpair failed and we were unable to recover it. 00:37:46.179 [2024-12-06 01:15:18.746076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.179 [2024-12-06 01:15:18.746110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.179 qpair failed and we were unable to recover it. 00:37:46.179 [2024-12-06 01:15:18.746277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.179 [2024-12-06 01:15:18.746311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.179 qpair failed and we were unable to recover it. 00:37:46.179 [2024-12-06 01:15:18.746474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.179 [2024-12-06 01:15:18.746522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.179 qpair failed and we were unable to recover it. 00:37:46.179 [2024-12-06 01:15:18.746643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.179 [2024-12-06 01:15:18.746679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.179 qpair failed and we were unable to recover it. 00:37:46.179 [2024-12-06 01:15:18.746826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.179 [2024-12-06 01:15:18.746862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.179 qpair failed and we were unable to recover it. 
00:37:46.179 [2024-12-06 01:15:18.746998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.179 [2024-12-06 01:15:18.747032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.179 qpair failed and we were unable to recover it. 00:37:46.179 [2024-12-06 01:15:18.747195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.179 [2024-12-06 01:15:18.747230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.179 qpair failed and we were unable to recover it. 00:37:46.179 [2024-12-06 01:15:18.747368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.179 [2024-12-06 01:15:18.747403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.179 qpair failed and we were unable to recover it. 00:37:46.179 [2024-12-06 01:15:18.747513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.179 [2024-12-06 01:15:18.747548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.179 qpair failed and we were unable to recover it. 00:37:46.179 [2024-12-06 01:15:18.747686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.179 [2024-12-06 01:15:18.747721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.179 qpair failed and we were unable to recover it. 00:37:46.179 [2024-12-06 01:15:18.747827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.179 [2024-12-06 01:15:18.747861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.179 qpair failed and we were unable to recover it. 00:37:46.179 [2024-12-06 01:15:18.747996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.179 [2024-12-06 01:15:18.748030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.179 qpair failed and we were unable to recover it. 00:37:46.179 [2024-12-06 01:15:18.748163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.179 [2024-12-06 01:15:18.748197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.179 qpair failed and we were unable to recover it. 00:37:46.179 [2024-12-06 01:15:18.748339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.179 [2024-12-06 01:15:18.748373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.179 qpair failed and we were unable to recover it. 00:37:46.179 [2024-12-06 01:15:18.748551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.179 [2024-12-06 01:15:18.748587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.179 qpair failed and we were unable to recover it. 
00:37:46.179 [2024-12-06 01:15:18.748727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.179 [2024-12-06 01:15:18.748761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.179 qpair failed and we were unable to recover it. 00:37:46.179 [2024-12-06 01:15:18.748919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.179 [2024-12-06 01:15:18.748967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.179 qpair failed and we were unable to recover it. 00:37:46.180 [2024-12-06 01:15:18.749146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.180 [2024-12-06 01:15:18.749182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.180 qpair failed and we were unable to recover it. 00:37:46.180 [2024-12-06 01:15:18.749311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.180 [2024-12-06 01:15:18.749346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.180 qpair failed and we were unable to recover it. 00:37:46.180 [2024-12-06 01:15:18.749477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.180 [2024-12-06 01:15:18.749510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.180 qpair failed and we were unable to recover it. 00:37:46.180 [2024-12-06 01:15:18.749649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.180 [2024-12-06 01:15:18.749684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.180 qpair failed and we were unable to recover it. 00:37:46.180 [2024-12-06 01:15:18.749834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.180 [2024-12-06 01:15:18.749870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.180 qpair failed and we were unable to recover it. 00:37:46.180 [2024-12-06 01:15:18.749983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.180 [2024-12-06 01:15:18.750017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.180 qpair failed and we were unable to recover it. 00:37:46.180 [2024-12-06 01:15:18.750127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.180 [2024-12-06 01:15:18.750161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.180 qpair failed and we were unable to recover it. 00:37:46.180 [2024-12-06 01:15:18.750261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.180 [2024-12-06 01:15:18.750296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.180 qpair failed and we were unable to recover it. 
00:37:46.180 [2024-12-06 01:15:18.750472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.180 [2024-12-06 01:15:18.750521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.180 qpair failed and we were unable to recover it. 00:37:46.180 [2024-12-06 01:15:18.750651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.180 [2024-12-06 01:15:18.750686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.180 qpair failed and we were unable to recover it. 00:37:46.180 [2024-12-06 01:15:18.750859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.180 [2024-12-06 01:15:18.750899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.180 qpair failed and we were unable to recover it. 00:37:46.180 [2024-12-06 01:15:18.751000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.180 [2024-12-06 01:15:18.751034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.180 qpair failed and we were unable to recover it. 00:37:46.180 [2024-12-06 01:15:18.751172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.180 [2024-12-06 01:15:18.751205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.180 qpair failed and we were unable to recover it. 00:37:46.180 [2024-12-06 01:15:18.751307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.180 [2024-12-06 01:15:18.751340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.180 qpair failed and we were unable to recover it. 00:37:46.180 [2024-12-06 01:15:18.751504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.180 [2024-12-06 01:15:18.751539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.180 qpair failed and we were unable to recover it. 00:37:46.180 [2024-12-06 01:15:18.751654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.180 [2024-12-06 01:15:18.751691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.180 qpair failed and we were unable to recover it. 00:37:46.180 [2024-12-06 01:15:18.751865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.180 [2024-12-06 01:15:18.751901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.180 qpair failed and we were unable to recover it. 00:37:46.180 [2024-12-06 01:15:18.752011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.180 [2024-12-06 01:15:18.752046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.180 qpair failed and we were unable to recover it. 
00:37:46.180 [2024-12-06 01:15:18.752182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.180 [2024-12-06 01:15:18.752216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.180 qpair failed and we were unable to recover it.
00:37:46.180 [2024-12-06 01:15:18.752350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.180 [2024-12-06 01:15:18.752383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.180 qpair failed and we were unable to recover it.
00:37:46.180 [2024-12-06 01:15:18.752516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.180 [2024-12-06 01:15:18.752551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.180 qpair failed and we were unable to recover it.
00:37:46.180 [2024-12-06 01:15:18.752689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.180 [2024-12-06 01:15:18.752724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:46.180 qpair failed and we were unable to recover it.
00:37:46.180 [2024-12-06 01:15:18.752872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.180 [2024-12-06 01:15:18.752907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:46.180 qpair failed and we were unable to recover it.
00:37:46.180 [2024-12-06 01:15:18.753011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.180 [2024-12-06 01:15:18.753046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:46.180 qpair failed and we were unable to recover it.
00:37:46.180 [2024-12-06 01:15:18.753214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.180 [2024-12-06 01:15:18.753249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:46.180 qpair failed and we were unable to recover it.
00:37:46.180 [2024-12-06 01:15:18.753426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.180 [2024-12-06 01:15:18.753461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:46.180 qpair failed and we were unable to recover it.
00:37:46.180 [2024-12-06 01:15:18.753621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.180 [2024-12-06 01:15:18.753667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:46.180 qpair failed and we were unable to recover it.
00:37:46.180 [2024-12-06 01:15:18.753809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.180 [2024-12-06 01:15:18.753849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.180 qpair failed and we were unable to recover it.
...
00:37:46.186 [2024-12-06 01:15:18.788454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.186 [2024-12-06 01:15:18.788487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.186 qpair failed and we were unable to recover it.
00:37:46.186 [2024-12-06 01:15:18.788657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.186 [2024-12-06 01:15:18.788690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.186 qpair failed and we were unable to recover it. 00:37:46.186 [2024-12-06 01:15:18.788825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.186 [2024-12-06 01:15:18.788864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.186 qpair failed and we were unable to recover it. 00:37:46.186 [2024-12-06 01:15:18.789005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.186 [2024-12-06 01:15:18.789040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.186 qpair failed and we were unable to recover it. 00:37:46.186 [2024-12-06 01:15:18.789179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.186 [2024-12-06 01:15:18.789214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.186 qpair failed and we were unable to recover it. 00:37:46.186 [2024-12-06 01:15:18.789346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.186 [2024-12-06 01:15:18.789380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.186 qpair failed and we were unable to recover it. 00:37:46.186 [2024-12-06 01:15:18.789541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.186 [2024-12-06 01:15:18.789574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.186 qpair failed and we were unable to recover it. 00:37:46.186 [2024-12-06 01:15:18.789709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.186 [2024-12-06 01:15:18.789743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.186 qpair failed and we were unable to recover it. 00:37:46.186 [2024-12-06 01:15:18.789858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.186 [2024-12-06 01:15:18.789893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.186 qpair failed and we were unable to recover it. 00:37:46.186 [2024-12-06 01:15:18.790026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.186 [2024-12-06 01:15:18.790060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.186 qpair failed and we were unable to recover it. 00:37:46.186 [2024-12-06 01:15:18.790227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.186 [2024-12-06 01:15:18.790261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.186 qpair failed and we were unable to recover it. 
00:37:46.186 [2024-12-06 01:15:18.790397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.186 [2024-12-06 01:15:18.790431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.186 qpair failed and we were unable to recover it. 00:37:46.186 [2024-12-06 01:15:18.790552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.186 [2024-12-06 01:15:18.790586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.186 qpair failed and we were unable to recover it. 00:37:46.186 [2024-12-06 01:15:18.790725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.186 [2024-12-06 01:15:18.790759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.186 qpair failed and we were unable to recover it. 00:37:46.186 [2024-12-06 01:15:18.790898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.186 [2024-12-06 01:15:18.790931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.186 qpair failed and we were unable to recover it. 00:37:46.186 [2024-12-06 01:15:18.791033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.186 [2024-12-06 01:15:18.791066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.186 qpair failed and we were unable to recover it. 00:37:46.186 [2024-12-06 01:15:18.791169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.186 [2024-12-06 01:15:18.791202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.186 qpair failed and we were unable to recover it. 00:37:46.186 [2024-12-06 01:15:18.791337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.186 [2024-12-06 01:15:18.791370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.186 qpair failed and we were unable to recover it. 00:37:46.186 [2024-12-06 01:15:18.791501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.186 [2024-12-06 01:15:18.791537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.186 qpair failed and we were unable to recover it. 00:37:46.186 [2024-12-06 01:15:18.791692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.186 [2024-12-06 01:15:18.791742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.186 qpair failed and we were unable to recover it. 00:37:46.186 [2024-12-06 01:15:18.791893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.186 [2024-12-06 01:15:18.791932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.186 qpair failed and we were unable to recover it. 
00:37:46.186 [2024-12-06 01:15:18.792070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.186 [2024-12-06 01:15:18.792104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.186 qpair failed and we were unable to recover it. 00:37:46.186 [2024-12-06 01:15:18.792220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.186 [2024-12-06 01:15:18.792254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.186 qpair failed and we were unable to recover it. 00:37:46.186 [2024-12-06 01:15:18.792415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.186 [2024-12-06 01:15:18.792453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.186 qpair failed and we were unable to recover it. 00:37:46.186 [2024-12-06 01:15:18.792588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.186 [2024-12-06 01:15:18.792622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.186 qpair failed and we were unable to recover it. 00:37:46.186 [2024-12-06 01:15:18.792781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.186 [2024-12-06 01:15:18.792821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.186 qpair failed and we were unable to recover it. 00:37:46.186 [2024-12-06 01:15:18.792934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.186 [2024-12-06 01:15:18.792967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.186 qpair failed and we were unable to recover it. 00:37:46.186 [2024-12-06 01:15:18.793075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.186 [2024-12-06 01:15:18.793110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.186 qpair failed and we were unable to recover it. 00:37:46.186 [2024-12-06 01:15:18.793259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.186 [2024-12-06 01:15:18.793293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.186 qpair failed and we were unable to recover it. 00:37:46.186 [2024-12-06 01:15:18.793423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.186 [2024-12-06 01:15:18.793457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.186 qpair failed and we were unable to recover it. 00:37:46.186 [2024-12-06 01:15:18.793593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.186 [2024-12-06 01:15:18.793626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.186 qpair failed and we were unable to recover it. 
00:37:46.186 [2024-12-06 01:15:18.793748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.186 [2024-12-06 01:15:18.793798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.186 qpair failed and we were unable to recover it. 00:37:46.187 [2024-12-06 01:15:18.793955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.187 [2024-12-06 01:15:18.793993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.187 qpair failed and we were unable to recover it. 00:37:46.187 [2024-12-06 01:15:18.794107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.187 [2024-12-06 01:15:18.794142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.187 qpair failed and we were unable to recover it. 00:37:46.187 [2024-12-06 01:15:18.794285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.187 [2024-12-06 01:15:18.794321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.187 qpair failed and we were unable to recover it. 00:37:46.187 [2024-12-06 01:15:18.794451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.187 [2024-12-06 01:15:18.794486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.187 qpair failed and we were unable to recover it. 00:37:46.187 [2024-12-06 01:15:18.794616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.187 [2024-12-06 01:15:18.794650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.187 qpair failed and we were unable to recover it. 00:37:46.187 [2024-12-06 01:15:18.794791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.187 [2024-12-06 01:15:18.794837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.187 qpair failed and we were unable to recover it. 00:37:46.187 [2024-12-06 01:15:18.794979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.187 [2024-12-06 01:15:18.795012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.187 qpair failed and we were unable to recover it. 00:37:46.187 [2024-12-06 01:15:18.795129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.187 [2024-12-06 01:15:18.795167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.187 qpair failed and we were unable to recover it. 00:37:46.187 [2024-12-06 01:15:18.795301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.187 [2024-12-06 01:15:18.795336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.187 qpair failed and we were unable to recover it. 
00:37:46.187 [2024-12-06 01:15:18.795445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.187 [2024-12-06 01:15:18.795481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.187 qpair failed and we were unable to recover it. 00:37:46.187 [2024-12-06 01:15:18.795645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.187 [2024-12-06 01:15:18.795680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.187 qpair failed and we were unable to recover it. 00:37:46.187 [2024-12-06 01:15:18.795796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.187 [2024-12-06 01:15:18.795845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.187 qpair failed and we were unable to recover it. 00:37:46.187 [2024-12-06 01:15:18.795984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.187 [2024-12-06 01:15:18.796019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.187 qpair failed and we were unable to recover it. 00:37:46.187 [2024-12-06 01:15:18.796124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.187 [2024-12-06 01:15:18.796158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.187 qpair failed and we were unable to recover it. 00:37:46.187 [2024-12-06 01:15:18.796293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.187 [2024-12-06 01:15:18.796327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.187 qpair failed and we were unable to recover it. 00:37:46.187 [2024-12-06 01:15:18.796459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.187 [2024-12-06 01:15:18.796493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.187 qpair failed and we were unable to recover it. 00:37:46.187 [2024-12-06 01:15:18.796602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.187 [2024-12-06 01:15:18.796640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.187 qpair failed and we were unable to recover it. 00:37:46.187 [2024-12-06 01:15:18.796780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.187 [2024-12-06 01:15:18.796824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.187 qpair failed and we were unable to recover it. 00:37:46.187 [2024-12-06 01:15:18.796947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.187 [2024-12-06 01:15:18.796983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.187 qpair failed and we were unable to recover it. 
00:37:46.187 [2024-12-06 01:15:18.797121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.187 [2024-12-06 01:15:18.797156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.187 qpair failed and we were unable to recover it. 00:37:46.187 [2024-12-06 01:15:18.797282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.187 [2024-12-06 01:15:18.797317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.187 qpair failed and we were unable to recover it. 00:37:46.187 [2024-12-06 01:15:18.797468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.187 [2024-12-06 01:15:18.797503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.187 qpair failed and we were unable to recover it. 00:37:46.187 [2024-12-06 01:15:18.797621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.187 [2024-12-06 01:15:18.797656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.187 qpair failed and we were unable to recover it. 00:37:46.187 [2024-12-06 01:15:18.797794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.187 [2024-12-06 01:15:18.797837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.187 qpair failed and we were unable to recover it. 00:37:46.187 [2024-12-06 01:15:18.797972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.187 [2024-12-06 01:15:18.798008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.187 qpair failed and we were unable to recover it. 00:37:46.187 [2024-12-06 01:15:18.798114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.187 [2024-12-06 01:15:18.798148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.187 qpair failed and we were unable to recover it. 00:37:46.187 [2024-12-06 01:15:18.798255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.187 [2024-12-06 01:15:18.798290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.187 qpair failed and we were unable to recover it. 00:37:46.187 [2024-12-06 01:15:18.798423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.187 [2024-12-06 01:15:18.798457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.187 qpair failed and we were unable to recover it. 00:37:46.187 [2024-12-06 01:15:18.798601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.187 [2024-12-06 01:15:18.798637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.187 qpair failed and we were unable to recover it. 
00:37:46.187 [2024-12-06 01:15:18.798795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.187 [2024-12-06 01:15:18.798860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.187 qpair failed and we were unable to recover it. 00:37:46.187 [2024-12-06 01:15:18.799006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.187 [2024-12-06 01:15:18.799042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.187 qpair failed and we were unable to recover it. 00:37:46.187 [2024-12-06 01:15:18.799206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.187 [2024-12-06 01:15:18.799246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.187 qpair failed and we were unable to recover it. 00:37:46.187 [2024-12-06 01:15:18.799390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.187 [2024-12-06 01:15:18.799423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.187 qpair failed and we were unable to recover it. 00:37:46.187 [2024-12-06 01:15:18.799559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.187 [2024-12-06 01:15:18.799593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.187 qpair failed and we were unable to recover it. 00:37:46.187 [2024-12-06 01:15:18.799731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.187 [2024-12-06 01:15:18.799765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.187 qpair failed and we were unable to recover it. 00:37:46.187 [2024-12-06 01:15:18.799886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.187 [2024-12-06 01:15:18.799921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.187 qpair failed and we were unable to recover it. 00:37:46.187 [2024-12-06 01:15:18.800085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.187 [2024-12-06 01:15:18.800119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.187 qpair failed and we were unable to recover it. 00:37:46.187 [2024-12-06 01:15:18.800285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.187 [2024-12-06 01:15:18.800319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.187 qpair failed and we were unable to recover it. 00:37:46.188 [2024-12-06 01:15:18.800431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.188 [2024-12-06 01:15:18.800466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.188 qpair failed and we were unable to recover it. 
00:37:46.188 [2024-12-06 01:15:18.800605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.188 [2024-12-06 01:15:18.800639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.188 qpair failed and we were unable to recover it. 00:37:46.188 [2024-12-06 01:15:18.800776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.188 [2024-12-06 01:15:18.800810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.188 qpair failed and we were unable to recover it. 00:37:46.188 [2024-12-06 01:15:18.800980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.188 [2024-12-06 01:15:18.801014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.188 qpair failed and we were unable to recover it. 00:37:46.188 [2024-12-06 01:15:18.801176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.188 [2024-12-06 01:15:18.801210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.188 qpair failed and we were unable to recover it. 00:37:46.188 [2024-12-06 01:15:18.801356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.188 [2024-12-06 01:15:18.801390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.188 qpair failed and we were unable to recover it. 00:37:46.188 [2024-12-06 01:15:18.801524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.188 [2024-12-06 01:15:18.801568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.188 qpair failed and we were unable to recover it. 00:37:46.188 [2024-12-06 01:15:18.801681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.188 [2024-12-06 01:15:18.801715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.188 qpair failed and we were unable to recover it. 00:37:46.188 [2024-12-06 01:15:18.801841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.188 [2024-12-06 01:15:18.801876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.188 qpair failed and we were unable to recover it. 00:37:46.188 [2024-12-06 01:15:18.802029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.188 [2024-12-06 01:15:18.802077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.188 qpair failed and we were unable to recover it. 00:37:46.188 [2024-12-06 01:15:18.802249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.188 [2024-12-06 01:15:18.802285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.188 qpair failed and we were unable to recover it. 
00:37:46.188 [2024-12-06 01:15:18.802448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.188 [2024-12-06 01:15:18.802483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.188 qpair failed and we were unable to recover it. 00:37:46.188 [2024-12-06 01:15:18.802592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.188 [2024-12-06 01:15:18.802627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.188 qpair failed and we were unable to recover it. 00:37:46.188 [2024-12-06 01:15:18.802747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.188 [2024-12-06 01:15:18.802799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.188 qpair failed and we were unable to recover it. 00:37:46.188 [2024-12-06 01:15:18.803006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.188 [2024-12-06 01:15:18.803054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.188 qpair failed and we were unable to recover it. 00:37:46.188 [2024-12-06 01:15:18.803202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.188 [2024-12-06 01:15:18.803238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.188 qpair failed and we were unable to recover it. 00:37:46.188 [2024-12-06 01:15:18.803397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.188 [2024-12-06 01:15:18.803432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.188 qpair failed and we were unable to recover it. 00:37:46.188 [2024-12-06 01:15:18.803576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.188 [2024-12-06 01:15:18.803609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.188 qpair failed and we were unable to recover it. 00:37:46.188 [2024-12-06 01:15:18.803718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.188 [2024-12-06 01:15:18.803751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.188 qpair failed and we were unable to recover it. 00:37:46.188 [2024-12-06 01:15:18.803890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.188 [2024-12-06 01:15:18.803925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.188 qpair failed and we were unable to recover it. 00:37:46.188 [2024-12-06 01:15:18.804032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.188 [2024-12-06 01:15:18.804067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.188 qpair failed and we were unable to recover it. 
00:37:46.188 [2024-12-06 01:15:18.804178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.188 [2024-12-06 01:15:18.804215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.188 qpair failed and we were unable to recover it. 00:37:46.188 [2024-12-06 01:15:18.804363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.188 [2024-12-06 01:15:18.804398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.188 qpair failed and we were unable to recover it. 00:37:46.188 [2024-12-06 01:15:18.804539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.188 [2024-12-06 01:15:18.804574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.188 qpair failed and we were unable to recover it. 00:37:46.188 [2024-12-06 01:15:18.804683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.188 [2024-12-06 01:15:18.804718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.188 qpair failed and we were unable to recover it. 00:37:46.188 [2024-12-06 01:15:18.804881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.188 [2024-12-06 01:15:18.804916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.188 qpair failed and we were unable to recover it. 00:37:46.188 [2024-12-06 01:15:18.805052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.188 [2024-12-06 01:15:18.805087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.188 qpair failed and we were unable to recover it. 00:37:46.188 [2024-12-06 01:15:18.805200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.188 [2024-12-06 01:15:18.805234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.188 qpair failed and we were unable to recover it. 00:37:46.188 [2024-12-06 01:15:18.805370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.188 [2024-12-06 01:15:18.805403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.188 qpair failed and we were unable to recover it. 00:37:46.188 [2024-12-06 01:15:18.805568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.188 [2024-12-06 01:15:18.805602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.188 qpair failed and we were unable to recover it. 00:37:46.188 [2024-12-06 01:15:18.805709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.188 [2024-12-06 01:15:18.805743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.188 qpair failed and we were unable to recover it. 
00:37:46.188 [2024-12-06 01:15:18.805903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.188 [2024-12-06 01:15:18.805937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.188 qpair failed and we were unable to recover it. 00:37:46.188 [2024-12-06 01:15:18.806035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.188 [2024-12-06 01:15:18.806070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.188 qpair failed and we were unable to recover it. 00:37:46.188 [2024-12-06 01:15:18.806219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.188 [2024-12-06 01:15:18.806258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.188 qpair failed and we were unable to recover it. 00:37:46.188 [2024-12-06 01:15:18.806396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.188 [2024-12-06 01:15:18.806431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.188 qpair failed and we were unable to recover it. 00:37:46.188 [2024-12-06 01:15:18.806560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.188 [2024-12-06 01:15:18.806595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.188 qpair failed and we were unable to recover it. 00:37:46.188 [2024-12-06 01:15:18.806706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.188 [2024-12-06 01:15:18.806740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.188 qpair failed and we were unable to recover it. 00:37:46.188 [2024-12-06 01:15:18.806909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.188 [2024-12-06 01:15:18.806943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.188 qpair failed and we were unable to recover it. 00:37:46.189 [2024-12-06 01:15:18.807073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.189 [2024-12-06 01:15:18.807106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.189 qpair failed and we were unable to recover it. 00:37:46.189 [2024-12-06 01:15:18.807244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.189 [2024-12-06 01:15:18.807277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.189 qpair failed and we were unable to recover it. 00:37:46.189 [2024-12-06 01:15:18.807415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.189 [2024-12-06 01:15:18.807448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.189 qpair failed and we were unable to recover it. 
00:37:46.189 [2024-12-06 01:15:18.807611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.189 [2024-12-06 01:15:18.807646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.189 qpair failed and we were unable to recover it. 00:37:46.189 [2024-12-06 01:15:18.807774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.189 [2024-12-06 01:15:18.807831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.189 qpair failed and we were unable to recover it. 00:37:46.189 [2024-12-06 01:15:18.807953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.189 [2024-12-06 01:15:18.807990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.189 qpair failed and we were unable to recover it. 00:37:46.189 [2024-12-06 01:15:18.808126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.189 [2024-12-06 01:15:18.808162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.189 qpair failed and we were unable to recover it. 00:37:46.189 [2024-12-06 01:15:18.808326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.189 [2024-12-06 01:15:18.808361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.189 qpair failed and we were unable to recover it. 00:37:46.189 [2024-12-06 01:15:18.808496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.189 [2024-12-06 01:15:18.808530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.189 qpair failed and we were unable to recover it. 00:37:46.189 [2024-12-06 01:15:18.808674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.189 [2024-12-06 01:15:18.808710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.189 qpair failed and we were unable to recover it. 00:37:46.189 [2024-12-06 01:15:18.808850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.189 [2024-12-06 01:15:18.808886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.189 qpair failed and we were unable to recover it. 00:37:46.189 [2024-12-06 01:15:18.808997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.189 [2024-12-06 01:15:18.809032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.189 qpair failed and we were unable to recover it. 00:37:46.189 [2024-12-06 01:15:18.809165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.189 [2024-12-06 01:15:18.809199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.189 qpair failed and we were unable to recover it. 
00:37:46.189 [2024-12-06 01:15:18.809361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.189 [2024-12-06 01:15:18.809395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.189 qpair failed and we were unable to recover it. 00:37:46.189 [2024-12-06 01:15:18.809558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.189 [2024-12-06 01:15:18.809593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.189 qpair failed and we were unable to recover it. 00:37:46.189 [2024-12-06 01:15:18.809733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.189 [2024-12-06 01:15:18.809769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.189 qpair failed and we were unable to recover it. 00:37:46.189 [2024-12-06 01:15:18.809904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.189 [2024-12-06 01:15:18.809953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.189 qpair failed and we were unable to recover it. 00:37:46.189 [2024-12-06 01:15:18.810099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.189 [2024-12-06 01:15:18.810134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.189 qpair failed and we were unable to recover it. 00:37:46.189 [2024-12-06 01:15:18.810296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.189 [2024-12-06 01:15:18.810331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.189 qpair failed and we were unable to recover it. 00:37:46.189 [2024-12-06 01:15:18.810462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.189 [2024-12-06 01:15:18.810496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.189 qpair failed and we were unable to recover it. 00:37:46.189 [2024-12-06 01:15:18.810602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.189 [2024-12-06 01:15:18.810635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.189 qpair failed and we were unable to recover it. 00:37:46.189 [2024-12-06 01:15:18.810829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.189 [2024-12-06 01:15:18.810878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.189 qpair failed and we were unable to recover it. 00:37:46.189 [2024-12-06 01:15:18.811013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.189 [2024-12-06 01:15:18.811061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.189 qpair failed and we were unable to recover it. 
00:37:46.189 [2024-12-06 01:15:18.811233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.189 [2024-12-06 01:15:18.811270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.189 qpair failed and we were unable to recover it. 00:37:46.189 [2024-12-06 01:15:18.811407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.189 [2024-12-06 01:15:18.811442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.189 qpair failed and we were unable to recover it. 00:37:46.189 [2024-12-06 01:15:18.811550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.189 [2024-12-06 01:15:18.811584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.189 qpair failed and we were unable to recover it. 00:37:46.189 [2024-12-06 01:15:18.811689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.189 [2024-12-06 01:15:18.811725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.189 qpair failed and we were unable to recover it. 00:37:46.189 [2024-12-06 01:15:18.811878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.189 [2024-12-06 01:15:18.811926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.189 qpair failed and we were unable to recover it. 00:37:46.189 [2024-12-06 01:15:18.812068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.189 [2024-12-06 01:15:18.812103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.189 qpair failed and we were unable to recover it. 00:37:46.189 [2024-12-06 01:15:18.812264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.190 [2024-12-06 01:15:18.812298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.190 qpair failed and we were unable to recover it. 00:37:46.190 [2024-12-06 01:15:18.812444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.190 [2024-12-06 01:15:18.812478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.190 qpair failed and we were unable to recover it. 00:37:46.190 [2024-12-06 01:15:18.812588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.190 [2024-12-06 01:15:18.812627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.190 qpair failed and we were unable to recover it. 00:37:46.190 [2024-12-06 01:15:18.812770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.190 [2024-12-06 01:15:18.812805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.190 qpair failed and we were unable to recover it. 
00:37:46.190 [2024-12-06 01:15:18.812953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.190 [2024-12-06 01:15:18.812989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.190 qpair failed and we were unable to recover it. 00:37:46.190 [2024-12-06 01:15:18.813124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.190 [2024-12-06 01:15:18.813159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.190 qpair failed and we were unable to recover it. 00:37:46.190 [2024-12-06 01:15:18.813268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.190 [2024-12-06 01:15:18.813307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.190 qpair failed and we were unable to recover it. 00:37:46.190 [2024-12-06 01:15:18.813443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.190 [2024-12-06 01:15:18.813478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.190 qpair failed and we were unable to recover it. 00:37:46.190 [2024-12-06 01:15:18.813617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.190 [2024-12-06 01:15:18.813652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.190 qpair failed and we were unable to recover it. 00:37:46.190 [2024-12-06 01:15:18.813807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.190 [2024-12-06 01:15:18.813866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.190 qpair failed and we were unable to recover it. 00:37:46.190 [2024-12-06 01:15:18.813982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.190 [2024-12-06 01:15:18.814016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.190 qpair failed and we were unable to recover it. 00:37:46.190 [2024-12-06 01:15:18.814162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.190 [2024-12-06 01:15:18.814195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.190 qpair failed and we were unable to recover it. 00:37:46.190 [2024-12-06 01:15:18.814351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.190 [2024-12-06 01:15:18.814384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.190 qpair failed and we were unable to recover it. 00:37:46.190 [2024-12-06 01:15:18.814485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.190 [2024-12-06 01:15:18.814519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.190 qpair failed and we were unable to recover it. 
00:37:46.190 [2024-12-06 01:15:18.814659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.190 [2024-12-06 01:15:18.814701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.190 qpair failed and we were unable to recover it. 00:37:46.190 [2024-12-06 01:15:18.814843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.190 [2024-12-06 01:15:18.814878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.190 qpair failed and we were unable to recover it. 00:37:46.190 [2024-12-06 01:15:18.814991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.190 [2024-12-06 01:15:18.815026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.190 qpair failed and we were unable to recover it. 00:37:46.190 [2024-12-06 01:15:18.815206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.190 [2024-12-06 01:15:18.815241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.190 qpair failed and we were unable to recover it. 00:37:46.190 [2024-12-06 01:15:18.815348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.190 [2024-12-06 01:15:18.815381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.190 qpair failed and we were unable to recover it. 00:37:46.190 [2024-12-06 01:15:18.815522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.190 [2024-12-06 01:15:18.815556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.190 qpair failed and we were unable to recover it. 00:37:46.190 [2024-12-06 01:15:18.815693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.190 [2024-12-06 01:15:18.815726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.190 qpair failed and we were unable to recover it. 00:37:46.190 [2024-12-06 01:15:18.815880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.190 [2024-12-06 01:15:18.815918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.190 qpair failed and we were unable to recover it. 00:37:46.190 [2024-12-06 01:15:18.816052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.190 [2024-12-06 01:15:18.816085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.190 qpair failed and we were unable to recover it. 00:37:46.190 [2024-12-06 01:15:18.816221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.190 [2024-12-06 01:15:18.816254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.190 qpair failed and we were unable to recover it. 
00:37:46.190 [2024-12-06 01:15:18.816411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.190 [2024-12-06 01:15:18.816444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.190 qpair failed and we were unable to recover it. 00:37:46.190 [2024-12-06 01:15:18.816561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.190 [2024-12-06 01:15:18.816595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.190 qpair failed and we were unable to recover it. 00:37:46.190 [2024-12-06 01:15:18.816753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.190 [2024-12-06 01:15:18.816787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.190 qpair failed and we were unable to recover it. 00:37:46.190 [2024-12-06 01:15:18.816963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.190 [2024-12-06 01:15:18.817010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.190 qpair failed and we were unable to recover it. 00:37:46.190 [2024-12-06 01:15:18.817176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.190 [2024-12-06 01:15:18.817224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.190 qpair failed and we were unable to recover it. 00:37:46.190 [2024-12-06 01:15:18.817378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.190 [2024-12-06 01:15:18.817414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.190 qpair failed and we were unable to recover it. 00:37:46.190 [2024-12-06 01:15:18.817589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.190 [2024-12-06 01:15:18.817624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.190 qpair failed and we were unable to recover it. 00:37:46.190 [2024-12-06 01:15:18.817757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.190 [2024-12-06 01:15:18.817798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.190 qpair failed and we were unable to recover it. 00:37:46.190 [2024-12-06 01:15:18.817972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.190 [2024-12-06 01:15:18.818006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.190 qpair failed and we were unable to recover it. 00:37:46.190 [2024-12-06 01:15:18.818185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.190 [2024-12-06 01:15:18.818219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.190 qpair failed and we were unable to recover it. 
00:37:46.190 [2024-12-06 01:15:18.818340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.190 [2024-12-06 01:15:18.818394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.190 qpair failed and we were unable to recover it. 00:37:46.190 [2024-12-06 01:15:18.818558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.190 [2024-12-06 01:15:18.818623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.190 qpair failed and we were unable to recover it. 00:37:46.190 [2024-12-06 01:15:18.818764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.190 [2024-12-06 01:15:18.818801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.190 qpair failed and we were unable to recover it. 00:37:46.190 [2024-12-06 01:15:18.819015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.190 [2024-12-06 01:15:18.819056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.190 qpair failed and we were unable to recover it. 00:37:46.190 [2024-12-06 01:15:18.819169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.190 [2024-12-06 01:15:18.819204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.190 qpair failed and we were unable to recover it. 00:37:46.190 [2024-12-06 01:15:18.819375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.190 [2024-12-06 01:15:18.819408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.190 qpair failed and we were unable to recover it. 00:37:46.190 [2024-12-06 01:15:18.819536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.190 [2024-12-06 01:15:18.819574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.190 qpair failed and we were unable to recover it. 00:37:46.190 [2024-12-06 01:15:18.819725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.190 [2024-12-06 01:15:18.819762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.190 qpair failed and we were unable to recover it. 00:37:46.190 [2024-12-06 01:15:18.819943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.190 [2024-12-06 01:15:18.819978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.190 qpair failed and we were unable to recover it. 00:37:46.190 [2024-12-06 01:15:18.820114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.190 [2024-12-06 01:15:18.820148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.190 qpair failed and we were unable to recover it. 
00:37:46.190 [2024-12-06 01:15:18.820251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.190 [2024-12-06 01:15:18.820285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.190 qpair failed and we were unable to recover it. 00:37:46.190 [2024-12-06 01:15:18.820446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.190 [2024-12-06 01:15:18.820480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.190 qpair failed and we were unable to recover it. 00:37:46.190 [2024-12-06 01:15:18.820659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.190 [2024-12-06 01:15:18.820701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.190 qpair failed and we were unable to recover it. 00:37:46.190 [2024-12-06 01:15:18.820876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.191 [2024-12-06 01:15:18.820926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.191 qpair failed and we were unable to recover it. 00:37:46.191 [2024-12-06 01:15:18.821054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.191 [2024-12-06 01:15:18.821102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.191 qpair failed and we were unable to recover it. 00:37:46.191 [2024-12-06 01:15:18.821274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.191 [2024-12-06 01:15:18.821314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.191 qpair failed and we were unable to recover it. 00:37:46.191 [2024-12-06 01:15:18.821486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.191 [2024-12-06 01:15:18.821543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.191 qpair failed and we were unable to recover it. 00:37:46.191 [2024-12-06 01:15:18.821688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.191 [2024-12-06 01:15:18.821739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.191 qpair failed and we were unable to recover it. 00:37:46.191 [2024-12-06 01:15:18.821846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.191 [2024-12-06 01:15:18.821881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.191 qpair failed and we were unable to recover it. 00:37:46.191 [2024-12-06 01:15:18.822048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.191 [2024-12-06 01:15:18.822083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.191 qpair failed and we were unable to recover it. 
00:37:46.191 [2024-12-06 01:15:18.822218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.191 [2024-12-06 01:15:18.822251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.191 qpair failed and we were unable to recover it. 00:37:46.191 [2024-12-06 01:15:18.822438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.191 [2024-12-06 01:15:18.822475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.191 qpair failed and we were unable to recover it. 00:37:46.191 [2024-12-06 01:15:18.822594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.191 [2024-12-06 01:15:18.822633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.191 qpair failed and we were unable to recover it. 00:37:46.191 [2024-12-06 01:15:18.822858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.191 [2024-12-06 01:15:18.822922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.191 qpair failed and we were unable to recover it. 00:37:46.191 [2024-12-06 01:15:18.823098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.191 [2024-12-06 01:15:18.823138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.191 qpair failed and we were unable to recover it. 00:37:46.191 [2024-12-06 01:15:18.823289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.191 [2024-12-06 01:15:18.823324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.191 qpair failed and we were unable to recover it. 00:37:46.191 [2024-12-06 01:15:18.823441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.191 [2024-12-06 01:15:18.823476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.191 qpair failed and we were unable to recover it. 00:37:46.191 [2024-12-06 01:15:18.823636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.191 [2024-12-06 01:15:18.823675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.191 qpair failed and we were unable to recover it. 00:37:46.191 [2024-12-06 01:15:18.823868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.191 [2024-12-06 01:15:18.823904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.191 qpair failed and we were unable to recover it. 00:37:46.191 [2024-12-06 01:15:18.824017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.191 [2024-12-06 01:15:18.824051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.191 qpair failed and we were unable to recover it. 
00:37:46.191 [2024-12-06 01:15:18.824180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.191 [2024-12-06 01:15:18.824214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.191 qpair failed and we were unable to recover it. 00:37:46.191 [2024-12-06 01:15:18.824335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.191 [2024-12-06 01:15:18.824370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.191 qpair failed and we were unable to recover it. 00:37:46.191 [2024-12-06 01:15:18.824508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.191 [2024-12-06 01:15:18.824542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.191 qpair failed and we were unable to recover it. 00:37:46.191 [2024-12-06 01:15:18.824696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.191 [2024-12-06 01:15:18.824734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.191 qpair failed and we were unable to recover it. 00:37:46.191 [2024-12-06 01:15:18.824921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.191 [2024-12-06 01:15:18.824956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.191 qpair failed and we were unable to recover it. 00:37:46.191 [2024-12-06 01:15:18.825110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.191 [2024-12-06 01:15:18.825159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.191 qpair failed and we were unable to recover it. 00:37:46.191 [2024-12-06 01:15:18.825305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.191 [2024-12-06 01:15:18.825341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.191 qpair failed and we were unable to recover it. 00:37:46.191 [2024-12-06 01:15:18.825483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.191 [2024-12-06 01:15:18.825517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.191 qpair failed and we were unable to recover it. 00:37:46.191 [2024-12-06 01:15:18.825678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.191 [2024-12-06 01:15:18.825712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.191 qpair failed and we were unable to recover it. 00:37:46.191 [2024-12-06 01:15:18.825852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.191 [2024-12-06 01:15:18.825897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.191 qpair failed and we were unable to recover it. 
00:37:46.191 [2024-12-06 01:15:18.826068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.191 [2024-12-06 01:15:18.826117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.191 qpair failed and we were unable to recover it. 00:37:46.191 [2024-12-06 01:15:18.826266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.191 [2024-12-06 01:15:18.826304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.191 qpair failed and we were unable to recover it. 00:37:46.191 [2024-12-06 01:15:18.826480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.191 [2024-12-06 01:15:18.826514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.191 qpair failed and we were unable to recover it. 00:37:46.191 [2024-12-06 01:15:18.826676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.191 [2024-12-06 01:15:18.826714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.191 qpair failed and we were unable to recover it. 00:37:46.191 [2024-12-06 01:15:18.826879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.191 [2024-12-06 01:15:18.826919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.191 qpair failed and we were unable to recover it. 00:37:46.191 [2024-12-06 01:15:18.827032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.191 [2024-12-06 01:15:18.827072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.191 qpair failed and we were unable to recover it. 00:37:46.191 [2024-12-06 01:15:18.827233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.191 [2024-12-06 01:15:18.827267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.191 qpair failed and we were unable to recover it. 00:37:46.191 [2024-12-06 01:15:18.827377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.191 [2024-12-06 01:15:18.827429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.191 qpair failed and we were unable to recover it. 00:37:46.191 [2024-12-06 01:15:18.827579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.191 [2024-12-06 01:15:18.827616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.191 qpair failed and we were unable to recover it. 00:37:46.191 [2024-12-06 01:15:18.827763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.191 [2024-12-06 01:15:18.827796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.191 qpair failed and we were unable to recover it. 
00:37:46.191 [2024-12-06 01:15:18.827955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.191 [2024-12-06 01:15:18.828004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.191 qpair failed and we were unable to recover it. 00:37:46.191 [2024-12-06 01:15:18.828179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.191 [2024-12-06 01:15:18.828214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.191 qpair failed and we were unable to recover it. 00:37:46.191 [2024-12-06 01:15:18.828424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.191 [2024-12-06 01:15:18.828495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.191 qpair failed and we were unable to recover it. 00:37:46.191 [2024-12-06 01:15:18.828616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.191 [2024-12-06 01:15:18.828655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.191 qpair failed and we were unable to recover it. 00:37:46.191 [2024-12-06 01:15:18.828808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.191 [2024-12-06 01:15:18.828854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.191 qpair failed and we were unable to recover it. 00:37:46.191 [2024-12-06 01:15:18.829025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.191 [2024-12-06 01:15:18.829060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.191 qpair failed and we were unable to recover it. 00:37:46.191 [2024-12-06 01:15:18.829205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.191 [2024-12-06 01:15:18.829239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.191 qpair failed and we were unable to recover it. 00:37:46.191 [2024-12-06 01:15:18.829391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.191 [2024-12-06 01:15:18.829457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.191 qpair failed and we were unable to recover it. 00:37:46.191 [2024-12-06 01:15:18.829633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.191 [2024-12-06 01:15:18.829671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.191 qpair failed and we were unable to recover it. 00:37:46.191 [2024-12-06 01:15:18.829868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.191 [2024-12-06 01:15:18.829904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.191 qpair failed and we were unable to recover it. 
00:37:46.191 [2024-12-06 01:15:18.830022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.191 [2024-12-06 01:15:18.830061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.191 qpair failed and we were unable to recover it. 00:37:46.191 [2024-12-06 01:15:18.830199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.191 [2024-12-06 01:15:18.830235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.191 qpair failed and we were unable to recover it. 00:37:46.191 [2024-12-06 01:15:18.830346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.191 [2024-12-06 01:15:18.830381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.191 qpair failed and we were unable to recover it. 00:37:46.191 [2024-12-06 01:15:18.830531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.191 [2024-12-06 01:15:18.830566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.191 qpair failed and we were unable to recover it. 00:37:46.191 [2024-12-06 01:15:18.830702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.191 [2024-12-06 01:15:18.830754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.191 qpair failed and we were unable to recover it. 00:37:46.191 [2024-12-06 01:15:18.830903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.191 [2024-12-06 01:15:18.830940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.191 qpair failed and we were unable to recover it. 00:37:46.191 [2024-12-06 01:15:18.831097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.192 [2024-12-06 01:15:18.831145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.192 qpair failed and we were unable to recover it. 00:37:46.192 [2024-12-06 01:15:18.831311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.192 [2024-12-06 01:15:18.831351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.192 qpair failed and we were unable to recover it. 00:37:46.192 [2024-12-06 01:15:18.831518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.192 [2024-12-06 01:15:18.831557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.192 qpair failed and we were unable to recover it. 00:37:46.192 [2024-12-06 01:15:18.831707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.192 [2024-12-06 01:15:18.831745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.192 qpair failed and we were unable to recover it. 
00:37:46.192 [2024-12-06 01:15:18.831911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.192 [2024-12-06 01:15:18.831945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.192 qpair failed and we were unable to recover it. 00:37:46.474 [2024-12-06 01:15:18.832078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.474 [2024-12-06 01:15:18.832111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.474 qpair failed and we were unable to recover it. 00:37:46.474 [2024-12-06 01:15:18.832223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.474 [2024-12-06 01:15:18.832257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.474 qpair failed and we were unable to recover it. 00:37:46.474 [2024-12-06 01:15:18.832402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.474 [2024-12-06 01:15:18.832440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.474 qpair failed and we were unable to recover it. 00:37:46.474 [2024-12-06 01:15:18.832551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.475 [2024-12-06 01:15:18.832588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.475 qpair failed and we were unable to recover it. 00:37:46.475 [2024-12-06 01:15:18.832700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.475 [2024-12-06 01:15:18.832736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.475 qpair failed and we were unable to recover it. 00:37:46.475 [2024-12-06 01:15:18.832947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.475 [2024-12-06 01:15:18.832981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.475 qpair failed and we were unable to recover it. 00:37:46.475 [2024-12-06 01:15:18.833156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.475 [2024-12-06 01:15:18.833192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.475 qpair failed and we were unable to recover it. 00:37:46.475 [2024-12-06 01:15:18.833361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.475 [2024-12-06 01:15:18.833415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.475 qpair failed and we were unable to recover it. 00:37:46.475 [2024-12-06 01:15:18.833583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.475 [2024-12-06 01:15:18.833647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.475 qpair failed and we were unable to recover it. 
00:37:46.475 [2024-12-06 01:15:18.833782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.475 [2024-12-06 01:15:18.833824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.475 qpair failed and we were unable to recover it. 00:37:46.475 [2024-12-06 01:15:18.833947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.475 [2024-12-06 01:15:18.833980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.475 qpair failed and we were unable to recover it. 00:37:46.475 [2024-12-06 01:15:18.834084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.475 [2024-12-06 01:15:18.834117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.475 qpair failed and we were unable to recover it. 00:37:46.475 [2024-12-06 01:15:18.834243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.475 [2024-12-06 01:15:18.834282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.475 qpair failed and we were unable to recover it. 00:37:46.475 [2024-12-06 01:15:18.834408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.475 [2024-12-06 01:15:18.834444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.475 qpair failed and we were unable to recover it. 00:37:46.475 [2024-12-06 01:15:18.834610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.475 [2024-12-06 01:15:18.834647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.475 qpair failed and we were unable to recover it. 00:37:46.475 [2024-12-06 01:15:18.834837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.475 [2024-12-06 01:15:18.834875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.475 qpair failed and we were unable to recover it. 00:37:46.475 [2024-12-06 01:15:18.834993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.475 [2024-12-06 01:15:18.835027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.475 qpair failed and we were unable to recover it. 00:37:46.475 [2024-12-06 01:15:18.835187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.475 [2024-12-06 01:15:18.835220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.475 qpair failed and we were unable to recover it. 00:37:46.475 [2024-12-06 01:15:18.835360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.475 [2024-12-06 01:15:18.835398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.475 qpair failed and we were unable to recover it. 
00:37:46.475 [2024-12-06 01:15:18.835548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.475 [2024-12-06 01:15:18.835599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.475 qpair failed and we were unable to recover it. 00:37:46.475 [2024-12-06 01:15:18.835730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.475 [2024-12-06 01:15:18.835767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.475 qpair failed and we were unable to recover it. 00:37:46.475 [2024-12-06 01:15:18.835948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.475 [2024-12-06 01:15:18.835982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.475 qpair failed and we were unable to recover it. 00:37:46.475 [2024-12-06 01:15:18.836096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.475 [2024-12-06 01:15:18.836129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.475 qpair failed and we were unable to recover it. 00:37:46.475 [2024-12-06 01:15:18.836261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.475 [2024-12-06 01:15:18.836295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.475 qpair failed and we were unable to recover it. 00:37:46.475 [2024-12-06 01:15:18.836456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.475 [2024-12-06 01:15:18.836495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.475 qpair failed and we were unable to recover it. 00:37:46.475 [2024-12-06 01:15:18.836627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.475 [2024-12-06 01:15:18.836660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.475 qpair failed and we were unable to recover it. 00:37:46.475 [2024-12-06 01:15:18.836877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.475 [2024-12-06 01:15:18.836911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.475 qpair failed and we were unable to recover it. 00:37:46.475 [2024-12-06 01:15:18.837010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.475 [2024-12-06 01:15:18.837044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.475 qpair failed and we were unable to recover it. 00:37:46.475 [2024-12-06 01:15:18.837209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.475 [2024-12-06 01:15:18.837242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.475 qpair failed and we were unable to recover it. 
00:37:46.475 [2024-12-06 01:15:18.837388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.475 [2024-12-06 01:15:18.837421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.475 qpair failed and we were unable to recover it. 00:37:46.475 [2024-12-06 01:15:18.837538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.475 [2024-12-06 01:15:18.837575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.475 qpair failed and we were unable to recover it. 00:37:46.475 [2024-12-06 01:15:18.837716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.475 [2024-12-06 01:15:18.837753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.475 qpair failed and we were unable to recover it. 00:37:46.475 [2024-12-06 01:15:18.837913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.475 [2024-12-06 01:15:18.837961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.475 qpair failed and we were unable to recover it. 00:37:46.475 [2024-12-06 01:15:18.838080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.475 [2024-12-06 01:15:18.838117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.475 qpair failed and we were unable to recover it. 00:37:46.475 [2024-12-06 01:15:18.838227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.475 [2024-12-06 01:15:18.838262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.475 qpair failed and we were unable to recover it. 00:37:46.475 [2024-12-06 01:15:18.838410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.475 [2024-12-06 01:15:18.838444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.475 qpair failed and we were unable to recover it. 00:37:46.475 [2024-12-06 01:15:18.838598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.475 [2024-12-06 01:15:18.838635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.475 qpair failed and we were unable to recover it. 00:37:46.475 [2024-12-06 01:15:18.838783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.475 [2024-12-06 01:15:18.838836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.475 qpair failed and we were unable to recover it. 00:37:46.475 [2024-12-06 01:15:18.838996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.475 [2024-12-06 01:15:18.839031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.475 qpair failed and we were unable to recover it. 
00:37:46.475 [2024-12-06 01:15:18.839140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.475 [2024-12-06 01:15:18.839175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.475 qpair failed and we were unable to recover it. 00:37:46.475 [2024-12-06 01:15:18.839316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.476 [2024-12-06 01:15:18.839350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.476 qpair failed and we were unable to recover it. 00:37:46.476 [2024-12-06 01:15:18.839486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.476 [2024-12-06 01:15:18.839521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.476 qpair failed and we were unable to recover it. 00:37:46.476 [2024-12-06 01:15:18.839704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.476 [2024-12-06 01:15:18.839741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.476 qpair failed and we were unable to recover it. 00:37:46.476 [2024-12-06 01:15:18.839875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.476 [2024-12-06 01:15:18.839910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.476 qpair failed and we were unable to recover it. 00:37:46.476 [2024-12-06 01:15:18.840022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.476 [2024-12-06 01:15:18.840056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.476 qpair failed and we were unable to recover it. 00:37:46.476 [2024-12-06 01:15:18.840164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.476 [2024-12-06 01:15:18.840198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.476 qpair failed and we were unable to recover it. 00:37:46.476 [2024-12-06 01:15:18.840333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.476 [2024-12-06 01:15:18.840368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.476 qpair failed and we were unable to recover it. 00:37:46.476 [2024-12-06 01:15:18.840546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.476 [2024-12-06 01:15:18.840580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.476 qpair failed and we were unable to recover it. 00:37:46.476 [2024-12-06 01:15:18.840705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.476 [2024-12-06 01:15:18.840748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.476 qpair failed and we were unable to recover it. 
00:37:46.476 [2024-12-06 01:15:18.840913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.476 [2024-12-06 01:15:18.840962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.476 qpair failed and we were unable to recover it. 00:37:46.476 [2024-12-06 01:15:18.841112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.476 [2024-12-06 01:15:18.841146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.476 qpair failed and we were unable to recover it. 00:37:46.476 [2024-12-06 01:15:18.841294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.476 [2024-12-06 01:15:18.841352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.476 qpair failed and we were unable to recover it. 00:37:46.476 [2024-12-06 01:15:18.841530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.476 [2024-12-06 01:15:18.841567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.476 qpair failed and we were unable to recover it. 00:37:46.476 [2024-12-06 01:15:18.841716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.476 [2024-12-06 01:15:18.841753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.476 qpair failed and we were unable to recover it. 00:37:46.476 [2024-12-06 01:15:18.841911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.476 [2024-12-06 01:15:18.841945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.476 qpair failed and we were unable to recover it. 00:37:46.476 [2024-12-06 01:15:18.842096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.476 [2024-12-06 01:15:18.842146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.476 qpair failed and we were unable to recover it. 00:37:46.476 [2024-12-06 01:15:18.842294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.476 [2024-12-06 01:15:18.842332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.476 qpair failed and we were unable to recover it. 00:37:46.476 [2024-12-06 01:15:18.842455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.476 [2024-12-06 01:15:18.842493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.476 qpair failed and we were unable to recover it. 00:37:46.476 [2024-12-06 01:15:18.842661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.476 [2024-12-06 01:15:18.842699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.476 qpair failed and we were unable to recover it. 
00:37:46.476 [2024-12-06 01:15:18.842875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.476 [2024-12-06 01:15:18.842909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.476 qpair failed and we were unable to recover it. 00:37:46.476 [2024-12-06 01:15:18.843030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.476 [2024-12-06 01:15:18.843064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.476 qpair failed and we were unable to recover it. 00:37:46.476 [2024-12-06 01:15:18.843218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.476 [2024-12-06 01:15:18.843256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.476 qpair failed and we were unable to recover it. 00:37:46.476 [2024-12-06 01:15:18.843421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.476 [2024-12-06 01:15:18.843458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.476 qpair failed and we were unable to recover it. 00:37:46.476 [2024-12-06 01:15:18.843611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.476 [2024-12-06 01:15:18.843647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.476 qpair failed and we were unable to recover it. 00:37:46.476 [2024-12-06 01:15:18.843800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.476 [2024-12-06 01:15:18.843861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.476 qpair failed and we were unable to recover it. 00:37:46.476 [2024-12-06 01:15:18.844011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.476 [2024-12-06 01:15:18.844045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.476 qpair failed and we were unable to recover it. 00:37:46.476 [2024-12-06 01:15:18.844162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.476 [2024-12-06 01:15:18.844206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.476 qpair failed and we were unable to recover it. 00:37:46.476 [2024-12-06 01:15:18.844325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.476 [2024-12-06 01:15:18.844362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.476 qpair failed and we were unable to recover it. 00:37:46.476 [2024-12-06 01:15:18.844510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.476 [2024-12-06 01:15:18.844546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.476 qpair failed and we were unable to recover it. 
00:37:46.476 [2024-12-06 01:15:18.844675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.476 [2024-12-06 01:15:18.844709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.476 qpair failed and we were unable to recover it. 00:37:46.476 [2024-12-06 01:15:18.844835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.476 [2024-12-06 01:15:18.844876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.476 qpair failed and we were unable to recover it. 00:37:46.476 [2024-12-06 01:15:18.845015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.476 [2024-12-06 01:15:18.845049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.476 qpair failed and we were unable to recover it. 00:37:46.476 [2024-12-06 01:15:18.845160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.476 [2024-12-06 01:15:18.845193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.476 qpair failed and we were unable to recover it. 00:37:46.476 [2024-12-06 01:15:18.845320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.476 [2024-12-06 01:15:18.845353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.476 qpair failed and we were unable to recover it. 00:37:46.476 [2024-12-06 01:15:18.845455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.476 [2024-12-06 01:15:18.845488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.476 qpair failed and we were unable to recover it. 00:37:46.476 [2024-12-06 01:15:18.845654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.476 [2024-12-06 01:15:18.845692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.476 qpair failed and we were unable to recover it. 00:37:46.476 [2024-12-06 01:15:18.845838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.476 [2024-12-06 01:15:18.845893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.476 qpair failed and we were unable to recover it. 00:37:46.476 [2024-12-06 01:15:18.846009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.476 [2024-12-06 01:15:18.846043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.477 qpair failed and we were unable to recover it. 00:37:46.477 [2024-12-06 01:15:18.846173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.477 [2024-12-06 01:15:18.846206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.477 qpair failed and we were unable to recover it. 
00:37:46.477 [2024-12-06 01:15:18.846345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.477 [2024-12-06 01:15:18.846379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.477 qpair failed and we were unable to recover it. 00:37:46.477 [2024-12-06 01:15:18.846517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.477 [2024-12-06 01:15:18.846550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.477 qpair failed and we were unable to recover it. 00:37:46.477 [2024-12-06 01:15:18.846734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.477 [2024-12-06 01:15:18.846771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.477 qpair failed and we were unable to recover it. 00:37:46.477 [2024-12-06 01:15:18.846992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.477 [2024-12-06 01:15:18.847041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.477 qpair failed and we were unable to recover it. 00:37:46.477 [2024-12-06 01:15:18.847203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.477 [2024-12-06 01:15:18.847241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.477 qpair failed and we were unable to recover it. 00:37:46.477 [2024-12-06 01:15:18.847381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.477 [2024-12-06 01:15:18.847427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.477 qpair failed and we were unable to recover it. 00:37:46.477 [2024-12-06 01:15:18.847539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.477 [2024-12-06 01:15:18.847575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.477 qpair failed and we were unable to recover it. 00:37:46.477 [2024-12-06 01:15:18.847736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.477 [2024-12-06 01:15:18.847771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.477 qpair failed and we were unable to recover it. 00:37:46.477 [2024-12-06 01:15:18.847923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.477 [2024-12-06 01:15:18.847958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.477 qpair failed and we were unable to recover it. 00:37:46.477 [2024-12-06 01:15:18.848126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.477 [2024-12-06 01:15:18.848165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.477 qpair failed and we were unable to recover it. 
00:37:46.477 [2024-12-06 01:15:18.848327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.477 [2024-12-06 01:15:18.848361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.477 qpair failed and we were unable to recover it. 00:37:46.477 [2024-12-06 01:15:18.848498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.477 [2024-12-06 01:15:18.848532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.477 qpair failed and we were unable to recover it. 00:37:46.477 [2024-12-06 01:15:18.848691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.477 [2024-12-06 01:15:18.848736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.477 qpair failed and we were unable to recover it. 00:37:46.477 [2024-12-06 01:15:18.848881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.477 [2024-12-06 01:15:18.848915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.477 qpair failed and we were unable to recover it. 00:37:46.477 [2024-12-06 01:15:18.849055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.477 [2024-12-06 01:15:18.849094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.477 qpair failed and we were unable to recover it. 00:37:46.477 [2024-12-06 01:15:18.849210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.477 [2024-12-06 01:15:18.849246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.477 qpair failed and we were unable to recover it. 00:37:46.477 [2024-12-06 01:15:18.849346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.477 [2024-12-06 01:15:18.849381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.477 qpair failed and we were unable to recover it. 00:37:46.477 [2024-12-06 01:15:18.849550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.477 [2024-12-06 01:15:18.849584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.477 qpair failed and we were unable to recover it. 00:37:46.477 [2024-12-06 01:15:18.849725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.477 [2024-12-06 01:15:18.849759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.477 qpair failed and we were unable to recover it. 00:37:46.477 [2024-12-06 01:15:18.849916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.477 [2024-12-06 01:15:18.849950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.477 qpair failed and we were unable to recover it. 
00:37:46.477 [2024-12-06 01:15:18.850067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.477 [2024-12-06 01:15:18.850102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.477 qpair failed and we were unable to recover it. 00:37:46.477 [2024-12-06 01:15:18.850243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.477 [2024-12-06 01:15:18.850277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.477 qpair failed and we were unable to recover it. 00:37:46.477 [2024-12-06 01:15:18.850411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.477 [2024-12-06 01:15:18.850447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.477 qpair failed and we were unable to recover it. 00:37:46.477 [2024-12-06 01:15:18.850582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.477 [2024-12-06 01:15:18.850623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.477 qpair failed and we were unable to recover it. 00:37:46.477 [2024-12-06 01:15:18.850758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.477 [2024-12-06 01:15:18.850792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.477 qpair failed and we were unable to recover it. 00:37:46.477 [2024-12-06 01:15:18.850972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.477 [2024-12-06 01:15:18.851021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.477 qpair failed and we were unable to recover it. 00:37:46.477 [2024-12-06 01:15:18.851171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.477 [2024-12-06 01:15:18.851208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.477 qpair failed and we were unable to recover it. 00:37:46.477 [2024-12-06 01:15:18.851370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.477 [2024-12-06 01:15:18.851406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.477 qpair failed and we were unable to recover it. 00:37:46.477 [2024-12-06 01:15:18.851521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.477 [2024-12-06 01:15:18.851556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.477 qpair failed and we were unable to recover it. 00:37:46.477 [2024-12-06 01:15:18.851693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.477 [2024-12-06 01:15:18.851728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.477 qpair failed and we were unable to recover it. 
00:37:46.477 [2024-12-06 01:15:18.851905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.477 [2024-12-06 01:15:18.851954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.477 qpair failed and we were unable to recover it. 00:37:46.477 [2024-12-06 01:15:18.852106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.477 [2024-12-06 01:15:18.852142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.477 qpair failed and we were unable to recover it. 00:37:46.477 [2024-12-06 01:15:18.852305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.477 [2024-12-06 01:15:18.852339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.477 qpair failed and we were unable to recover it. 00:37:46.477 [2024-12-06 01:15:18.852442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.477 [2024-12-06 01:15:18.852476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.477 qpair failed and we were unable to recover it. 00:37:46.477 [2024-12-06 01:15:18.852614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.477 [2024-12-06 01:15:18.852648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.477 qpair failed and we were unable to recover it. 00:37:46.477 [2024-12-06 01:15:18.852781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.477 [2024-12-06 01:15:18.852824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.477 qpair failed and we were unable to recover it. 00:37:46.477 [2024-12-06 01:15:18.853002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.478 [2024-12-06 01:15:18.853036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.478 qpair failed and we were unable to recover it. 00:37:46.478 [2024-12-06 01:15:18.853160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.478 [2024-12-06 01:15:18.853208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.478 qpair failed and we were unable to recover it. 00:37:46.478 [2024-12-06 01:15:18.853353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.478 [2024-12-06 01:15:18.853388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.478 qpair failed and we were unable to recover it. 00:37:46.478 [2024-12-06 01:15:18.853527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.478 [2024-12-06 01:15:18.853564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.478 qpair failed and we were unable to recover it. 
00:37:46.478 [2024-12-06 01:15:18.853700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.478 [2024-12-06 01:15:18.853735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.478 qpair failed and we were unable to recover it. 00:37:46.478 [2024-12-06 01:15:18.853878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.478 [2024-12-06 01:15:18.853914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.478 qpair failed and we were unable to recover it. 00:37:46.478 [2024-12-06 01:15:18.854023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.478 [2024-12-06 01:15:18.854065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.478 qpair failed and we were unable to recover it. 00:37:46.478 [2024-12-06 01:15:18.854196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.478 [2024-12-06 01:15:18.854231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.478 qpair failed and we were unable to recover it. 00:37:46.478 [2024-12-06 01:15:18.854387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.478 [2024-12-06 01:15:18.854421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.478 qpair failed and we were unable to recover it. 00:37:46.478 [2024-12-06 01:15:18.854582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.478 [2024-12-06 01:15:18.854616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.478 qpair failed and we were unable to recover it. 00:37:46.478 [2024-12-06 01:15:18.854759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.478 [2024-12-06 01:15:18.854796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.478 qpair failed and we were unable to recover it. 00:37:46.478 [2024-12-06 01:15:18.854982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.478 [2024-12-06 01:15:18.855017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.478 qpair failed and we were unable to recover it. 00:37:46.478 [2024-12-06 01:15:18.855135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.478 [2024-12-06 01:15:18.855169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.478 qpair failed and we were unable to recover it. 00:37:46.478 [2024-12-06 01:15:18.855307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.478 [2024-12-06 01:15:18.855347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.478 qpair failed and we were unable to recover it. 
00:37:46.478 [2024-12-06 01:15:18.855484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.478 [2024-12-06 01:15:18.855519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.478 qpair failed and we were unable to recover it. 00:37:46.478 [2024-12-06 01:15:18.855655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.478 [2024-12-06 01:15:18.855689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.478 qpair failed and we were unable to recover it. 00:37:46.478 [2024-12-06 01:15:18.855833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.478 [2024-12-06 01:15:18.855894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.478 qpair failed and we were unable to recover it. 00:37:46.478 [2024-12-06 01:15:18.856039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.478 [2024-12-06 01:15:18.856086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.478 qpair failed and we were unable to recover it. 00:37:46.478 [2024-12-06 01:15:18.856219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.478 [2024-12-06 01:15:18.856255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.478 qpair failed and we were unable to recover it. 00:37:46.478 [2024-12-06 01:15:18.856412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.478 [2024-12-06 01:15:18.856447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.478 qpair failed and we were unable to recover it. 00:37:46.478 [2024-12-06 01:15:18.856606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.478 [2024-12-06 01:15:18.856641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.478 qpair failed and we were unable to recover it. 00:37:46.478 [2024-12-06 01:15:18.856820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.478 [2024-12-06 01:15:18.856863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.478 qpair failed and we were unable to recover it. 00:37:46.478 [2024-12-06 01:15:18.856998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.478 [2024-12-06 01:15:18.857034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.478 qpair failed and we were unable to recover it. 00:37:46.478 [2024-12-06 01:15:18.857213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.478 [2024-12-06 01:15:18.857258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.478 qpair failed and we were unable to recover it. 
00:37:46.478 [2024-12-06 01:15:18.857440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.478 [2024-12-06 01:15:18.857499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.478 qpair failed and we were unable to recover it. 00:37:46.478 [2024-12-06 01:15:18.857638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.478 [2024-12-06 01:15:18.857676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.478 qpair failed and we were unable to recover it. 00:37:46.478 [2024-12-06 01:15:18.857788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.478 [2024-12-06 01:15:18.857837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.478 qpair failed and we were unable to recover it. 00:37:46.478 [2024-12-06 01:15:18.858044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.478 [2024-12-06 01:15:18.858092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.478 qpair failed and we were unable to recover it. 00:37:46.478 [2024-12-06 01:15:18.858252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.478 [2024-12-06 01:15:18.858293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.478 qpair failed and we were unable to recover it. 00:37:46.478 [2024-12-06 01:15:18.858448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.478 [2024-12-06 01:15:18.858486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.478 qpair failed and we were unable to recover it. 00:37:46.478 [2024-12-06 01:15:18.858697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.478 [2024-12-06 01:15:18.858755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.478 qpair failed and we were unable to recover it. 00:37:46.478 [2024-12-06 01:15:18.858933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.478 [2024-12-06 01:15:18.858967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.478 qpair failed and we were unable to recover it. 00:37:46.478 [2024-12-06 01:15:18.859116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.478 [2024-12-06 01:15:18.859165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.478 qpair failed and we were unable to recover it. 00:37:46.478 [2024-12-06 01:15:18.859284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.478 [2024-12-06 01:15:18.859321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.478 qpair failed and we were unable to recover it. 
00:37:46.478 [2024-12-06 01:15:18.859466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.478 [2024-12-06 01:15:18.859560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.478 qpair failed and we were unable to recover it. 00:37:46.478 [2024-12-06 01:15:18.859708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.478 [2024-12-06 01:15:18.859746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.478 qpair failed and we were unable to recover it. 00:37:46.478 [2024-12-06 01:15:18.859938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.478 [2024-12-06 01:15:18.859987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.478 qpair failed and we were unable to recover it. 00:37:46.478 [2024-12-06 01:15:18.860132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.478 [2024-12-06 01:15:18.860169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.478 qpair failed and we were unable to recover it. 00:37:46.479 [2024-12-06 01:15:18.860312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.479 [2024-12-06 01:15:18.860347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.479 qpair failed and we were unable to recover it. 00:37:46.479 [2024-12-06 01:15:18.860442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.479 [2024-12-06 01:15:18.860477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.479 qpair failed and we were unable to recover it. 00:37:46.479 [2024-12-06 01:15:18.860670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.479 [2024-12-06 01:15:18.860708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.479 qpair failed and we were unable to recover it. 00:37:46.479 [2024-12-06 01:15:18.860894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.479 [2024-12-06 01:15:18.860929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.479 qpair failed and we were unable to recover it. 00:37:46.479 [2024-12-06 01:15:18.861095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.479 [2024-12-06 01:15:18.861130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.479 qpair failed and we were unable to recover it. 00:37:46.479 [2024-12-06 01:15:18.861270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.479 [2024-12-06 01:15:18.861305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.479 qpair failed and we were unable to recover it. 
00:37:46.479 [2024-12-06 01:15:18.861443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.479 [2024-12-06 01:15:18.861477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.479 qpair failed and we were unable to recover it. 00:37:46.479 [2024-12-06 01:15:18.861639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.479 [2024-12-06 01:15:18.861674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.479 qpair failed and we were unable to recover it. 00:37:46.479 [2024-12-06 01:15:18.861810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.479 [2024-12-06 01:15:18.861858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.479 qpair failed and we were unable to recover it. 00:37:46.479 [2024-12-06 01:15:18.861986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.479 [2024-12-06 01:15:18.862020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.479 qpair failed and we were unable to recover it. 00:37:46.479 [2024-12-06 01:15:18.862157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.479 [2024-12-06 01:15:18.862192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.479 qpair failed and we were unable to recover it. 00:37:46.479 [2024-12-06 01:15:18.862355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.479 [2024-12-06 01:15:18.862394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.479 qpair failed and we were unable to recover it. 00:37:46.479 [2024-12-06 01:15:18.862546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.479 [2024-12-06 01:15:18.862583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.479 qpair failed and we were unable to recover it. 00:37:46.479 [2024-12-06 01:15:18.862727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.479 [2024-12-06 01:15:18.862765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.479 qpair failed and we were unable to recover it. 00:37:46.479 [2024-12-06 01:15:18.862928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.479 [2024-12-06 01:15:18.862963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.479 qpair failed and we were unable to recover it. 00:37:46.479 [2024-12-06 01:15:18.863129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.479 [2024-12-06 01:15:18.863167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.479 qpair failed and we were unable to recover it. 
00:37:46.479 [2024-12-06 01:15:18.863282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.479 [2024-12-06 01:15:18.863317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.479 qpair failed and we were unable to recover it. 00:37:46.479 [2024-12-06 01:15:18.863447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.479 [2024-12-06 01:15:18.863485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.479 qpair failed and we were unable to recover it. 00:37:46.479 [2024-12-06 01:15:18.863633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.479 [2024-12-06 01:15:18.863671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.479 qpair failed and we were unable to recover it. 00:37:46.479 [2024-12-06 01:15:18.863783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.479 [2024-12-06 01:15:18.863836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.479 qpair failed and we were unable to recover it. 00:37:46.479 [2024-12-06 01:15:18.864000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.479 [2024-12-06 01:15:18.864035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.479 qpair failed and we were unable to recover it. 00:37:46.479 [2024-12-06 01:15:18.864150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.479 [2024-12-06 01:15:18.864198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.479 qpair failed and we were unable to recover it. 00:37:46.479 [2024-12-06 01:15:18.864364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.479 [2024-12-06 01:15:18.864400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.479 qpair failed and we were unable to recover it. 00:37:46.479 [2024-12-06 01:15:18.864560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.479 [2024-12-06 01:15:18.864595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.479 qpair failed and we were unable to recover it. 00:37:46.479 [2024-12-06 01:15:18.864734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.479 [2024-12-06 01:15:18.864769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.479 qpair failed and we were unable to recover it. 00:37:46.479 [2024-12-06 01:15:18.864943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.479 [2024-12-06 01:15:18.864977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.479 qpair failed and we were unable to recover it. 
00:37:46.479 [2024-12-06 01:15:18.865093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.479 [2024-12-06 01:15:18.865128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.479 qpair failed and we were unable to recover it. 00:37:46.479 [2024-12-06 01:15:18.865268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.479 [2024-12-06 01:15:18.865309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.479 qpair failed and we were unable to recover it. 00:37:46.479 [2024-12-06 01:15:18.865420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.479 [2024-12-06 01:15:18.865455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.479 qpair failed and we were unable to recover it. 00:37:46.479 [2024-12-06 01:15:18.865594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.479 [2024-12-06 01:15:18.865628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.479 qpair failed and we were unable to recover it. 00:37:46.479 [2024-12-06 01:15:18.865788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.479 [2024-12-06 01:15:18.865830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.479 qpair failed and we were unable to recover it. 00:37:46.479 [2024-12-06 01:15:18.865960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.479 [2024-12-06 01:15:18.866008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.479 qpair failed and we were unable to recover it. 00:37:46.479 [2024-12-06 01:15:18.866167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.480 [2024-12-06 01:15:18.866203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.480 qpair failed and we were unable to recover it. 00:37:46.480 [2024-12-06 01:15:18.866368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.480 [2024-12-06 01:15:18.866402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.480 qpair failed and we were unable to recover it. 00:37:46.480 [2024-12-06 01:15:18.866506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.480 [2024-12-06 01:15:18.866539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.480 qpair failed and we were unable to recover it. 00:37:46.480 [2024-12-06 01:15:18.866644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.480 [2024-12-06 01:15:18.866678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.480 qpair failed and we were unable to recover it. 
00:37:46.480 [2024-12-06 01:15:18.866851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.480 [2024-12-06 01:15:18.866886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.480 qpair failed and we were unable to recover it. 00:37:46.480 [2024-12-06 01:15:18.867045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.480 [2024-12-06 01:15:18.867078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.480 qpair failed and we were unable to recover it. 00:37:46.480 [2024-12-06 01:15:18.867241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.480 [2024-12-06 01:15:18.867275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.480 qpair failed and we were unable to recover it. 00:37:46.480 [2024-12-06 01:15:18.867411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.480 [2024-12-06 01:15:18.867444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.480 qpair failed and we were unable to recover it. 00:37:46.480 [2024-12-06 01:15:18.867580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.480 [2024-12-06 01:15:18.867613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.480 qpair failed and we were unable to recover it. 00:37:46.480 [2024-12-06 01:15:18.867753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.480 [2024-12-06 01:15:18.867788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.480 qpair failed and we were unable to recover it. 00:37:46.480 [2024-12-06 01:15:18.867934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.480 [2024-12-06 01:15:18.867983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.480 qpair failed and we were unable to recover it. 00:37:46.480 [2024-12-06 01:15:18.868167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.480 [2024-12-06 01:15:18.868216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.480 qpair failed and we were unable to recover it. 00:37:46.480 [2024-12-06 01:15:18.868361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.480 [2024-12-06 01:15:18.868397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.480 qpair failed and we were unable to recover it. 00:37:46.480 [2024-12-06 01:15:18.868535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.480 [2024-12-06 01:15:18.868570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.480 qpair failed and we were unable to recover it. 
00:37:46.480 [2024-12-06 01:15:18.868709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.480 [2024-12-06 01:15:18.868744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.480 qpair failed and we were unable to recover it. 00:37:46.480 [2024-12-06 01:15:18.868883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.480 [2024-12-06 01:15:18.868918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.480 qpair failed and we were unable to recover it. 00:37:46.480 [2024-12-06 01:15:18.869031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.480 [2024-12-06 01:15:18.869066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.480 qpair failed and we were unable to recover it. 00:37:46.480 [2024-12-06 01:15:18.869202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.480 [2024-12-06 01:15:18.869235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.480 qpair failed and we were unable to recover it. 00:37:46.480 [2024-12-06 01:15:18.869371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.480 [2024-12-06 01:15:18.869404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.480 qpair failed and we were unable to recover it. 00:37:46.480 [2024-12-06 01:15:18.869541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.480 [2024-12-06 01:15:18.869575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.480 qpair failed and we were unable to recover it. 00:37:46.480 [2024-12-06 01:15:18.869726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.480 [2024-12-06 01:15:18.869775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.480 qpair failed and we were unable to recover it. 00:37:46.480 [2024-12-06 01:15:18.869928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.480 [2024-12-06 01:15:18.869966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.480 qpair failed and we were unable to recover it. 00:37:46.480 [2024-12-06 01:15:18.870080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.480 [2024-12-06 01:15:18.870115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.480 qpair failed and we were unable to recover it. 00:37:46.480 [2024-12-06 01:15:18.870222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.480 [2024-12-06 01:15:18.870263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.480 qpair failed and we were unable to recover it. 
00:37:46.480 [2024-12-06 01:15:18.870426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.480 [2024-12-06 01:15:18.870461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.480 qpair failed and we were unable to recover it. 00:37:46.480 [2024-12-06 01:15:18.870575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.480 [2024-12-06 01:15:18.870611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.480 qpair failed and we were unable to recover it. 00:37:46.480 [2024-12-06 01:15:18.870738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.480 [2024-12-06 01:15:18.870785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.480 qpair failed and we were unable to recover it. 00:37:46.480 [2024-12-06 01:15:18.870911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.480 [2024-12-06 01:15:18.870947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.480 qpair failed and we were unable to recover it. 00:37:46.480 [2024-12-06 01:15:18.871086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.480 [2024-12-06 01:15:18.871121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.480 qpair failed and we were unable to recover it. 00:37:46.480 [2024-12-06 01:15:18.871256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.480 [2024-12-06 01:15:18.871289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.480 qpair failed and we were unable to recover it. 00:37:46.480 [2024-12-06 01:15:18.871392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.480 [2024-12-06 01:15:18.871425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.480 qpair failed and we were unable to recover it. 00:37:46.480 [2024-12-06 01:15:18.871567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.480 [2024-12-06 01:15:18.871603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.480 qpair failed and we were unable to recover it. 00:37:46.480 [2024-12-06 01:15:18.871712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.480 [2024-12-06 01:15:18.871749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.480 qpair failed and we were unable to recover it. 00:37:46.480 [2024-12-06 01:15:18.871865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.480 [2024-12-06 01:15:18.871900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.480 qpair failed and we were unable to recover it. 
00:37:46.480 [2024-12-06 01:15:18.872065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.480 [2024-12-06 01:15:18.872099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.480 qpair failed and we were unable to recover it. 00:37:46.480 [2024-12-06 01:15:18.872239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.480 [2024-12-06 01:15:18.872273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.480 qpair failed and we were unable to recover it. 00:37:46.480 [2024-12-06 01:15:18.872434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.480 [2024-12-06 01:15:18.872468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.480 qpair failed and we were unable to recover it. 00:37:46.480 [2024-12-06 01:15:18.872611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.480 [2024-12-06 01:15:18.872646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.480 qpair failed and we were unable to recover it. 00:37:46.480 [2024-12-06 01:15:18.872799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.481 [2024-12-06 01:15:18.872859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.481 qpair failed and we were unable to recover it. 00:37:46.481 [2024-12-06 01:15:18.872983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.481 [2024-12-06 01:15:18.873022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.481 qpair failed and we were unable to recover it. 00:37:46.481 [2024-12-06 01:15:18.873162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.481 [2024-12-06 01:15:18.873197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.481 qpair failed and we were unable to recover it. 00:37:46.481 [2024-12-06 01:15:18.873339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.481 [2024-12-06 01:15:18.873375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.481 qpair failed and we were unable to recover it. 00:37:46.481 [2024-12-06 01:15:18.873513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.481 [2024-12-06 01:15:18.873549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.481 qpair failed and we were unable to recover it. 00:37:46.481 [2024-12-06 01:15:18.873692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.481 [2024-12-06 01:15:18.873727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.481 qpair failed and we were unable to recover it. 
00:37:46.481 [2024-12-06 01:15:18.873863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.481 [2024-12-06 01:15:18.873898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.481 qpair failed and we were unable to recover it. 00:37:46.481 [2024-12-06 01:15:18.874032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.481 [2024-12-06 01:15:18.874067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.481 qpair failed and we were unable to recover it. 00:37:46.481 [2024-12-06 01:15:18.874177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.481 [2024-12-06 01:15:18.874214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.481 qpair failed and we were unable to recover it. 00:37:46.481 [2024-12-06 01:15:18.874402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.481 [2024-12-06 01:15:18.874449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.481 qpair failed and we were unable to recover it. 00:37:46.481 [2024-12-06 01:15:18.874572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.481 [2024-12-06 01:15:18.874609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.481 qpair failed and we were unable to recover it. 00:37:46.481 [2024-12-06 01:15:18.874778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.481 [2024-12-06 01:15:18.874813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.481 qpair failed and we were unable to recover it. 00:37:46.481 [2024-12-06 01:15:18.874946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.481 [2024-12-06 01:15:18.874981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.481 qpair failed and we were unable to recover it. 00:37:46.481 [2024-12-06 01:15:18.875100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.481 [2024-12-06 01:15:18.875138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.481 qpair failed and we were unable to recover it. 00:37:46.481 [2024-12-06 01:15:18.875272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.481 [2024-12-06 01:15:18.875306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.481 qpair failed and we were unable to recover it. 00:37:46.481 [2024-12-06 01:15:18.875449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.481 [2024-12-06 01:15:18.875485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.481 qpair failed and we were unable to recover it. 
00:37:46.481 [2024-12-06 01:15:18.875619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.481 [2024-12-06 01:15:18.875653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.481 qpair failed and we were unable to recover it. 00:37:46.481 [2024-12-06 01:15:18.875803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.481 [2024-12-06 01:15:18.875861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.481 qpair failed and we were unable to recover it. 00:37:46.481 [2024-12-06 01:15:18.876044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.481 [2024-12-06 01:15:18.876082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.481 qpair failed and we were unable to recover it. 00:37:46.481 [2024-12-06 01:15:18.876197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.481 [2024-12-06 01:15:18.876233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.481 qpair failed and we were unable to recover it. 00:37:46.481 [2024-12-06 01:15:18.876362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.481 [2024-12-06 01:15:18.876396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.481 qpair failed and we were unable to recover it. 00:37:46.481 [2024-12-06 01:15:18.876530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.481 [2024-12-06 01:15:18.876564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.481 qpair failed and we were unable to recover it. 00:37:46.481 [2024-12-06 01:15:18.876716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.481 [2024-12-06 01:15:18.876755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.481 qpair failed and we were unable to recover it. 00:37:46.481 [2024-12-06 01:15:18.876894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.481 [2024-12-06 01:15:18.876930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.481 qpair failed and we were unable to recover it. 00:37:46.481 [2024-12-06 01:15:18.877109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.481 [2024-12-06 01:15:18.877157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.481 qpair failed and we were unable to recover it. 00:37:46.481 [2024-12-06 01:15:18.877276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.481 [2024-12-06 01:15:18.877319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.481 qpair failed and we were unable to recover it. 
00:37:46.481 [2024-12-06 01:15:18.877483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.481 [2024-12-06 01:15:18.877517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.481 qpair failed and we were unable to recover it. 00:37:46.481 [2024-12-06 01:15:18.877625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.481 [2024-12-06 01:15:18.877659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.481 qpair failed and we were unable to recover it. 00:37:46.481 [2024-12-06 01:15:18.877769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.481 [2024-12-06 01:15:18.877803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.481 qpair failed and we were unable to recover it. 00:37:46.481 [2024-12-06 01:15:18.877957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.481 [2024-12-06 01:15:18.878005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.481 qpair failed and we were unable to recover it. 00:37:46.481 [2024-12-06 01:15:18.878147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.481 [2024-12-06 01:15:18.878182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.481 qpair failed and we were unable to recover it. 00:37:46.481 [2024-12-06 01:15:18.878344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.481 [2024-12-06 01:15:18.878378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.481 qpair failed and we were unable to recover it. 00:37:46.481 [2024-12-06 01:15:18.878538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.481 [2024-12-06 01:15:18.878572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.481 qpair failed and we were unable to recover it. 00:37:46.481 [2024-12-06 01:15:18.878706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.481 [2024-12-06 01:15:18.878740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.481 qpair failed and we were unable to recover it. 00:37:46.481 [2024-12-06 01:15:18.878915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.481 [2024-12-06 01:15:18.878951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.481 qpair failed and we were unable to recover it. 00:37:46.481 [2024-12-06 01:15:18.879063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.481 [2024-12-06 01:15:18.879098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.481 qpair failed and we were unable to recover it. 
00:37:46.481 [2024-12-06 01:15:18.879232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.481 [2024-12-06 01:15:18.879266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.481 qpair failed and we were unable to recover it. 00:37:46.481 [2024-12-06 01:15:18.879418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.481 [2024-12-06 01:15:18.879451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.481 qpair failed and we were unable to recover it. 00:37:46.481 [2024-12-06 01:15:18.879610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.482 [2024-12-06 01:15:18.879643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.482 qpair failed and we were unable to recover it. 00:37:46.482 [2024-12-06 01:15:18.879754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.482 [2024-12-06 01:15:18.879788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.482 qpair failed and we were unable to recover it. 00:37:46.482 [2024-12-06 01:15:18.879912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.482 [2024-12-06 01:15:18.879957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.482 qpair failed and we were unable to recover it. 00:37:46.482 [2024-12-06 01:15:18.880147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.482 [2024-12-06 01:15:18.880193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.482 qpair failed and we were unable to recover it. 00:37:46.482 [2024-12-06 01:15:18.880316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.482 [2024-12-06 01:15:18.880354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.482 qpair failed and we were unable to recover it. 00:37:46.482 [2024-12-06 01:15:18.880532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.482 [2024-12-06 01:15:18.880593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.482 qpair failed and we were unable to recover it. 00:37:46.482 [2024-12-06 01:15:18.880746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.482 [2024-12-06 01:15:18.880780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.482 qpair failed and we were unable to recover it. 00:37:46.482 [2024-12-06 01:15:18.880977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.482 [2024-12-06 01:15:18.881012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.482 qpair failed and we were unable to recover it. 
00:37:46.482 [2024-12-06 01:15:18.881165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.482 [2024-12-06 01:15:18.881200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.482 qpair failed and we were unable to recover it. 00:37:46.482 [2024-12-06 01:15:18.881406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.482 [2024-12-06 01:15:18.881470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.482 qpair failed and we were unable to recover it. 00:37:46.482 [2024-12-06 01:15:18.881660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.482 [2024-12-06 01:15:18.881719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.482 qpair failed and we were unable to recover it. 00:37:46.482 [2024-12-06 01:15:18.881876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.482 [2024-12-06 01:15:18.881911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.482 qpair failed and we were unable to recover it. 00:37:46.482 [2024-12-06 01:15:18.882049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.482 [2024-12-06 01:15:18.882083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.482 qpair failed and we were unable to recover it. 00:37:46.482 [2024-12-06 01:15:18.882220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.482 [2024-12-06 01:15:18.882254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.482 qpair failed and we were unable to recover it. 00:37:46.482 [2024-12-06 01:15:18.882428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.482 [2024-12-06 01:15:18.882462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.482 qpair failed and we were unable to recover it. 00:37:46.482 [2024-12-06 01:15:18.882606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.482 [2024-12-06 01:15:18.882643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.482 qpair failed and we were unable to recover it. 00:37:46.482 [2024-12-06 01:15:18.882794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.482 [2024-12-06 01:15:18.882838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.482 qpair failed and we were unable to recover it. 00:37:46.482 [2024-12-06 01:15:18.882969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.482 [2024-12-06 01:15:18.883003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.482 qpair failed and we were unable to recover it. 
00:37:46.482 [2024-12-06 01:15:18.883136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.482 [2024-12-06 01:15:18.883170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.482 qpair failed and we were unable to recover it. 00:37:46.482 [2024-12-06 01:15:18.883307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.482 [2024-12-06 01:15:18.883340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.482 qpair failed and we were unable to recover it. 00:37:46.482 [2024-12-06 01:15:18.883472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.482 [2024-12-06 01:15:18.883505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.482 qpair failed and we were unable to recover it. 00:37:46.482 [2024-12-06 01:15:18.883666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.482 [2024-12-06 01:15:18.883704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.482 qpair failed and we were unable to recover it. 00:37:46.482 [2024-12-06 01:15:18.883886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.482 [2024-12-06 01:15:18.883920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.482 qpair failed and we were unable to recover it. 00:37:46.482 [2024-12-06 01:15:18.884046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.482 [2024-12-06 01:15:18.884079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.482 qpair failed and we were unable to recover it. 00:37:46.482 [2024-12-06 01:15:18.884236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.482 [2024-12-06 01:15:18.884269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.482 qpair failed and we were unable to recover it. 00:37:46.482 [2024-12-06 01:15:18.884409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.482 [2024-12-06 01:15:18.884443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.482 qpair failed and we were unable to recover it. 00:37:46.482 [2024-12-06 01:15:18.884576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.482 [2024-12-06 01:15:18.884609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.482 qpair failed and we were unable to recover it. 00:37:46.482 [2024-12-06 01:15:18.884764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.482 [2024-12-06 01:15:18.884806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.482 qpair failed and we were unable to recover it. 
00:37:46.482 [2024-12-06 01:15:18.884958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.482 [2024-12-06 01:15:18.884992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.482 qpair failed and we were unable to recover it. 00:37:46.482 [2024-12-06 01:15:18.885124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.482 [2024-12-06 01:15:18.885158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.482 qpair failed and we were unable to recover it. 00:37:46.482 [2024-12-06 01:15:18.885292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.482 [2024-12-06 01:15:18.885327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.482 qpair failed and we were unable to recover it. 00:37:46.482 [2024-12-06 01:15:18.885491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.482 [2024-12-06 01:15:18.885524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.482 qpair failed and we were unable to recover it. 00:37:46.482 [2024-12-06 01:15:18.885651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.482 [2024-12-06 01:15:18.885685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.482 qpair failed and we were unable to recover it. 00:37:46.482 [2024-12-06 01:15:18.885811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.482 [2024-12-06 01:15:18.885872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.482 qpair failed and we were unable to recover it. 00:37:46.482 [2024-12-06 01:15:18.886019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.482 [2024-12-06 01:15:18.886053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.482 qpair failed and we were unable to recover it. 00:37:46.482 [2024-12-06 01:15:18.886194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.482 [2024-12-06 01:15:18.886227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.482 qpair failed and we were unable to recover it. 00:37:46.482 [2024-12-06 01:15:18.886374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.482 [2024-12-06 01:15:18.886411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.482 qpair failed and we were unable to recover it. 00:37:46.482 [2024-12-06 01:15:18.886537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.483 [2024-12-06 01:15:18.886573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.483 qpair failed and we were unable to recover it. 
00:37:46.483 [2024-12-06 01:15:18.886719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.483 [2024-12-06 01:15:18.886756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.483 qpair failed and we were unable to recover it. 00:37:46.483 [2024-12-06 01:15:18.886888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.483 [2024-12-06 01:15:18.886922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.483 qpair failed and we were unable to recover it. 00:37:46.483 [2024-12-06 01:15:18.887083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.483 [2024-12-06 01:15:18.887117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.483 qpair failed and we were unable to recover it. 00:37:46.483 [2024-12-06 01:15:18.887270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.483 [2024-12-06 01:15:18.887304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.483 qpair failed and we were unable to recover it. 00:37:46.483 [2024-12-06 01:15:18.887450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.483 [2024-12-06 01:15:18.887487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.483 qpair failed and we were unable to recover it. 00:37:46.483 [2024-12-06 01:15:18.887607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.483 [2024-12-06 01:15:18.887645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.483 qpair failed and we were unable to recover it. 00:37:46.483 [2024-12-06 01:15:18.887832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.483 [2024-12-06 01:15:18.887884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.483 qpair failed and we were unable to recover it. 00:37:46.483 [2024-12-06 01:15:18.888026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.483 [2024-12-06 01:15:18.888059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.483 qpair failed and we were unable to recover it. 00:37:46.483 [2024-12-06 01:15:18.888219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.483 [2024-12-06 01:15:18.888253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.483 qpair failed and we were unable to recover it. 00:37:46.483 [2024-12-06 01:15:18.888416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.483 [2024-12-06 01:15:18.888449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.483 qpair failed and we were unable to recover it. 
00:37:46.483 [2024-12-06 01:15:18.888607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.483 [2024-12-06 01:15:18.888640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.483 qpair failed and we were unable to recover it. 00:37:46.483 [2024-12-06 01:15:18.888753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.483 [2024-12-06 01:15:18.888787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.483 qpair failed and we were unable to recover it. 00:37:46.483 [2024-12-06 01:15:18.888972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.483 [2024-12-06 01:15:18.889020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.483 qpair failed and we were unable to recover it. 00:37:46.483 [2024-12-06 01:15:18.889129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.483 [2024-12-06 01:15:18.889167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.483 qpair failed and we were unable to recover it. 00:37:46.483 [2024-12-06 01:15:18.889308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.483 [2024-12-06 01:15:18.889343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.483 qpair failed and we were unable to recover it. 00:37:46.483 [2024-12-06 01:15:18.889506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.483 [2024-12-06 01:15:18.889541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.483 qpair failed and we were unable to recover it. 00:37:46.483 [2024-12-06 01:15:18.889706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.483 [2024-12-06 01:15:18.889767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.483 qpair failed and we were unable to recover it. 00:37:46.483 [2024-12-06 01:15:18.889965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.483 [2024-12-06 01:15:18.890007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.483 qpair failed and we were unable to recover it. 00:37:46.483 [2024-12-06 01:15:18.890165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.483 [2024-12-06 01:15:18.890200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.483 qpair failed and we were unable to recover it. 00:37:46.483 [2024-12-06 01:15:18.890335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.483 [2024-12-06 01:15:18.890368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.483 qpair failed and we were unable to recover it. 
00:37:46.483 [2024-12-06 01:15:18.890500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.483 [2024-12-06 01:15:18.890533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.483 qpair failed and we were unable to recover it. 00:37:46.483 [2024-12-06 01:15:18.890704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.483 [2024-12-06 01:15:18.890741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.483 qpair failed and we were unable to recover it. 00:37:46.483 [2024-12-06 01:15:18.890909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.483 [2024-12-06 01:15:18.890959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.483 qpair failed and we were unable to recover it. 00:37:46.483 [2024-12-06 01:15:18.891140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.483 [2024-12-06 01:15:18.891177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.483 qpair failed and we were unable to recover it. 00:37:46.483 [2024-12-06 01:15:18.891333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.483 [2024-12-06 01:15:18.891372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.483 qpair failed and we were unable to recover it. 00:37:46.483 [2024-12-06 01:15:18.891527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.483 [2024-12-06 01:15:18.891567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.483 qpair failed and we were unable to recover it. 00:37:46.483 [2024-12-06 01:15:18.891744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.483 [2024-12-06 01:15:18.891782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.483 qpair failed and we were unable to recover it. 00:37:46.483 [2024-12-06 01:15:18.891968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.483 [2024-12-06 01:15:18.892011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.483 qpair failed and we were unable to recover it. 00:37:46.483 [2024-12-06 01:15:18.892193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.483 [2024-12-06 01:15:18.892227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.483 qpair failed and we were unable to recover it. 00:37:46.483 [2024-12-06 01:15:18.892392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.483 [2024-12-06 01:15:18.892431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.483 qpair failed and we were unable to recover it. 
00:37:46.483 [2024-12-06 01:15:18.892572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.483 [2024-12-06 01:15:18.892605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.483 qpair failed and we were unable to recover it. 00:37:46.483 [2024-12-06 01:15:18.892786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.483 [2024-12-06 01:15:18.892830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.483 qpair failed and we were unable to recover it. 00:37:46.483 [2024-12-06 01:15:18.893005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.483 [2024-12-06 01:15:18.893038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.483 qpair failed and we were unable to recover it. 00:37:46.483 [2024-12-06 01:15:18.893149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.483 [2024-12-06 01:15:18.893184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.483 qpair failed and we were unable to recover it. 00:37:46.483 [2024-12-06 01:15:18.893473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.483 [2024-12-06 01:15:18.893531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.483 qpair failed and we were unable to recover it. 00:37:46.483 [2024-12-06 01:15:18.893672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.483 [2024-12-06 01:15:18.893710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.483 qpair failed and we were unable to recover it. 00:37:46.483 [2024-12-06 01:15:18.893882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.483 [2024-12-06 01:15:18.893916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.483 qpair failed and we were unable to recover it. 00:37:46.483 [2024-12-06 01:15:18.894101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.484 [2024-12-06 01:15:18.894134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.484 qpair failed and we were unable to recover it. 00:37:46.484 [2024-12-06 01:15:18.894265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.484 [2024-12-06 01:15:18.894299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.484 qpair failed and we were unable to recover it. 00:37:46.484 [2024-12-06 01:15:18.894460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.484 [2024-12-06 01:15:18.894493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.484 qpair failed and we were unable to recover it. 
00:37:46.484 [2024-12-06 01:15:18.894620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.484 [2024-12-06 01:15:18.894658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.484 qpair failed and we were unable to recover it. 00:37:46.484 [2024-12-06 01:15:18.894832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.484 [2024-12-06 01:15:18.894884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.484 qpair failed and we were unable to recover it. 00:37:46.484 [2024-12-06 01:15:18.895034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.484 [2024-12-06 01:15:18.895083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.484 qpair failed and we were unable to recover it. 00:37:46.484 [2024-12-06 01:15:18.895261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.484 [2024-12-06 01:15:18.895298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.484 qpair failed and we were unable to recover it. 00:37:46.484 [2024-12-06 01:15:18.895412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.484 [2024-12-06 01:15:18.895448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.484 qpair failed and we were unable to recover it. 00:37:46.484 [2024-12-06 01:15:18.895559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.484 [2024-12-06 01:15:18.895614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.484 qpair failed and we were unable to recover it. 00:37:46.484 [2024-12-06 01:15:18.895774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.484 [2024-12-06 01:15:18.895809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.484 qpair failed and we were unable to recover it. 00:37:46.484 [2024-12-06 01:15:18.895977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.484 [2024-12-06 01:15:18.896037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.484 qpair failed and we were unable to recover it. 00:37:46.484 [2024-12-06 01:15:18.896158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.484 [2024-12-06 01:15:18.896193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.484 qpair failed and we were unable to recover it. 00:37:46.484 [2024-12-06 01:15:18.896349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.484 [2024-12-06 01:15:18.896389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.484 qpair failed and we were unable to recover it. 
00:37:46.484 [2024-12-06 01:15:18.896531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.484 [2024-12-06 01:15:18.896567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.484 qpair failed and we were unable to recover it. 00:37:46.484 [2024-12-06 01:15:18.896706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.484 [2024-12-06 01:15:18.896743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.484 qpair failed and we were unable to recover it. 00:37:46.484 [2024-12-06 01:15:18.896886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.484 [2024-12-06 01:15:18.896923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.484 qpair failed and we were unable to recover it. 00:37:46.484 [2024-12-06 01:15:18.897064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.484 [2024-12-06 01:15:18.897101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.484 qpair failed and we were unable to recover it. 00:37:46.484 [2024-12-06 01:15:18.897248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.484 [2024-12-06 01:15:18.897283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.484 qpair failed and we were unable to recover it. 00:37:46.484 [2024-12-06 01:15:18.897416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.484 [2024-12-06 01:15:18.897455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.484 qpair failed and we were unable to recover it. 00:37:46.484 [2024-12-06 01:15:18.897615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.484 [2024-12-06 01:15:18.897653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.484 qpair failed and we were unable to recover it. 00:37:46.484 [2024-12-06 01:15:18.897773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.484 [2024-12-06 01:15:18.897832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.484 qpair failed and we were unable to recover it. 00:37:46.484 [2024-12-06 01:15:18.897976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.484 [2024-12-06 01:15:18.898010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.484 qpair failed and we were unable to recover it. 00:37:46.484 [2024-12-06 01:15:18.898150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.484 [2024-12-06 01:15:18.898186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.484 qpair failed and we were unable to recover it. 
00:37:46.484 [2024-12-06 01:15:18.898356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.484 [2024-12-06 01:15:18.898391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.484 qpair failed and we were unable to recover it. 00:37:46.484 [2024-12-06 01:15:18.898557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.484 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 3142551 Killed "${NVMF_APP[@]}" "$@" 00:37:46.484 [2024-12-06 01:15:18.898597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.484 qpair failed and we were unable to recover it. 00:37:46.484 [2024-12-06 01:15:18.898748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.484 [2024-12-06 01:15:18.898786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.484 qpair failed and we were unable to recover it. 00:37:46.484 [2024-12-06 01:15:18.898926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.484 [2024-12-06 01:15:18.898960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.484 qpair failed and we were unable to recover it. 00:37:46.484 [2024-12-06 01:15:18.899065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.484 [2024-12-06 01:15:18.899109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.484 qpair failed and we were unable to recover it. 00:37:46.484 01:15:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2 00:37:46.484 [2024-12-06 01:15:18.899242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.484 [2024-12-06 01:15:18.899277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.484 qpair failed and we were unable to recover it. 00:37:46.484 [2024-12-06 01:15:18.899438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.484 [2024-12-06 01:15:18.899473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.484 qpair failed and we were unable to recover it. 00:37:46.484 01:15:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:37:46.484 [2024-12-06 01:15:18.899634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.484 [2024-12-06 01:15:18.899673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.484 qpair failed and we were unable to recover it. 
00:37:46.484 [2024-12-06 01:15:18.899861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.484 01:15:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:37:46.484 [2024-12-06 01:15:18.899896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.484 qpair failed and we were unable to recover it. 00:37:46.484 01:15:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:46.485 [2024-12-06 01:15:18.900023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.485 [2024-12-06 01:15:18.900072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.485 qpair failed and we were unable to recover it. 00:37:46.485 [2024-12-06 01:15:18.900226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.485 01:15:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:46.485 [2024-12-06 01:15:18.900263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.485 qpair failed and we were unable to recover it. 00:37:46.485 [2024-12-06 01:15:18.900380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.485 [2024-12-06 01:15:18.900414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.485 qpair failed and we were unable to recover it. 00:37:46.485 [2024-12-06 01:15:18.900528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.485 [2024-12-06 01:15:18.900563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.485 qpair failed and we were unable to recover it. 00:37:46.485 [2024-12-06 01:15:18.900687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.485 [2024-12-06 01:15:18.900726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.485 qpair failed and we were unable to recover it. 00:37:46.485 [2024-12-06 01:15:18.900887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.485 [2024-12-06 01:15:18.900926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.485 qpair failed and we were unable to recover it. 00:37:46.485 [2024-12-06 01:15:18.901066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.485 [2024-12-06 01:15:18.901100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.485 qpair failed and we were unable to recover it. 00:37:46.485 [2024-12-06 01:15:18.901248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.485 [2024-12-06 01:15:18.901283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.485 qpair failed and we were unable to recover it. 
00:37:46.485 [2024-12-06 01:15:18.901418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.485 [2024-12-06 01:15:18.901454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.485 qpair failed and we were unable to recover it. 00:37:46.485 [2024-12-06 01:15:18.901569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.485 [2024-12-06 01:15:18.901605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.485 qpair failed and we were unable to recover it. 00:37:46.485 [2024-12-06 01:15:18.901747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.485 [2024-12-06 01:15:18.901785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.485 qpair failed and we were unable to recover it. 00:37:46.485 [2024-12-06 01:15:18.901947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.485 [2024-12-06 01:15:18.901995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.485 qpair failed and we were unable to recover it. 00:37:46.485 [2024-12-06 01:15:18.902111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.485 [2024-12-06 01:15:18.902147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.485 qpair failed and we were unable to recover it. 00:37:46.485 [2024-12-06 01:15:18.902307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.485 [2024-12-06 01:15:18.902346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.485 qpair failed and we were unable to recover it. 00:37:46.485 [2024-12-06 01:15:18.902522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.485 [2024-12-06 01:15:18.902559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.485 qpair failed and we were unable to recover it. 00:37:46.485 [2024-12-06 01:15:18.902684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.485 [2024-12-06 01:15:18.902722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.485 qpair failed and we were unable to recover it. 00:37:46.485 [2024-12-06 01:15:18.902873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.485 [2024-12-06 01:15:18.902909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.485 qpair failed and we were unable to recover it. 00:37:46.485 [2024-12-06 01:15:18.903019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.485 [2024-12-06 01:15:18.903053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.485 qpair failed and we were unable to recover it. 
00:37:46.485 [2024-12-06 01:15:18.903198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.485 [2024-12-06 01:15:18.903233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.485 qpair failed and we were unable to recover it. 00:37:46.485 [2024-12-06 01:15:18.903384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.485 [2024-12-06 01:15:18.903423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.485 qpair failed and we were unable to recover it. 00:37:46.485 [2024-12-06 01:15:18.903574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.485 [2024-12-06 01:15:18.903612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.485 qpair failed and we were unable to recover it. 00:37:46.485 [2024-12-06 01:15:18.903729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.485 [2024-12-06 01:15:18.903768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.485 qpair failed and we were unable to recover it. 00:37:46.485 [2024-12-06 01:15:18.903941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.485 [2024-12-06 01:15:18.903975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.485 qpair failed and we were unable to recover it. 00:37:46.485 [2024-12-06 01:15:18.904099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.485 [2024-12-06 01:15:18.904133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.485 qpair failed and we were unable to recover it. 00:37:46.485 [2024-12-06 01:15:18.904241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.485 [2024-12-06 01:15:18.904281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.485 qpair failed and we were unable to recover it. 00:37:46.485 [2024-12-06 01:15:18.904484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.485 [2024-12-06 01:15:18.904522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.485 qpair failed and we were unable to recover it. 00:37:46.485 [2024-12-06 01:15:18.904664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.485 [2024-12-06 01:15:18.904702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.485 qpair failed and we were unable to recover it. 00:37:46.485 [2024-12-06 01:15:18.904839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.485 [2024-12-06 01:15:18.904874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.485 qpair failed and we were unable to recover it. 
00:37:46.485 [2024-12-06 01:15:18.904990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.485 [2024-12-06 01:15:18.905024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.485 qpair failed and we were unable to recover it. 00:37:46.485 [2024-12-06 01:15:18.905161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.485 [2024-12-06 01:15:18.905195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.485 qpair failed and we were unable to recover it. 00:37:46.485 [2024-12-06 01:15:18.905356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.485 [2024-12-06 01:15:18.905391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.485 qpair failed and we were unable to recover it. 00:37:46.485 [2024-12-06 01:15:18.905548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.485 01:15:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=3143177 00:37:46.485 [2024-12-06 01:15:18.905586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.485 qpair failed and we were unable to recover it. 00:37:46.485 01:15:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:37:46.485 [2024-12-06 01:15:18.905700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.485 01:15:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 3143177 00:37:46.485 [2024-12-06 01:15:18.905739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.485 qpair failed and we were unable to recover it. 00:37:46.485 [2024-12-06 01:15:18.905916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.485 [2024-12-06 01:15:18.905951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.485 qpair failed and we were unable to recover it. 00:37:46.485 01:15:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 3143177 ']' 00:37:46.485 [2024-12-06 01:15:18.906068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.485 [2024-12-06 01:15:18.906102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.485 qpair failed and we were unable to recover it. 
00:37:46.486 01:15:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:46.486 [2024-12-06 01:15:18.906234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.486 [2024-12-06 01:15:18.906273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.486 01:15:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:46.486 qpair failed and we were unable to recover it. 00:37:46.486 [2024-12-06 01:15:18.906413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.486 01:15:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:46.486 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:46.486 [2024-12-06 01:15:18.906447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.486 qpair failed and we were unable to recover it. 00:37:46.486 01:15:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:46.486 [2024-12-06 01:15:18.906619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.486 [2024-12-06 01:15:18.906657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.486 qpair failed and we were unable to recover it. 00:37:46.486 01:15:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:46.486 [2024-12-06 01:15:18.906789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.486 [2024-12-06 01:15:18.906867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.486 qpair failed and we were unable to recover it. 00:37:46.486 [2024-12-06 01:15:18.907061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.486 [2024-12-06 01:15:18.907109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.486 qpair failed and we were unable to recover it. 00:37:46.486 [2024-12-06 01:15:18.907338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.486 [2024-12-06 01:15:18.907375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.486 qpair failed and we were unable to recover it. 00:37:46.486 [2024-12-06 01:15:18.907517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.486 [2024-12-06 01:15:18.907551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.486 qpair failed and we were unable to recover it. 
00:37:46.486 [2024-12-06 01:15:18.907657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.486 [2024-12-06 01:15:18.907693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.486 qpair failed and we were unable to recover it. 00:37:46.486 [2024-12-06 01:15:18.907870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.486 [2024-12-06 01:15:18.907907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.486 qpair failed and we were unable to recover it. 00:37:46.486 [2024-12-06 01:15:18.908048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.486 [2024-12-06 01:15:18.908085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.486 qpair failed and we were unable to recover it. 00:37:46.486 [2024-12-06 01:15:18.908236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.486 [2024-12-06 01:15:18.908274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.486 qpair failed and we were unable to recover it. 00:37:46.486 [2024-12-06 01:15:18.908428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.486 [2024-12-06 01:15:18.908474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.486 qpair failed and we were unable to recover it. 00:37:46.486 [2024-12-06 01:15:18.908588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.486 [2024-12-06 01:15:18.908628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.486 qpair failed and we were unable to recover it. 00:37:46.486 [2024-12-06 01:15:18.908774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.486 [2024-12-06 01:15:18.908812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.486 qpair failed and we were unable to recover it. 00:37:46.486 [2024-12-06 01:15:18.908967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.486 [2024-12-06 01:15:18.909001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.486 qpair failed and we were unable to recover it. 00:37:46.486 [2024-12-06 01:15:18.909115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.486 [2024-12-06 01:15:18.909151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.486 qpair failed and we were unable to recover it. 00:37:46.486 [2024-12-06 01:15:18.909314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.486 [2024-12-06 01:15:18.909347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.486 qpair failed and we were unable to recover it. 
00:37:46.486 [2024-12-06 01:15:18.909529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.486 [2024-12-06 01:15:18.909566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.486 qpair failed and we were unable to recover it. 00:37:46.486 [2024-12-06 01:15:18.909736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.486 [2024-12-06 01:15:18.909774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.486 qpair failed and we were unable to recover it. 00:37:46.486 [2024-12-06 01:15:18.909946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.486 [2024-12-06 01:15:18.909980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.486 qpair failed and we were unable to recover it. 00:37:46.486 [2024-12-06 01:15:18.910140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.486 [2024-12-06 01:15:18.910174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.486 qpair failed and we were unable to recover it. 00:37:46.486 [2024-12-06 01:15:18.910308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.486 [2024-12-06 01:15:18.910341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.486 qpair failed and we were unable to recover it. 00:37:46.486 [2024-12-06 01:15:18.910478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.486 [2024-12-06 01:15:18.910512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.486 qpair failed and we were unable to recover it. 00:37:46.486 [2024-12-06 01:15:18.910660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.486 [2024-12-06 01:15:18.910697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.486 qpair failed and we were unable to recover it. 00:37:46.486 [2024-12-06 01:15:18.910869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.486 [2024-12-06 01:15:18.910934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.486 qpair failed and we were unable to recover it. 00:37:46.486 [2024-12-06 01:15:18.911103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.486 [2024-12-06 01:15:18.911152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.486 qpair failed and we were unable to recover it. 00:37:46.486 [2024-12-06 01:15:18.911298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.486 [2024-12-06 01:15:18.911335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.486 qpair failed and we were unable to recover it. 
00:37:46.486 [2024-12-06 01:15:18.911475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.486 [2024-12-06 01:15:18.911511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.486 qpair failed and we were unable to recover it. 00:37:46.486 [2024-12-06 01:15:18.911624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.486 [2024-12-06 01:15:18.911660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.486 qpair failed and we were unable to recover it. 00:37:46.486 [2024-12-06 01:15:18.911798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.486 [2024-12-06 01:15:18.911842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.486 qpair failed and we were unable to recover it. 00:37:46.486 [2024-12-06 01:15:18.911987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.486 [2024-12-06 01:15:18.912022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.486 qpair failed and we were unable to recover it. 00:37:46.486 [2024-12-06 01:15:18.912158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.486 [2024-12-06 01:15:18.912193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.486 qpair failed and we were unable to recover it. 00:37:46.486 [2024-12-06 01:15:18.912327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.486 [2024-12-06 01:15:18.912361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.486 qpair failed and we were unable to recover it. 00:37:46.486 [2024-12-06 01:15:18.912465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.486 [2024-12-06 01:15:18.912499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.486 qpair failed and we were unable to recover it. 00:37:46.486 [2024-12-06 01:15:18.912601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.486 [2024-12-06 01:15:18.912635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.486 qpair failed and we were unable to recover it. 00:37:46.486 [2024-12-06 01:15:18.912796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.487 [2024-12-06 01:15:18.912842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.487 qpair failed and we were unable to recover it. 00:37:46.487 [2024-12-06 01:15:18.912996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.487 [2024-12-06 01:15:18.913030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.487 qpair failed and we were unable to recover it. 
00:37:46.487 [2024-12-06 01:15:18.913172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.487 [2024-12-06 01:15:18.913206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.487 qpair failed and we were unable to recover it. 00:37:46.487 [2024-12-06 01:15:18.913328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.487 [2024-12-06 01:15:18.913374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.487 qpair failed and we were unable to recover it. 00:37:46.487 [2024-12-06 01:15:18.913568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.487 [2024-12-06 01:15:18.913622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.487 qpair failed and we were unable to recover it. 00:37:46.487 [2024-12-06 01:15:18.913823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.487 [2024-12-06 01:15:18.913882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.487 qpair failed and we were unable to recover it. 00:37:46.487 [2024-12-06 01:15:18.914029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.487 [2024-12-06 01:15:18.914065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.487 qpair failed and we were unable to recover it. 00:37:46.487 [2024-12-06 01:15:18.914208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.487 [2024-12-06 01:15:18.914242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.487 qpair failed and we were unable to recover it. 00:37:46.487 [2024-12-06 01:15:18.914377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.487 [2024-12-06 01:15:18.914411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.487 qpair failed and we were unable to recover it. 00:37:46.487 [2024-12-06 01:15:18.914542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.487 [2024-12-06 01:15:18.914577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.487 qpair failed and we were unable to recover it. 00:37:46.487 [2024-12-06 01:15:18.914728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.487 [2024-12-06 01:15:18.914766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.487 qpair failed and we were unable to recover it. 00:37:46.487 [2024-12-06 01:15:18.914947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.487 [2024-12-06 01:15:18.914982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.487 qpair failed and we were unable to recover it. 
00:37:46.487 [2024-12-06 01:15:18.915095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.487 [2024-12-06 01:15:18.915129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.487 qpair failed and we were unable to recover it. 00:37:46.487 [2024-12-06 01:15:18.915264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.487 [2024-12-06 01:15:18.915297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.487 qpair failed and we were unable to recover it. 00:37:46.487 [2024-12-06 01:15:18.915513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.487 [2024-12-06 01:15:18.915546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.487 qpair failed and we were unable to recover it. 00:37:46.487 [2024-12-06 01:15:18.915711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.487 [2024-12-06 01:15:18.915747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.487 qpair failed and we were unable to recover it. 00:37:46.487 [2024-12-06 01:15:18.915894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.487 [2024-12-06 01:15:18.915935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.487 qpair failed and we were unable to recover it. 00:37:46.487 [2024-12-06 01:15:18.916071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.487 [2024-12-06 01:15:18.916105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.487 qpair failed and we were unable to recover it. 00:37:46.487 [2024-12-06 01:15:18.916220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.487 [2024-12-06 01:15:18.916254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.487 qpair failed and we were unable to recover it. 00:37:46.487 [2024-12-06 01:15:18.916464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.487 [2024-12-06 01:15:18.916533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.487 qpair failed and we were unable to recover it. 00:37:46.487 [2024-12-06 01:15:18.916714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.487 [2024-12-06 01:15:18.916752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.487 qpair failed and we were unable to recover it. 00:37:46.487 [2024-12-06 01:15:18.916921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.487 [2024-12-06 01:15:18.916957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.487 qpair failed and we were unable to recover it. 
00:37:46.487 [2024-12-06 01:15:18.917065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.487 [2024-12-06 01:15:18.917100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.487 qpair failed and we were unable to recover it. 00:37:46.487 [2024-12-06 01:15:18.917216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.487 [2024-12-06 01:15:18.917250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.487 qpair failed and we were unable to recover it. 00:37:46.487 [2024-12-06 01:15:18.917382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.487 [2024-12-06 01:15:18.917418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.487 qpair failed and we were unable to recover it. 00:37:46.487 [2024-12-06 01:15:18.917572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.487 [2024-12-06 01:15:18.917610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.487 qpair failed and we were unable to recover it. 00:37:46.487 [2024-12-06 01:15:18.917750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.487 [2024-12-06 01:15:18.917788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.487 qpair failed and we were unable to recover it. 00:37:46.487 [2024-12-06 01:15:18.917971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.487 [2024-12-06 01:15:18.918019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.487 qpair failed and we were unable to recover it. 00:37:46.487 [2024-12-06 01:15:18.918200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.487 [2024-12-06 01:15:18.918236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.487 qpair failed and we were unable to recover it. 00:37:46.487 [2024-12-06 01:15:18.918346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.487 [2024-12-06 01:15:18.918381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.487 qpair failed and we were unable to recover it. 00:37:46.487 [2024-12-06 01:15:18.918607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.487 [2024-12-06 01:15:18.918640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.487 qpair failed and we were unable to recover it. 00:37:46.487 [2024-12-06 01:15:18.918780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.487 [2024-12-06 01:15:18.918820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.487 qpair failed and we were unable to recover it. 
00:37:46.487 [2024-12-06 01:15:18.918994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.487 [2024-12-06 01:15:18.919027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.487 qpair failed and we were unable to recover it. 00:37:46.487 [2024-12-06 01:15:18.919168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.487 [2024-12-06 01:15:18.919204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.487 qpair failed and we were unable to recover it. 00:37:46.487 [2024-12-06 01:15:18.919341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.487 [2024-12-06 01:15:18.919375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.487 qpair failed and we were unable to recover it. 00:37:46.487 [2024-12-06 01:15:18.919537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.487 [2024-12-06 01:15:18.919572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.487 qpair failed and we were unable to recover it. 00:37:46.487 [2024-12-06 01:15:18.919678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.487 [2024-12-06 01:15:18.919712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.487 qpair failed and we were unable to recover it. 00:37:46.487 [2024-12-06 01:15:18.919890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.487 [2024-12-06 01:15:18.919925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.488 qpair failed and we were unable to recover it. 00:37:46.488 [2024-12-06 01:15:18.920031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.488 [2024-12-06 01:15:18.920076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.488 qpair failed and we were unable to recover it. 00:37:46.488 [2024-12-06 01:15:18.920240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.488 [2024-12-06 01:15:18.920274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.488 qpair failed and we were unable to recover it. 00:37:46.488 [2024-12-06 01:15:18.920461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.488 [2024-12-06 01:15:18.920499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.488 qpair failed and we were unable to recover it. 00:37:46.488 [2024-12-06 01:15:18.920674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.488 [2024-12-06 01:15:18.920711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.488 qpair failed and we were unable to recover it. 
00:37:46.488 [2024-12-06 01:15:18.920887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.488 [2024-12-06 01:15:18.920923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.488 qpair failed and we were unable to recover it. 00:37:46.488 [2024-12-06 01:15:18.921105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.488 [2024-12-06 01:15:18.921149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.488 qpair failed and we were unable to recover it. 00:37:46.488 [2024-12-06 01:15:18.921330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.488 [2024-12-06 01:15:18.921388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.488 qpair failed and we were unable to recover it. 00:37:46.488 [2024-12-06 01:15:18.921596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.488 [2024-12-06 01:15:18.921635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.488 qpair failed and we were unable to recover it. 00:37:46.488 [2024-12-06 01:15:18.921783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.488 [2024-12-06 01:15:18.921828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.488 qpair failed and we were unable to recover it. 00:37:46.488 [2024-12-06 01:15:18.921985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.488 [2024-12-06 01:15:18.922034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.488 qpair failed and we were unable to recover it. 00:37:46.488 [2024-12-06 01:15:18.922228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.488 [2024-12-06 01:15:18.922269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.488 qpair failed and we were unable to recover it. 00:37:46.488 [2024-12-06 01:15:18.922415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.488 [2024-12-06 01:15:18.922453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.488 qpair failed and we were unable to recover it. 00:37:46.488 [2024-12-06 01:15:18.922661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.488 [2024-12-06 01:15:18.922728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.488 qpair failed and we were unable to recover it. 00:37:46.488 [2024-12-06 01:15:18.922867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.488 [2024-12-06 01:15:18.922904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.488 qpair failed and we were unable to recover it. 
00:37:46.488 [2024-12-06 01:15:18.923076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.488 [2024-12-06 01:15:18.923110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.488 qpair failed and we were unable to recover it. 00:37:46.488 [2024-12-06 01:15:18.923217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.488 [2024-12-06 01:15:18.923258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.488 qpair failed and we were unable to recover it. 00:37:46.488 [2024-12-06 01:15:18.923399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.488 [2024-12-06 01:15:18.923433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.488 qpair failed and we were unable to recover it. 00:37:46.488 [2024-12-06 01:15:18.923598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.488 [2024-12-06 01:15:18.923635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.488 qpair failed and we were unable to recover it. 00:37:46.488 [2024-12-06 01:15:18.923779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.488 [2024-12-06 01:15:18.923842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.488 qpair failed and we were unable to recover it. 00:37:46.488 [2024-12-06 01:15:18.923975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.488 [2024-12-06 01:15:18.924013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.488 qpair failed and we were unable to recover it. 00:37:46.488 [2024-12-06 01:15:18.924154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.488 [2024-12-06 01:15:18.924192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.488 qpair failed and we were unable to recover it. 00:37:46.488 [2024-12-06 01:15:18.924307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.488 [2024-12-06 01:15:18.924345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.488 qpair failed and we were unable to recover it. 00:37:46.488 [2024-12-06 01:15:18.924487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.488 [2024-12-06 01:15:18.924526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.488 qpair failed and we were unable to recover it. 00:37:46.488 [2024-12-06 01:15:18.924699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.488 [2024-12-06 01:15:18.924737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.488 qpair failed and we were unable to recover it. 
00:37:46.488 [2024-12-06 01:15:18.924924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.488 [2024-12-06 01:15:18.924959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.488 qpair failed and we were unable to recover it. 00:37:46.488 [2024-12-06 01:15:18.925119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.488 [2024-12-06 01:15:18.925158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.488 qpair failed and we were unable to recover it. 00:37:46.488 [2024-12-06 01:15:18.925299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.488 [2024-12-06 01:15:18.925337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.488 qpair failed and we were unable to recover it. 00:37:46.488 [2024-12-06 01:15:18.925483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.488 [2024-12-06 01:15:18.925521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.488 qpair failed and we were unable to recover it. 00:37:46.488 [2024-12-06 01:15:18.925664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.488 [2024-12-06 01:15:18.925703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.488 qpair failed and we were unable to recover it. 00:37:46.488 [2024-12-06 01:15:18.925868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.488 [2024-12-06 01:15:18.925903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.488 qpair failed and we were unable to recover it. 00:37:46.488 [2024-12-06 01:15:18.926014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.488 [2024-12-06 01:15:18.926049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.488 qpair failed and we were unable to recover it. 00:37:46.488 [2024-12-06 01:15:18.926223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.488 [2024-12-06 01:15:18.926261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.488 qpair failed and we were unable to recover it. 00:37:46.488 [2024-12-06 01:15:18.926411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.488 [2024-12-06 01:15:18.926461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.488 qpair failed and we were unable to recover it. 00:37:46.489 [2024-12-06 01:15:18.926610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.489 [2024-12-06 01:15:18.926648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.489 qpair failed and we were unable to recover it. 
00:37:46.489 [2024-12-06 01:15:18.926764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.489 [2024-12-06 01:15:18.926798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.489 qpair failed and we were unable to recover it. 00:37:46.489 [2024-12-06 01:15:18.926985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.489 [2024-12-06 01:15:18.927034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.489 qpair failed and we were unable to recover it. 00:37:46.489 [2024-12-06 01:15:18.927185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.489 [2024-12-06 01:15:18.927222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.489 qpair failed and we were unable to recover it. 00:37:46.489 [2024-12-06 01:15:18.927408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.489 [2024-12-06 01:15:18.927446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.489 qpair failed and we were unable to recover it. 00:37:46.489 [2024-12-06 01:15:18.927590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.489 [2024-12-06 01:15:18.927627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.489 qpair failed and we were unable to recover it. 00:37:46.489 [2024-12-06 01:15:18.927753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.489 [2024-12-06 01:15:18.927806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.489 qpair failed and we were unable to recover it. 00:37:46.489 [2024-12-06 01:15:18.927948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.489 [2024-12-06 01:15:18.927983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.489 qpair failed and we were unable to recover it. 00:37:46.489 [2024-12-06 01:15:18.928100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.489 [2024-12-06 01:15:18.928135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.489 qpair failed and we were unable to recover it. 00:37:46.489 [2024-12-06 01:15:18.928301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.489 [2024-12-06 01:15:18.928335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.489 qpair failed and we were unable to recover it. 00:37:46.489 [2024-12-06 01:15:18.928492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.489 [2024-12-06 01:15:18.928530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.489 qpair failed and we were unable to recover it. 
00:37:46.489 [2024-12-06 01:15:18.928706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.489 [2024-12-06 01:15:18.928744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.489 qpair failed and we were unable to recover it. 00:37:46.489 [2024-12-06 01:15:18.928928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.489 [2024-12-06 01:15:18.928977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.489 qpair failed and we were unable to recover it. 00:37:46.489 [2024-12-06 01:15:18.929119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.489 [2024-12-06 01:15:18.929155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.489 qpair failed and we were unable to recover it. 00:37:46.489 [2024-12-06 01:15:18.929264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.489 [2024-12-06 01:15:18.929299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.489 qpair failed and we were unable to recover it. 00:37:46.489 [2024-12-06 01:15:18.929405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.489 [2024-12-06 01:15:18.929439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.489 qpair failed and we were unable to recover it. 00:37:46.489 [2024-12-06 01:15:18.929591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.489 [2024-12-06 01:15:18.929665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.489 qpair failed and we were unable to recover it. 00:37:46.489 [2024-12-06 01:15:18.929824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.489 [2024-12-06 01:15:18.929878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.489 qpair failed and we were unable to recover it. 00:37:46.489 [2024-12-06 01:15:18.930013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.489 [2024-12-06 01:15:18.930048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.489 qpair failed and we were unable to recover it. 00:37:46.489 [2024-12-06 01:15:18.930217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.489 [2024-12-06 01:15:18.930289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.489 qpair failed and we were unable to recover it. 00:37:46.489 [2024-12-06 01:15:18.930471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.489 [2024-12-06 01:15:18.930527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.489 qpair failed and we were unable to recover it. 
00:37:46.489 [2024-12-06 01:15:18.930679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.489 [2024-12-06 01:15:18.930719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.489 qpair failed and we were unable to recover it. 00:37:46.489 [2024-12-06 01:15:18.930885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.489 [2024-12-06 01:15:18.930921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.489 qpair failed and we were unable to recover it. 00:37:46.489 [2024-12-06 01:15:18.931025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.489 [2024-12-06 01:15:18.931060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.489 qpair failed and we were unable to recover it. 00:37:46.489 [2024-12-06 01:15:18.931206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.489 [2024-12-06 01:15:18.931242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.489 qpair failed and we were unable to recover it. 00:37:46.489 [2024-12-06 01:15:18.931439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.489 [2024-12-06 01:15:18.931526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.489 qpair failed and we were unable to recover it. 00:37:46.489 [2024-12-06 01:15:18.931675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.489 [2024-12-06 01:15:18.931714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.489 qpair failed and we were unable to recover it. 00:37:46.489 [2024-12-06 01:15:18.931839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.489 [2024-12-06 01:15:18.931890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.489 qpair failed and we were unable to recover it. 00:37:46.489 [2024-12-06 01:15:18.931998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.489 [2024-12-06 01:15:18.932053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.489 qpair failed and we were unable to recover it. 00:37:46.489 [2024-12-06 01:15:18.932239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.489 [2024-12-06 01:15:18.932292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.489 qpair failed and we were unable to recover it. 00:37:46.489 [2024-12-06 01:15:18.932452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.489 [2024-12-06 01:15:18.932493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.489 qpair failed and we were unable to recover it. 
00:37:46.489 [2024-12-06 01:15:18.932642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.489 [2024-12-06 01:15:18.932680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.489 qpair failed and we were unable to recover it. 00:37:46.489 [2024-12-06 01:15:18.932872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.489 [2024-12-06 01:15:18.932907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.489 qpair failed and we were unable to recover it. 00:37:46.489 [2024-12-06 01:15:18.933067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.489 [2024-12-06 01:15:18.933109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.489 qpair failed and we were unable to recover it. 00:37:46.489 [2024-12-06 01:15:18.933232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.489 [2024-12-06 01:15:18.933269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.489 qpair failed and we were unable to recover it. 00:37:46.489 [2024-12-06 01:15:18.933384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.489 [2024-12-06 01:15:18.933422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.489 qpair failed and we were unable to recover it. 00:37:46.489 [2024-12-06 01:15:18.933531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.489 [2024-12-06 01:15:18.933569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.489 qpair failed and we were unable to recover it. 00:37:46.489 [2024-12-06 01:15:18.933745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.489 [2024-12-06 01:15:18.933782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.490 qpair failed and we were unable to recover it. 00:37:46.490 [2024-12-06 01:15:18.933977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.490 [2024-12-06 01:15:18.934013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.490 qpair failed and we were unable to recover it. 00:37:46.490 [2024-12-06 01:15:18.934134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.490 [2024-12-06 01:15:18.934168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.490 qpair failed and we were unable to recover it. 00:37:46.490 [2024-12-06 01:15:18.934345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.490 [2024-12-06 01:15:18.934383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.490 qpair failed and we were unable to recover it. 
00:37:46.490 [2024-12-06 01:15:18.934536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.490 [2024-12-06 01:15:18.934575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.490 qpair failed and we were unable to recover it. 00:37:46.490 [2024-12-06 01:15:18.934715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.490 [2024-12-06 01:15:18.934753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.490 qpair failed and we were unable to recover it. 00:37:46.490 [2024-12-06 01:15:18.935004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.490 [2024-12-06 01:15:18.935038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.490 qpair failed and we were unable to recover it. 00:37:46.490 [2024-12-06 01:15:18.935259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.490 [2024-12-06 01:15:18.935317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.490 qpair failed and we were unable to recover it. 00:37:46.490 [2024-12-06 01:15:18.935579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.490 [2024-12-06 01:15:18.935636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.490 qpair failed and we were unable to recover it. 00:37:46.490 [2024-12-06 01:15:18.935833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.490 [2024-12-06 01:15:18.935867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.490 qpair failed and we were unable to recover it. 00:37:46.490 [2024-12-06 01:15:18.935976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.490 [2024-12-06 01:15:18.936012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.490 qpair failed and we were unable to recover it. 00:37:46.490 [2024-12-06 01:15:18.936184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.490 [2024-12-06 01:15:18.936219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.490 qpair failed and we were unable to recover it. 00:37:46.490 [2024-12-06 01:15:18.936386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.490 [2024-12-06 01:15:18.936425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.490 qpair failed and we were unable to recover it. 00:37:46.490 [2024-12-06 01:15:18.936550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.490 [2024-12-06 01:15:18.936587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.490 qpair failed and we were unable to recover it. 
00:37:46.490 [2024-12-06 01:15:18.936737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.490 [2024-12-06 01:15:18.936774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.490 qpair failed and we were unable to recover it. 00:37:46.490 [2024-12-06 01:15:18.936972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.490 [2024-12-06 01:15:18.937018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.490 qpair failed and we were unable to recover it. 00:37:46.490 [2024-12-06 01:15:18.937208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.490 [2024-12-06 01:15:18.937243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.490 qpair failed and we were unable to recover it. 00:37:46.490 [2024-12-06 01:15:18.937382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.490 [2024-12-06 01:15:18.937417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.490 qpair failed and we were unable to recover it. 00:37:46.490 [2024-12-06 01:15:18.937524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.490 [2024-12-06 01:15:18.937559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.490 qpair failed and we were unable to recover it. 00:37:46.490 [2024-12-06 01:15:18.937695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.490 [2024-12-06 01:15:18.937734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.490 qpair failed and we were unable to recover it. 00:37:46.490 [2024-12-06 01:15:18.937858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.490 [2024-12-06 01:15:18.937911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.490 qpair failed and we were unable to recover it. 00:37:46.490 [2024-12-06 01:15:18.938051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.490 [2024-12-06 01:15:18.938086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.490 qpair failed and we were unable to recover it. 00:37:46.490 [2024-12-06 01:15:18.938218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.490 [2024-12-06 01:15:18.938251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.490 qpair failed and we were unable to recover it. 00:37:46.490 [2024-12-06 01:15:18.938413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.490 [2024-12-06 01:15:18.938446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.490 qpair failed and we were unable to recover it. 
00:37:46.490 [2024-12-06 01:15:18.938573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.490 [2024-12-06 01:15:18.938607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.490 qpair failed and we were unable to recover it. 00:37:46.490 [2024-12-06 01:15:18.938763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.490 [2024-12-06 01:15:18.938808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.490 qpair failed and we were unable to recover it. 00:37:46.490 [2024-12-06 01:15:18.938950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.490 [2024-12-06 01:15:18.938999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.490 qpair failed and we were unable to recover it. 00:37:46.490 [2024-12-06 01:15:18.939151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.490 [2024-12-06 01:15:18.939186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.490 qpair failed and we were unable to recover it. 00:37:46.490 [2024-12-06 01:15:18.939316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.490 [2024-12-06 01:15:18.939355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.490 qpair failed and we were unable to recover it. 00:37:46.490 [2024-12-06 01:15:18.939489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.490 [2024-12-06 01:15:18.939524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.490 qpair failed and we were unable to recover it. 00:37:46.490 [2024-12-06 01:15:18.939651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.490 [2024-12-06 01:15:18.939685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.490 qpair failed and we were unable to recover it. 00:37:46.490 [2024-12-06 01:15:18.939866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.490 [2024-12-06 01:15:18.939918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.490 qpair failed and we were unable to recover it. 00:37:46.490 [2024-12-06 01:15:18.940053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.490 [2024-12-06 01:15:18.940087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.490 qpair failed and we were unable to recover it. 00:37:46.490 [2024-12-06 01:15:18.940232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.490 [2024-12-06 01:15:18.940266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.490 qpair failed and we were unable to recover it. 
00:37:46.490 [2024-12-06 01:15:18.940437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.490 [2024-12-06 01:15:18.940474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.490 qpair failed and we were unable to recover it. 00:37:46.490 [2024-12-06 01:15:18.940611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.490 [2024-12-06 01:15:18.940648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.490 qpair failed and we were unable to recover it. 00:37:46.490 [2024-12-06 01:15:18.940823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.490 [2024-12-06 01:15:18.940876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.490 qpair failed and we were unable to recover it. 00:37:46.490 [2024-12-06 01:15:18.940990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.490 [2024-12-06 01:15:18.941023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.490 qpair failed and we were unable to recover it. 00:37:46.490 [2024-12-06 01:15:18.941174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.491 [2024-12-06 01:15:18.941209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.491 qpair failed and we were unable to recover it. 00:37:46.491 [2024-12-06 01:15:18.941370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.491 [2024-12-06 01:15:18.941405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.491 qpair failed and we were unable to recover it. 00:37:46.491 [2024-12-06 01:15:18.941534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.491 [2024-12-06 01:15:18.941573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.491 qpair failed and we were unable to recover it. 00:37:46.491 [2024-12-06 01:15:18.941723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.491 [2024-12-06 01:15:18.941761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.491 qpair failed and we were unable to recover it. 00:37:46.491 [2024-12-06 01:15:18.941921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.491 [2024-12-06 01:15:18.941971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.491 qpair failed and we were unable to recover it. 00:37:46.491 [2024-12-06 01:15:18.942114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.491 [2024-12-06 01:15:18.942150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.491 qpair failed and we were unable to recover it. 
00:37:46.491 [2024-12-06 01:15:18.942265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.491 [2024-12-06 01:15:18.942300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.491 qpair failed and we were unable to recover it. 00:37:46.491 [2024-12-06 01:15:18.942460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.491 [2024-12-06 01:15:18.942493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.491 qpair failed and we were unable to recover it. 00:37:46.491 [2024-12-06 01:15:18.942668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.491 [2024-12-06 01:15:18.942705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.491 qpair failed and we were unable to recover it. 00:37:46.491 [2024-12-06 01:15:18.942854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.491 [2024-12-06 01:15:18.942908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.491 qpair failed and we were unable to recover it. 00:37:46.491 [2024-12-06 01:15:18.943042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.491 [2024-12-06 01:15:18.943075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.491 qpair failed and we were unable to recover it. 00:37:46.491 [2024-12-06 01:15:18.943243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.491 [2024-12-06 01:15:18.943276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.491 qpair failed and we were unable to recover it. 00:37:46.491 [2024-12-06 01:15:18.943375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.491 [2024-12-06 01:15:18.943427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.491 qpair failed and we were unable to recover it. 00:37:46.491 [2024-12-06 01:15:18.943577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.491 [2024-12-06 01:15:18.943613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.491 qpair failed and we were unable to recover it. 00:37:46.491 [2024-12-06 01:15:18.943770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.491 [2024-12-06 01:15:18.943809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.491 qpair failed and we were unable to recover it. 00:37:46.491 [2024-12-06 01:15:18.943951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.491 [2024-12-06 01:15:18.944000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.491 qpair failed and we were unable to recover it. 
00:37:46.491 [2024-12-06 01:15:18.944183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.491 [2024-12-06 01:15:18.944219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.491 qpair failed and we were unable to recover it. 00:37:46.491 [2024-12-06 01:15:18.944388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.491 [2024-12-06 01:15:18.944452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.491 qpair failed and we were unable to recover it. 00:37:46.491 [2024-12-06 01:15:18.944596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.491 [2024-12-06 01:15:18.944635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.491 qpair failed and we were unable to recover it. 00:37:46.491 [2024-12-06 01:15:18.944785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.491 [2024-12-06 01:15:18.944847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.491 qpair failed and we were unable to recover it. 00:37:46.491 [2024-12-06 01:15:18.944988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.491 [2024-12-06 01:15:18.945021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.491 qpair failed and we were unable to recover it. 00:37:46.491 [2024-12-06 01:15:18.945160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.491 [2024-12-06 01:15:18.945193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.491 qpair failed and we were unable to recover it. 00:37:46.491 [2024-12-06 01:15:18.945308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.491 [2024-12-06 01:15:18.945342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.491 qpair failed and we were unable to recover it. 00:37:46.491 [2024-12-06 01:15:18.945475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.491 [2024-12-06 01:15:18.945508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.491 qpair failed and we were unable to recover it. 00:37:46.491 [2024-12-06 01:15:18.945666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.491 [2024-12-06 01:15:18.945703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.491 qpair failed and we were unable to recover it. 00:37:46.491 [2024-12-06 01:15:18.945860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.491 [2024-12-06 01:15:18.945910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.491 qpair failed and we were unable to recover it. 
00:37:46.491 [2024-12-06 01:15:18.946060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.491 [2024-12-06 01:15:18.946109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.491 qpair failed and we were unable to recover it. 00:37:46.491 [2024-12-06 01:15:18.946274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.491 [2024-12-06 01:15:18.946315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.491 qpair failed and we were unable to recover it. 00:37:46.491 [2024-12-06 01:15:18.946464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.491 [2024-12-06 01:15:18.946503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.491 qpair failed and we were unable to recover it. 00:37:46.491 [2024-12-06 01:15:18.946937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.491 [2024-12-06 01:15:18.946977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.491 qpair failed and we were unable to recover it. 00:37:46.491 [2024-12-06 01:15:18.947128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.491 [2024-12-06 01:15:18.947169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.491 qpair failed and we were unable to recover it. 00:37:46.491 [2024-12-06 01:15:18.947294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.491 [2024-12-06 01:15:18.947330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.491 qpair failed and we were unable to recover it. 00:37:46.491 [2024-12-06 01:15:18.947496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.491 [2024-12-06 01:15:18.947530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.491 qpair failed and we were unable to recover it. 00:37:46.491 [2024-12-06 01:15:18.947692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.491 [2024-12-06 01:15:18.947730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.491 qpair failed and we were unable to recover it. 00:37:46.491 [2024-12-06 01:15:18.947923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.491 [2024-12-06 01:15:18.947972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.491 qpair failed and we were unable to recover it. 00:37:46.491 [2024-12-06 01:15:18.948102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.491 [2024-12-06 01:15:18.948137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.491 qpair failed and we were unable to recover it. 
00:37:46.491 [2024-12-06 01:15:18.948320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.491 [2024-12-06 01:15:18.948357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.491 qpair failed and we were unable to recover it. 00:37:46.491 [2024-12-06 01:15:18.948615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.491 [2024-12-06 01:15:18.948675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.491 qpair failed and we were unable to recover it. 00:37:46.492 [2024-12-06 01:15:18.948829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.492 [2024-12-06 01:15:18.948880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.492 qpair failed and we were unable to recover it. 00:37:46.492 [2024-12-06 01:15:18.948980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.492 [2024-12-06 01:15:18.949013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.492 qpair failed and we were unable to recover it. 00:37:46.492 [2024-12-06 01:15:18.949178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.492 [2024-12-06 01:15:18.949214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.492 qpair failed and we were unable to recover it. 00:37:46.492 [2024-12-06 01:15:18.949346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.492 [2024-12-06 01:15:18.949394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.492 qpair failed and we were unable to recover it. 00:37:46.492 [2024-12-06 01:15:18.949610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.492 [2024-12-06 01:15:18.949668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.492 qpair failed and we were unable to recover it. 00:37:46.492 [2024-12-06 01:15:18.949840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.492 [2024-12-06 01:15:18.949876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.492 qpair failed and we were unable to recover it. 00:37:46.492 [2024-12-06 01:15:18.949991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.492 [2024-12-06 01:15:18.950025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.492 qpair failed and we were unable to recover it. 00:37:46.492 [2024-12-06 01:15:18.950133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.492 [2024-12-06 01:15:18.950167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.492 qpair failed and we were unable to recover it. 
00:37:46.492 [2024-12-06 01:15:18.950302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.492 [2024-12-06 01:15:18.950336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.492 qpair failed and we were unable to recover it. 00:37:46.492 [2024-12-06 01:15:18.950476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.492 [2024-12-06 01:15:18.950512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.492 qpair failed and we were unable to recover it. 00:37:46.492 [2024-12-06 01:15:18.950672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.492 [2024-12-06 01:15:18.950711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.492 qpair failed and we were unable to recover it. 00:37:46.492 [2024-12-06 01:15:18.950904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.492 [2024-12-06 01:15:18.950939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.492 qpair failed and we were unable to recover it. 00:37:46.492 [2024-12-06 01:15:18.951101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.492 [2024-12-06 01:15:18.951143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.492 qpair failed and we were unable to recover it. 00:37:46.492 [2024-12-06 01:15:18.951277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.492 [2024-12-06 01:15:18.951312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.492 qpair failed and we were unable to recover it. 00:37:46.492 [2024-12-06 01:15:18.951423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.492 [2024-12-06 01:15:18.951459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.492 qpair failed and we were unable to recover it. 00:37:46.492 [2024-12-06 01:15:18.951634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.492 [2024-12-06 01:15:18.951670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.492 qpair failed and we were unable to recover it. 00:37:46.492 [2024-12-06 01:15:18.951810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.492 [2024-12-06 01:15:18.951850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.492 qpair failed and we were unable to recover it. 00:37:46.492 [2024-12-06 01:15:18.952007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.492 [2024-12-06 01:15:18.952041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.492 qpair failed and we were unable to recover it. 
00:37:46.492 [2024-12-06 01:15:18.952179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.492 [2024-12-06 01:15:18.952213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.492 qpair failed and we were unable to recover it. 00:37:46.492 [2024-12-06 01:15:18.952372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.492 [2024-12-06 01:15:18.952410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.492 qpair failed and we were unable to recover it. 00:37:46.492 [2024-12-06 01:15:18.952523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.492 [2024-12-06 01:15:18.952558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.492 qpair failed and we were unable to recover it. 00:37:46.492 [2024-12-06 01:15:18.952701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.492 [2024-12-06 01:15:18.952736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.492 qpair failed and we were unable to recover it. 00:37:46.492 [2024-12-06 01:15:18.952900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.492 [2024-12-06 01:15:18.952935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.492 qpair failed and we were unable to recover it. 00:37:46.492 [2024-12-06 01:15:18.953066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.492 [2024-12-06 01:15:18.953111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.492 qpair failed and we were unable to recover it. 00:37:46.492 [2024-12-06 01:15:18.953253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.492 [2024-12-06 01:15:18.953287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.492 qpair failed and we were unable to recover it. 00:37:46.492 [2024-12-06 01:15:18.953529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.492 [2024-12-06 01:15:18.953590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.492 qpair failed and we were unable to recover it. 00:37:46.492 [2024-12-06 01:15:18.953741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.492 [2024-12-06 01:15:18.953779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.492 qpair failed and we were unable to recover it. 00:37:46.492 [2024-12-06 01:15:18.953947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.492 [2024-12-06 01:15:18.953982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.492 qpair failed and we were unable to recover it. 
00:37:46.492 [2024-12-06 01:15:18.954118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.492 [2024-12-06 01:15:18.954152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.492 qpair failed and we were unable to recover it. 00:37:46.492 [2024-12-06 01:15:18.954289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.492 [2024-12-06 01:15:18.954323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.492 qpair failed and we were unable to recover it. 00:37:46.492 [2024-12-06 01:15:18.954428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.492 [2024-12-06 01:15:18.954463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.492 qpair failed and we were unable to recover it. 00:37:46.492 [2024-12-06 01:15:18.954619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.492 [2024-12-06 01:15:18.954656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.492 qpair failed and we were unable to recover it. 00:37:46.492 [2024-12-06 01:15:18.954835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.492 [2024-12-06 01:15:18.954887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.492 qpair failed and we were unable to recover it. 00:37:46.492 [2024-12-06 01:15:18.954994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.492 [2024-12-06 01:15:18.955031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.492 qpair failed and we were unable to recover it. 00:37:46.492 [2024-12-06 01:15:18.955193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.492 [2024-12-06 01:15:18.955227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.492 qpair failed and we were unable to recover it. 00:37:46.492 [2024-12-06 01:15:18.955325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.492 [2024-12-06 01:15:18.955359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.492 qpair failed and we were unable to recover it. 00:37:46.492 [2024-12-06 01:15:18.955519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.492 [2024-12-06 01:15:18.955553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.492 qpair failed and we were unable to recover it. 00:37:46.492 [2024-12-06 01:15:18.955711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.492 [2024-12-06 01:15:18.955749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.492 qpair failed and we were unable to recover it. 
00:37:46.493 [2024-12-06 01:15:18.955918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.493 [2024-12-06 01:15:18.955966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.493 qpair failed and we were unable to recover it. 00:37:46.493 [2024-12-06 01:15:18.956086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.493 [2024-12-06 01:15:18.956126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.493 qpair failed and we were unable to recover it. 00:37:46.493 [2024-12-06 01:15:18.956259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.493 [2024-12-06 01:15:18.956294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.493 qpair failed and we were unable to recover it. 00:37:46.493 [2024-12-06 01:15:18.956429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.493 [2024-12-06 01:15:18.956463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.493 qpair failed and we were unable to recover it. 00:37:46.493 [2024-12-06 01:15:18.956587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.493 [2024-12-06 01:15:18.956621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.493 qpair failed and we were unable to recover it. 00:37:46.493 [2024-12-06 01:15:18.956761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.493 [2024-12-06 01:15:18.956794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.493 qpair failed and we were unable to recover it. 00:37:46.493 [2024-12-06 01:15:18.956973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.493 [2024-12-06 01:15:18.957008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.493 qpair failed and we were unable to recover it. 00:37:46.493 [2024-12-06 01:15:18.957155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.493 [2024-12-06 01:15:18.957195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.493 qpair failed and we were unable to recover it. 00:37:46.493 [2024-12-06 01:15:18.957303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.493 [2024-12-06 01:15:18.957343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.493 qpair failed and we were unable to recover it. 00:37:46.493 [2024-12-06 01:15:18.957472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.493 [2024-12-06 01:15:18.957507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.493 qpair failed and we were unable to recover it. 
00:37:46.493 [2024-12-06 01:15:18.957643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.493 [2024-12-06 01:15:18.957678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.493 qpair failed and we were unable to recover it. 00:37:46.493 [2024-12-06 01:15:18.957834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.493 [2024-12-06 01:15:18.957868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.493 qpair failed and we were unable to recover it. 00:37:46.493 [2024-12-06 01:15:18.958030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.493 [2024-12-06 01:15:18.958064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.493 qpair failed and we were unable to recover it. 00:37:46.493 [2024-12-06 01:15:18.958208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.493 [2024-12-06 01:15:18.958242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.493 qpair failed and we were unable to recover it. 00:37:46.493 [2024-12-06 01:15:18.958347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.493 [2024-12-06 01:15:18.958381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.493 qpair failed and we were unable to recover it. 00:37:46.493 [2024-12-06 01:15:18.958541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.493 [2024-12-06 01:15:18.958575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.493 qpair failed and we were unable to recover it. 00:37:46.493 [2024-12-06 01:15:18.958714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.493 [2024-12-06 01:15:18.958748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.493 qpair failed and we were unable to recover it. 00:37:46.493 [2024-12-06 01:15:18.958888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.493 [2024-12-06 01:15:18.958922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.493 qpair failed and we were unable to recover it. 00:37:46.493 [2024-12-06 01:15:18.959086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.493 [2024-12-06 01:15:18.959125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.493 qpair failed and we were unable to recover it. 00:37:46.493 [2024-12-06 01:15:18.959285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.493 [2024-12-06 01:15:18.959319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.493 qpair failed and we were unable to recover it. 
00:37:46.493 [2024-12-06 01:15:18.959455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.493 [2024-12-06 01:15:18.959489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.493 qpair failed and we were unable to recover it. 00:37:46.493 [2024-12-06 01:15:18.959621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.493 [2024-12-06 01:15:18.959660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.493 qpair failed and we were unable to recover it. 00:37:46.493 [2024-12-06 01:15:18.959832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.493 [2024-12-06 01:15:18.959866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.493 qpair failed and we were unable to recover it. 00:37:46.493 [2024-12-06 01:15:18.959966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.493 [2024-12-06 01:15:18.960000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.493 qpair failed and we were unable to recover it. 00:37:46.493 [2024-12-06 01:15:18.960141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.493 [2024-12-06 01:15:18.960176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.493 qpair failed and we were unable to recover it. 00:37:46.493 [2024-12-06 01:15:18.960306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.493 [2024-12-06 01:15:18.960341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.493 qpair failed and we were unable to recover it. 00:37:46.493 [2024-12-06 01:15:18.960471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.493 [2024-12-06 01:15:18.960505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.493 qpair failed and we were unable to recover it. 00:37:46.493 [2024-12-06 01:15:18.960669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.493 [2024-12-06 01:15:18.960704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.493 qpair failed and we were unable to recover it. 00:37:46.493 [2024-12-06 01:15:18.960841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.493 [2024-12-06 01:15:18.960875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.493 qpair failed and we were unable to recover it. 00:37:46.493 [2024-12-06 01:15:18.960987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.493 [2024-12-06 01:15:18.961020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.493 qpair failed and we were unable to recover it. 
00:37:46.493 [2024-12-06 01:15:18.961183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.493 [2024-12-06 01:15:18.961217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.493 qpair failed and we were unable to recover it. 00:37:46.493 [2024-12-06 01:15:18.961328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.493 [2024-12-06 01:15:18.961362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.493 qpair failed and we were unable to recover it. 00:37:46.493 [2024-12-06 01:15:18.961539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.493 [2024-12-06 01:15:18.961574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.493 qpair failed and we were unable to recover it. 00:37:46.493 [2024-12-06 01:15:18.961684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.493 [2024-12-06 01:15:18.961719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.493 qpair failed and we were unable to recover it. 00:37:46.493 [2024-12-06 01:15:18.961877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.494 [2024-12-06 01:15:18.961912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.494 qpair failed and we were unable to recover it. 00:37:46.494 [2024-12-06 01:15:18.962047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.494 [2024-12-06 01:15:18.962080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.494 qpair failed and we were unable to recover it. 00:37:46.494 [2024-12-06 01:15:18.962219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.494 [2024-12-06 01:15:18.962253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.494 qpair failed and we were unable to recover it. 00:37:46.494 [2024-12-06 01:15:18.962390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.494 [2024-12-06 01:15:18.962425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.494 qpair failed and we were unable to recover it. 00:37:46.494 [2024-12-06 01:15:18.962558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.494 [2024-12-06 01:15:18.962592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.494 qpair failed and we were unable to recover it. 00:37:46.494 [2024-12-06 01:15:18.962730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.494 [2024-12-06 01:15:18.962764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.494 qpair failed and we were unable to recover it. 
00:37:46.494 [2024-12-06 01:15:18.962937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.494 [2024-12-06 01:15:18.962971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.494 qpair failed and we were unable to recover it. 00:37:46.494 [2024-12-06 01:15:18.963131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.494 [2024-12-06 01:15:18.963165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.494 qpair failed and we were unable to recover it. 00:37:46.494 [2024-12-06 01:15:18.963297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.494 [2024-12-06 01:15:18.963342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.494 qpair failed and we were unable to recover it. 00:37:46.494 [2024-12-06 01:15:18.963504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.494 [2024-12-06 01:15:18.963538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.494 qpair failed and we were unable to recover it. 00:37:46.494 [2024-12-06 01:15:18.963673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.494 [2024-12-06 01:15:18.963707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.494 qpair failed and we were unable to recover it. 00:37:46.494 [2024-12-06 01:15:18.963870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.494 [2024-12-06 01:15:18.963918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.494 qpair failed and we were unable to recover it. 00:37:46.494 [2024-12-06 01:15:18.964077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.494 [2024-12-06 01:15:18.964132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.494 qpair failed and we were unable to recover it. 00:37:46.494 [2024-12-06 01:15:18.964275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.494 [2024-12-06 01:15:18.964312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.494 qpair failed and we were unable to recover it. 00:37:46.494 [2024-12-06 01:15:18.964431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.494 [2024-12-06 01:15:18.964467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.494 qpair failed and we were unable to recover it. 00:37:46.494 [2024-12-06 01:15:18.964616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.494 [2024-12-06 01:15:18.964652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.494 qpair failed and we were unable to recover it. 
00:37:46.494 [2024-12-06 01:15:18.964785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.494 [2024-12-06 01:15:18.964842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.494 qpair failed and we were unable to recover it. 00:37:46.494 [2024-12-06 01:15:18.964951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.494 [2024-12-06 01:15:18.964985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.494 qpair failed and we were unable to recover it. 00:37:46.494 [2024-12-06 01:15:18.965147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.494 [2024-12-06 01:15:18.965196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.494 qpair failed and we were unable to recover it. 00:37:46.494 [2024-12-06 01:15:18.965357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.494 [2024-12-06 01:15:18.965393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.494 qpair failed and we were unable to recover it. 00:37:46.494 [2024-12-06 01:15:18.965560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.494 [2024-12-06 01:15:18.965596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.494 qpair failed and we were unable to recover it. 00:37:46.494 [2024-12-06 01:15:18.965737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.494 [2024-12-06 01:15:18.965771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.494 qpair failed and we were unable to recover it. 00:37:46.494 [2024-12-06 01:15:18.965886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.494 [2024-12-06 01:15:18.965921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.494 qpair failed and we were unable to recover it. 00:37:46.494 [2024-12-06 01:15:18.966053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.494 [2024-12-06 01:15:18.966088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.494 qpair failed and we were unable to recover it. 00:37:46.494 [2024-12-06 01:15:18.966219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.494 [2024-12-06 01:15:18.966255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.494 qpair failed and we were unable to recover it. 00:37:46.494 [2024-12-06 01:15:18.966416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.494 [2024-12-06 01:15:18.966462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.494 qpair failed and we were unable to recover it. 
00:37:46.494 [2024-12-06 01:15:18.966595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.494 [2024-12-06 01:15:18.966629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.494 qpair failed and we were unable to recover it. 00:37:46.494 [2024-12-06 01:15:18.966728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.494 [2024-12-06 01:15:18.966768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.494 qpair failed and we were unable to recover it. 00:37:46.494 [2024-12-06 01:15:18.966919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.494 [2024-12-06 01:15:18.966955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.494 qpair failed and we were unable to recover it. 00:37:46.494 [2024-12-06 01:15:18.967107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.494 [2024-12-06 01:15:18.967156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.494 qpair failed and we were unable to recover it. 00:37:46.494 [2024-12-06 01:15:18.967309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.494 [2024-12-06 01:15:18.967344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.494 qpair failed and we were unable to recover it. 00:37:46.494 [2024-12-06 01:15:18.967507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.494 [2024-12-06 01:15:18.967542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.494 qpair failed and we were unable to recover it. 00:37:46.494 [2024-12-06 01:15:18.967679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.494 [2024-12-06 01:15:18.967714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.494 qpair failed and we were unable to recover it. 00:37:46.494 [2024-12-06 01:15:18.967853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.494 [2024-12-06 01:15:18.967888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.494 qpair failed and we were unable to recover it. 00:37:46.494 [2024-12-06 01:15:18.967998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.494 [2024-12-06 01:15:18.968032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.494 qpair failed and we were unable to recover it. 00:37:46.494 [2024-12-06 01:15:18.968174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.494 [2024-12-06 01:15:18.968208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.494 qpair failed and we were unable to recover it. 
00:37:46.494 [2024-12-06 01:15:18.968340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.494 [2024-12-06 01:15:18.968373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.494 qpair failed and we were unable to recover it. 00:37:46.494 [2024-12-06 01:15:18.968506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.494 [2024-12-06 01:15:18.968541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.494 qpair failed and we were unable to recover it. 00:37:46.495 [2024-12-06 01:15:18.968704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.495 [2024-12-06 01:15:18.968740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.495 qpair failed and we were unable to recover it. 00:37:46.495 [2024-12-06 01:15:18.968900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.495 [2024-12-06 01:15:18.968950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.495 qpair failed and we were unable to recover it. 00:37:46.495 [2024-12-06 01:15:18.969065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.495 [2024-12-06 01:15:18.969103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.495 qpair failed and we were unable to recover it. 00:37:46.495 [2024-12-06 01:15:18.969242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.495 [2024-12-06 01:15:18.969276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.495 qpair failed and we were unable to recover it. 00:37:46.495 [2024-12-06 01:15:18.969396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.495 [2024-12-06 01:15:18.969431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.495 qpair failed and we were unable to recover it. 00:37:46.495 [2024-12-06 01:15:18.969584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.495 [2024-12-06 01:15:18.969618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.495 qpair failed and we were unable to recover it. 00:37:46.495 [2024-12-06 01:15:18.969718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.495 [2024-12-06 01:15:18.969751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.495 qpair failed and we were unable to recover it. 00:37:46.495 [2024-12-06 01:15:18.969919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.495 [2024-12-06 01:15:18.969953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.495 qpair failed and we were unable to recover it. 
00:37:46.495 [2024-12-06 01:15:18.970084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.495 [2024-12-06 01:15:18.970124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.495 qpair failed and we were unable to recover it. 00:37:46.495 [2024-12-06 01:15:18.970256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.495 [2024-12-06 01:15:18.970291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.495 qpair failed and we were unable to recover it. 00:37:46.495 [2024-12-06 01:15:18.970452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.495 [2024-12-06 01:15:18.970485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.495 qpair failed and we were unable to recover it. 00:37:46.495 [2024-12-06 01:15:18.970596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.495 [2024-12-06 01:15:18.970630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.495 qpair failed and we were unable to recover it. 00:37:46.495 [2024-12-06 01:15:18.970756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.495 [2024-12-06 01:15:18.970805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.495 qpair failed and we were unable to recover it. 00:37:46.495 [2024-12-06 01:15:18.970946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.495 [2024-12-06 01:15:18.970995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.495 qpair failed and we were unable to recover it. 00:37:46.495 [2024-12-06 01:15:18.971174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.495 [2024-12-06 01:15:18.971210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.495 qpair failed and we were unable to recover it. 00:37:46.495 [2024-12-06 01:15:18.971369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.495 [2024-12-06 01:15:18.971404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.495 qpair failed and we were unable to recover it. 00:37:46.495 [2024-12-06 01:15:18.971519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.495 [2024-12-06 01:15:18.971554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.495 qpair failed and we were unable to recover it. 00:37:46.495 [2024-12-06 01:15:18.971686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.495 [2024-12-06 01:15:18.971720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.495 qpair failed and we were unable to recover it. 
00:37:46.495 [2024-12-06 01:15:18.971884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.495 [2024-12-06 01:15:18.971919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.495 qpair failed and we were unable to recover it. 00:37:46.495 [2024-12-06 01:15:18.972039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.495 [2024-12-06 01:15:18.972077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.495 qpair failed and we were unable to recover it. 00:37:46.495 [2024-12-06 01:15:18.972211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.495 [2024-12-06 01:15:18.972246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.495 qpair failed and we were unable to recover it. 00:37:46.495 [2024-12-06 01:15:18.972385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.495 [2024-12-06 01:15:18.972419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.495 qpair failed and we were unable to recover it. 00:37:46.495 [2024-12-06 01:15:18.972530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.495 [2024-12-06 01:15:18.972565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.495 qpair failed and we were unable to recover it. 00:37:46.495 [2024-12-06 01:15:18.972732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.495 [2024-12-06 01:15:18.972771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.495 qpair failed and we were unable to recover it. 00:37:46.495 [2024-12-06 01:15:18.972948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.495 [2024-12-06 01:15:18.972982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.495 qpair failed and we were unable to recover it. 00:37:46.495 [2024-12-06 01:15:18.973090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.495 [2024-12-06 01:15:18.973124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.495 qpair failed and we were unable to recover it. 00:37:46.495 [2024-12-06 01:15:18.973286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.495 [2024-12-06 01:15:18.973319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.495 qpair failed and we were unable to recover it. 00:37:46.495 [2024-12-06 01:15:18.973464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.495 [2024-12-06 01:15:18.973497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.495 qpair failed and we were unable to recover it. 
00:37:46.495 [2024-12-06 01:15:18.973603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.495 [2024-12-06 01:15:18.973636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.495 qpair failed and we were unable to recover it. 00:37:46.495 [2024-12-06 01:15:18.973805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.495 [2024-12-06 01:15:18.973851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.495 qpair failed and we were unable to recover it. 00:37:46.495 [2024-12-06 01:15:18.973989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.495 [2024-12-06 01:15:18.974038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.495 qpair failed and we were unable to recover it. 00:37:46.495 [2024-12-06 01:15:18.974199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.495 [2024-12-06 01:15:18.974235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.495 qpair failed and we were unable to recover it. 00:37:46.495 [2024-12-06 01:15:18.974374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.495 [2024-12-06 01:15:18.974408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.495 qpair failed and we were unable to recover it. 00:37:46.495 [2024-12-06 01:15:18.974542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.495 [2024-12-06 01:15:18.974576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.495 qpair failed and we were unable to recover it. 00:37:46.495 [2024-12-06 01:15:18.974708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.495 [2024-12-06 01:15:18.974742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.495 qpair failed and we were unable to recover it. 00:37:46.495 [2024-12-06 01:15:18.974920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.495 [2024-12-06 01:15:18.974955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.495 qpair failed and we were unable to recover it. 00:37:46.495 [2024-12-06 01:15:18.975085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.495 [2024-12-06 01:15:18.975133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.495 qpair failed and we were unable to recover it. 00:37:46.495 [2024-12-06 01:15:18.975246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.495 [2024-12-06 01:15:18.975282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.495 qpair failed and we were unable to recover it. 
00:37:46.496 [2024-12-06 01:15:18.975422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.496 [2024-12-06 01:15:18.975457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.496 qpair failed and we were unable to recover it. 00:37:46.496 [2024-12-06 01:15:18.975608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.496 [2024-12-06 01:15:18.975642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.496 qpair failed and we were unable to recover it. 00:37:46.496 [2024-12-06 01:15:18.975826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.496 [2024-12-06 01:15:18.975876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.496 qpair failed and we were unable to recover it. 00:37:46.496 [2024-12-06 01:15:18.975994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.496 [2024-12-06 01:15:18.976030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.496 qpair failed and we were unable to recover it. 00:37:46.496 [2024-12-06 01:15:18.976178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.496 [2024-12-06 01:15:18.976214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.496 qpair failed and we were unable to recover it. 00:37:46.496 [2024-12-06 01:15:18.976384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.496 [2024-12-06 01:15:18.976419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.496 qpair failed and we were unable to recover it. 00:37:46.496 [2024-12-06 01:15:18.976523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.496 [2024-12-06 01:15:18.976557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.496 qpair failed and we were unable to recover it. 00:37:46.496 [2024-12-06 01:15:18.976668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.496 [2024-12-06 01:15:18.976704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.496 qpair failed and we were unable to recover it. 00:37:46.496 [2024-12-06 01:15:18.976883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.496 [2024-12-06 01:15:18.976932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.496 qpair failed and we were unable to recover it. 00:37:46.496 [2024-12-06 01:15:18.977049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.496 [2024-12-06 01:15:18.977084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.496 qpair failed and we were unable to recover it. 
00:37:46.496 [2024-12-06 01:15:18.977252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.496 [2024-12-06 01:15:18.977286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.496 qpair failed and we were unable to recover it. 00:37:46.496 [2024-12-06 01:15:18.977391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.496 [2024-12-06 01:15:18.977424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.496 qpair failed and we were unable to recover it. 00:37:46.496 [2024-12-06 01:15:18.977564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.496 [2024-12-06 01:15:18.977597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.496 qpair failed and we were unable to recover it. 00:37:46.496 [2024-12-06 01:15:18.977710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.496 [2024-12-06 01:15:18.977745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.496 qpair failed and we were unable to recover it. 00:37:46.496 [2024-12-06 01:15:18.977905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.496 [2024-12-06 01:15:18.977941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.496 qpair failed and we were unable to recover it. 00:37:46.496 [2024-12-06 01:15:18.978046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.496 [2024-12-06 01:15:18.978081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.496 qpair failed and we were unable to recover it. 00:37:46.496 [2024-12-06 01:15:18.978244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.496 [2024-12-06 01:15:18.978293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.496 qpair failed and we were unable to recover it. 00:37:46.496 [2024-12-06 01:15:18.978463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.496 [2024-12-06 01:15:18.978499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.496 qpair failed and we were unable to recover it. 00:37:46.496 [2024-12-06 01:15:18.978658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.496 [2024-12-06 01:15:18.978696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.496 qpair failed and we were unable to recover it. 00:37:46.496 [2024-12-06 01:15:18.978845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.496 [2024-12-06 01:15:18.978897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.496 qpair failed and we were unable to recover it. 
00:37:46.496 [2024-12-06 01:15:18.979036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.496 [2024-12-06 01:15:18.979072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.496 qpair failed and we were unable to recover it. 00:37:46.496 [2024-12-06 01:15:18.979207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.496 [2024-12-06 01:15:18.979241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.496 qpair failed and we were unable to recover it. 00:37:46.496 [2024-12-06 01:15:18.979358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.496 [2024-12-06 01:15:18.979391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.496 qpair failed and we were unable to recover it. 00:37:46.496 [2024-12-06 01:15:18.979510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.496 [2024-12-06 01:15:18.979546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.496 qpair failed and we were unable to recover it. 00:37:46.496 [2024-12-06 01:15:18.979676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.496 [2024-12-06 01:15:18.979711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.496 qpair failed and we were unable to recover it. 00:37:46.496 [2024-12-06 01:15:18.979823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.496 [2024-12-06 01:15:18.979858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.496 qpair failed and we were unable to recover it. 00:37:46.496 [2024-12-06 01:15:18.979962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.496 [2024-12-06 01:15:18.979996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.496 qpair failed and we were unable to recover it. 00:37:46.496 [2024-12-06 01:15:18.980113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.496 [2024-12-06 01:15:18.980146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.496 qpair failed and we were unable to recover it. 00:37:46.496 [2024-12-06 01:15:18.980306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.496 [2024-12-06 01:15:18.980339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.496 qpair failed and we were unable to recover it. 00:37:46.496 [2024-12-06 01:15:18.980484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.496 [2024-12-06 01:15:18.980517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.496 qpair failed and we were unable to recover it. 
00:37:46.496 [2024-12-06 01:15:18.980619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.496 [2024-12-06 01:15:18.980654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.496 qpair failed and we were unable to recover it. 00:37:46.496 [2024-12-06 01:15:18.980809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.496 [2024-12-06 01:15:18.980871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.496 qpair failed and we were unable to recover it. 00:37:46.496 [2024-12-06 01:15:18.981015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.496 [2024-12-06 01:15:18.981052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.496 qpair failed and we were unable to recover it. 00:37:46.496 [2024-12-06 01:15:18.981170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.496 [2024-12-06 01:15:18.981204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.496 qpair failed and we were unable to recover it. 00:37:46.496 [2024-12-06 01:15:18.981359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.496 [2024-12-06 01:15:18.981393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.496 qpair failed and we were unable to recover it. 00:37:46.496 [2024-12-06 01:15:18.981553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.496 [2024-12-06 01:15:18.981588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.496 qpair failed and we were unable to recover it. 00:37:46.496 [2024-12-06 01:15:18.981746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.496 [2024-12-06 01:15:18.981780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.496 qpair failed and we were unable to recover it. 00:37:46.497 [2024-12-06 01:15:18.981888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.497 [2024-12-06 01:15:18.981923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.497 qpair failed and we were unable to recover it. 00:37:46.497 [2024-12-06 01:15:18.982053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.497 [2024-12-06 01:15:18.982087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.497 qpair failed and we were unable to recover it. 00:37:46.497 [2024-12-06 01:15:18.982223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.497 [2024-12-06 01:15:18.982256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.497 qpair failed and we were unable to recover it. 
00:37:46.497 [2024-12-06 01:15:18.982382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.497 [2024-12-06 01:15:18.982415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.497 qpair failed and we were unable to recover it. 00:37:46.497 [2024-12-06 01:15:18.982536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.497 [2024-12-06 01:15:18.982575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.497 qpair failed and we were unable to recover it. 00:37:46.497 [2024-12-06 01:15:18.982685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.497 [2024-12-06 01:15:18.982721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.497 qpair failed and we were unable to recover it. 00:37:46.497 [2024-12-06 01:15:18.982853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.497 [2024-12-06 01:15:18.982889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.497 qpair failed and we were unable to recover it. 00:37:46.497 [2024-12-06 01:15:18.982996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.497 [2024-12-06 01:15:18.983031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.497 qpair failed and we were unable to recover it. 00:37:46.497 [2024-12-06 01:15:18.983175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.497 [2024-12-06 01:15:18.983210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.497 qpair failed and we were unable to recover it. 00:37:46.497 [2024-12-06 01:15:18.983341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.497 [2024-12-06 01:15:18.983376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.497 qpair failed and we were unable to recover it. 00:37:46.497 [2024-12-06 01:15:18.983514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.497 [2024-12-06 01:15:18.983548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.497 qpair failed and we were unable to recover it. 00:37:46.497 [2024-12-06 01:15:18.983707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.497 [2024-12-06 01:15:18.983744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.497 qpair failed and we were unable to recover it. 00:37:46.497 [2024-12-06 01:15:18.983937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.497 [2024-12-06 01:15:18.983971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.497 qpair failed and we were unable to recover it. 
00:37:46.497 [2024-12-06 01:15:18.984080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.497 [2024-12-06 01:15:18.984114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.497 qpair failed and we were unable to recover it. 00:37:46.497 [2024-12-06 01:15:18.984282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.497 [2024-12-06 01:15:18.984316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.497 qpair failed and we were unable to recover it. 00:37:46.497 [2024-12-06 01:15:18.984414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.497 [2024-12-06 01:15:18.984447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.497 qpair failed and we were unable to recover it. 00:37:46.497 [2024-12-06 01:15:18.984582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.497 [2024-12-06 01:15:18.984615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.497 qpair failed and we were unable to recover it. 00:37:46.497 [2024-12-06 01:15:18.984739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.497 [2024-12-06 01:15:18.984772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.497 qpair failed and we were unable to recover it. 00:37:46.497 [2024-12-06 01:15:18.984919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.497 [2024-12-06 01:15:18.984954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.497 qpair failed and we were unable to recover it. 00:37:46.497 [2024-12-06 01:15:18.985090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.497 [2024-12-06 01:15:18.985123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.497 qpair failed and we were unable to recover it. 00:37:46.497 [2024-12-06 01:15:18.985252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.497 [2024-12-06 01:15:18.985285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.497 qpair failed and we were unable to recover it. 00:37:46.497 [2024-12-06 01:15:18.985425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.497 [2024-12-06 01:15:18.985458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.497 qpair failed and we were unable to recover it. 00:37:46.497 [2024-12-06 01:15:18.985588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.497 [2024-12-06 01:15:18.985621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.497 qpair failed and we were unable to recover it. 
00:37:46.497 [2024-12-06 01:15:18.985725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.497 [2024-12-06 01:15:18.985758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.497 qpair failed and we were unable to recover it. 00:37:46.497 [2024-12-06 01:15:18.985899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.497 [2024-12-06 01:15:18.985933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.497 qpair failed and we were unable to recover it. 00:37:46.497 [2024-12-06 01:15:18.986065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.497 [2024-12-06 01:15:18.986099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.497 qpair failed and we were unable to recover it. 00:37:46.497 [2024-12-06 01:15:18.986203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.497 [2024-12-06 01:15:18.986237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.497 qpair failed and we were unable to recover it. 00:37:46.497 [2024-12-06 01:15:18.986373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.497 [2024-12-06 01:15:18.986406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.497 qpair failed and we were unable to recover it. 00:37:46.497 [2024-12-06 01:15:18.986538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.497 [2024-12-06 01:15:18.986572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.497 qpair failed and we were unable to recover it. 00:37:46.497 [2024-12-06 01:15:18.986679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.497 [2024-12-06 01:15:18.986714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.497 qpair failed and we were unable to recover it. 00:37:46.497 [2024-12-06 01:15:18.986860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.497 [2024-12-06 01:15:18.986894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.497 qpair failed and we were unable to recover it. 00:37:46.497 [2024-12-06 01:15:18.987055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.497 [2024-12-06 01:15:18.987089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.497 qpair failed and we were unable to recover it. 00:37:46.497 [2024-12-06 01:15:18.987223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.497 [2024-12-06 01:15:18.987256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.497 qpair failed and we were unable to recover it. 
00:37:46.497 [2024-12-06 01:15:18.987396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.497 [2024-12-06 01:15:18.987429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.497 qpair failed and we were unable to recover it. 00:37:46.498 [2024-12-06 01:15:18.987561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.498 [2024-12-06 01:15:18.987599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.498 qpair failed and we were unable to recover it. 00:37:46.498 [2024-12-06 01:15:18.987726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.498 [2024-12-06 01:15:18.987759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.498 qpair failed and we were unable to recover it. 00:37:46.498 [2024-12-06 01:15:18.987901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.498 [2024-12-06 01:15:18.987935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.498 qpair failed and we were unable to recover it. 00:37:46.498 [2024-12-06 01:15:18.988033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.498 [2024-12-06 01:15:18.988067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.498 qpair failed and we were unable to recover it. 00:37:46.498 [2024-12-06 01:15:18.988203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.498 [2024-12-06 01:15:18.988236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.498 qpair failed and we were unable to recover it. 00:37:46.498 [2024-12-06 01:15:18.988398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.498 [2024-12-06 01:15:18.988431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.498 qpair failed and we were unable to recover it. 00:37:46.498 [2024-12-06 01:15:18.988564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.498 [2024-12-06 01:15:18.988598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.498 qpair failed and we were unable to recover it. 00:37:46.498 [2024-12-06 01:15:18.988703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.498 [2024-12-06 01:15:18.988736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.498 qpair failed and we were unable to recover it. 00:37:46.498 [2024-12-06 01:15:18.988886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.498 [2024-12-06 01:15:18.988935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.498 qpair failed and we were unable to recover it. 
00:37:46.498 [2024-12-06 01:15:18.989085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.498 [2024-12-06 01:15:18.989134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.498 qpair failed and we were unable to recover it. 00:37:46.498 [2024-12-06 01:15:18.989281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.498 [2024-12-06 01:15:18.989318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.498 qpair failed and we were unable to recover it. 00:37:46.498 [2024-12-06 01:15:18.989473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.498 [2024-12-06 01:15:18.989509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.498 qpair failed and we were unable to recover it. 00:37:46.498 [2024-12-06 01:15:18.989657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.498 [2024-12-06 01:15:18.989693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.498 qpair failed and we were unable to recover it. 00:37:46.498 [2024-12-06 01:15:18.989851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.498 [2024-12-06 01:15:18.989903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.498 qpair failed and we were unable to recover it. 00:37:46.498 [2024-12-06 01:15:18.990023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.498 [2024-12-06 01:15:18.990056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.498 qpair failed and we were unable to recover it. 00:37:46.498 [2024-12-06 01:15:18.990191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.498 [2024-12-06 01:15:18.990224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.498 qpair failed and we were unable to recover it. 00:37:46.498 [2024-12-06 01:15:18.990383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.498 [2024-12-06 01:15:18.990416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.498 qpair failed and we were unable to recover it. 00:37:46.498 [2024-12-06 01:15:18.990546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.498 [2024-12-06 01:15:18.990579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.498 qpair failed and we were unable to recover it. 00:37:46.498 [2024-12-06 01:15:18.990719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.498 [2024-12-06 01:15:18.990758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.498 qpair failed and we were unable to recover it. 
00:37:46.498 [2024-12-06 01:15:18.990902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.498 [2024-12-06 01:15:18.990937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.498 qpair failed and we were unable to recover it. 00:37:46.498 [2024-12-06 01:15:18.991074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.498 [2024-12-06 01:15:18.991110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.498 qpair failed and we were unable to recover it. 00:37:46.498 [2024-12-06 01:15:18.991271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.498 [2024-12-06 01:15:18.991306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.498 qpair failed and we were unable to recover it. 00:37:46.498 [2024-12-06 01:15:18.991415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.498 [2024-12-06 01:15:18.991449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.498 qpair failed and we were unable to recover it. 00:37:46.498 [2024-12-06 01:15:18.991611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.498 [2024-12-06 01:15:18.991646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.498 qpair failed and we were unable to recover it. 00:37:46.498 [2024-12-06 01:15:18.991786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.498 [2024-12-06 01:15:18.991826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.498 qpair failed and we were unable to recover it. 00:37:46.498 [2024-12-06 01:15:18.991987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.498 [2024-12-06 01:15:18.992020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.498 qpair failed and we were unable to recover it. 00:37:46.498 [2024-12-06 01:15:18.992162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.498 [2024-12-06 01:15:18.992198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.498 qpair failed and we were unable to recover it. 00:37:46.498 [2024-12-06 01:15:18.992348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.498 [2024-12-06 01:15:18.992383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.498 qpair failed and we were unable to recover it. 00:37:46.498 [2024-12-06 01:15:18.992548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.498 [2024-12-06 01:15:18.992581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.498 qpair failed and we were unable to recover it. 
00:37:46.498 [2024-12-06 01:15:18.992692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.498 [2024-12-06 01:15:18.992726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.498 qpair failed and we were unable to recover it. 00:37:46.498 [2024-12-06 01:15:18.992889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.498 [2024-12-06 01:15:18.992923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.498 qpair failed and we were unable to recover it. 00:37:46.498 [2024-12-06 01:15:18.993061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.498 [2024-12-06 01:15:18.993096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.498 qpair failed and we were unable to recover it. 00:37:46.498 [2024-12-06 01:15:18.993257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.498 [2024-12-06 01:15:18.993291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.498 qpair failed and we were unable to recover it. 00:37:46.498 [2024-12-06 01:15:18.993426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.498 [2024-12-06 01:15:18.993460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.498 qpair failed and we were unable to recover it. 00:37:46.498 [2024-12-06 01:15:18.993588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.498 [2024-12-06 01:15:18.993622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.498 qpair failed and we were unable to recover it. 00:37:46.498 [2024-12-06 01:15:18.993749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.498 [2024-12-06 01:15:18.993797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.498 qpair failed and we were unable to recover it. 00:37:46.498 [2024-12-06 01:15:18.993928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.498 [2024-12-06 01:15:18.993977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.498 qpair failed and we were unable to recover it. 00:37:46.498 [2024-12-06 01:15:18.994151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.499 [2024-12-06 01:15:18.994188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.499 qpair failed and we were unable to recover it. 00:37:46.499 [2024-12-06 01:15:18.994353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.499 [2024-12-06 01:15:18.994389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.499 qpair failed and we were unable to recover it. 
00:37:46.499 [2024-12-06 01:15:18.994499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.499 [2024-12-06 01:15:18.994534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.499 qpair failed and we were unable to recover it. 00:37:46.499 [2024-12-06 01:15:18.994696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.499 [2024-12-06 01:15:18.994736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.499 qpair failed and we were unable to recover it. 00:37:46.499 [2024-12-06 01:15:18.994846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.499 [2024-12-06 01:15:18.994882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.499 qpair failed and we were unable to recover it. 00:37:46.499 [2024-12-06 01:15:18.995043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.499 [2024-12-06 01:15:18.995091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.499 qpair failed and we were unable to recover it. 00:37:46.499 [2024-12-06 01:15:18.995211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.499 [2024-12-06 01:15:18.995246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.499 qpair failed and we were unable to recover it. 00:37:46.499 [2024-12-06 01:15:18.995390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.499 [2024-12-06 01:15:18.995426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.499 qpair failed and we were unable to recover it. 00:37:46.499 [2024-12-06 01:15:18.995560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.499 [2024-12-06 01:15:18.995595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.499 qpair failed and we were unable to recover it. 00:37:46.499 [2024-12-06 01:15:18.995753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.499 [2024-12-06 01:15:18.995791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.499 qpair failed and we were unable to recover it. 00:37:46.499 [2024-12-06 01:15:18.995979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.499 [2024-12-06 01:15:18.996013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.499 qpair failed and we were unable to recover it. 00:37:46.499 [2024-12-06 01:15:18.996194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.499 [2024-12-06 01:15:18.996243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.499 qpair failed and we were unable to recover it. 
00:37:46.499 [2024-12-06 01:15:18.996384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.499 [2024-12-06 01:15:18.996421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.499 qpair failed and we were unable to recover it. 00:37:46.499 [2024-12-06 01:15:18.996596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.499 [2024-12-06 01:15:18.996631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.499 qpair failed and we were unable to recover it. 00:37:46.499 [2024-12-06 01:15:18.996768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.499 [2024-12-06 01:15:18.996803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.499 qpair failed and we were unable to recover it. 00:37:46.499 [2024-12-06 01:15:18.996951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.499 [2024-12-06 01:15:18.996985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.499 qpair failed and we were unable to recover it. 00:37:46.499 [2024-12-06 01:15:18.997133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.499 [2024-12-06 01:15:18.997182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.499 qpair failed and we were unable to recover it. 00:37:46.499 [2024-12-06 01:15:18.997335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.499 [2024-12-06 01:15:18.997370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.499 qpair failed and we were unable to recover it. 00:37:46.499 [2024-12-06 01:15:18.997476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.499 [2024-12-06 01:15:18.997510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.499 qpair failed and we were unable to recover it. 00:37:46.499 [2024-12-06 01:15:18.997639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.499 [2024-12-06 01:15:18.997673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.499 qpair failed and we were unable to recover it. 00:37:46.499 [2024-12-06 01:15:18.997812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.499 [2024-12-06 01:15:18.997852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.499 qpair failed and we were unable to recover it. 00:37:46.499 [2024-12-06 01:15:18.997992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.499 [2024-12-06 01:15:18.998025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.499 qpair failed and we were unable to recover it. 
00:37:46.499 [2024-12-06 01:15:18.998131] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 24.03.0 initialization...
00:37:46.499 [2024-12-06 01:15:18.998196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.499 [2024-12-06 01:15:18.998232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.499 qpair failed and we were unable to recover it.
00:37:46.499 [2024-12-06 01:15:18.998260] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:37:46.499 [2024-12-06 01:15:18.998335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.499 [2024-12-06 01:15:18.998369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.499 qpair failed and we were unable to recover it.
00:37:46.499 [2024-12-06 01:15:18.998486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.499 [2024-12-06 01:15:18.998519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.499 qpair failed and we were unable to recover it.
00:37:46.499 [2024-12-06 01:15:18.998656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.499 [2024-12-06 01:15:18.998690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.499 qpair failed and we were unable to recover it.
00:37:46.499 [2024-12-06 01:15:18.998800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.499 [2024-12-06 01:15:18.998843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.499 qpair failed and we were unable to recover it.
00:37:46.499 [2024-12-06 01:15:18.999018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.499 [2024-12-06 01:15:18.999052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.499 qpair failed and we were unable to recover it.
00:37:46.499 [2024-12-06 01:15:18.999191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.499 [2024-12-06 01:15:18.999224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.499 qpair failed and we were unable to recover it.
00:37:46.499 [2024-12-06 01:15:18.999400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.499 [2024-12-06 01:15:18.999433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.499 qpair failed and we were unable to recover it.
00:37:46.499 [2024-12-06 01:15:18.999593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.499 [2024-12-06 01:15:18.999627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.499 qpair failed and we were unable to recover it.
00:37:46.499 [2024-12-06 01:15:18.999740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.499 [2024-12-06 01:15:18.999773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.499 qpair failed and we were unable to recover it. 00:37:46.499 [2024-12-06 01:15:18.999912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.499 [2024-12-06 01:15:18.999945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.499 qpair failed and we were unable to recover it. 00:37:46.499 [2024-12-06 01:15:19.000072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.499 [2024-12-06 01:15:19.000106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.499 qpair failed and we were unable to recover it. 00:37:46.499 [2024-12-06 01:15:19.000235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.499 [2024-12-06 01:15:19.000269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.499 qpair failed and we were unable to recover it. 00:37:46.499 [2024-12-06 01:15:19.000406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.499 [2024-12-06 01:15:19.000439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.499 qpair failed and we were unable to recover it. 00:37:46.499 [2024-12-06 01:15:19.000591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.499 [2024-12-06 01:15:19.000628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.499 qpair failed and we were unable to recover it. 00:37:46.500 [2024-12-06 01:15:19.000803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.500 [2024-12-06 01:15:19.000877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.500 qpair failed and we were unable to recover it. 00:37:46.500 [2024-12-06 01:15:19.001024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.500 [2024-12-06 01:15:19.001058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.500 qpair failed and we were unable to recover it. 00:37:46.500 [2024-12-06 01:15:19.001198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.500 [2024-12-06 01:15:19.001232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.500 qpair failed and we were unable to recover it. 00:37:46.500 [2024-12-06 01:15:19.001389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.500 [2024-12-06 01:15:19.001423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.500 qpair failed and we were unable to recover it. 
00:37:46.500 [2024-12-06 01:15:19.001556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.500 [2024-12-06 01:15:19.001589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.500 qpair failed and we were unable to recover it. 00:37:46.500 [2024-12-06 01:15:19.001744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.500 [2024-12-06 01:15:19.001791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.500 qpair failed and we were unable to recover it. 00:37:46.500 [2024-12-06 01:15:19.001950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.500 [2024-12-06 01:15:19.001998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.500 qpair failed and we were unable to recover it. 00:37:46.500 [2024-12-06 01:15:19.002105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.500 [2024-12-06 01:15:19.002142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.500 qpair failed and we were unable to recover it. 00:37:46.500 [2024-12-06 01:15:19.002254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.500 [2024-12-06 01:15:19.002289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.500 qpair failed and we were unable to recover it. 00:37:46.500 [2024-12-06 01:15:19.002456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.500 [2024-12-06 01:15:19.002496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.500 qpair failed and we were unable to recover it. 00:37:46.500 [2024-12-06 01:15:19.002665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.500 [2024-12-06 01:15:19.002720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.500 qpair failed and we were unable to recover it. 00:37:46.500 [2024-12-06 01:15:19.002892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.500 [2024-12-06 01:15:19.002927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.500 qpair failed and we were unable to recover it. 00:37:46.500 [2024-12-06 01:15:19.003062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.500 [2024-12-06 01:15:19.003095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.500 qpair failed and we were unable to recover it. 00:37:46.500 [2024-12-06 01:15:19.003226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.500 [2024-12-06 01:15:19.003260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.500 qpair failed and we were unable to recover it. 
00:37:46.500 [2024-12-06 01:15:19.003400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.500 [2024-12-06 01:15:19.003434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.500 qpair failed and we were unable to recover it. 00:37:46.500 [2024-12-06 01:15:19.003592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.500 [2024-12-06 01:15:19.003629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.500 qpair failed and we were unable to recover it. 00:37:46.500 [2024-12-06 01:15:19.003798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.500 [2024-12-06 01:15:19.003842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.500 qpair failed and we were unable to recover it. 00:37:46.500 [2024-12-06 01:15:19.003997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.500 [2024-12-06 01:15:19.004031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.500 qpair failed and we were unable to recover it. 00:37:46.500 [2024-12-06 01:15:19.004145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.500 [2024-12-06 01:15:19.004183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.500 qpair failed and we were unable to recover it. 00:37:46.500 [2024-12-06 01:15:19.004347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.500 [2024-12-06 01:15:19.004381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.500 qpair failed and we were unable to recover it. 00:37:46.500 [2024-12-06 01:15:19.004479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.500 [2024-12-06 01:15:19.004513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.500 qpair failed and we were unable to recover it. 00:37:46.500 [2024-12-06 01:15:19.004672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.500 [2024-12-06 01:15:19.004710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.500 qpair failed and we were unable to recover it. 00:37:46.500 [2024-12-06 01:15:19.004871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.500 [2024-12-06 01:15:19.004907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.500 qpair failed and we were unable to recover it. 00:37:46.500 [2024-12-06 01:15:19.005042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.500 [2024-12-06 01:15:19.005076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.500 qpair failed and we were unable to recover it. 
00:37:46.500 [2024-12-06 01:15:19.005227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.500 [2024-12-06 01:15:19.005265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.500 qpair failed and we were unable to recover it. 00:37:46.500 [2024-12-06 01:15:19.005380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.500 [2024-12-06 01:15:19.005418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.500 qpair failed and we were unable to recover it. 00:37:46.500 [2024-12-06 01:15:19.005536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.500 [2024-12-06 01:15:19.005574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.500 qpair failed and we were unable to recover it. 00:37:46.500 [2024-12-06 01:15:19.005693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.500 [2024-12-06 01:15:19.005730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.500 qpair failed and we were unable to recover it. 00:37:46.500 [2024-12-06 01:15:19.005895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.500 [2024-12-06 01:15:19.005977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.500 qpair failed and we were unable to recover it. 00:37:46.500 [2024-12-06 01:15:19.006159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.500 [2024-12-06 01:15:19.006196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.500 qpair failed and we were unable to recover it. 00:37:46.500 [2024-12-06 01:15:19.006370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.500 [2024-12-06 01:15:19.006409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.500 qpair failed and we were unable to recover it. 00:37:46.500 [2024-12-06 01:15:19.006586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.500 [2024-12-06 01:15:19.006625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.500 qpair failed and we were unable to recover it. 00:37:46.500 [2024-12-06 01:15:19.006806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.500 [2024-12-06 01:15:19.006872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.500 qpair failed and we were unable to recover it. 00:37:46.500 [2024-12-06 01:15:19.007005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.500 [2024-12-06 01:15:19.007039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.500 qpair failed and we were unable to recover it. 
00:37:46.500 [2024-12-06 01:15:19.007206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.500 [2024-12-06 01:15:19.007242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.500 qpair failed and we were unable to recover it. 00:37:46.500 [2024-12-06 01:15:19.007349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.500 [2024-12-06 01:15:19.007383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.500 qpair failed and we were unable to recover it. 00:37:46.500 [2024-12-06 01:15:19.007506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.500 [2024-12-06 01:15:19.007544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.500 qpair failed and we were unable to recover it. 00:37:46.500 [2024-12-06 01:15:19.007666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.501 [2024-12-06 01:15:19.007704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.501 qpair failed and we were unable to recover it. 00:37:46.501 [2024-12-06 01:15:19.007870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.501 [2024-12-06 01:15:19.007904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.501 qpair failed and we were unable to recover it. 00:37:46.501 [2024-12-06 01:15:19.008011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.501 [2024-12-06 01:15:19.008046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.501 qpair failed and we were unable to recover it. 00:37:46.501 [2024-12-06 01:15:19.008179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.501 [2024-12-06 01:15:19.008212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.501 qpair failed and we were unable to recover it. 00:37:46.501 [2024-12-06 01:15:19.008377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.501 [2024-12-06 01:15:19.008411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.501 qpair failed and we were unable to recover it. 00:37:46.501 [2024-12-06 01:15:19.008528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.501 [2024-12-06 01:15:19.008566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.501 qpair failed and we were unable to recover it. 00:37:46.501 [2024-12-06 01:15:19.008759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.501 [2024-12-06 01:15:19.008820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.501 qpair failed and we were unable to recover it. 
00:37:46.501 [2024-12-06 01:15:19.009000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.501 [2024-12-06 01:15:19.009049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.501 qpair failed and we were unable to recover it. 00:37:46.501 [2024-12-06 01:15:19.009197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.501 [2024-12-06 01:15:19.009239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.501 qpair failed and we were unable to recover it. 00:37:46.501 [2024-12-06 01:15:19.009405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.501 [2024-12-06 01:15:19.009441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.501 qpair failed and we were unable to recover it. 00:37:46.501 [2024-12-06 01:15:19.009581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.501 [2024-12-06 01:15:19.009615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.501 qpair failed and we were unable to recover it. 00:37:46.501 [2024-12-06 01:15:19.009771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.501 [2024-12-06 01:15:19.009825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.501 qpair failed and we were unable to recover it. 00:37:46.501 [2024-12-06 01:15:19.009977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.501 [2024-12-06 01:15:19.010013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.501 qpair failed and we were unable to recover it. 00:37:46.501 [2024-12-06 01:15:19.010129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.501 [2024-12-06 01:15:19.010167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.501 qpair failed and we were unable to recover it. 00:37:46.501 [2024-12-06 01:15:19.010306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.501 [2024-12-06 01:15:19.010341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.501 qpair failed and we were unable to recover it. 00:37:46.501 [2024-12-06 01:15:19.010526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.501 [2024-12-06 01:15:19.010598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.501 qpair failed and we were unable to recover it. 00:37:46.501 [2024-12-06 01:15:19.010750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.501 [2024-12-06 01:15:19.010788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.501 qpair failed and we were unable to recover it. 
00:37:46.501 [2024-12-06 01:15:19.010956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.501 [2024-12-06 01:15:19.010990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.501 qpair failed and we were unable to recover it. 00:37:46.501 [2024-12-06 01:15:19.011147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.501 [2024-12-06 01:15:19.011196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.501 qpair failed and we were unable to recover it. 00:37:46.501 [2024-12-06 01:15:19.011318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.501 [2024-12-06 01:15:19.011355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.501 qpair failed and we were unable to recover it. 00:37:46.501 [2024-12-06 01:15:19.011618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.501 [2024-12-06 01:15:19.011657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.501 qpair failed and we were unable to recover it. 00:37:46.501 [2024-12-06 01:15:19.011801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.501 [2024-12-06 01:15:19.011849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.501 qpair failed and we were unable to recover it. 00:37:46.501 [2024-12-06 01:15:19.011985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.501 [2024-12-06 01:15:19.012020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.501 qpair failed and we were unable to recover it. 00:37:46.501 [2024-12-06 01:15:19.012180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.501 [2024-12-06 01:15:19.012214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.501 qpair failed and we were unable to recover it. 00:37:46.501 [2024-12-06 01:15:19.012353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.501 [2024-12-06 01:15:19.012389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.501 qpair failed and we were unable to recover it. 00:37:46.501 [2024-12-06 01:15:19.012548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.501 [2024-12-06 01:15:19.012582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.501 qpair failed and we were unable to recover it. 00:37:46.501 [2024-12-06 01:15:19.012765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.501 [2024-12-06 01:15:19.012803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.501 qpair failed and we were unable to recover it. 
00:37:46.501 [2024-12-06 01:15:19.012942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.501 [2024-12-06 01:15:19.012977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.501 qpair failed and we were unable to recover it. 00:37:46.501 [2024-12-06 01:15:19.013124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.501 [2024-12-06 01:15:19.013158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.501 qpair failed and we were unable to recover it. 00:37:46.501 [2024-12-06 01:15:19.013294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.501 [2024-12-06 01:15:19.013328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.501 qpair failed and we were unable to recover it. 00:37:46.501 [2024-12-06 01:15:19.013427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.501 [2024-12-06 01:15:19.013461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.501 qpair failed and we were unable to recover it. 00:37:46.501 [2024-12-06 01:15:19.013599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.501 [2024-12-06 01:15:19.013633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.501 qpair failed and we were unable to recover it. 00:37:46.501 [2024-12-06 01:15:19.013740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.501 [2024-12-06 01:15:19.013774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.501 qpair failed and we were unable to recover it. 00:37:46.501 [2024-12-06 01:15:19.013921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.501 [2024-12-06 01:15:19.013958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.501 qpair failed and we were unable to recover it. 00:37:46.501 [2024-12-06 01:15:19.014116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.501 [2024-12-06 01:15:19.014164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.501 qpair failed and we were unable to recover it. 00:37:46.501 [2024-12-06 01:15:19.014311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.501 [2024-12-06 01:15:19.014348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.501 qpair failed and we were unable to recover it. 00:37:46.501 [2024-12-06 01:15:19.014480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.501 [2024-12-06 01:15:19.014515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.501 qpair failed and we were unable to recover it. 
00:37:46.501 [2024-12-06 01:15:19.014649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.501 [2024-12-06 01:15:19.014682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.501 qpair failed and we were unable to recover it. 00:37:46.501 [2024-12-06 01:15:19.014788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.502 [2024-12-06 01:15:19.014830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.502 qpair failed and we were unable to recover it. 00:37:46.502 [2024-12-06 01:15:19.014994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.502 [2024-12-06 01:15:19.015028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.502 qpair failed and we were unable to recover it. 00:37:46.502 [2024-12-06 01:15:19.015135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.502 [2024-12-06 01:15:19.015171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.502 qpair failed and we were unable to recover it. 00:37:46.502 [2024-12-06 01:15:19.015306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.502 [2024-12-06 01:15:19.015340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.502 qpair failed and we were unable to recover it. 00:37:46.502 [2024-12-06 01:15:19.015451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.502 [2024-12-06 01:15:19.015487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.502 qpair failed and we were unable to recover it. 00:37:46.502 [2024-12-06 01:15:19.015652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.502 [2024-12-06 01:15:19.015686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.502 qpair failed and we were unable to recover it. 00:37:46.502 [2024-12-06 01:15:19.015824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.502 [2024-12-06 01:15:19.015859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.502 qpair failed and we were unable to recover it. 00:37:46.502 [2024-12-06 01:15:19.015960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.502 [2024-12-06 01:15:19.015994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.502 qpair failed and we were unable to recover it. 00:37:46.502 [2024-12-06 01:15:19.016108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.502 [2024-12-06 01:15:19.016142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.502 qpair failed and we were unable to recover it. 
00:37:46.502 [2024-12-06 01:15:19.016279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.502 [2024-12-06 01:15:19.016314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.502 qpair failed and we were unable to recover it. 00:37:46.502 [2024-12-06 01:15:19.016454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.502 [2024-12-06 01:15:19.016503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.502 qpair failed and we were unable to recover it. 00:37:46.502 [2024-12-06 01:15:19.016641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.502 [2024-12-06 01:15:19.016675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.502 qpair failed and we were unable to recover it. 00:37:46.502 [2024-12-06 01:15:19.016827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.502 [2024-12-06 01:15:19.016882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.502 qpair failed and we were unable to recover it. 00:37:46.502 [2024-12-06 01:15:19.017016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.502 [2024-12-06 01:15:19.017050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.502 qpair failed and we were unable to recover it. 00:37:46.502 [2024-12-06 01:15:19.017180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.502 [2024-12-06 01:15:19.017214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.502 qpair failed and we were unable to recover it. 00:37:46.502 [2024-12-06 01:15:19.017346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.502 [2024-12-06 01:15:19.017381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.502 qpair failed and we were unable to recover it. 00:37:46.502 [2024-12-06 01:15:19.017557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.502 [2024-12-06 01:15:19.017606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.502 qpair failed and we were unable to recover it. 00:37:46.502 [2024-12-06 01:15:19.017777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.502 [2024-12-06 01:15:19.017821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.502 qpair failed and we were unable to recover it. 00:37:46.502 [2024-12-06 01:15:19.017932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.502 [2024-12-06 01:15:19.017968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.502 qpair failed and we were unable to recover it. 
00:37:46.502 [2024-12-06 01:15:19.018084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.502 [2024-12-06 01:15:19.018119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.502 qpair failed and we were unable to recover it. 00:37:46.502 [2024-12-06 01:15:19.018277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.502 [2024-12-06 01:15:19.018311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.502 qpair failed and we were unable to recover it. 00:37:46.502 [2024-12-06 01:15:19.018447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.502 [2024-12-06 01:15:19.018482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.502 qpair failed and we were unable to recover it. 00:37:46.502 [2024-12-06 01:15:19.018623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.502 [2024-12-06 01:15:19.018660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.502 qpair failed and we were unable to recover it. 00:37:46.502 [2024-12-06 01:15:19.018799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.502 [2024-12-06 01:15:19.018839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.502 qpair failed and we were unable to recover it. 00:37:46.502 [2024-12-06 01:15:19.018976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.502 [2024-12-06 01:15:19.019011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.502 qpair failed and we were unable to recover it. 00:37:46.502 [2024-12-06 01:15:19.019148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.502 [2024-12-06 01:15:19.019182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.502 qpair failed and we were unable to recover it. 00:37:46.502 [2024-12-06 01:15:19.019294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.502 [2024-12-06 01:15:19.019328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.502 qpair failed and we were unable to recover it. 00:37:46.502 [2024-12-06 01:15:19.019433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.502 [2024-12-06 01:15:19.019467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.502 qpair failed and we were unable to recover it. 00:37:46.502 [2024-12-06 01:15:19.019598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.502 [2024-12-06 01:15:19.019632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.502 qpair failed and we were unable to recover it. 
00:37:46.502 [2024-12-06 01:15:19.019753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.502 [2024-12-06 01:15:19.019800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.502 qpair failed and we were unable to recover it. 00:37:46.502 [2024-12-06 01:15:19.019957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.502 [2024-12-06 01:15:19.019994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.502 qpair failed and we were unable to recover it. 00:37:46.502 [2024-12-06 01:15:19.020157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.502 [2024-12-06 01:15:19.020192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.502 qpair failed and we were unable to recover it. 00:37:46.502 [2024-12-06 01:15:19.020328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.502 [2024-12-06 01:15:19.020362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.502 qpair failed and we were unable to recover it. 00:37:46.502 [2024-12-06 01:15:19.020533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.502 [2024-12-06 01:15:19.020567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.502 qpair failed and we were unable to recover it. 00:37:46.502 [2024-12-06 01:15:19.020705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.502 [2024-12-06 01:15:19.020741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.502 qpair failed and we were unable to recover it. 00:37:46.503 [2024-12-06 01:15:19.020885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.503 [2024-12-06 01:15:19.020920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.503 qpair failed and we were unable to recover it. 00:37:46.503 [2024-12-06 01:15:19.021067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.503 [2024-12-06 01:15:19.021107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.503 qpair failed and we were unable to recover it. 00:37:46.503 [2024-12-06 01:15:19.021222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.503 [2024-12-06 01:15:19.021257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.503 qpair failed and we were unable to recover it. 00:37:46.503 [2024-12-06 01:15:19.021391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.503 [2024-12-06 01:15:19.021424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.503 qpair failed and we were unable to recover it. 
00:37:46.503 [2024-12-06 01:15:19.021532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.503 [2024-12-06 01:15:19.021566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.503 qpair failed and we were unable to recover it. 00:37:46.503 [2024-12-06 01:15:19.021732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.503 [2024-12-06 01:15:19.021781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.503 qpair failed and we were unable to recover it. 00:37:46.503 [2024-12-06 01:15:19.021986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.503 [2024-12-06 01:15:19.022035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.503 qpair failed and we were unable to recover it. 00:37:46.503 [2024-12-06 01:15:19.022177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.503 [2024-12-06 01:15:19.022225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.503 qpair failed and we were unable to recover it. 00:37:46.503 [2024-12-06 01:15:19.022335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.503 [2024-12-06 01:15:19.022371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.503 qpair failed and we were unable to recover it. 00:37:46.503 [2024-12-06 01:15:19.022518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.503 [2024-12-06 01:15:19.022555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.503 qpair failed and we were unable to recover it. 00:37:46.503 [2024-12-06 01:15:19.022713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.503 [2024-12-06 01:15:19.022752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.503 qpair failed and we were unable to recover it. 00:37:46.503 [2024-12-06 01:15:19.022913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.503 [2024-12-06 01:15:19.022949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.503 qpair failed and we were unable to recover it. 00:37:46.503 [2024-12-06 01:15:19.023102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.503 [2024-12-06 01:15:19.023150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.503 qpair failed and we were unable to recover it. 00:37:46.503 [2024-12-06 01:15:19.023293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.503 [2024-12-06 01:15:19.023329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.503 qpair failed and we were unable to recover it. 
00:37:46.503 [2024-12-06 01:15:19.023470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.503 [2024-12-06 01:15:19.023504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.503 qpair failed and we were unable to recover it. 00:37:46.503 [2024-12-06 01:15:19.023603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.503 [2024-12-06 01:15:19.023643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.503 qpair failed and we were unable to recover it. 00:37:46.503 [2024-12-06 01:15:19.023755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.503 [2024-12-06 01:15:19.023788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.503 qpair failed and we were unable to recover it. 00:37:46.503 [2024-12-06 01:15:19.023908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.503 [2024-12-06 01:15:19.023944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.503 qpair failed and we were unable to recover it. 00:37:46.503 [2024-12-06 01:15:19.024057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.503 [2024-12-06 01:15:19.024092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.503 qpair failed and we were unable to recover it. 00:37:46.503 [2024-12-06 01:15:19.024199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.503 [2024-12-06 01:15:19.024233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.503 qpair failed and we were unable to recover it. 00:37:46.503 [2024-12-06 01:15:19.024372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.503 [2024-12-06 01:15:19.024406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.503 qpair failed and we were unable to recover it. 00:37:46.503 [2024-12-06 01:15:19.024542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.503 [2024-12-06 01:15:19.024576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.503 qpair failed and we were unable to recover it. 00:37:46.503 [2024-12-06 01:15:19.024706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.503 [2024-12-06 01:15:19.024740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.503 qpair failed and we were unable to recover it. 00:37:46.503 [2024-12-06 01:15:19.024861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.503 [2024-12-06 01:15:19.024897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.503 qpair failed and we were unable to recover it. 
00:37:46.503 [2024-12-06 01:15:19.025029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.503 [2024-12-06 01:15:19.025064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.503 qpair failed and we were unable to recover it. 00:37:46.503 [2024-12-06 01:15:19.025173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.503 [2024-12-06 01:15:19.025208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.503 qpair failed and we were unable to recover it. 00:37:46.503 [2024-12-06 01:15:19.025343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.503 [2024-12-06 01:15:19.025377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.503 qpair failed and we were unable to recover it. 00:37:46.503 [2024-12-06 01:15:19.025510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.503 [2024-12-06 01:15:19.025545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.503 qpair failed and we were unable to recover it. 00:37:46.503 [2024-12-06 01:15:19.025680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.503 [2024-12-06 01:15:19.025714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.503 qpair failed and we were unable to recover it. 00:37:46.503 [2024-12-06 01:15:19.025858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.503 [2024-12-06 01:15:19.025895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.503 qpair failed and we were unable to recover it. 00:37:46.503 [2024-12-06 01:15:19.026003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.503 [2024-12-06 01:15:19.026038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.503 qpair failed and we were unable to recover it. 00:37:46.503 [2024-12-06 01:15:19.026151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.503 [2024-12-06 01:15:19.026184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.503 qpair failed and we were unable to recover it. 00:37:46.503 [2024-12-06 01:15:19.026346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.503 [2024-12-06 01:15:19.026380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.503 qpair failed and we were unable to recover it. 00:37:46.503 [2024-12-06 01:15:19.026509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.503 [2024-12-06 01:15:19.026543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.503 qpair failed and we were unable to recover it. 
00:37:46.503 [2024-12-06 01:15:19.026651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.503 [2024-12-06 01:15:19.026686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.503 qpair failed and we were unable to recover it. 00:37:46.503 [2024-12-06 01:15:19.026853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.503 [2024-12-06 01:15:19.026889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.503 qpair failed and we were unable to recover it. 00:37:46.503 [2024-12-06 01:15:19.026996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.503 [2024-12-06 01:15:19.027031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.503 qpair failed and we were unable to recover it. 00:37:46.503 [2024-12-06 01:15:19.027148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.504 [2024-12-06 01:15:19.027182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.504 qpair failed and we were unable to recover it. 00:37:46.504 [2024-12-06 01:15:19.027287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.504 [2024-12-06 01:15:19.027321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.504 qpair failed and we were unable to recover it. 00:37:46.504 [2024-12-06 01:15:19.027480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.504 [2024-12-06 01:15:19.027513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.504 qpair failed and we were unable to recover it. 00:37:46.504 [2024-12-06 01:15:19.027665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.504 [2024-12-06 01:15:19.027702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.504 qpair failed and we were unable to recover it. 00:37:46.504 [2024-12-06 01:15:19.027862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.504 [2024-12-06 01:15:19.027897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.504 qpair failed and we were unable to recover it. 00:37:46.504 [2024-12-06 01:15:19.028069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.504 [2024-12-06 01:15:19.028104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.504 qpair failed and we were unable to recover it. 00:37:46.504 [2024-12-06 01:15:19.028214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.504 [2024-12-06 01:15:19.028249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.504 qpair failed and we were unable to recover it. 
00:37:46.504 [2024-12-06 01:15:19.028412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.504 [2024-12-06 01:15:19.028446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:46.504 qpair failed and we were unable to recover it.
00:37:46.504 [2024-12-06 01:15:19.028550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.504 [2024-12-06 01:15:19.028585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:46.504 qpair failed and we were unable to recover it.
00:37:46.504-00:37:46.509 [... the identical three-line failure repeats continuously for the rest of this span (target-side timestamps 01:15:19.028 through 01:15:19.068): posix_sock_create reports "connect() failed, errno = 111", nvme_tcp_qpair_connect_sock reports "sock connection error ... with addr=10.0.0.2, port=4420", and each attempt ends with "qpair failed and we were unable to recover it.", cycling over tqpair values 0x61500021ff00, 0x6150001ffe80, 0x6150001f2f00 and 0x615000210000 ...]
00:37:46.509 [2024-12-06 01:15:19.068760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.509 [2024-12-06 01:15:19.068805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.509 qpair failed and we were unable to recover it. 00:37:46.509 [2024-12-06 01:15:19.068928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.509 [2024-12-06 01:15:19.068965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.509 qpair failed and we were unable to recover it. 00:37:46.509 [2024-12-06 01:15:19.069129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.509 [2024-12-06 01:15:19.069162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.509 qpair failed and we were unable to recover it. 00:37:46.509 [2024-12-06 01:15:19.069265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.509 [2024-12-06 01:15:19.069299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.509 qpair failed and we were unable to recover it. 00:37:46.509 [2024-12-06 01:15:19.069435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.509 [2024-12-06 01:15:19.069469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.509 qpair failed and we were unable to recover it. 00:37:46.509 [2024-12-06 01:15:19.069605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.509 [2024-12-06 01:15:19.069638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.509 qpair failed and we were unable to recover it. 00:37:46.510 [2024-12-06 01:15:19.069805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.510 [2024-12-06 01:15:19.069847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.510 qpair failed and we were unable to recover it. 00:37:46.510 [2024-12-06 01:15:19.069979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.510 [2024-12-06 01:15:19.070013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.510 qpair failed and we were unable to recover it. 00:37:46.510 [2024-12-06 01:15:19.070145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.510 [2024-12-06 01:15:19.070178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.510 qpair failed and we were unable to recover it. 00:37:46.510 [2024-12-06 01:15:19.070284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.510 [2024-12-06 01:15:19.070317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.510 qpair failed and we were unable to recover it. 
00:37:46.510 [2024-12-06 01:15:19.070478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.510 [2024-12-06 01:15:19.070512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.510 qpair failed and we were unable to recover it. 00:37:46.510 [2024-12-06 01:15:19.070650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.510 [2024-12-06 01:15:19.070683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.510 qpair failed and we were unable to recover it. 00:37:46.510 [2024-12-06 01:15:19.070832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.510 [2024-12-06 01:15:19.070868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.510 qpair failed and we were unable to recover it. 00:37:46.510 [2024-12-06 01:15:19.071005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.510 [2024-12-06 01:15:19.071039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.510 qpair failed and we were unable to recover it. 00:37:46.510 [2024-12-06 01:15:19.071152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.510 [2024-12-06 01:15:19.071185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.510 qpair failed and we were unable to recover it. 00:37:46.510 [2024-12-06 01:15:19.071324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.510 [2024-12-06 01:15:19.071357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.510 qpair failed and we were unable to recover it. 00:37:46.510 [2024-12-06 01:15:19.071455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.510 [2024-12-06 01:15:19.071488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.510 qpair failed and we were unable to recover it. 00:37:46.510 [2024-12-06 01:15:19.071592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.510 [2024-12-06 01:15:19.071626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.510 qpair failed and we were unable to recover it. 00:37:46.510 [2024-12-06 01:15:19.071747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.510 [2024-12-06 01:15:19.071807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.510 qpair failed and we were unable to recover it. 00:37:46.510 [2024-12-06 01:15:19.072000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.510 [2024-12-06 01:15:19.072037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.510 qpair failed and we were unable to recover it. 
00:37:46.510 [2024-12-06 01:15:19.072184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.510 [2024-12-06 01:15:19.072219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.510 qpair failed and we were unable to recover it. 00:37:46.510 [2024-12-06 01:15:19.072378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.510 [2024-12-06 01:15:19.072413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.510 qpair failed and we were unable to recover it. 00:37:46.510 [2024-12-06 01:15:19.072565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.510 [2024-12-06 01:15:19.072614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.510 qpair failed and we were unable to recover it. 00:37:46.510 [2024-12-06 01:15:19.072727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.510 [2024-12-06 01:15:19.072765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.510 qpair failed and we were unable to recover it. 00:37:46.510 [2024-12-06 01:15:19.072914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.510 [2024-12-06 01:15:19.072949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.510 qpair failed and we were unable to recover it. 00:37:46.510 [2024-12-06 01:15:19.073082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.510 [2024-12-06 01:15:19.073120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.510 qpair failed and we were unable to recover it. 00:37:46.510 [2024-12-06 01:15:19.073248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.510 [2024-12-06 01:15:19.073282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.510 qpair failed and we were unable to recover it. 00:37:46.510 [2024-12-06 01:15:19.073426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.510 [2024-12-06 01:15:19.073459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.510 qpair failed and we were unable to recover it. 00:37:46.510 [2024-12-06 01:15:19.073593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.510 [2024-12-06 01:15:19.073627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.510 qpair failed and we were unable to recover it. 00:37:46.510 [2024-12-06 01:15:19.073764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.510 [2024-12-06 01:15:19.073806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.510 qpair failed and we were unable to recover it. 
00:37:46.510 [2024-12-06 01:15:19.073932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.510 [2024-12-06 01:15:19.073966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.510 qpair failed and we were unable to recover it. 00:37:46.510 [2024-12-06 01:15:19.074096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.510 [2024-12-06 01:15:19.074139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.510 qpair failed and we were unable to recover it. 00:37:46.510 [2024-12-06 01:15:19.074267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.510 [2024-12-06 01:15:19.074300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.510 qpair failed and we were unable to recover it. 00:37:46.510 [2024-12-06 01:15:19.074437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.510 [2024-12-06 01:15:19.074470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.510 qpair failed and we were unable to recover it. 00:37:46.510 [2024-12-06 01:15:19.074609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.510 [2024-12-06 01:15:19.074642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.510 qpair failed and we were unable to recover it. 00:37:46.510 [2024-12-06 01:15:19.074806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.510 [2024-12-06 01:15:19.074849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.510 qpair failed and we were unable to recover it. 00:37:46.510 [2024-12-06 01:15:19.074982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.510 [2024-12-06 01:15:19.075016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.510 qpair failed and we were unable to recover it. 00:37:46.510 [2024-12-06 01:15:19.075178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.510 [2024-12-06 01:15:19.075212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.510 qpair failed and we were unable to recover it. 00:37:46.510 [2024-12-06 01:15:19.075344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.510 [2024-12-06 01:15:19.075378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.510 qpair failed and we were unable to recover it. 00:37:46.510 [2024-12-06 01:15:19.075540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.510 [2024-12-06 01:15:19.075574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.510 qpair failed and we were unable to recover it. 
00:37:46.510 [2024-12-06 01:15:19.075739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.510 [2024-12-06 01:15:19.075773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.510 qpair failed and we were unable to recover it. 00:37:46.510 [2024-12-06 01:15:19.075912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.510 [2024-12-06 01:15:19.075950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.510 qpair failed and we were unable to recover it. 00:37:46.510 [2024-12-06 01:15:19.076082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.510 [2024-12-06 01:15:19.076135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.510 qpair failed and we were unable to recover it. 00:37:46.510 [2024-12-06 01:15:19.076307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.510 [2024-12-06 01:15:19.076344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.511 qpair failed and we were unable to recover it. 00:37:46.511 [2024-12-06 01:15:19.076479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.511 [2024-12-06 01:15:19.076514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.511 qpair failed and we were unable to recover it. 00:37:46.511 [2024-12-06 01:15:19.076678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.511 [2024-12-06 01:15:19.076713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.511 qpair failed and we were unable to recover it. 00:37:46.511 [2024-12-06 01:15:19.076877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.511 [2024-12-06 01:15:19.076926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.511 qpair failed and we were unable to recover it. 00:37:46.511 [2024-12-06 01:15:19.077043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.511 [2024-12-06 01:15:19.077080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.511 qpair failed and we were unable to recover it. 00:37:46.511 [2024-12-06 01:15:19.077230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.511 [2024-12-06 01:15:19.077267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.511 qpair failed and we were unable to recover it. 00:37:46.511 [2024-12-06 01:15:19.077422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.511 [2024-12-06 01:15:19.077456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.511 qpair failed and we were unable to recover it. 
00:37:46.511 [2024-12-06 01:15:19.077613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.511 [2024-12-06 01:15:19.077647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.511 qpair failed and we were unable to recover it. 00:37:46.511 [2024-12-06 01:15:19.077806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.511 [2024-12-06 01:15:19.077863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.511 qpair failed and we were unable to recover it. 00:37:46.511 [2024-12-06 01:15:19.078041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.511 [2024-12-06 01:15:19.078078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.511 qpair failed and we were unable to recover it. 00:37:46.511 [2024-12-06 01:15:19.078247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.511 [2024-12-06 01:15:19.078282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.511 qpair failed and we were unable to recover it. 00:37:46.511 [2024-12-06 01:15:19.078443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.511 [2024-12-06 01:15:19.078477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.511 qpair failed and we were unable to recover it. 00:37:46.511 [2024-12-06 01:15:19.078593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.511 [2024-12-06 01:15:19.078629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.511 qpair failed and we were unable to recover it. 00:37:46.511 [2024-12-06 01:15:19.078772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.511 [2024-12-06 01:15:19.078805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.511 qpair failed and we were unable to recover it. 00:37:46.511 [2024-12-06 01:15:19.078996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.511 [2024-12-06 01:15:19.079044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.511 qpair failed and we were unable to recover it. 00:37:46.511 [2024-12-06 01:15:19.079202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.511 [2024-12-06 01:15:19.079240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.511 qpair failed and we were unable to recover it. 00:37:46.511 [2024-12-06 01:15:19.079371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.511 [2024-12-06 01:15:19.079407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.511 qpair failed and we were unable to recover it. 
00:37:46.511 [2024-12-06 01:15:19.079540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.511 [2024-12-06 01:15:19.079576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.511 qpair failed and we were unable to recover it. 00:37:46.511 [2024-12-06 01:15:19.079715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.511 [2024-12-06 01:15:19.079750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.511 qpair failed and we were unable to recover it. 00:37:46.511 [2024-12-06 01:15:19.079926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.511 [2024-12-06 01:15:19.079961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.511 qpair failed and we were unable to recover it. 00:37:46.511 [2024-12-06 01:15:19.080076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.511 [2024-12-06 01:15:19.080112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.511 qpair failed and we were unable to recover it. 00:37:46.511 [2024-12-06 01:15:19.080243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.511 [2024-12-06 01:15:19.080278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.511 qpair failed and we were unable to recover it. 00:37:46.511 [2024-12-06 01:15:19.080386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.511 [2024-12-06 01:15:19.080420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.511 qpair failed and we were unable to recover it. 00:37:46.511 [2024-12-06 01:15:19.080558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.511 [2024-12-06 01:15:19.080592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.511 qpair failed and we were unable to recover it. 00:37:46.511 [2024-12-06 01:15:19.080695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.511 [2024-12-06 01:15:19.080729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.511 qpair failed and we were unable to recover it. 00:37:46.511 [2024-12-06 01:15:19.080948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.511 [2024-12-06 01:15:19.081000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.511 qpair failed and we were unable to recover it. 00:37:46.511 [2024-12-06 01:15:19.081168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.511 [2024-12-06 01:15:19.081205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.511 qpair failed and we were unable to recover it. 
00:37:46.511 [2024-12-06 01:15:19.081340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.511 [2024-12-06 01:15:19.081375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.511 qpair failed and we were unable to recover it. 00:37:46.511 [2024-12-06 01:15:19.081513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.511 [2024-12-06 01:15:19.081548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.511 qpair failed and we were unable to recover it. 00:37:46.511 [2024-12-06 01:15:19.081741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.511 [2024-12-06 01:15:19.081779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.511 qpair failed and we were unable to recover it. 00:37:46.511 [2024-12-06 01:15:19.081924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.511 [2024-12-06 01:15:19.081959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.511 qpair failed and we were unable to recover it. 00:37:46.511 [2024-12-06 01:15:19.082089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.511 [2024-12-06 01:15:19.082134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.511 qpair failed and we were unable to recover it. 00:37:46.511 [2024-12-06 01:15:19.082284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.511 [2024-12-06 01:15:19.082323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.511 qpair failed and we were unable to recover it. 00:37:46.511 [2024-12-06 01:15:19.082507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.511 [2024-12-06 01:15:19.082545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.511 qpair failed and we were unable to recover it. 00:37:46.511 [2024-12-06 01:15:19.082720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.511 [2024-12-06 01:15:19.082759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.511 qpair failed and we were unable to recover it. 00:37:46.511 [2024-12-06 01:15:19.082906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.512 [2024-12-06 01:15:19.082943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.512 qpair failed and we were unable to recover it. 00:37:46.512 [2024-12-06 01:15:19.083072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.512 [2024-12-06 01:15:19.083120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.512 qpair failed and we were unable to recover it. 
00:37:46.512 [2024-12-06 01:15:19.083246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.512 [2024-12-06 01:15:19.083282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.512 qpair failed and we were unable to recover it. 00:37:46.512 [2024-12-06 01:15:19.083498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.512 [2024-12-06 01:15:19.083571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.512 qpair failed and we were unable to recover it. 00:37:46.512 [2024-12-06 01:15:19.083776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.512 [2024-12-06 01:15:19.083811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.512 qpair failed and we were unable to recover it. 00:37:46.512 [2024-12-06 01:15:19.083952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.512 [2024-12-06 01:15:19.083986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.512 qpair failed and we were unable to recover it. 00:37:46.512 [2024-12-06 01:15:19.084117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.512 [2024-12-06 01:15:19.084151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.512 qpair failed and we were unable to recover it. 00:37:46.512 [2024-12-06 01:15:19.084308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.512 [2024-12-06 01:15:19.084342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.512 qpair failed and we were unable to recover it. 00:37:46.512 [2024-12-06 01:15:19.084463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.512 [2024-12-06 01:15:19.084531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.512 qpair failed and we were unable to recover it. 00:37:46.512 [2024-12-06 01:15:19.084730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.512 [2024-12-06 01:15:19.084772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.512 qpair failed and we were unable to recover it. 00:37:46.512 [2024-12-06 01:15:19.084912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.512 [2024-12-06 01:15:19.084947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.512 qpair failed and we were unable to recover it. 00:37:46.512 [2024-12-06 01:15:19.085052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.512 [2024-12-06 01:15:19.085087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.512 qpair failed and we were unable to recover it. 
00:37:46.512 [2024-12-06 01:15:19.085224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.512 [2024-12-06 01:15:19.085259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.512 qpair failed and we were unable to recover it. 00:37:46.512 [2024-12-06 01:15:19.085369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.512 [2024-12-06 01:15:19.085403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.512 qpair failed and we were unable to recover it. 00:37:46.512 [2024-12-06 01:15:19.085548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.512 [2024-12-06 01:15:19.085583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.512 qpair failed and we were unable to recover it. 00:37:46.512 [2024-12-06 01:15:19.085707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.512 [2024-12-06 01:15:19.085745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.512 qpair failed and we were unable to recover it. 00:37:46.512 [2024-12-06 01:15:19.085931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.512 [2024-12-06 01:15:19.085965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.512 qpair failed and we were unable to recover it. 00:37:46.512 [2024-12-06 01:15:19.086072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.512 [2024-12-06 01:15:19.086108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.512 qpair failed and we were unable to recover it. 00:37:46.512 [2024-12-06 01:15:19.086245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.512 [2024-12-06 01:15:19.086280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.512 qpair failed and we were unable to recover it. 00:37:46.512 [2024-12-06 01:15:19.086409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.512 [2024-12-06 01:15:19.086442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.512 qpair failed and we were unable to recover it. 00:37:46.512 [2024-12-06 01:15:19.086538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.512 [2024-12-06 01:15:19.086572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.512 qpair failed and we were unable to recover it. 00:37:46.512 [2024-12-06 01:15:19.086729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.512 [2024-12-06 01:15:19.086766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.512 qpair failed and we were unable to recover it. 
00:37:46.512 [2024-12-06 01:15:19.086896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.512 [2024-12-06 01:15:19.086931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.512 qpair failed and we were unable to recover it. 00:37:46.512 [2024-12-06 01:15:19.087061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.512 [2024-12-06 01:15:19.087095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.512 qpair failed and we were unable to recover it. 00:37:46.512 [2024-12-06 01:15:19.087222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.512 [2024-12-06 01:15:19.087259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.512 qpair failed and we were unable to recover it. 00:37:46.512 [2024-12-06 01:15:19.087408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.512 [2024-12-06 01:15:19.087446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.512 qpair failed and we were unable to recover it. 00:37:46.512 [2024-12-06 01:15:19.087605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.512 [2024-12-06 01:15:19.087645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.512 qpair failed and we were unable to recover it. 00:37:46.512 [2024-12-06 01:15:19.087770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.512 [2024-12-06 01:15:19.087806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.512 qpair failed and we were unable to recover it. 00:37:46.512 [2024-12-06 01:15:19.087930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.512 [2024-12-06 01:15:19.087965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.512 qpair failed and we were unable to recover it. 00:37:46.512 [2024-12-06 01:15:19.088097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.512 [2024-12-06 01:15:19.088131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.512 qpair failed and we were unable to recover it. 00:37:46.512 [2024-12-06 01:15:19.088256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.512 [2024-12-06 01:15:19.088313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.512 qpair failed and we were unable to recover it. 00:37:46.512 [2024-12-06 01:15:19.088497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.512 [2024-12-06 01:15:19.088558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.512 qpair failed and we were unable to recover it. 
00:37:46.512 [2024-12-06 01:15:19.088741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.512 [2024-12-06 01:15:19.088780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.512 qpair failed and we were unable to recover it. 00:37:46.512 [2024-12-06 01:15:19.088951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.512 [2024-12-06 01:15:19.088987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.512 qpair failed and we were unable to recover it. 00:37:46.512 [2024-12-06 01:15:19.089107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.512 [2024-12-06 01:15:19.089142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.512 qpair failed and we were unable to recover it. 00:37:46.512 [2024-12-06 01:15:19.089301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.512 [2024-12-06 01:15:19.089335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.512 qpair failed and we were unable to recover it. 00:37:46.512 [2024-12-06 01:15:19.089476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.512 [2024-12-06 01:15:19.089511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.512 qpair failed and we were unable to recover it. 00:37:46.512 [2024-12-06 01:15:19.089672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.512 [2024-12-06 01:15:19.089727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.512 qpair failed and we were unable to recover it. 00:37:46.513 [2024-12-06 01:15:19.089865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.513 [2024-12-06 01:15:19.089900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.513 qpair failed and we were unable to recover it. 00:37:46.513 [2024-12-06 01:15:19.090023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.513 [2024-12-06 01:15:19.090071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.513 qpair failed and we were unable to recover it. 00:37:46.513 [2024-12-06 01:15:19.090239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.513 [2024-12-06 01:15:19.090276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.513 qpair failed and we were unable to recover it. 00:37:46.513 [2024-12-06 01:15:19.090413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.513 [2024-12-06 01:15:19.090452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.513 qpair failed and we were unable to recover it. 
00:37:46.513 [2024-12-06 01:15:19.090600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.513 [2024-12-06 01:15:19.090638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.513 qpair failed and we were unable to recover it. 00:37:46.513 [2024-12-06 01:15:19.090781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.513 [2024-12-06 01:15:19.090833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.513 qpair failed and we were unable to recover it. 00:37:46.513 [2024-12-06 01:15:19.090990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.513 [2024-12-06 01:15:19.091024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.513 qpair failed and we were unable to recover it. 00:37:46.513 [2024-12-06 01:15:19.091157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.513 [2024-12-06 01:15:19.091190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.513 qpair failed and we were unable to recover it. 00:37:46.513 [2024-12-06 01:15:19.091313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.513 [2024-12-06 01:15:19.091366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.513 qpair failed and we were unable to recover it. 00:37:46.513 [2024-12-06 01:15:19.091482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.513 [2024-12-06 01:15:19.091519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.513 qpair failed and we were unable to recover it. 00:37:46.513 [2024-12-06 01:15:19.091678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.513 [2024-12-06 01:15:19.091718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.513 qpair failed and we were unable to recover it. 00:37:46.513 [2024-12-06 01:15:19.091882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.513 [2024-12-06 01:15:19.091917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.513 qpair failed and we were unable to recover it. 00:37:46.513 [2024-12-06 01:15:19.092031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.513 [2024-12-06 01:15:19.092066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.513 qpair failed and we were unable to recover it. 00:37:46.513 [2024-12-06 01:15:19.092195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.513 [2024-12-06 01:15:19.092230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.513 qpair failed and we were unable to recover it. 
00:37:46.513 [2024-12-06 01:15:19.092361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.513 [2024-12-06 01:15:19.092394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.513 qpair failed and we were unable to recover it.
[... the same three-record failure sequence (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error; qpair failed and we were unable to recover it) repeats continuously from 01:15:19.092 through 01:15:19.128, always against addr=10.0.0.2, port=4420, cycling over tqpair values 0x6150001ffe80, 0x6150001f2f00, 0x61500021ff00, and 0x615000210000 ...]
00:37:46.518 [2024-12-06 01:15:19.128417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.518 [2024-12-06 01:15:19.128452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.518 qpair failed and we were unable to recover it.
00:37:46.518 [2024-12-06 01:15:19.128589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.518 [2024-12-06 01:15:19.128624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.518 qpair failed and we were unable to recover it. 00:37:46.518 [2024-12-06 01:15:19.128738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.518 [2024-12-06 01:15:19.128786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.518 qpair failed and we were unable to recover it. 00:37:46.518 [2024-12-06 01:15:19.128920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.518 [2024-12-06 01:15:19.128957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.518 qpair failed and we were unable to recover it. 00:37:46.518 [2024-12-06 01:15:19.129071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.518 [2024-12-06 01:15:19.129108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.519 qpair failed and we were unable to recover it. 00:37:46.519 [2024-12-06 01:15:19.129218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.519 [2024-12-06 01:15:19.129254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.519 qpair failed and we were unable to recover it. 00:37:46.519 [2024-12-06 01:15:19.129388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.519 [2024-12-06 01:15:19.129423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.519 qpair failed and we were unable to recover it. 00:37:46.519 [2024-12-06 01:15:19.129563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.519 [2024-12-06 01:15:19.129599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.519 qpair failed and we were unable to recover it. 00:37:46.519 [2024-12-06 01:15:19.129755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.519 [2024-12-06 01:15:19.129790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.519 qpair failed and we were unable to recover it. 00:37:46.519 [2024-12-06 01:15:19.129952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.519 [2024-12-06 01:15:19.129993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.519 qpair failed and we were unable to recover it. 00:37:46.519 [2024-12-06 01:15:19.130099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.519 [2024-12-06 01:15:19.130135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.519 qpair failed and we were unable to recover it. 
00:37:46.519 [2024-12-06 01:15:19.130322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.519 [2024-12-06 01:15:19.130358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.519 qpair failed and we were unable to recover it. 00:37:46.519 [2024-12-06 01:15:19.130489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.519 [2024-12-06 01:15:19.130535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.519 qpair failed and we were unable to recover it. 00:37:46.519 [2024-12-06 01:15:19.130702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.519 [2024-12-06 01:15:19.130737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.519 qpair failed and we were unable to recover it. 00:37:46.519 [2024-12-06 01:15:19.130835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.519 [2024-12-06 01:15:19.130871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.519 qpair failed and we were unable to recover it. 00:37:46.519 [2024-12-06 01:15:19.131019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.519 [2024-12-06 01:15:19.131054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.519 qpair failed and we were unable to recover it. 00:37:46.519 [2024-12-06 01:15:19.131163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.519 [2024-12-06 01:15:19.131198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.519 qpair failed and we were unable to recover it. 00:37:46.519 [2024-12-06 01:15:19.131328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.519 [2024-12-06 01:15:19.131363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.519 qpair failed and we were unable to recover it. 00:37:46.519 [2024-12-06 01:15:19.131550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.519 [2024-12-06 01:15:19.131584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.519 qpair failed and we were unable to recover it. 00:37:46.519 [2024-12-06 01:15:19.131718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.519 [2024-12-06 01:15:19.131753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.519 qpair failed and we were unable to recover it. 00:37:46.519 [2024-12-06 01:15:19.131900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.519 [2024-12-06 01:15:19.131937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.519 qpair failed and we were unable to recover it. 
00:37:46.519 [2024-12-06 01:15:19.132072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.519 [2024-12-06 01:15:19.132107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.519 qpair failed and we were unable to recover it. 00:37:46.519 [2024-12-06 01:15:19.132207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.519 [2024-12-06 01:15:19.132242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.519 qpair failed and we were unable to recover it. 00:37:46.519 [2024-12-06 01:15:19.132381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.519 [2024-12-06 01:15:19.132416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.519 qpair failed and we were unable to recover it. 00:37:46.519 [2024-12-06 01:15:19.132552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.519 [2024-12-06 01:15:19.132587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.519 qpair failed and we were unable to recover it. 00:37:46.519 [2024-12-06 01:15:19.132735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.519 [2024-12-06 01:15:19.132770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.519 qpair failed and we were unable to recover it. 00:37:46.519 [2024-12-06 01:15:19.132923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.519 [2024-12-06 01:15:19.132960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.519 qpair failed and we were unable to recover it. 00:37:46.519 [2024-12-06 01:15:19.133101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.519 [2024-12-06 01:15:19.133136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.519 qpair failed and we were unable to recover it. 00:37:46.519 [2024-12-06 01:15:19.133244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.519 [2024-12-06 01:15:19.133279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.519 qpair failed and we were unable to recover it. 00:37:46.519 [2024-12-06 01:15:19.133412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.519 [2024-12-06 01:15:19.133446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.519 qpair failed and we were unable to recover it. 00:37:46.519 [2024-12-06 01:15:19.133609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.519 [2024-12-06 01:15:19.133643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.519 qpair failed and we were unable to recover it. 
00:37:46.519 [2024-12-06 01:15:19.133762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.519 [2024-12-06 01:15:19.133798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.519 qpair failed and we were unable to recover it. 00:37:46.519 [2024-12-06 01:15:19.133956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.519 [2024-12-06 01:15:19.133992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.519 qpair failed and we were unable to recover it. 00:37:46.519 [2024-12-06 01:15:19.134136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.519 [2024-12-06 01:15:19.134171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.519 qpair failed and we were unable to recover it. 00:37:46.519 [2024-12-06 01:15:19.134334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.519 [2024-12-06 01:15:19.134369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.519 qpair failed and we were unable to recover it. 00:37:46.519 [2024-12-06 01:15:19.134508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.519 [2024-12-06 01:15:19.134542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.519 qpair failed and we were unable to recover it. 00:37:46.519 [2024-12-06 01:15:19.134660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.519 [2024-12-06 01:15:19.134695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.519 qpair failed and we were unable to recover it. 00:37:46.519 [2024-12-06 01:15:19.134855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.519 [2024-12-06 01:15:19.134891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.519 qpair failed and we were unable to recover it. 00:37:46.519 [2024-12-06 01:15:19.135023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.519 [2024-12-06 01:15:19.135058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.519 qpair failed and we were unable to recover it. 00:37:46.519 [2024-12-06 01:15:19.135216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.519 [2024-12-06 01:15:19.135250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.519 qpair failed and we were unable to recover it. 00:37:46.519 [2024-12-06 01:15:19.135360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.519 [2024-12-06 01:15:19.135396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.519 qpair failed and we were unable to recover it. 
00:37:46.519 [2024-12-06 01:15:19.135498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.519 [2024-12-06 01:15:19.135533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.519 qpair failed and we were unable to recover it. 00:37:46.519 [2024-12-06 01:15:19.135697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.519 [2024-12-06 01:15:19.135731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.519 qpair failed and we were unable to recover it. 00:37:46.519 [2024-12-06 01:15:19.135864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.519 [2024-12-06 01:15:19.135912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.519 qpair failed and we were unable to recover it. 00:37:46.519 [2024-12-06 01:15:19.136037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.519 [2024-12-06 01:15:19.136072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.519 qpair failed and we were unable to recover it. 00:37:46.519 [2024-12-06 01:15:19.136238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.520 [2024-12-06 01:15:19.136273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.520 qpair failed and we were unable to recover it. 00:37:46.520 [2024-12-06 01:15:19.136410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.520 [2024-12-06 01:15:19.136446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.520 qpair failed and we were unable to recover it. 00:37:46.520 [2024-12-06 01:15:19.136577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.520 [2024-12-06 01:15:19.136611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.520 qpair failed and we were unable to recover it. 00:37:46.520 [2024-12-06 01:15:19.136743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.520 [2024-12-06 01:15:19.136778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.520 qpair failed and we were unable to recover it. 00:37:46.520 [2024-12-06 01:15:19.136885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.520 [2024-12-06 01:15:19.136926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.520 qpair failed and we were unable to recover it. 00:37:46.520 [2024-12-06 01:15:19.137080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.520 [2024-12-06 01:15:19.137128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.520 qpair failed and we were unable to recover it. 
00:37:46.520 [2024-12-06 01:15:19.137250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.520 [2024-12-06 01:15:19.137288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.520 qpair failed and we were unable to recover it. 00:37:46.520 [2024-12-06 01:15:19.137405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.520 [2024-12-06 01:15:19.137442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.520 qpair failed and we were unable to recover it. 00:37:46.520 [2024-12-06 01:15:19.137579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.520 [2024-12-06 01:15:19.137613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.520 qpair failed and we were unable to recover it. 00:37:46.520 [2024-12-06 01:15:19.137728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.520 [2024-12-06 01:15:19.137761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.520 qpair failed and we were unable to recover it. 00:37:46.520 [2024-12-06 01:15:19.137905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.520 [2024-12-06 01:15:19.137940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.520 qpair failed and we were unable to recover it. 00:37:46.520 [2024-12-06 01:15:19.138050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.520 [2024-12-06 01:15:19.138084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.520 qpair failed and we were unable to recover it. 00:37:46.520 [2024-12-06 01:15:19.138188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.520 [2024-12-06 01:15:19.138221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.520 qpair failed and we were unable to recover it. 00:37:46.520 [2024-12-06 01:15:19.138386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.520 [2024-12-06 01:15:19.138420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.520 qpair failed and we were unable to recover it. 00:37:46.520 [2024-12-06 01:15:19.138556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.520 [2024-12-06 01:15:19.138592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.520 qpair failed and we were unable to recover it. 00:37:46.520 [2024-12-06 01:15:19.138751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.520 [2024-12-06 01:15:19.138786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.520 qpair failed and we were unable to recover it. 
00:37:46.520 [2024-12-06 01:15:19.138945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.520 [2024-12-06 01:15:19.138993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.520 qpair failed and we were unable to recover it. 00:37:46.520 [2024-12-06 01:15:19.139105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.520 [2024-12-06 01:15:19.139139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.520 qpair failed and we were unable to recover it. 00:37:46.520 [2024-12-06 01:15:19.139276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.520 [2024-12-06 01:15:19.139310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.520 qpair failed and we were unable to recover it. 00:37:46.520 [2024-12-06 01:15:19.139466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.520 [2024-12-06 01:15:19.139500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.520 qpair failed and we were unable to recover it. 00:37:46.520 [2024-12-06 01:15:19.139614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.520 [2024-12-06 01:15:19.139647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.520 qpair failed and we were unable to recover it. 00:37:46.520 [2024-12-06 01:15:19.139781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.520 [2024-12-06 01:15:19.139833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.520 qpair failed and we were unable to recover it. 00:37:46.520 [2024-12-06 01:15:19.139972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.520 [2024-12-06 01:15:19.140009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.520 qpair failed and we were unable to recover it. 00:37:46.520 [2024-12-06 01:15:19.140149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.520 [2024-12-06 01:15:19.140184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.520 qpair failed and we were unable to recover it. 00:37:46.520 [2024-12-06 01:15:19.140296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.520 [2024-12-06 01:15:19.140331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.520 qpair failed and we were unable to recover it. 00:37:46.520 [2024-12-06 01:15:19.140463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.520 [2024-12-06 01:15:19.140499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.520 qpair failed and we were unable to recover it. 
00:37:46.520 [2024-12-06 01:15:19.140671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.520 [2024-12-06 01:15:19.140720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.520 qpair failed and we were unable to recover it. 00:37:46.520 [2024-12-06 01:15:19.140866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.520 [2024-12-06 01:15:19.140904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.520 qpair failed and we were unable to recover it. 00:37:46.520 [2024-12-06 01:15:19.141042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.520 [2024-12-06 01:15:19.141077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.520 qpair failed and we were unable to recover it. 00:37:46.520 [2024-12-06 01:15:19.141222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.520 [2024-12-06 01:15:19.141255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.520 qpair failed and we were unable to recover it. 00:37:46.520 [2024-12-06 01:15:19.141396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.520 [2024-12-06 01:15:19.141430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.520 qpair failed and we were unable to recover it. 00:37:46.520 [2024-12-06 01:15:19.141540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.520 [2024-12-06 01:15:19.141574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.520 qpair failed and we were unable to recover it. 00:37:46.520 [2024-12-06 01:15:19.141683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.520 [2024-12-06 01:15:19.141717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.520 qpair failed and we were unable to recover it. 00:37:46.520 [2024-12-06 01:15:19.141882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.520 [2024-12-06 01:15:19.141916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.520 qpair failed and we were unable to recover it. 00:37:46.520 [2024-12-06 01:15:19.142048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.520 [2024-12-06 01:15:19.142082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.520 qpair failed and we were unable to recover it. 00:37:46.520 [2024-12-06 01:15:19.142186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.520 [2024-12-06 01:15:19.142220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.520 qpair failed and we were unable to recover it. 
00:37:46.521 [2024-12-06 01:15:19.142381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.521 [2024-12-06 01:15:19.142415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.521 qpair failed and we were unable to recover it. 00:37:46.521 [2024-12-06 01:15:19.142548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.521 [2024-12-06 01:15:19.142581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.521 qpair failed and we were unable to recover it. 00:37:46.521 [2024-12-06 01:15:19.142730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.521 [2024-12-06 01:15:19.142780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.521 qpair failed and we were unable to recover it. 00:37:46.521 [2024-12-06 01:15:19.142950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.521 [2024-12-06 01:15:19.142996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.521 qpair failed and we were unable to recover it. 00:37:46.521 [2024-12-06 01:15:19.143175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.521 [2024-12-06 01:15:19.143223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.521 qpair failed and we were unable to recover it. 00:37:46.521 [2024-12-06 01:15:19.143365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.521 [2024-12-06 01:15:19.143400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.521 qpair failed and we were unable to recover it. 00:37:46.521 [2024-12-06 01:15:19.143537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.521 [2024-12-06 01:15:19.143570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.521 qpair failed and we were unable to recover it. 00:37:46.521 [2024-12-06 01:15:19.143670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.521 [2024-12-06 01:15:19.143704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.521 qpair failed and we were unable to recover it. 00:37:46.521 [2024-12-06 01:15:19.143840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.521 [2024-12-06 01:15:19.143879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.521 qpair failed and we were unable to recover it. 00:37:46.521 [2024-12-06 01:15:19.144001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.521 [2024-12-06 01:15:19.144050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.521 qpair failed and we were unable to recover it. 
00:37:46.521 [2024-12-06 01:15:19.144194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.521 [2024-12-06 01:15:19.144231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.521 qpair failed and we were unable to recover it. 00:37:46.521 [2024-12-06 01:15:19.144330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.521 [2024-12-06 01:15:19.144366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.521 qpair failed and we were unable to recover it. 00:37:46.521 [2024-12-06 01:15:19.144531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.521 [2024-12-06 01:15:19.144564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.521 qpair failed and we were unable to recover it. 00:37:46.521 [2024-12-06 01:15:19.144705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.521 [2024-12-06 01:15:19.144739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.521 qpair failed and we were unable to recover it. 00:37:46.521 [2024-12-06 01:15:19.144907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.521 [2024-12-06 01:15:19.144946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.521 qpair failed and we were unable to recover it. 00:37:46.521 [2024-12-06 01:15:19.145054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.521 [2024-12-06 01:15:19.145090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.521 qpair failed and we were unable to recover it. 00:37:46.521 [2024-12-06 01:15:19.145230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.521 [2024-12-06 01:15:19.145265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.521 qpair failed and we were unable to recover it. 00:37:46.521 [2024-12-06 01:15:19.145370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.521 [2024-12-06 01:15:19.145406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.521 qpair failed and we were unable to recover it. 00:37:46.521 [2024-12-06 01:15:19.145545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.521 [2024-12-06 01:15:19.145582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.521 qpair failed and we were unable to recover it. 00:37:46.521 [2024-12-06 01:15:19.145684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.521 [2024-12-06 01:15:19.145719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.521 qpair failed and we were unable to recover it. 
00:37:46.521 [2024-12-06 01:15:19.145880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.521 [2024-12-06 01:15:19.145916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.521 qpair failed and we were unable to recover it. 00:37:46.521 [2024-12-06 01:15:19.146026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.521 [2024-12-06 01:15:19.146060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.521 qpair failed and we were unable to recover it. 00:37:46.521 [2024-12-06 01:15:19.146193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.521 [2024-12-06 01:15:19.146227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.521 qpair failed and we were unable to recover it. 00:37:46.521 [2024-12-06 01:15:19.146357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.521 [2024-12-06 01:15:19.146391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.521 qpair failed and we were unable to recover it. 00:37:46.521 [2024-12-06 01:15:19.146552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.521 [2024-12-06 01:15:19.146585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.521 qpair failed and we were unable to recover it. 00:37:46.521 [2024-12-06 01:15:19.146720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.521 [2024-12-06 01:15:19.146754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.521 qpair failed and we were unable to recover it. 00:37:46.521 [2024-12-06 01:15:19.146885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.521 [2024-12-06 01:15:19.146935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.521 qpair failed and we were unable to recover it. 00:37:46.521 [2024-12-06 01:15:19.147116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.521 [2024-12-06 01:15:19.147152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.521 qpair failed and we were unable to recover it. 00:37:46.521 [2024-12-06 01:15:19.147290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.522 [2024-12-06 01:15:19.147324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.522 qpair failed and we were unable to recover it. 00:37:46.522 [2024-12-06 01:15:19.147459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.522 [2024-12-06 01:15:19.147493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.522 qpair failed and we were unable to recover it. 
00:37:46.522 [2024-12-06 01:15:19.147600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.522 [2024-12-06 01:15:19.147635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.522 qpair failed and we were unable to recover it. 00:37:46.522 [2024-12-06 01:15:19.147810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.522 [2024-12-06 01:15:19.147852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.522 qpair failed and we were unable to recover it. 00:37:46.522 [2024-12-06 01:15:19.147992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.522 [2024-12-06 01:15:19.148027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.522 qpair failed and we were unable to recover it. 00:37:46.522 [2024-12-06 01:15:19.148171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.522 [2024-12-06 01:15:19.148209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.522 qpair failed and we were unable to recover it. 00:37:46.522 [2024-12-06 01:15:19.148375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.522 [2024-12-06 01:15:19.148410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.522 qpair failed and we were unable to recover it. 00:37:46.522 [2024-12-06 01:15:19.148547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.522 [2024-12-06 01:15:19.148582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.522 qpair failed and we were unable to recover it. 00:37:46.522 [2024-12-06 01:15:19.148720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.522 [2024-12-06 01:15:19.148754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.522 qpair failed and we were unable to recover it. 00:37:46.522 [2024-12-06 01:15:19.148898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.522 [2024-12-06 01:15:19.148933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.522 qpair failed and we were unable to recover it. 00:37:46.522 [2024-12-06 01:15:19.149033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.522 [2024-12-06 01:15:19.149067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.522 qpair failed and we were unable to recover it. 00:37:46.522 [2024-12-06 01:15:19.149198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.522 [2024-12-06 01:15:19.149231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.522 qpair failed and we were unable to recover it. 
00:37:46.522 [2024-12-06 01:15:19.149368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.522 [2024-12-06 01:15:19.149402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.522 qpair failed and we were unable to recover it. 00:37:46.522 [2024-12-06 01:15:19.149531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.522 [2024-12-06 01:15:19.149565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.522 qpair failed and we were unable to recover it. 00:37:46.522 [2024-12-06 01:15:19.149720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.522 [2024-12-06 01:15:19.149769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.522 qpair failed and we were unable to recover it. 00:37:46.522 [2024-12-06 01:15:19.149898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.522 [2024-12-06 01:15:19.149936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.522 qpair failed and we were unable to recover it. 00:37:46.522 [2024-12-06 01:15:19.150075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.522 [2024-12-06 01:15:19.150110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.522 qpair failed and we were unable to recover it. 00:37:46.522 [2024-12-06 01:15:19.150274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.522 [2024-12-06 01:15:19.150309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.522 qpair failed and we were unable to recover it. 00:37:46.522 [2024-12-06 01:15:19.150443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.522 [2024-12-06 01:15:19.150478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.522 qpair failed and we were unable to recover it. 00:37:46.522 [2024-12-06 01:15:19.150610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.522 [2024-12-06 01:15:19.150646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.522 qpair failed and we were unable to recover it. 00:37:46.522 [2024-12-06 01:15:19.150785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.522 [2024-12-06 01:15:19.150833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.522 qpair failed and we were unable to recover it. 00:37:46.522 [2024-12-06 01:15:19.150950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.522 [2024-12-06 01:15:19.150984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.522 qpair failed and we were unable to recover it. 
00:37:46.522 [2024-12-06 01:15:19.151091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.522 [2024-12-06 01:15:19.151125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.522 qpair failed and we were unable to recover it. 00:37:46.522 [2024-12-06 01:15:19.151259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.522 [2024-12-06 01:15:19.151293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.522 qpair failed and we were unable to recover it. 00:37:46.522 [2024-12-06 01:15:19.151427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.522 [2024-12-06 01:15:19.151460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.522 qpair failed and we were unable to recover it. 00:37:46.522 [2024-12-06 01:15:19.151626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.522 [2024-12-06 01:15:19.151661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.522 qpair failed and we were unable to recover it. 00:37:46.522 [2024-12-06 01:15:19.151797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.522 [2024-12-06 01:15:19.151841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.522 qpair failed and we were unable to recover it. 00:37:46.522 [2024-12-06 01:15:19.151977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.522 [2024-12-06 01:15:19.152012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.522 qpair failed and we were unable to recover it. 00:37:46.522 [2024-12-06 01:15:19.152182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.522 [2024-12-06 01:15:19.152216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.522 qpair failed and we were unable to recover it. 00:37:46.522 [2024-12-06 01:15:19.152401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.522 [2024-12-06 01:15:19.152451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.522 qpair failed and we were unable to recover it. 00:37:46.522 [2024-12-06 01:15:19.152598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.522 [2024-12-06 01:15:19.152634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.522 qpair failed and we were unable to recover it. 00:37:46.522 [2024-12-06 01:15:19.152767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.522 [2024-12-06 01:15:19.152801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.522 qpair failed and we were unable to recover it. 
00:37:46.522 [2024-12-06 01:15:19.152955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.522 [2024-12-06 01:15:19.152989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.522 qpair failed and we were unable to recover it. 00:37:46.522 [2024-12-06 01:15:19.153096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.522 [2024-12-06 01:15:19.153130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.522 qpair failed and we were unable to recover it. 00:37:46.522 [2024-12-06 01:15:19.153295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.522 [2024-12-06 01:15:19.153328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.522 qpair failed and we were unable to recover it. 00:37:46.522 [2024-12-06 01:15:19.153462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.522 [2024-12-06 01:15:19.153495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.522 qpair failed and we were unable to recover it. 00:37:46.522 [2024-12-06 01:15:19.153670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.522 [2024-12-06 01:15:19.153703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.522 qpair failed and we were unable to recover it. 00:37:46.522 [2024-12-06 01:15:19.153799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.522 [2024-12-06 01:15:19.153840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.522 qpair failed and we were unable to recover it. 00:37:46.522 [2024-12-06 01:15:19.153953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.523 [2024-12-06 01:15:19.153989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.523 qpair failed and we were unable to recover it. 00:37:46.523 [2024-12-06 01:15:19.154153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.523 [2024-12-06 01:15:19.154188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.523 qpair failed and we were unable to recover it. 00:37:46.523 [2024-12-06 01:15:19.154355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.523 [2024-12-06 01:15:19.154404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.523 qpair failed and we were unable to recover it. 00:37:46.523 [2024-12-06 01:15:19.154524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.523 [2024-12-06 01:15:19.154558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.523 qpair failed and we were unable to recover it. 
00:37:46.523 [2024-12-06 01:15:19.154694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.523 [2024-12-06 01:15:19.154727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.523 qpair failed and we were unable to recover it. 00:37:46.523 [2024-12-06 01:15:19.154829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.523 [2024-12-06 01:15:19.154864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.523 qpair failed and we were unable to recover it. 00:37:46.523 [2024-12-06 01:15:19.154958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.523 [2024-12-06 01:15:19.154992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.523 qpair failed and we were unable to recover it. 00:37:46.523 [2024-12-06 01:15:19.155100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.523 [2024-12-06 01:15:19.155133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.523 qpair failed and we were unable to recover it. 00:37:46.523 [2024-12-06 01:15:19.155233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.523 [2024-12-06 01:15:19.155266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.523 qpair failed and we were unable to recover it. 00:37:46.523 [2024-12-06 01:15:19.155421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.523 [2024-12-06 01:15:19.155455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.523 qpair failed and we were unable to recover it. 00:37:46.523 [2024-12-06 01:15:19.155558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.523 [2024-12-06 01:15:19.155591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.523 qpair failed and we were unable to recover it. 00:37:46.523 [2024-12-06 01:15:19.155698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.523 [2024-12-06 01:15:19.155731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.523 qpair failed and we were unable to recover it. 00:37:46.523 [2024-12-06 01:15:19.155872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.523 [2024-12-06 01:15:19.155934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.523 qpair failed and we were unable to recover it. 00:37:46.523 [2024-12-06 01:15:19.156081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.523 [2024-12-06 01:15:19.156118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.523 qpair failed and we were unable to recover it. 
00:37:46.523 [2024-12-06 01:15:19.156229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.523 [2024-12-06 01:15:19.156264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.523 qpair failed and we were unable to recover it. 00:37:46.523 [2024-12-06 01:15:19.156368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.523 [2024-12-06 01:15:19.156402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.523 qpair failed and we were unable to recover it. 00:37:46.523 [2024-12-06 01:15:19.156540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.523 [2024-12-06 01:15:19.156574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.523 qpair failed and we were unable to recover it. 00:37:46.523 [2024-12-06 01:15:19.156706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.523 [2024-12-06 01:15:19.156740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.523 qpair failed and we were unable to recover it. 00:37:46.523 [2024-12-06 01:15:19.156898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.523 [2024-12-06 01:15:19.156934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.523 qpair failed and we were unable to recover it. 00:37:46.523 [2024-12-06 01:15:19.157072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.523 [2024-12-06 01:15:19.157105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.523 qpair failed and we were unable to recover it. 00:37:46.523 [2024-12-06 01:15:19.157240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.523 [2024-12-06 01:15:19.157274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.523 qpair failed and we were unable to recover it. 00:37:46.807 [2024-12-06 01:15:19.157412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.807 [2024-12-06 01:15:19.157446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.807 qpair failed and we were unable to recover it. 00:37:46.807 [2024-12-06 01:15:19.157546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.807 [2024-12-06 01:15:19.157587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.807 qpair failed and we were unable to recover it. 00:37:46.807 [2024-12-06 01:15:19.157750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.807 [2024-12-06 01:15:19.157799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.807 qpair failed and we were unable to recover it. 
00:37:46.807 [2024-12-06 01:15:19.157926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.807 [2024-12-06 01:15:19.157961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.807 qpair failed and we were unable to recover it. 00:37:46.807 [2024-12-06 01:15:19.158133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.807 [2024-12-06 01:15:19.158168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.807 qpair failed and we were unable to recover it. 00:37:46.807 [2024-12-06 01:15:19.158328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.807 [2024-12-06 01:15:19.158363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.807 qpair failed and we were unable to recover it. 00:37:46.807 [2024-12-06 01:15:19.158498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.807 [2024-12-06 01:15:19.158532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.807 qpair failed and we were unable to recover it. 00:37:46.807 [2024-12-06 01:15:19.158671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.807 [2024-12-06 01:15:19.158706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.807 qpair failed and we were unable to recover it. 00:37:46.807 [2024-12-06 01:15:19.158828] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:37:46.807 [2024-12-06 01:15:19.158839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.807 [2024-12-06 01:15:19.158876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.807 qpair failed and we were unable to recover it. 00:37:46.807 [2024-12-06 01:15:19.159011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.807 [2024-12-06 01:15:19.159045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.807 qpair failed and we were unable to recover it. 00:37:46.807 [2024-12-06 01:15:19.159151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.807 [2024-12-06 01:15:19.159186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.807 qpair failed and we were unable to recover it. 00:37:46.807 [2024-12-06 01:15:19.159321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.807 [2024-12-06 01:15:19.159356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.807 qpair failed and we were unable to recover it. 
00:37:46.807 [2024-12-06 01:15:19.159470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.807 [2024-12-06 01:15:19.159505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.807 qpair failed and we were unable to recover it. 00:37:46.807 [2024-12-06 01:15:19.159642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.807 [2024-12-06 01:15:19.159677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.807 qpair failed and we were unable to recover it. 00:37:46.807 [2024-12-06 01:15:19.159808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.807 [2024-12-06 01:15:19.159871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.807 qpair failed and we were unable to recover it. 00:37:46.807 [2024-12-06 01:15:19.160018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.807 [2024-12-06 01:15:19.160054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.807 qpair failed and we were unable to recover it. 00:37:46.807 [2024-12-06 01:15:19.160166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.807 [2024-12-06 01:15:19.160201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.807 qpair failed and we were unable to recover it. 00:37:46.807 [2024-12-06 01:15:19.160310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.807 [2024-12-06 01:15:19.160345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.807 qpair failed and we were unable to recover it. 00:37:46.807 [2024-12-06 01:15:19.160454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.807 [2024-12-06 01:15:19.160488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.807 qpair failed and we were unable to recover it. 00:37:46.807 [2024-12-06 01:15:19.160596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.807 [2024-12-06 01:15:19.160629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.807 qpair failed and we were unable to recover it. 00:37:46.807 [2024-12-06 01:15:19.160727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.807 [2024-12-06 01:15:19.160762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.807 qpair failed and we were unable to recover it. 00:37:46.807 [2024-12-06 01:15:19.160905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.807 [2024-12-06 01:15:19.160940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.807 qpair failed and we were unable to recover it. 
00:37:46.807 [2024-12-06 01:15:19.161040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.807 [2024-12-06 01:15:19.161074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.807 qpair failed and we were unable to recover it. 00:37:46.807 [2024-12-06 01:15:19.161176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.807 [2024-12-06 01:15:19.161211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.807 qpair failed and we were unable to recover it. 00:37:46.807 [2024-12-06 01:15:19.161348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.807 [2024-12-06 01:15:19.161382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.807 qpair failed and we were unable to recover it. 00:37:46.807 [2024-12-06 01:15:19.161495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.807 [2024-12-06 01:15:19.161529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.807 qpair failed and we were unable to recover it. 00:37:46.807 [2024-12-06 01:15:19.161640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.807 [2024-12-06 01:15:19.161675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.807 qpair failed and we were unable to recover it. 00:37:46.807 [2024-12-06 01:15:19.161823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.807 [2024-12-06 01:15:19.161857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.807 qpair failed and we were unable to recover it. 00:37:46.807 [2024-12-06 01:15:19.161969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.807 [2024-12-06 01:15:19.162004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.807 qpair failed and we were unable to recover it. 00:37:46.807 [2024-12-06 01:15:19.162115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.808 [2024-12-06 01:15:19.162149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.808 qpair failed and we were unable to recover it. 00:37:46.808 [2024-12-06 01:15:19.162264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.808 [2024-12-06 01:15:19.162298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.808 qpair failed and we were unable to recover it. 00:37:46.808 [2024-12-06 01:15:19.162434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.808 [2024-12-06 01:15:19.162467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.808 qpair failed and we were unable to recover it. 
00:37:46.808 [2024-12-06 01:15:19.162568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.808 [2024-12-06 01:15:19.162604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.808 qpair failed and we were unable to recover it. 00:37:46.808 [2024-12-06 01:15:19.162719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.808 [2024-12-06 01:15:19.162753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.808 qpair failed and we were unable to recover it. 00:37:46.808 [2024-12-06 01:15:19.162886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.808 [2024-12-06 01:15:19.162921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.808 qpair failed and we were unable to recover it. 00:37:46.808 [2024-12-06 01:15:19.163023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.808 [2024-12-06 01:15:19.163057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.808 qpair failed and we were unable to recover it. 00:37:46.808 [2024-12-06 01:15:19.163189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.808 [2024-12-06 01:15:19.163224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.808 qpair failed and we were unable to recover it. 00:37:46.808 [2024-12-06 01:15:19.163329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.808 [2024-12-06 01:15:19.163365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.808 qpair failed and we were unable to recover it. 00:37:46.808 [2024-12-06 01:15:19.163467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.808 [2024-12-06 01:15:19.163501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.808 qpair failed and we were unable to recover it. 00:37:46.808 [2024-12-06 01:15:19.163661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.808 [2024-12-06 01:15:19.163694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.808 qpair failed and we were unable to recover it. 00:37:46.808 [2024-12-06 01:15:19.163804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.808 [2024-12-06 01:15:19.163845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.808 qpair failed and we were unable to recover it. 00:37:46.808 [2024-12-06 01:15:19.164007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.808 [2024-12-06 01:15:19.164042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.808 qpair failed and we were unable to recover it. 
00:37:46.808 [2024-12-06 01:15:19.164154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.808 [2024-12-06 01:15:19.164189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.808 qpair failed and we were unable to recover it. 00:37:46.808 [2024-12-06 01:15:19.164307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.808 [2024-12-06 01:15:19.164341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.808 qpair failed and we were unable to recover it. 00:37:46.808 [2024-12-06 01:15:19.164476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.808 [2024-12-06 01:15:19.164511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.808 qpair failed and we were unable to recover it. 00:37:46.808 [2024-12-06 01:15:19.164649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.808 [2024-12-06 01:15:19.164683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.808 qpair failed and we were unable to recover it. 00:37:46.808 [2024-12-06 01:15:19.164826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.808 [2024-12-06 01:15:19.164861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.808 qpair failed and we were unable to recover it. 00:37:46.808 [2024-12-06 01:15:19.164993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.808 [2024-12-06 01:15:19.165027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.808 qpair failed and we were unable to recover it. 00:37:46.808 [2024-12-06 01:15:19.165181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.808 [2024-12-06 01:15:19.165230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.808 qpair failed and we were unable to recover it. 00:37:46.808 [2024-12-06 01:15:19.165378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.808 [2024-12-06 01:15:19.165416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.808 qpair failed and we were unable to recover it. 00:37:46.808 [2024-12-06 01:15:19.165553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.808 [2024-12-06 01:15:19.165589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.808 qpair failed and we were unable to recover it. 00:37:46.808 [2024-12-06 01:15:19.165752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.808 [2024-12-06 01:15:19.165786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.808 qpair failed and we were unable to recover it. 
00:37:46.808 [2024-12-06 01:15:19.165948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.808 [2024-12-06 01:15:19.165995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.808 qpair failed and we were unable to recover it. 00:37:46.808 [2024-12-06 01:15:19.166137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.808 [2024-12-06 01:15:19.166184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.808 qpair failed and we were unable to recover it. 00:37:46.808 [2024-12-06 01:15:19.166354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.808 [2024-12-06 01:15:19.166395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.808 qpair failed and we were unable to recover it. 00:37:46.808 [2024-12-06 01:15:19.166536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.808 [2024-12-06 01:15:19.166570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.808 qpair failed and we were unable to recover it. 00:37:46.808 [2024-12-06 01:15:19.166680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.808 [2024-12-06 01:15:19.166714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.808 qpair failed and we were unable to recover it. 00:37:46.808 [2024-12-06 01:15:19.166827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.808 [2024-12-06 01:15:19.166862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.808 qpair failed and we were unable to recover it. 00:37:46.808 [2024-12-06 01:15:19.166973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.808 [2024-12-06 01:15:19.167008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.808 qpair failed and we were unable to recover it. 00:37:46.808 [2024-12-06 01:15:19.167154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.808 [2024-12-06 01:15:19.167188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.808 qpair failed and we were unable to recover it. 00:37:46.808 [2024-12-06 01:15:19.167324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.808 [2024-12-06 01:15:19.167358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.808 qpair failed and we were unable to recover it. 00:37:46.808 [2024-12-06 01:15:19.167519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.808 [2024-12-06 01:15:19.167553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.808 qpair failed and we were unable to recover it. 
00:37:46.808 [2024-12-06 01:15:19.167704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.808 [2024-12-06 01:15:19.167738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.808 qpair failed and we were unable to recover it. 00:37:46.808 [2024-12-06 01:15:19.167875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.808 [2024-12-06 01:15:19.167932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.808 qpair failed and we were unable to recover it. 00:37:46.808 [2024-12-06 01:15:19.168057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.808 [2024-12-06 01:15:19.168093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.808 qpair failed and we were unable to recover it. 00:37:46.808 [2024-12-06 01:15:19.168195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.808 [2024-12-06 01:15:19.168228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.808 qpair failed and we were unable to recover it. 00:37:46.808 [2024-12-06 01:15:19.168365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.808 [2024-12-06 01:15:19.168398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.808 qpair failed and we were unable to recover it. 00:37:46.808 [2024-12-06 01:15:19.168536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.809 [2024-12-06 01:15:19.168570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.809 qpair failed and we were unable to recover it. 00:37:46.809 [2024-12-06 01:15:19.168708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.809 [2024-12-06 01:15:19.168742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.809 qpair failed and we were unable to recover it. 00:37:46.809 [2024-12-06 01:15:19.168900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.809 [2024-12-06 01:15:19.168934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.809 qpair failed and we were unable to recover it. 00:37:46.809 [2024-12-06 01:15:19.169037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.809 [2024-12-06 01:15:19.169070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.809 qpair failed and we were unable to recover it. 00:37:46.809 [2024-12-06 01:15:19.169180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.809 [2024-12-06 01:15:19.169213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.809 qpair failed and we were unable to recover it. 
00:37:46.809 [2024-12-06 01:15:19.169338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.809 [2024-12-06 01:15:19.169372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.809 qpair failed and we were unable to recover it. 00:37:46.809 [2024-12-06 01:15:19.169511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.809 [2024-12-06 01:15:19.169544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.809 qpair failed and we were unable to recover it. 00:37:46.809 [2024-12-06 01:15:19.169657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.809 [2024-12-06 01:15:19.169691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.809 qpair failed and we were unable to recover it. 00:37:46.809 [2024-12-06 01:15:19.169856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.809 [2024-12-06 01:15:19.169890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.809 qpair failed and we were unable to recover it. 00:37:46.809 [2024-12-06 01:15:19.170046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.809 [2024-12-06 01:15:19.170095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.809 qpair failed and we were unable to recover it. 00:37:46.809 [2024-12-06 01:15:19.170243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.809 [2024-12-06 01:15:19.170280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.809 qpair failed and we were unable to recover it. 00:37:46.809 [2024-12-06 01:15:19.170417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.809 [2024-12-06 01:15:19.170452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.809 qpair failed and we were unable to recover it. 00:37:46.809 [2024-12-06 01:15:19.170587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.809 [2024-12-06 01:15:19.170621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.809 qpair failed and we were unable to recover it. 00:37:46.809 [2024-12-06 01:15:19.170729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.809 [2024-12-06 01:15:19.170763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.809 qpair failed and we were unable to recover it. 00:37:46.809 [2024-12-06 01:15:19.170944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.809 [2024-12-06 01:15:19.170993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.809 qpair failed and we were unable to recover it. 
00:37:46.809 [2024-12-06 01:15:19.171140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.809 [2024-12-06 01:15:19.171176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.809 qpair failed and we were unable to recover it. 00:37:46.809 [2024-12-06 01:15:19.171302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.809 [2024-12-06 01:15:19.171337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.809 qpair failed and we were unable to recover it. 00:37:46.809 [2024-12-06 01:15:19.171464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.809 [2024-12-06 01:15:19.171499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.809 qpair failed and we were unable to recover it. 00:37:46.809 [2024-12-06 01:15:19.171614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.809 [2024-12-06 01:15:19.171662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.809 qpair failed and we were unable to recover it. 00:37:46.809 [2024-12-06 01:15:19.171827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.809 [2024-12-06 01:15:19.171863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.809 qpair failed and we were unable to recover it. 00:37:46.809 [2024-12-06 01:15:19.171960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.809 [2024-12-06 01:15:19.171994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.809 qpair failed and we were unable to recover it. 00:37:46.809 [2024-12-06 01:15:19.172102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.809 [2024-12-06 01:15:19.172136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.809 qpair failed and we were unable to recover it. 00:37:46.809 [2024-12-06 01:15:19.172275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.809 [2024-12-06 01:15:19.172319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.809 qpair failed and we were unable to recover it. 00:37:46.809 [2024-12-06 01:15:19.172433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.809 [2024-12-06 01:15:19.172467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.809 qpair failed and we were unable to recover it. 00:37:46.809 [2024-12-06 01:15:19.172583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.809 [2024-12-06 01:15:19.172617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.809 qpair failed and we were unable to recover it. 
00:37:46.809 [2024-12-06 01:15:19.172793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.809 [2024-12-06 01:15:19.172835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.809 qpair failed and we were unable to recover it. 00:37:46.809 [2024-12-06 01:15:19.173009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.809 [2024-12-06 01:15:19.173045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.809 qpair failed and we were unable to recover it. 00:37:46.809 [2024-12-06 01:15:19.173209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.809 [2024-12-06 01:15:19.173248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.809 qpair failed and we were unable to recover it. 00:37:46.809 [2024-12-06 01:15:19.173344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.809 [2024-12-06 01:15:19.173378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.809 qpair failed and we were unable to recover it. 00:37:46.809 [2024-12-06 01:15:19.173482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.809 [2024-12-06 01:15:19.173516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.809 qpair failed and we were unable to recover it. 00:37:46.809 [2024-12-06 01:15:19.173637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.809 [2024-12-06 01:15:19.173686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.809 qpair failed and we were unable to recover it. 00:37:46.809 [2024-12-06 01:15:19.173807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.809 [2024-12-06 01:15:19.173858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.809 qpair failed and we were unable to recover it. 00:37:46.809 [2024-12-06 01:15:19.174000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.809 [2024-12-06 01:15:19.174035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.809 qpair failed and we were unable to recover it. 00:37:46.809 [2024-12-06 01:15:19.174139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.809 [2024-12-06 01:15:19.174182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.809 qpair failed and we were unable to recover it. 00:37:46.809 [2024-12-06 01:15:19.174320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.809 [2024-12-06 01:15:19.174355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.809 qpair failed and we were unable to recover it. 
00:37:46.809 [2024-12-06 01:15:19.174466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.809 [2024-12-06 01:15:19.174502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.809 qpair failed and we were unable to recover it. 00:37:46.809 [2024-12-06 01:15:19.174612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.809 [2024-12-06 01:15:19.174647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.809 qpair failed and we were unable to recover it. 00:37:46.809 [2024-12-06 01:15:19.174779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.809 [2024-12-06 01:15:19.174826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.809 qpair failed and we were unable to recover it. 00:37:46.809 [2024-12-06 01:15:19.174963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.810 [2024-12-06 01:15:19.174997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.810 qpair failed and we were unable to recover it. 00:37:46.810 [2024-12-06 01:15:19.175136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.810 [2024-12-06 01:15:19.175170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.810 qpair failed and we were unable to recover it. 00:37:46.810 [2024-12-06 01:15:19.175283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.810 [2024-12-06 01:15:19.175317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.810 qpair failed and we were unable to recover it. 00:37:46.810 [2024-12-06 01:15:19.175435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.810 [2024-12-06 01:15:19.175472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.810 qpair failed and we were unable to recover it. 00:37:46.810 [2024-12-06 01:15:19.175581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.810 [2024-12-06 01:15:19.175616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.810 qpair failed and we were unable to recover it. 00:37:46.810 [2024-12-06 01:15:19.175727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.810 [2024-12-06 01:15:19.175761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.810 qpair failed and we were unable to recover it. 00:37:46.810 [2024-12-06 01:15:19.175880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.810 [2024-12-06 01:15:19.175915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.810 qpair failed and we were unable to recover it. 
00:37:46.810 [2024-12-06 01:15:19.176057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.810 [2024-12-06 01:15:19.176091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.810 qpair failed and we were unable to recover it. 00:37:46.810 [2024-12-06 01:15:19.176198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.810 [2024-12-06 01:15:19.176231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.810 qpair failed and we were unable to recover it. 00:37:46.810 [2024-12-06 01:15:19.176367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.810 [2024-12-06 01:15:19.176401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.810 qpair failed and we were unable to recover it. 00:37:46.810 [2024-12-06 01:15:19.176509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.810 [2024-12-06 01:15:19.176545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.810 qpair failed and we were unable to recover it. 00:37:46.810 [2024-12-06 01:15:19.176682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.810 [2024-12-06 01:15:19.176716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.810 qpair failed and we were unable to recover it. 00:37:46.810 [2024-12-06 01:15:19.176837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.810 [2024-12-06 01:15:19.176873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.810 qpair failed and we were unable to recover it. 00:37:46.810 [2024-12-06 01:15:19.177013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.810 [2024-12-06 01:15:19.177048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.810 qpair failed and we were unable to recover it. 00:37:46.810 [2024-12-06 01:15:19.177177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.810 [2024-12-06 01:15:19.177211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.810 qpair failed and we were unable to recover it. 00:37:46.810 [2024-12-06 01:15:19.177321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.810 [2024-12-06 01:15:19.177355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.810 qpair failed and we were unable to recover it. 00:37:46.810 [2024-12-06 01:15:19.177467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.810 [2024-12-06 01:15:19.177502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.810 qpair failed and we were unable to recover it. 
00:37:46.810 [2024-12-06 01:15:19.177641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.810 [2024-12-06 01:15:19.177674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.810 qpair failed and we were unable to recover it. 00:37:46.810 [2024-12-06 01:15:19.177827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.810 [2024-12-06 01:15:19.177861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.810 qpair failed and we were unable to recover it. 00:37:46.810 [2024-12-06 01:15:19.177996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.810 [2024-12-06 01:15:19.178030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.810 qpair failed and we were unable to recover it. 00:37:46.810 [2024-12-06 01:15:19.178217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.810 [2024-12-06 01:15:19.178251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.810 qpair failed and we were unable to recover it. 00:37:46.810 [2024-12-06 01:15:19.178360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.810 [2024-12-06 01:15:19.178393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.810 qpair failed and we were unable to recover it. 00:37:46.810 [2024-12-06 01:15:19.178495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.810 [2024-12-06 01:15:19.178528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.810 qpair failed and we were unable to recover it. 00:37:46.810 [2024-12-06 01:15:19.178691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.810 [2024-12-06 01:15:19.178725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.810 qpair failed and we were unable to recover it. 00:37:46.810 [2024-12-06 01:15:19.178915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.810 [2024-12-06 01:15:19.178965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.810 qpair failed and we were unable to recover it. 00:37:46.810 [2024-12-06 01:15:19.179079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.810 [2024-12-06 01:15:19.179115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.810 qpair failed and we were unable to recover it. 00:37:46.810 [2024-12-06 01:15:19.179253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.810 [2024-12-06 01:15:19.179287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.810 qpair failed and we were unable to recover it. 
00:37:46.810 [2024-12-06 01:15:19.179421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.810 [2024-12-06 01:15:19.179456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.810 qpair failed and we were unable to recover it.
00:37:46.810 [2024-12-06 01:15:19.179994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.810 [2024-12-06 01:15:19.180027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.810 qpair failed and we were unable to recover it.
00:37:46.813 [2024-12-06 01:15:19.194640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.813 [2024-12-06 01:15:19.194689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:46.813 qpair failed and we were unable to recover it.
The same three-record failure sequence repeats without interruption from 01:15:19.179 through 01:15:19.214 (log timestamps 00:37:46.810 to 00:37:46.816) for tqpairs 0x6150001ffe80, 0x6150001f2f00, and 0x61500021ff00, always against addr=10.0.0.2, port=4420, and every attempt ends with "qpair failed and we were unable to recover it."
00:37:46.816 [2024-12-06 01:15:19.214975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.816 [2024-12-06 01:15:19.215024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.816 qpair failed and we were unable to recover it. 00:37:46.816 [2024-12-06 01:15:19.215185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.816 [2024-12-06 01:15:19.215234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.816 qpair failed and we were unable to recover it. 00:37:46.816 [2024-12-06 01:15:19.215388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.816 [2024-12-06 01:15:19.215424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.816 qpair failed and we were unable to recover it. 00:37:46.816 [2024-12-06 01:15:19.215590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.816 [2024-12-06 01:15:19.215624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.816 qpair failed and we were unable to recover it. 00:37:46.816 [2024-12-06 01:15:19.215760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.816 [2024-12-06 01:15:19.215793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.816 qpair failed and we were unable to recover it. 00:37:46.816 [2024-12-06 01:15:19.215911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.816 [2024-12-06 01:15:19.215945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.816 qpair failed and we were unable to recover it. 00:37:46.816 [2024-12-06 01:15:19.216087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.816 [2024-12-06 01:15:19.216122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.816 qpair failed and we were unable to recover it. 00:37:46.816 [2024-12-06 01:15:19.216227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.816 [2024-12-06 01:15:19.216260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.816 qpair failed and we were unable to recover it. 00:37:46.816 [2024-12-06 01:15:19.216404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.816 [2024-12-06 01:15:19.216438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.816 qpair failed and we were unable to recover it. 00:37:46.816 [2024-12-06 01:15:19.216562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.816 [2024-12-06 01:15:19.216597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.816 qpair failed and we were unable to recover it. 
00:37:46.816 [2024-12-06 01:15:19.216732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.816 [2024-12-06 01:15:19.216766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.816 qpair failed and we were unable to recover it. 00:37:46.816 [2024-12-06 01:15:19.216873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.816 [2024-12-06 01:15:19.216908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.816 qpair failed and we were unable to recover it. 00:37:46.816 [2024-12-06 01:15:19.217040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.816 [2024-12-06 01:15:19.217075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.816 qpair failed and we were unable to recover it. 00:37:46.816 [2024-12-06 01:15:19.217188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.816 [2024-12-06 01:15:19.217221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.816 qpair failed and we were unable to recover it. 00:37:46.817 [2024-12-06 01:15:19.217356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.817 [2024-12-06 01:15:19.217390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.817 qpair failed and we were unable to recover it. 00:37:46.817 [2024-12-06 01:15:19.217491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.817 [2024-12-06 01:15:19.217524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.817 qpair failed and we were unable to recover it. 00:37:46.817 [2024-12-06 01:15:19.217631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.817 [2024-12-06 01:15:19.217665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.817 qpair failed and we were unable to recover it. 00:37:46.817 [2024-12-06 01:15:19.217841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.817 [2024-12-06 01:15:19.217875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.817 qpair failed and we were unable to recover it. 00:37:46.817 [2024-12-06 01:15:19.217981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.817 [2024-12-06 01:15:19.218015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.817 qpair failed and we were unable to recover it. 00:37:46.817 [2024-12-06 01:15:19.218144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.817 [2024-12-06 01:15:19.218182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.817 qpair failed and we were unable to recover it. 
00:37:46.817 [2024-12-06 01:15:19.218319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.817 [2024-12-06 01:15:19.218352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.817 qpair failed and we were unable to recover it. 00:37:46.817 [2024-12-06 01:15:19.218485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.817 [2024-12-06 01:15:19.218518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.817 qpair failed and we were unable to recover it. 00:37:46.817 [2024-12-06 01:15:19.218621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.817 [2024-12-06 01:15:19.218656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.817 qpair failed and we were unable to recover it. 00:37:46.817 [2024-12-06 01:15:19.218790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.817 [2024-12-06 01:15:19.218831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.817 qpair failed and we were unable to recover it. 00:37:46.817 [2024-12-06 01:15:19.218981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.817 [2024-12-06 01:15:19.219031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.817 qpair failed and we were unable to recover it. 00:37:46.817 [2024-12-06 01:15:19.219190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.817 [2024-12-06 01:15:19.219241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.817 qpair failed and we were unable to recover it. 00:37:46.817 [2024-12-06 01:15:19.219360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.817 [2024-12-06 01:15:19.219407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.817 qpair failed and we were unable to recover it. 00:37:46.817 [2024-12-06 01:15:19.219522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.817 [2024-12-06 01:15:19.219557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.817 qpair failed and we were unable to recover it. 00:37:46.817 [2024-12-06 01:15:19.219667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.817 [2024-12-06 01:15:19.219712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.817 qpair failed and we were unable to recover it. 00:37:46.817 [2024-12-06 01:15:19.219827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.817 [2024-12-06 01:15:19.219862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.817 qpair failed and we were unable to recover it. 
00:37:46.817 [2024-12-06 01:15:19.219956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.817 [2024-12-06 01:15:19.219990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.817 qpair failed and we were unable to recover it. 00:37:46.817 [2024-12-06 01:15:19.220144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.817 [2024-12-06 01:15:19.220179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.817 qpair failed and we were unable to recover it. 00:37:46.817 [2024-12-06 01:15:19.220295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.817 [2024-12-06 01:15:19.220329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.817 qpair failed and we were unable to recover it. 00:37:46.817 [2024-12-06 01:15:19.220436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.817 [2024-12-06 01:15:19.220470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.817 qpair failed and we were unable to recover it. 00:37:46.817 [2024-12-06 01:15:19.220579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.817 [2024-12-06 01:15:19.220614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.817 qpair failed and we were unable to recover it. 00:37:46.817 [2024-12-06 01:15:19.220766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.817 [2024-12-06 01:15:19.220809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.817 qpair failed and we were unable to recover it. 00:37:46.817 [2024-12-06 01:15:19.220966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.817 [2024-12-06 01:15:19.221001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.817 qpair failed and we were unable to recover it. 00:37:46.817 [2024-12-06 01:15:19.221179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.817 [2024-12-06 01:15:19.221219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.817 qpair failed and we were unable to recover it. 00:37:46.817 [2024-12-06 01:15:19.221334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.817 [2024-12-06 01:15:19.221370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.817 qpair failed and we were unable to recover it. 00:37:46.817 [2024-12-06 01:15:19.221503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.817 [2024-12-06 01:15:19.221538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.817 qpair failed and we were unable to recover it. 
00:37:46.817 [2024-12-06 01:15:19.221672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.817 [2024-12-06 01:15:19.221707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.817 qpair failed and we were unable to recover it. 00:37:46.817 [2024-12-06 01:15:19.221831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.817 [2024-12-06 01:15:19.221867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.817 qpair failed and we were unable to recover it. 00:37:46.817 [2024-12-06 01:15:19.222009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.817 [2024-12-06 01:15:19.222043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.817 qpair failed and we were unable to recover it. 00:37:46.817 [2024-12-06 01:15:19.222156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.817 [2024-12-06 01:15:19.222196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.817 qpair failed and we were unable to recover it. 00:37:46.817 [2024-12-06 01:15:19.222358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.817 [2024-12-06 01:15:19.222393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.817 qpair failed and we were unable to recover it. 00:37:46.817 [2024-12-06 01:15:19.222537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.817 [2024-12-06 01:15:19.222573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.817 qpair failed and we were unable to recover it. 00:37:46.817 [2024-12-06 01:15:19.222746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.817 [2024-12-06 01:15:19.222795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.817 qpair failed and we were unable to recover it. 00:37:46.817 [2024-12-06 01:15:19.222951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.817 [2024-12-06 01:15:19.222988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.817 qpair failed and we were unable to recover it. 00:37:46.817 [2024-12-06 01:15:19.223099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.817 [2024-12-06 01:15:19.223136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.817 qpair failed and we were unable to recover it. 00:37:46.817 [2024-12-06 01:15:19.223241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.817 [2024-12-06 01:15:19.223275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.817 qpair failed and we were unable to recover it. 
00:37:46.817 [2024-12-06 01:15:19.223441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.817 [2024-12-06 01:15:19.223476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.817 qpair failed and we were unable to recover it. 00:37:46.817 [2024-12-06 01:15:19.223605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.817 [2024-12-06 01:15:19.223640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.817 qpair failed and we were unable to recover it. 00:37:46.818 [2024-12-06 01:15:19.223791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.818 [2024-12-06 01:15:19.223857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.818 qpair failed and we were unable to recover it. 00:37:46.818 [2024-12-06 01:15:19.224028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.818 [2024-12-06 01:15:19.224063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.818 qpair failed and we were unable to recover it. 00:37:46.818 [2024-12-06 01:15:19.224225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.818 [2024-12-06 01:15:19.224260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.818 qpair failed and we were unable to recover it. 00:37:46.818 [2024-12-06 01:15:19.224400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.818 [2024-12-06 01:15:19.224435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.818 qpair failed and we were unable to recover it. 00:37:46.818 [2024-12-06 01:15:19.224567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.818 [2024-12-06 01:15:19.224601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.818 qpair failed and we were unable to recover it. 00:37:46.818 [2024-12-06 01:15:19.224735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.818 [2024-12-06 01:15:19.224769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.818 qpair failed and we were unable to recover it. 00:37:46.818 [2024-12-06 01:15:19.224940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.818 [2024-12-06 01:15:19.224974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.818 qpair failed and we were unable to recover it. 00:37:46.818 [2024-12-06 01:15:19.225080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.818 [2024-12-06 01:15:19.225128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.818 qpair failed and we were unable to recover it. 
00:37:46.818 [2024-12-06 01:15:19.225269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.818 [2024-12-06 01:15:19.225303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.818 qpair failed and we were unable to recover it. 00:37:46.818 [2024-12-06 01:15:19.225440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.818 [2024-12-06 01:15:19.225475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.818 qpair failed and we were unable to recover it. 00:37:46.818 [2024-12-06 01:15:19.225610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.818 [2024-12-06 01:15:19.225644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.818 qpair failed and we were unable to recover it. 00:37:46.818 [2024-12-06 01:15:19.225808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.818 [2024-12-06 01:15:19.225868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.818 qpair failed and we were unable to recover it. 00:37:46.818 [2024-12-06 01:15:19.226022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.818 [2024-12-06 01:15:19.226058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.818 qpair failed and we were unable to recover it. 00:37:46.818 [2024-12-06 01:15:19.226201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.818 [2024-12-06 01:15:19.226236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.818 qpair failed and we were unable to recover it. 00:37:46.818 [2024-12-06 01:15:19.226369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.818 [2024-12-06 01:15:19.226403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.818 qpair failed and we were unable to recover it. 00:37:46.818 [2024-12-06 01:15:19.226516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.818 [2024-12-06 01:15:19.226549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.818 qpair failed and we were unable to recover it. 00:37:46.818 [2024-12-06 01:15:19.226679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.818 [2024-12-06 01:15:19.226712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.818 qpair failed and we were unable to recover it. 00:37:46.818 [2024-12-06 01:15:19.226880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.818 [2024-12-06 01:15:19.226916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.818 qpair failed and we were unable to recover it. 
00:37:46.818 [2024-12-06 01:15:19.227069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.818 [2024-12-06 01:15:19.227103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.818 qpair failed and we were unable to recover it. 00:37:46.818 [2024-12-06 01:15:19.227247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.818 [2024-12-06 01:15:19.227281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.818 qpair failed and we were unable to recover it. 00:37:46.818 [2024-12-06 01:15:19.227417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.818 [2024-12-06 01:15:19.227451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.818 qpair failed and we were unable to recover it. 00:37:46.818 [2024-12-06 01:15:19.227625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.818 [2024-12-06 01:15:19.227659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.818 qpair failed and we were unable to recover it. 00:37:46.818 [2024-12-06 01:15:19.227793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.818 [2024-12-06 01:15:19.227844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.818 qpair failed and we were unable to recover it. 00:37:46.818 [2024-12-06 01:15:19.227977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.818 [2024-12-06 01:15:19.228011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.818 qpair failed and we were unable to recover it. 00:37:46.818 [2024-12-06 01:15:19.228172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.818 [2024-12-06 01:15:19.228213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.818 qpair failed and we were unable to recover it. 00:37:46.818 [2024-12-06 01:15:19.228345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.818 [2024-12-06 01:15:19.228379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.818 qpair failed and we were unable to recover it. 00:37:46.818 [2024-12-06 01:15:19.228513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.818 [2024-12-06 01:15:19.228547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.818 qpair failed and we were unable to recover it. 00:37:46.818 [2024-12-06 01:15:19.228655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.818 [2024-12-06 01:15:19.228690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.818 qpair failed and we were unable to recover it. 
00:37:46.818 [2024-12-06 01:15:19.228831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.818 [2024-12-06 01:15:19.228866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.818 qpair failed and we were unable to recover it. 00:37:46.818 [2024-12-06 01:15:19.228999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.818 [2024-12-06 01:15:19.229033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.818 qpair failed and we were unable to recover it. 00:37:46.818 [2024-12-06 01:15:19.229193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.818 [2024-12-06 01:15:19.229243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.818 qpair failed and we were unable to recover it. 00:37:46.818 [2024-12-06 01:15:19.229389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.818 [2024-12-06 01:15:19.229426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.818 qpair failed and we were unable to recover it. 00:37:46.818 [2024-12-06 01:15:19.229543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.818 [2024-12-06 01:15:19.229578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.818 qpair failed and we were unable to recover it. 00:37:46.818 [2024-12-06 01:15:19.229691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.818 [2024-12-06 01:15:19.229727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.818 qpair failed and we were unable to recover it. 00:37:46.818 [2024-12-06 01:15:19.229877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.818 [2024-12-06 01:15:19.229927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.818 qpair failed and we were unable to recover it. 00:37:46.818 [2024-12-06 01:15:19.230083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.818 [2024-12-06 01:15:19.230131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.818 qpair failed and we were unable to recover it. 00:37:46.818 [2024-12-06 01:15:19.230304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.818 [2024-12-06 01:15:19.230340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.818 qpair failed and we were unable to recover it. 00:37:46.818 [2024-12-06 01:15:19.230456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.818 [2024-12-06 01:15:19.230491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.818 qpair failed and we were unable to recover it. 
00:37:46.818 [2024-12-06 01:15:19.230597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.819 [2024-12-06 01:15:19.230630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.819 qpair failed and we were unable to recover it. 00:37:46.819 [2024-12-06 01:15:19.230732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.819 [2024-12-06 01:15:19.230767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.819 qpair failed and we were unable to recover it. 00:37:46.819 [2024-12-06 01:15:19.230912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.819 [2024-12-06 01:15:19.230962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.819 qpair failed and we were unable to recover it. 00:37:46.819 [2024-12-06 01:15:19.231099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.819 [2024-12-06 01:15:19.231150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.819 qpair failed and we were unable to recover it. 00:37:46.819 [2024-12-06 01:15:19.231289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.819 [2024-12-06 01:15:19.231325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.819 qpair failed and we were unable to recover it. 00:37:46.819 [2024-12-06 01:15:19.231503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.819 [2024-12-06 01:15:19.231538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.819 qpair failed and we were unable to recover it. 00:37:46.819 [2024-12-06 01:15:19.231678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.819 [2024-12-06 01:15:19.231712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.819 qpair failed and we were unable to recover it. 00:37:46.819 [2024-12-06 01:15:19.231813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.819 [2024-12-06 01:15:19.231854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.819 qpair failed and we were unable to recover it. 00:37:46.819 [2024-12-06 01:15:19.232029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.819 [2024-12-06 01:15:19.232064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.819 qpair failed and we were unable to recover it. 00:37:46.819 [2024-12-06 01:15:19.232199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.819 [2024-12-06 01:15:19.232240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.819 qpair failed and we were unable to recover it. 
00:37:46.819 [2024-12-06 01:15:19.232387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.819 [2024-12-06 01:15:19.232422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.819 qpair failed and we were unable to recover it. 00:37:46.819 [2024-12-06 01:15:19.232562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.819 [2024-12-06 01:15:19.232610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.819 qpair failed and we were unable to recover it. 00:37:46.819 [2024-12-06 01:15:19.232753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.819 [2024-12-06 01:15:19.232788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.819 qpair failed and we were unable to recover it. 00:37:46.819 [2024-12-06 01:15:19.232927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.819 [2024-12-06 01:15:19.232976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.819 qpair failed and we were unable to recover it. 00:37:46.819 [2024-12-06 01:15:19.233100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.819 [2024-12-06 01:15:19.233144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.819 qpair failed and we were unable to recover it. 00:37:46.819 [2024-12-06 01:15:19.233266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.819 [2024-12-06 01:15:19.233301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.819 qpair failed and we were unable to recover it. 00:37:46.819 [2024-12-06 01:15:19.233437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.819 [2024-12-06 01:15:19.233471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.819 qpair failed and we were unable to recover it. 00:37:46.819 [2024-12-06 01:15:19.233596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.819 [2024-12-06 01:15:19.233636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.819 qpair failed and we were unable to recover it. 00:37:46.819 [2024-12-06 01:15:19.233774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.819 [2024-12-06 01:15:19.233833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.819 qpair failed and we were unable to recover it. 00:37:46.819 [2024-12-06 01:15:19.233976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.819 [2024-12-06 01:15:19.234012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.819 qpair failed and we were unable to recover it. 
00:37:46.819 [2024-12-06 01:15:19.234171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.819 [2024-12-06 01:15:19.234228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.819 qpair failed and we were unable to recover it. 00:37:46.819 [2024-12-06 01:15:19.234373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.819 [2024-12-06 01:15:19.234409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.819 qpair failed and we were unable to recover it. 00:37:46.819 [2024-12-06 01:15:19.234517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.819 [2024-12-06 01:15:19.234551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.819 qpair failed and we were unable to recover it. 00:37:46.819 [2024-12-06 01:15:19.234704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.819 [2024-12-06 01:15:19.234740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.819 qpair failed and we were unable to recover it. 00:37:46.819 [2024-12-06 01:15:19.234880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.819 [2024-12-06 01:15:19.234915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.819 qpair failed and we were unable to recover it. 00:37:46.819 [2024-12-06 01:15:19.235020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.819 [2024-12-06 01:15:19.235054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.819 qpair failed and we were unable to recover it. 00:37:46.819 [2024-12-06 01:15:19.235215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.819 [2024-12-06 01:15:19.235249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.819 qpair failed and we were unable to recover it. 00:37:46.819 [2024-12-06 01:15:19.235387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.819 [2024-12-06 01:15:19.235420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.819 qpair failed and we were unable to recover it. 00:37:46.819 [2024-12-06 01:15:19.235559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.819 [2024-12-06 01:15:19.235593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.819 qpair failed and we were unable to recover it. 00:37:46.819 [2024-12-06 01:15:19.235729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.819 [2024-12-06 01:15:19.235763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.819 qpair failed and we were unable to recover it. 
00:37:46.819 [2024-12-06 01:15:19.235916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.819 [2024-12-06 01:15:19.235950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.819 qpair failed and we were unable to recover it. 00:37:46.819 [2024-12-06 01:15:19.236057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.819 [2024-12-06 01:15:19.236091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.819 qpair failed and we were unable to recover it. 00:37:46.819 [2024-12-06 01:15:19.236242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.819 [2024-12-06 01:15:19.236276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.819 qpair failed and we were unable to recover it. 00:37:46.819 [2024-12-06 01:15:19.236409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.819 [2024-12-06 01:15:19.236443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.819 qpair failed and we were unable to recover it. 00:37:46.819 [2024-12-06 01:15:19.236617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.819 [2024-12-06 01:15:19.236666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.819 qpair failed and we were unable to recover it. 00:37:46.819 [2024-12-06 01:15:19.236810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.819 [2024-12-06 01:15:19.236861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.819 qpair failed and we were unable to recover it. 00:37:46.819 [2024-12-06 01:15:19.237017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.819 [2024-12-06 01:15:19.237062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.819 qpair failed and we were unable to recover it. 00:37:46.819 [2024-12-06 01:15:19.237223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.819 [2024-12-06 01:15:19.237264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.819 qpair failed and we were unable to recover it. 00:37:46.819 [2024-12-06 01:15:19.237430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.820 [2024-12-06 01:15:19.237470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.820 qpair failed and we were unable to recover it. 00:37:46.820 [2024-12-06 01:15:19.237627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.820 [2024-12-06 01:15:19.237664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.820 qpair failed and we were unable to recover it. 
00:37:46.820 [2024-12-06 01:15:19.237809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.820 [2024-12-06 01:15:19.237851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.820 qpair failed and we were unable to recover it. 00:37:46.820 [2024-12-06 01:15:19.237985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.820 [2024-12-06 01:15:19.238019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.820 qpair failed and we were unable to recover it. 00:37:46.820 [2024-12-06 01:15:19.238168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.820 [2024-12-06 01:15:19.238202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.820 qpair failed and we were unable to recover it. 00:37:46.820 [2024-12-06 01:15:19.238363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.820 [2024-12-06 01:15:19.238398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.820 qpair failed and we were unable to recover it. 00:37:46.820 [2024-12-06 01:15:19.238571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.820 [2024-12-06 01:15:19.238606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.820 qpair failed and we were unable to recover it. 00:37:46.820 [2024-12-06 01:15:19.238749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.820 [2024-12-06 01:15:19.238795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.820 qpair failed and we were unable to recover it. 00:37:46.820 [2024-12-06 01:15:19.238943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.820 [2024-12-06 01:15:19.238978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.820 qpair failed and we were unable to recover it. 00:37:46.820 [2024-12-06 01:15:19.239141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.820 [2024-12-06 01:15:19.239174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.820 qpair failed and we were unable to recover it. 00:37:46.820 [2024-12-06 01:15:19.239339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.820 [2024-12-06 01:15:19.239374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.820 qpair failed and we were unable to recover it. 00:37:46.820 [2024-12-06 01:15:19.239508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.820 [2024-12-06 01:15:19.239547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.820 qpair failed and we were unable to recover it. 
00:37:46.820 [2024-12-06 01:15:19.239688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.820 [2024-12-06 01:15:19.239722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.820 qpair failed and we were unable to recover it. 00:37:46.820 [2024-12-06 01:15:19.239853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.820 [2024-12-06 01:15:19.239888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.820 qpair failed and we were unable to recover it. 00:37:46.820 [2024-12-06 01:15:19.240002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.820 [2024-12-06 01:15:19.240037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.820 qpair failed and we were unable to recover it. 00:37:46.820 [2024-12-06 01:15:19.240177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.820 [2024-12-06 01:15:19.240211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.820 qpair failed and we were unable to recover it. 00:37:46.820 [2024-12-06 01:15:19.240325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.820 [2024-12-06 01:15:19.240359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.820 qpair failed and we were unable to recover it. 00:37:46.820 [2024-12-06 01:15:19.240529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.820 [2024-12-06 01:15:19.240563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.820 qpair failed and we were unable to recover it. 00:37:46.820 [2024-12-06 01:15:19.240705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.820 [2024-12-06 01:15:19.240739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.820 qpair failed and we were unable to recover it. 00:37:46.820 [2024-12-06 01:15:19.240849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.820 [2024-12-06 01:15:19.240884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.820 qpair failed and we were unable to recover it. 00:37:46.820 [2024-12-06 01:15:19.241018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.820 [2024-12-06 01:15:19.241052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.820 qpair failed and we were unable to recover it. 00:37:46.820 [2024-12-06 01:15:19.241159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.820 [2024-12-06 01:15:19.241193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.820 qpair failed and we were unable to recover it. 
00:37:46.820 [2024-12-06 01:15:19.241301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.820 [2024-12-06 01:15:19.241335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.820 qpair failed and we were unable to recover it. 00:37:46.820 [2024-12-06 01:15:19.241470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.820 [2024-12-06 01:15:19.241504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.820 qpair failed and we were unable to recover it. 00:37:46.820 [2024-12-06 01:15:19.241642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.820 [2024-12-06 01:15:19.241677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.820 qpair failed and we were unable to recover it. 00:37:46.820 [2024-12-06 01:15:19.241836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.820 [2024-12-06 01:15:19.241871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.820 qpair failed and we were unable to recover it. 00:37:46.820 [2024-12-06 01:15:19.242004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.820 [2024-12-06 01:15:19.242038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.820 qpair failed and we were unable to recover it. 00:37:46.820 [2024-12-06 01:15:19.242174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.820 [2024-12-06 01:15:19.242216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.820 qpair failed and we were unable to recover it. 00:37:46.820 [2024-12-06 01:15:19.242329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.820 [2024-12-06 01:15:19.242362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.820 qpair failed and we were unable to recover it. 00:37:46.820 [2024-12-06 01:15:19.242506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.820 [2024-12-06 01:15:19.242540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.820 qpair failed and we were unable to recover it. 00:37:46.820 [2024-12-06 01:15:19.242686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.820 [2024-12-06 01:15:19.242720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.820 qpair failed and we were unable to recover it. 00:37:46.820 [2024-12-06 01:15:19.242833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.820 [2024-12-06 01:15:19.242868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.820 qpair failed and we were unable to recover it. 
00:37:46.820 [2024-12-06 01:15:19.242969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.820 [2024-12-06 01:15:19.243003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.820 qpair failed and we were unable to recover it. 00:37:46.820 [2024-12-06 01:15:19.243168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.820 [2024-12-06 01:15:19.243203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.820 qpair failed and we were unable to recover it. 00:37:46.820 [2024-12-06 01:15:19.243305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.820 [2024-12-06 01:15:19.243339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.820 qpair failed and we were unable to recover it. 00:37:46.820 [2024-12-06 01:15:19.243438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.820 [2024-12-06 01:15:19.243472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.820 qpair failed and we were unable to recover it. 00:37:46.821 [2024-12-06 01:15:19.243599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.821 [2024-12-06 01:15:19.243633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.821 qpair failed and we were unable to recover it. 00:37:46.821 [2024-12-06 01:15:19.243745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.821 [2024-12-06 01:15:19.243779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.821 qpair failed and we were unable to recover it. 00:37:46.821 [2024-12-06 01:15:19.243944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.821 [2024-12-06 01:15:19.243993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.821 qpair failed and we were unable to recover it. 00:37:46.821 [2024-12-06 01:15:19.244162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.821 [2024-12-06 01:15:19.244218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.821 qpair failed and we were unable to recover it. 00:37:46.821 [2024-12-06 01:15:19.244338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.821 [2024-12-06 01:15:19.244375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.821 qpair failed and we were unable to recover it. 00:37:46.821 [2024-12-06 01:15:19.244514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.821 [2024-12-06 01:15:19.244549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.821 qpair failed and we were unable to recover it. 
00:37:46.821 [2024-12-06 01:15:19.244689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.821 [2024-12-06 01:15:19.244725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.821 qpair failed and we were unable to recover it. 00:37:46.821 [2024-12-06 01:15:19.244876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.821 [2024-12-06 01:15:19.244912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.821 qpair failed and we were unable to recover it. 00:37:46.821 [2024-12-06 01:15:19.245047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.821 [2024-12-06 01:15:19.245082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.821 qpair failed and we were unable to recover it. 00:37:46.821 [2024-12-06 01:15:19.245185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.821 [2024-12-06 01:15:19.245220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.821 qpair failed and we were unable to recover it. 00:37:46.821 [2024-12-06 01:15:19.245365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.821 [2024-12-06 01:15:19.245400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.821 qpair failed and we were unable to recover it. 00:37:46.821 [2024-12-06 01:15:19.245537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.821 [2024-12-06 01:15:19.245572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.821 qpair failed and we were unable to recover it. 00:37:46.821 [2024-12-06 01:15:19.245705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.821 [2024-12-06 01:15:19.245740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.821 qpair failed and we were unable to recover it. 00:37:46.821 [2024-12-06 01:15:19.245887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.821 [2024-12-06 01:15:19.245924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.821 qpair failed and we were unable to recover it. 00:37:46.821 [2024-12-06 01:15:19.246040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.821 [2024-12-06 01:15:19.246075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.821 qpair failed and we were unable to recover it. 00:37:46.821 [2024-12-06 01:15:19.246213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.821 [2024-12-06 01:15:19.246253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.821 qpair failed and we were unable to recover it. 
00:37:46.821 [2024-12-06 01:15:19.246389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.821 [2024-12-06 01:15:19.246425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.821 qpair failed and we were unable to recover it. 00:37:46.821 [2024-12-06 01:15:19.246593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.821 [2024-12-06 01:15:19.246639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.821 qpair failed and we were unable to recover it. 00:37:46.821 [2024-12-06 01:15:19.246800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.821 [2024-12-06 01:15:19.246844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.821 qpair failed and we were unable to recover it. 00:37:46.821 [2024-12-06 01:15:19.246953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.821 [2024-12-06 01:15:19.246987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.821 qpair failed and we were unable to recover it. 00:37:46.821 [2024-12-06 01:15:19.247121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.821 [2024-12-06 01:15:19.247155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.821 qpair failed and we were unable to recover it. 00:37:46.821 [2024-12-06 01:15:19.247291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.821 [2024-12-06 01:15:19.247325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.821 qpair failed and we were unable to recover it. 00:37:46.821 [2024-12-06 01:15:19.247432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.821 [2024-12-06 01:15:19.247467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.821 qpair failed and we were unable to recover it. 00:37:46.821 [2024-12-06 01:15:19.247592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.821 [2024-12-06 01:15:19.247625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.821 qpair failed and we were unable to recover it. 00:37:46.821 [2024-12-06 01:15:19.247786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.821 [2024-12-06 01:15:19.247826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.821 qpair failed and we were unable to recover it. 00:37:46.821 [2024-12-06 01:15:19.247939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.821 [2024-12-06 01:15:19.247974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.821 qpair failed and we were unable to recover it. 
00:37:46.821 [2024-12-06 01:15:19.248108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.821 [2024-12-06 01:15:19.248142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.821 qpair failed and we were unable to recover it. 00:37:46.821 [2024-12-06 01:15:19.248252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.821 [2024-12-06 01:15:19.248287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.821 qpair failed and we were unable to recover it. 00:37:46.821 [2024-12-06 01:15:19.248413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.821 [2024-12-06 01:15:19.248446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.821 qpair failed and we were unable to recover it. 00:37:46.821 [2024-12-06 01:15:19.248585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.821 [2024-12-06 01:15:19.248622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.821 qpair failed and we were unable to recover it. 00:37:46.821 [2024-12-06 01:15:19.248753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.821 [2024-12-06 01:15:19.248812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.821 qpair failed and we were unable to recover it. 00:37:46.821 [2024-12-06 01:15:19.248951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.821 [2024-12-06 01:15:19.248986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.821 qpair failed and we were unable to recover it. 00:37:46.821 [2024-12-06 01:15:19.249132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.821 [2024-12-06 01:15:19.249175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.821 qpair failed and we were unable to recover it. 00:37:46.821 [2024-12-06 01:15:19.249275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.821 [2024-12-06 01:15:19.249309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.821 qpair failed and we were unable to recover it. 00:37:46.821 [2024-12-06 01:15:19.249471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.821 [2024-12-06 01:15:19.249505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.821 qpair failed and we were unable to recover it. 00:37:46.821 [2024-12-06 01:15:19.249623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.821 [2024-12-06 01:15:19.249658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.821 qpair failed and we were unable to recover it. 
00:37:46.821 [2024-12-06 01:15:19.249789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.821 [2024-12-06 01:15:19.249837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.822 qpair failed and we were unable to recover it. 00:37:46.822 [2024-12-06 01:15:19.249946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.822 [2024-12-06 01:15:19.249980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.822 qpair failed and we were unable to recover it. 00:37:46.822 [2024-12-06 01:15:19.250079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.822 [2024-12-06 01:15:19.250123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.822 qpair failed and we were unable to recover it. 00:37:46.822 [2024-12-06 01:15:19.250256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.822 [2024-12-06 01:15:19.250291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.822 qpair failed and we were unable to recover it. 00:37:46.822 [2024-12-06 01:15:19.250429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.822 [2024-12-06 01:15:19.250464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.822 qpair failed and we were unable to recover it. 00:37:46.822 [2024-12-06 01:15:19.250565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.822 [2024-12-06 01:15:19.250598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.822 qpair failed and we were unable to recover it. 00:37:46.822 [2024-12-06 01:15:19.250708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.822 [2024-12-06 01:15:19.250744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.822 qpair failed and we were unable to recover it. 00:37:46.822 [2024-12-06 01:15:19.250913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.822 [2024-12-06 01:15:19.250949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.822 qpair failed and we were unable to recover it. 00:37:46.822 [2024-12-06 01:15:19.251102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.822 [2024-12-06 01:15:19.251140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.822 qpair failed and we were unable to recover it. 00:37:46.822 [2024-12-06 01:15:19.251311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.822 [2024-12-06 01:15:19.251355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.822 qpair failed and we were unable to recover it. 
00:37:46.822 [2024-12-06 01:15:19.251463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.822 [2024-12-06 01:15:19.251498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.822 qpair failed and we were unable to recover it. 00:37:46.822 [2024-12-06 01:15:19.251659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.822 [2024-12-06 01:15:19.251694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.822 qpair failed and we were unable to recover it. 00:37:46.822 [2024-12-06 01:15:19.251856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.822 [2024-12-06 01:15:19.251891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.822 qpair failed and we were unable to recover it. 00:37:46.822 [2024-12-06 01:15:19.252023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.822 [2024-12-06 01:15:19.252058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.822 qpair failed and we were unable to recover it. 00:37:46.822 [2024-12-06 01:15:19.252230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.822 [2024-12-06 01:15:19.252266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.822 qpair failed and we were unable to recover it. 00:37:46.822 [2024-12-06 01:15:19.252374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.822 [2024-12-06 01:15:19.252409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.822 qpair failed and we were unable to recover it. 00:37:46.822 [2024-12-06 01:15:19.252571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.822 [2024-12-06 01:15:19.252606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.822 qpair failed and we were unable to recover it. 00:37:46.822 [2024-12-06 01:15:19.252729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.822 [2024-12-06 01:15:19.252785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.822 qpair failed and we were unable to recover it. 00:37:46.822 [2024-12-06 01:15:19.252927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.822 [2024-12-06 01:15:19.252963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.822 qpair failed and we were unable to recover it. 00:37:46.822 [2024-12-06 01:15:19.253078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.822 [2024-12-06 01:15:19.253125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.822 qpair failed and we were unable to recover it. 
00:37:46.822 [2024-12-06 01:15:19.253290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.822 [2024-12-06 01:15:19.253324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.822 qpair failed and we were unable to recover it. 00:37:46.822 [2024-12-06 01:15:19.253494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.822 [2024-12-06 01:15:19.253528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.822 qpair failed and we were unable to recover it. 00:37:46.822 [2024-12-06 01:15:19.253689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.822 [2024-12-06 01:15:19.253724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.822 qpair failed and we were unable to recover it. 00:37:46.822 [2024-12-06 01:15:19.253832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.822 [2024-12-06 01:15:19.253867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.822 qpair failed and we were unable to recover it. 00:37:46.822 [2024-12-06 01:15:19.253974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.822 [2024-12-06 01:15:19.254008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.822 qpair failed and we were unable to recover it. 00:37:46.822 [2024-12-06 01:15:19.254167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.822 [2024-12-06 01:15:19.254201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.822 qpair failed and we were unable to recover it. 00:37:46.822 [2024-12-06 01:15:19.254308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.822 [2024-12-06 01:15:19.254342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.822 qpair failed and we were unable to recover it. 00:37:46.822 [2024-12-06 01:15:19.254477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.822 [2024-12-06 01:15:19.254511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.822 qpair failed and we were unable to recover it. 00:37:46.822 [2024-12-06 01:15:19.254612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.822 [2024-12-06 01:15:19.254646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.822 qpair failed and we were unable to recover it. 00:37:46.822 [2024-12-06 01:15:19.254754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.822 [2024-12-06 01:15:19.254788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.822 qpair failed and we were unable to recover it. 
00:37:46.822 [2024-12-06 01:15:19.254927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.822 [2024-12-06 01:15:19.254961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.822 qpair failed and we were unable to recover it. 00:37:46.822 [2024-12-06 01:15:19.255106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.822 [2024-12-06 01:15:19.255140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.822 qpair failed and we were unable to recover it. 00:37:46.822 [2024-12-06 01:15:19.255240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.822 [2024-12-06 01:15:19.255285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.822 qpair failed and we were unable to recover it. 00:37:46.822 [2024-12-06 01:15:19.255443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.822 [2024-12-06 01:15:19.255479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.822 qpair failed and we were unable to recover it. 00:37:46.822 [2024-12-06 01:15:19.255589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.822 [2024-12-06 01:15:19.255622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.822 qpair failed and we were unable to recover it. 00:37:46.822 [2024-12-06 01:15:19.255765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.822 [2024-12-06 01:15:19.255799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.822 qpair failed and we were unable to recover it. 00:37:46.822 [2024-12-06 01:15:19.255950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.822 [2024-12-06 01:15:19.255984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.822 qpair failed and we were unable to recover it. 00:37:46.822 [2024-12-06 01:15:19.256097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.822 [2024-12-06 01:15:19.256140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.822 qpair failed and we were unable to recover it. 00:37:46.822 [2024-12-06 01:15:19.256251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.822 [2024-12-06 01:15:19.256284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.822 qpair failed and we were unable to recover it. 00:37:46.823 [2024-12-06 01:15:19.256423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.823 [2024-12-06 01:15:19.256457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.823 qpair failed and we were unable to recover it. 
00:37:46.823 [2024-12-06 01:15:19.256620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.823 [2024-12-06 01:15:19.256654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.823 qpair failed and we were unable to recover it. 00:37:46.823 [2024-12-06 01:15:19.256756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.823 [2024-12-06 01:15:19.256790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.823 qpair failed and we were unable to recover it. 00:37:46.823 [2024-12-06 01:15:19.256908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.823 [2024-12-06 01:15:19.256943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.823 qpair failed and we were unable to recover it. 00:37:46.823 [2024-12-06 01:15:19.257043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.823 [2024-12-06 01:15:19.257077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.823 qpair failed and we were unable to recover it. 00:37:46.823 [2024-12-06 01:15:19.257199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.823 [2024-12-06 01:15:19.257233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.823 qpair failed and we were unable to recover it. 00:37:46.823 [2024-12-06 01:15:19.257390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.823 [2024-12-06 01:15:19.257424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.823 qpair failed and we were unable to recover it. 00:37:46.823 [2024-12-06 01:15:19.257606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.823 [2024-12-06 01:15:19.257652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.823 qpair failed and we were unable to recover it. 00:37:46.823 [2024-12-06 01:15:19.257828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.823 [2024-12-06 01:15:19.257877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.823 qpair failed and we were unable to recover it. 00:37:46.823 [2024-12-06 01:15:19.258062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.823 [2024-12-06 01:15:19.258110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.823 qpair failed and we were unable to recover it. 00:37:46.823 [2024-12-06 01:15:19.258265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.823 [2024-12-06 01:15:19.258301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.823 qpair failed and we were unable to recover it. 
00:37:46.823 [2024-12-06 01:15:19.258465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.823 [2024-12-06 01:15:19.258499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.823 qpair failed and we were unable to recover it. 00:37:46.823 [2024-12-06 01:15:19.258638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.823 [2024-12-06 01:15:19.258672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.823 qpair failed and we were unable to recover it. 00:37:46.823 [2024-12-06 01:15:19.258811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.823 [2024-12-06 01:15:19.258858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.823 qpair failed and we were unable to recover it. 00:37:46.823 [2024-12-06 01:15:19.258992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.823 [2024-12-06 01:15:19.259025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.823 qpair failed and we were unable to recover it. 00:37:46.823 [2024-12-06 01:15:19.259160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.823 [2024-12-06 01:15:19.259197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.823 qpair failed and we were unable to recover it. 00:37:46.823 [2024-12-06 01:15:19.259295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.823 [2024-12-06 01:15:19.259329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.823 qpair failed and we were unable to recover it. 00:37:46.823 [2024-12-06 01:15:19.259470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.823 [2024-12-06 01:15:19.259503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.823 qpair failed and we were unable to recover it. 00:37:46.823 [2024-12-06 01:15:19.259606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.823 [2024-12-06 01:15:19.259640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.823 qpair failed and we were unable to recover it. 00:37:46.823 [2024-12-06 01:15:19.259776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.823 [2024-12-06 01:15:19.259812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.823 qpair failed and we were unable to recover it. 00:37:46.823 [2024-12-06 01:15:19.259929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.823 [2024-12-06 01:15:19.259968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.823 qpair failed and we were unable to recover it. 
00:37:46.823 [2024-12-06 01:15:19.260107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.823 [2024-12-06 01:15:19.260142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.823 qpair failed and we were unable to recover it. 00:37:46.823 [2024-12-06 01:15:19.260286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.823 [2024-12-06 01:15:19.260320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.823 qpair failed and we were unable to recover it. 00:37:46.823 [2024-12-06 01:15:19.260480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.823 [2024-12-06 01:15:19.260514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.823 qpair failed and we were unable to recover it. 00:37:46.823 [2024-12-06 01:15:19.260676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.823 [2024-12-06 01:15:19.260710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.823 qpair failed and we were unable to recover it. 00:37:46.823 [2024-12-06 01:15:19.260856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.823 [2024-12-06 01:15:19.260905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.823 qpair failed and we were unable to recover it. 00:37:46.823 [2024-12-06 01:15:19.261047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.823 [2024-12-06 01:15:19.261083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.823 qpair failed and we were unable to recover it. 00:37:46.823 [2024-12-06 01:15:19.261250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.823 [2024-12-06 01:15:19.261286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.823 qpair failed and we were unable to recover it. 00:37:46.823 [2024-12-06 01:15:19.261411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.823 [2024-12-06 01:15:19.261447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.823 qpair failed and we were unable to recover it. 00:37:46.823 [2024-12-06 01:15:19.261584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.823 [2024-12-06 01:15:19.261618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.823 qpair failed and we were unable to recover it. 00:37:46.823 [2024-12-06 01:15:19.261784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.823 [2024-12-06 01:15:19.261845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.823 qpair failed and we were unable to recover it. 
00:37:46.823 [2024-12-06 01:15:19.261994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.823 [2024-12-06 01:15:19.262028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.823 qpair failed and we were unable to recover it. 00:37:46.823 [2024-12-06 01:15:19.262135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.823 [2024-12-06 01:15:19.262168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.823 qpair failed and we were unable to recover it. 00:37:46.823 [2024-12-06 01:15:19.262300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.823 [2024-12-06 01:15:19.262334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.823 qpair failed and we were unable to recover it. 00:37:46.823 [2024-12-06 01:15:19.262448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.823 [2024-12-06 01:15:19.262482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.823 qpair failed and we were unable to recover it. 00:37:46.823 [2024-12-06 01:15:19.262599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.823 [2024-12-06 01:15:19.262633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.823 qpair failed and we were unable to recover it. 00:37:46.823 [2024-12-06 01:15:19.262778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.823 [2024-12-06 01:15:19.262819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.823 qpair failed and we were unable to recover it. 00:37:46.823 [2024-12-06 01:15:19.262967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.823 [2024-12-06 01:15:19.263002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.823 qpair failed and we were unable to recover it. 00:37:46.824 [2024-12-06 01:15:19.263137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.824 [2024-12-06 01:15:19.263172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.824 qpair failed and we were unable to recover it. 00:37:46.824 [2024-12-06 01:15:19.263306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.824 [2024-12-06 01:15:19.263340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.824 qpair failed and we were unable to recover it. 00:37:46.824 [2024-12-06 01:15:19.263498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.824 [2024-12-06 01:15:19.263532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.824 qpair failed and we were unable to recover it. 
00:37:46.824 [2024-12-06 01:15:19.263665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.824 [2024-12-06 01:15:19.263700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.824 qpair failed and we were unable to recover it. 00:37:46.824 [2024-12-06 01:15:19.263836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.824 [2024-12-06 01:15:19.263870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.824 qpair failed and we were unable to recover it. 00:37:46.824 [2024-12-06 01:15:19.264001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.824 [2024-12-06 01:15:19.264035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.824 qpair failed and we were unable to recover it. 00:37:46.824 [2024-12-06 01:15:19.264186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.824 [2024-12-06 01:15:19.264220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.824 qpair failed and we were unable to recover it. 00:37:46.824 [2024-12-06 01:15:19.264386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.824 [2024-12-06 01:15:19.264420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.824 qpair failed and we were unable to recover it. 00:37:46.824 [2024-12-06 01:15:19.264558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.824 [2024-12-06 01:15:19.264592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.824 qpair failed and we were unable to recover it. 00:37:46.824 [2024-12-06 01:15:19.264744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.824 [2024-12-06 01:15:19.264788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.824 qpair failed and we were unable to recover it. 00:37:46.824 [2024-12-06 01:15:19.264942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.824 [2024-12-06 01:15:19.264991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.824 qpair failed and we were unable to recover it. 00:37:46.824 [2024-12-06 01:15:19.265136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.824 [2024-12-06 01:15:19.265182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.824 qpair failed and we were unable to recover it. 00:37:46.824 [2024-12-06 01:15:19.265322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.824 [2024-12-06 01:15:19.265356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.824 qpair failed and we were unable to recover it. 
00:37:46.824 [2024-12-06 01:15:19.265489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.824 [2024-12-06 01:15:19.265523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.824 qpair failed and we were unable to recover it. 00:37:46.824 [2024-12-06 01:15:19.265638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.824 [2024-12-06 01:15:19.265680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.824 qpair failed and we were unable to recover it. 00:37:46.824 [2024-12-06 01:15:19.265811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.824 [2024-12-06 01:15:19.265859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.824 qpair failed and we were unable to recover it. 00:37:46.824 [2024-12-06 01:15:19.266023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.824 [2024-12-06 01:15:19.266057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.824 qpair failed and we were unable to recover it. 00:37:46.824 [2024-12-06 01:15:19.266195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.824 [2024-12-06 01:15:19.266229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.824 qpair failed and we were unable to recover it. 00:37:46.824 [2024-12-06 01:15:19.266341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.824 [2024-12-06 01:15:19.266374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.824 qpair failed and we were unable to recover it. 00:37:46.824 [2024-12-06 01:15:19.266483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.824 [2024-12-06 01:15:19.266517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.824 qpair failed and we were unable to recover it. 00:37:46.824 [2024-12-06 01:15:19.266661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.824 [2024-12-06 01:15:19.266695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.824 qpair failed and we were unable to recover it. 00:37:46.824 [2024-12-06 01:15:19.266832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.824 [2024-12-06 01:15:19.266866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.824 qpair failed and we were unable to recover it. 00:37:46.824 [2024-12-06 01:15:19.267010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.824 [2024-12-06 01:15:19.267044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.824 qpair failed and we were unable to recover it. 
00:37:46.824 [2024-12-06 01:15:19.267196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.824 [2024-12-06 01:15:19.267246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.824 qpair failed and we were unable to recover it. 00:37:46.824 [2024-12-06 01:15:19.267423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.824 [2024-12-06 01:15:19.267470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.824 qpair failed and we were unable to recover it. 00:37:46.824 [2024-12-06 01:15:19.267584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.824 [2024-12-06 01:15:19.267620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.824 qpair failed and we were unable to recover it. 00:37:46.824 [2024-12-06 01:15:19.267759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.824 [2024-12-06 01:15:19.267794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.824 qpair failed and we were unable to recover it. 00:37:46.824 [2024-12-06 01:15:19.267939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.824 [2024-12-06 01:15:19.267974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.824 qpair failed and we were unable to recover it. 00:37:46.824 [2024-12-06 01:15:19.268126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.824 [2024-12-06 01:15:19.268161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.824 qpair failed and we were unable to recover it. 00:37:46.824 [2024-12-06 01:15:19.268301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.824 [2024-12-06 01:15:19.268336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.824 qpair failed and we were unable to recover it. 00:37:46.824 [2024-12-06 01:15:19.268443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.824 [2024-12-06 01:15:19.268477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.824 qpair failed and we were unable to recover it. 00:37:46.824 [2024-12-06 01:15:19.268613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.824 [2024-12-06 01:15:19.268648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.824 qpair failed and we were unable to recover it. 00:37:46.824 [2024-12-06 01:15:19.268748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.824 [2024-12-06 01:15:19.268787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.824 qpair failed and we were unable to recover it. 
00:37:46.824 [2024-12-06 01:15:19.268940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.824 [2024-12-06 01:15:19.268989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.824 qpair failed and we were unable to recover it. 00:37:46.824 [2024-12-06 01:15:19.269160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.824 [2024-12-06 01:15:19.269196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.824 qpair failed and we were unable to recover it. 00:37:46.824 [2024-12-06 01:15:19.269308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.824 [2024-12-06 01:15:19.269345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.824 qpair failed and we were unable to recover it. 00:37:46.824 [2024-12-06 01:15:19.269490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.824 [2024-12-06 01:15:19.269525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.824 qpair failed and we were unable to recover it. 00:37:46.824 [2024-12-06 01:15:19.269635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.824 [2024-12-06 01:15:19.269669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.824 qpair failed and we were unable to recover it. 00:37:46.825 [2024-12-06 01:15:19.269777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.825 [2024-12-06 01:15:19.269834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.825 qpair failed and we were unable to recover it. 00:37:46.825 [2024-12-06 01:15:19.269968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.825 [2024-12-06 01:15:19.270003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.825 qpair failed and we were unable to recover it. 00:37:46.825 [2024-12-06 01:15:19.270136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.825 [2024-12-06 01:15:19.270171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.825 qpair failed and we were unable to recover it. 00:37:46.825 [2024-12-06 01:15:19.270335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.825 [2024-12-06 01:15:19.270371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.825 qpair failed and we were unable to recover it. 00:37:46.825 [2024-12-06 01:15:19.270507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.825 [2024-12-06 01:15:19.270541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.825 qpair failed and we were unable to recover it. 
00:37:46.825 [2024-12-06 01:15:19.270656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.825 [2024-12-06 01:15:19.270691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.825 qpair failed and we were unable to recover it. 00:37:46.825 [2024-12-06 01:15:19.270829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.825 [2024-12-06 01:15:19.270865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.825 qpair failed and we were unable to recover it. 00:37:46.825 [2024-12-06 01:15:19.271004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.825 [2024-12-06 01:15:19.271039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.825 qpair failed and we were unable to recover it. 00:37:46.825 [2024-12-06 01:15:19.271177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.825 [2024-12-06 01:15:19.271211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.825 qpair failed and we were unable to recover it. 00:37:46.825 [2024-12-06 01:15:19.271348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.825 [2024-12-06 01:15:19.271382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.825 qpair failed and we were unable to recover it. 00:37:46.825 [2024-12-06 01:15:19.271527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.825 [2024-12-06 01:15:19.271563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.825 qpair failed and we were unable to recover it. 00:37:46.825 [2024-12-06 01:15:19.271697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.825 [2024-12-06 01:15:19.271737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.825 qpair failed and we were unable to recover it. 00:37:46.825 [2024-12-06 01:15:19.271903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.825 [2024-12-06 01:15:19.271952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.825 qpair failed and we were unable to recover it. 00:37:46.825 [2024-12-06 01:15:19.272071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.825 [2024-12-06 01:15:19.272106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.825 qpair failed and we were unable to recover it. 00:37:46.825 [2024-12-06 01:15:19.272242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.825 [2024-12-06 01:15:19.272276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.825 qpair failed and we were unable to recover it. 
00:37:46.825 [2024-12-06 01:15:19.272406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.825 [2024-12-06 01:15:19.272441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.825 qpair failed and we were unable to recover it. 00:37:46.825 [2024-12-06 01:15:19.272586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.825 [2024-12-06 01:15:19.272621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.825 qpair failed and we were unable to recover it. 00:37:46.825 [2024-12-06 01:15:19.272727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.825 [2024-12-06 01:15:19.272765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.825 qpair failed and we were unable to recover it. 00:37:46.825 [2024-12-06 01:15:19.272920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.825 [2024-12-06 01:15:19.272956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.825 qpair failed and we were unable to recover it. 00:37:46.825 [2024-12-06 01:15:19.273065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.825 [2024-12-06 01:15:19.273100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.825 qpair failed and we were unable to recover it. 00:37:46.825 [2024-12-06 01:15:19.273261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.825 [2024-12-06 01:15:19.273296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.825 qpair failed and we were unable to recover it. 00:37:46.825 [2024-12-06 01:15:19.273432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.825 [2024-12-06 01:15:19.273467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.825 qpair failed and we were unable to recover it. 00:37:46.825 [2024-12-06 01:15:19.273608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.825 [2024-12-06 01:15:19.273643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.825 qpair failed and we were unable to recover it. 00:37:46.825 [2024-12-06 01:15:19.273744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.825 [2024-12-06 01:15:19.273779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.825 qpair failed and we were unable to recover it. 00:37:46.825 [2024-12-06 01:15:19.273905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.825 [2024-12-06 01:15:19.273939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.825 qpair failed and we were unable to recover it. 
00:37:46.825 [2024-12-06 01:15:19.274081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.825 [2024-12-06 01:15:19.274116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.825 qpair failed and we were unable to recover it. 00:37:46.825 [2024-12-06 01:15:19.274261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.825 [2024-12-06 01:15:19.274297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.825 qpair failed and we were unable to recover it. 00:37:46.825 [2024-12-06 01:15:19.274461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.825 [2024-12-06 01:15:19.274498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.825 qpair failed and we were unable to recover it. 00:37:46.825 [2024-12-06 01:15:19.274617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.825 [2024-12-06 01:15:19.274652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.825 qpair failed and we were unable to recover it. 00:37:46.825 [2024-12-06 01:15:19.274794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.825 [2024-12-06 01:15:19.274838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.825 qpair failed and we were unable to recover it. 00:37:46.825 [2024-12-06 01:15:19.275010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.825 [2024-12-06 01:15:19.275045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.825 qpair failed and we were unable to recover it. 00:37:46.825 [2024-12-06 01:15:19.275157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.825 [2024-12-06 01:15:19.275191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.825 qpair failed and we were unable to recover it. 00:37:46.825 [2024-12-06 01:15:19.275336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.825 [2024-12-06 01:15:19.275371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.825 qpair failed and we were unable to recover it. 00:37:46.825 [2024-12-06 01:15:19.275499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.825 [2024-12-06 01:15:19.275546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.825 qpair failed and we were unable to recover it. 00:37:46.825 [2024-12-06 01:15:19.275657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.826 [2024-12-06 01:15:19.275692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.826 qpair failed and we were unable to recover it. 
00:37:46.826 [2024-12-06 01:15:19.275858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.826 [2024-12-06 01:15:19.275893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.826 qpair failed and we were unable to recover it. 00:37:46.826 [2024-12-06 01:15:19.276031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.826 [2024-12-06 01:15:19.276066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.826 qpair failed and we were unable to recover it. 00:37:46.826 [2024-12-06 01:15:19.276204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.826 [2024-12-06 01:15:19.276239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.826 qpair failed and we were unable to recover it. 00:37:46.826 [2024-12-06 01:15:19.276372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.826 [2024-12-06 01:15:19.276407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.826 qpair failed and we were unable to recover it. 00:37:46.826 [2024-12-06 01:15:19.276544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.826 [2024-12-06 01:15:19.276580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.826 qpair failed and we were unable to recover it. 00:37:46.826 [2024-12-06 01:15:19.276746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.826 [2024-12-06 01:15:19.276781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.826 qpair failed and we were unable to recover it. 00:37:46.826 [2024-12-06 01:15:19.276900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.826 [2024-12-06 01:15:19.276935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.826 qpair failed and we were unable to recover it. 00:37:46.826 [2024-12-06 01:15:19.277071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.826 [2024-12-06 01:15:19.277110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.826 qpair failed and we were unable to recover it. 00:37:46.826 [2024-12-06 01:15:19.277255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.826 [2024-12-06 01:15:19.277289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.826 qpair failed and we were unable to recover it. 00:37:46.826 [2024-12-06 01:15:19.277423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.826 [2024-12-06 01:15:19.277457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.826 qpair failed and we were unable to recover it. 
00:37:46.826 [2024-12-06 01:15:19.277567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.826 [2024-12-06 01:15:19.277601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.826 qpair failed and we were unable to recover it. 00:37:46.826 [2024-12-06 01:15:19.277735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.826 [2024-12-06 01:15:19.277770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.826 qpair failed and we were unable to recover it. 00:37:46.826 [2024-12-06 01:15:19.277919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.826 [2024-12-06 01:15:19.277955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.826 qpair failed and we were unable to recover it. 00:37:46.826 [2024-12-06 01:15:19.278063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.826 [2024-12-06 01:15:19.278098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.826 qpair failed and we were unable to recover it. 00:37:46.826 [2024-12-06 01:15:19.278201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.826 [2024-12-06 01:15:19.278236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.826 qpair failed and we were unable to recover it. 00:37:46.826 [2024-12-06 01:15:19.278345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.826 [2024-12-06 01:15:19.278379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.826 qpair failed and we were unable to recover it. 00:37:46.826 [2024-12-06 01:15:19.278490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.826 [2024-12-06 01:15:19.278529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.826 qpair failed and we were unable to recover it. 00:37:46.826 [2024-12-06 01:15:19.278673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.826 [2024-12-06 01:15:19.278707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.826 qpair failed and we were unable to recover it. 00:37:46.826 [2024-12-06 01:15:19.278847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.826 [2024-12-06 01:15:19.278882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.826 qpair failed and we were unable to recover it. 00:37:46.826 [2024-12-06 01:15:19.278991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.826 [2024-12-06 01:15:19.279026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.826 qpair failed and we were unable to recover it. 
00:37:46.826 [2024-12-06 01:15:19.279178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.826 [2024-12-06 01:15:19.279226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.826 qpair failed and we were unable to recover it. 00:37:46.826 [2024-12-06 01:15:19.279359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.826 [2024-12-06 01:15:19.279397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.826 qpair failed and we were unable to recover it. 00:37:46.826 [2024-12-06 01:15:19.279535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.826 [2024-12-06 01:15:19.279570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.826 qpair failed and we were unable to recover it. 00:37:46.826 [2024-12-06 01:15:19.279681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.826 [2024-12-06 01:15:19.279715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.826 qpair failed and we were unable to recover it. 00:37:46.826 [2024-12-06 01:15:19.279855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.826 [2024-12-06 01:15:19.279890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.826 qpair failed and we were unable to recover it. 00:37:46.826 [2024-12-06 01:15:19.279999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.826 [2024-12-06 01:15:19.280033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.826 qpair failed and we were unable to recover it. 00:37:46.826 [2024-12-06 01:15:19.280138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.826 [2024-12-06 01:15:19.280172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.826 qpair failed and we were unable to recover it. 00:37:46.826 [2024-12-06 01:15:19.280309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.826 [2024-12-06 01:15:19.280342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.826 qpair failed and we were unable to recover it. 00:37:46.826 [2024-12-06 01:15:19.280476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.826 [2024-12-06 01:15:19.280510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.826 qpair failed and we were unable to recover it. 00:37:46.826 [2024-12-06 01:15:19.280619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.826 [2024-12-06 01:15:19.280655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.826 qpair failed and we were unable to recover it. 
00:37:46.826 [2024-12-06 01:15:19.280804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.826 [2024-12-06 01:15:19.280847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.826 qpair failed and we were unable to recover it. 00:37:46.826 [2024-12-06 01:15:19.281002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.826 [2024-12-06 01:15:19.281036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.826 qpair failed and we were unable to recover it. 00:37:46.826 [2024-12-06 01:15:19.281220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.826 [2024-12-06 01:15:19.281254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.826 qpair failed and we were unable to recover it. 00:37:46.826 [2024-12-06 01:15:19.281401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.826 [2024-12-06 01:15:19.281447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.826 qpair failed and we were unable to recover it. 00:37:46.826 [2024-12-06 01:15:19.281598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.826 [2024-12-06 01:15:19.281631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.826 qpair failed and we were unable to recover it. 00:37:46.826 [2024-12-06 01:15:19.281742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.826 [2024-12-06 01:15:19.281775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.826 qpair failed and we were unable to recover it. 00:37:46.826 [2024-12-06 01:15:19.281904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.826 [2024-12-06 01:15:19.281938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.826 qpair failed and we were unable to recover it. 00:37:46.826 [2024-12-06 01:15:19.282075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.826 [2024-12-06 01:15:19.282109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.827 qpair failed and we were unable to recover it. 00:37:46.827 [2024-12-06 01:15:19.282257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.827 [2024-12-06 01:15:19.282291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.827 qpair failed and we were unable to recover it. 00:37:46.827 [2024-12-06 01:15:19.282424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.827 [2024-12-06 01:15:19.282458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.827 qpair failed and we were unable to recover it. 
00:37:46.827 [2024-12-06 01:15:19.282618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.827 [2024-12-06 01:15:19.282652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.827 qpair failed and we were unable to recover it. 00:37:46.827 [2024-12-06 01:15:19.282788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.827 [2024-12-06 01:15:19.282843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.827 qpair failed and we were unable to recover it. 00:37:46.827 [2024-12-06 01:15:19.282946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.827 [2024-12-06 01:15:19.282980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.827 qpair failed and we were unable to recover it. 00:37:46.827 [2024-12-06 01:15:19.283148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.827 [2024-12-06 01:15:19.283193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.827 qpair failed and we were unable to recover it. 00:37:46.827 [2024-12-06 01:15:19.283331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.827 [2024-12-06 01:15:19.283366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.827 qpair failed and we were unable to recover it. 00:37:46.827 [2024-12-06 01:15:19.283505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.827 [2024-12-06 01:15:19.283540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.827 qpair failed and we were unable to recover it. 00:37:46.827 [2024-12-06 01:15:19.283674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.827 [2024-12-06 01:15:19.283707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.827 qpair failed and we were unable to recover it. 00:37:46.827 [2024-12-06 01:15:19.283880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.827 [2024-12-06 01:15:19.283929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.827 qpair failed and we were unable to recover it. 00:37:46.827 [2024-12-06 01:15:19.284082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.827 [2024-12-06 01:15:19.284119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.827 qpair failed and we were unable to recover it. 00:37:46.827 [2024-12-06 01:15:19.284246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.827 [2024-12-06 01:15:19.284282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.827 qpair failed and we were unable to recover it. 
00:37:46.827 [2024-12-06 01:15:19.284414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.827 [2024-12-06 01:15:19.284448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.827 qpair failed and we were unable to recover it. 00:37:46.827 [2024-12-06 01:15:19.284616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.827 [2024-12-06 01:15:19.284650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.827 qpair failed and we were unable to recover it. 00:37:46.827 [2024-12-06 01:15:19.284761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.827 [2024-12-06 01:15:19.284795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.827 qpair failed and we were unable to recover it. 00:37:46.827 [2024-12-06 01:15:19.284928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.827 [2024-12-06 01:15:19.284965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.827 qpair failed and we were unable to recover it. 00:37:46.827 [2024-12-06 01:15:19.285104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.827 [2024-12-06 01:15:19.285145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.827 qpair failed and we were unable to recover it. 00:37:46.827 [2024-12-06 01:15:19.285287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.827 [2024-12-06 01:15:19.285329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.827 qpair failed and we were unable to recover it. 00:37:46.827 [2024-12-06 01:15:19.285446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.827 [2024-12-06 01:15:19.285485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.827 qpair failed and we were unable to recover it. 00:37:46.827 [2024-12-06 01:15:19.285600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.827 [2024-12-06 01:15:19.285633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.827 qpair failed and we were unable to recover it. 00:37:46.827 [2024-12-06 01:15:19.285773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.827 [2024-12-06 01:15:19.285821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.827 qpair failed and we were unable to recover it. 00:37:46.827 [2024-12-06 01:15:19.285960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.827 [2024-12-06 01:15:19.285995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.827 qpair failed and we were unable to recover it. 
00:37:46.827 [2024-12-06 01:15:19.286128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.827 [2024-12-06 01:15:19.286162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.827 qpair failed and we were unable to recover it. 00:37:46.827 [2024-12-06 01:15:19.286310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.827 [2024-12-06 01:15:19.286345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.827 qpair failed and we were unable to recover it. 00:37:46.827 [2024-12-06 01:15:19.286453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.827 [2024-12-06 01:15:19.286488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.827 qpair failed and we were unable to recover it. 00:37:46.827 [2024-12-06 01:15:19.286592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.827 [2024-12-06 01:15:19.286626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.827 qpair failed and we were unable to recover it. 00:37:46.827 [2024-12-06 01:15:19.286766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.827 [2024-12-06 01:15:19.286800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.827 qpair failed and we were unable to recover it. 00:37:46.827 [2024-12-06 01:15:19.286924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.827 [2024-12-06 01:15:19.286960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.827 qpair failed and we were unable to recover it. 00:37:46.827 [2024-12-06 01:15:19.287070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.827 [2024-12-06 01:15:19.287115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.827 qpair failed and we were unable to recover it. 00:37:46.827 [2024-12-06 01:15:19.287256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.827 [2024-12-06 01:15:19.287290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.827 qpair failed and we were unable to recover it. 00:37:46.827 [2024-12-06 01:15:19.287440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.827 [2024-12-06 01:15:19.287474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.827 qpair failed and we were unable to recover it. 00:37:46.827 [2024-12-06 01:15:19.287581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.827 [2024-12-06 01:15:19.287615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.827 qpair failed and we were unable to recover it. 
00:37:46.827 [2024-12-06 01:15:19.287716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.827 [2024-12-06 01:15:19.287755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.827 qpair failed and we were unable to recover it. 00:37:46.827 [2024-12-06 01:15:19.287909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.827 [2024-12-06 01:15:19.287944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.827 qpair failed and we were unable to recover it. 00:37:46.827 [2024-12-06 01:15:19.288048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.827 [2024-12-06 01:15:19.288081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.827 qpair failed and we were unable to recover it. 00:37:46.827 [2024-12-06 01:15:19.288198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.827 [2024-12-06 01:15:19.288237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.827 qpair failed and we were unable to recover it. 00:37:46.827 [2024-12-06 01:15:19.288374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.827 [2024-12-06 01:15:19.288408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.827 qpair failed and we were unable to recover it. 00:37:46.827 [2024-12-06 01:15:19.288533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.828 [2024-12-06 01:15:19.288574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.828 qpair failed and we were unable to recover it. 00:37:46.828 [2024-12-06 01:15:19.288698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.828 [2024-12-06 01:15:19.288742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.828 qpair failed and we were unable to recover it. 00:37:46.828 [2024-12-06 01:15:19.288889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.828 [2024-12-06 01:15:19.288923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.828 qpair failed and we were unable to recover it. 00:37:46.828 [2024-12-06 01:15:19.289061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.828 [2024-12-06 01:15:19.289095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.828 qpair failed and we were unable to recover it. 00:37:46.828 [2024-12-06 01:15:19.289247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.828 [2024-12-06 01:15:19.289280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.828 qpair failed and we were unable to recover it. 
00:37:46.828 [2024-12-06 01:15:19.289381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.828 [2024-12-06 01:15:19.289415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.828 qpair failed and we were unable to recover it. 00:37:46.828 [2024-12-06 01:15:19.289559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.828 [2024-12-06 01:15:19.289592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.828 qpair failed and we were unable to recover it. 00:37:46.828 [2024-12-06 01:15:19.289739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.828 [2024-12-06 01:15:19.289772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.828 qpair failed and we were unable to recover it. 00:37:46.828 [2024-12-06 01:15:19.289895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.828 [2024-12-06 01:15:19.289930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.828 qpair failed and we were unable to recover it. 00:37:46.828 [2024-12-06 01:15:19.290064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.828 [2024-12-06 01:15:19.290097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.828 qpair failed and we were unable to recover it. 00:37:46.828 [2024-12-06 01:15:19.290265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.828 [2024-12-06 01:15:19.290299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.828 qpair failed and we were unable to recover it. 00:37:46.828 [2024-12-06 01:15:19.290426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.828 [2024-12-06 01:15:19.290460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.828 qpair failed and we were unable to recover it. 00:37:46.828 [2024-12-06 01:15:19.290621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.828 [2024-12-06 01:15:19.290657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.828 qpair failed and we were unable to recover it. 00:37:46.828 [2024-12-06 01:15:19.290761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.828 [2024-12-06 01:15:19.290796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.828 qpair failed and we were unable to recover it. 00:37:46.828 [2024-12-06 01:15:19.290952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.828 [2024-12-06 01:15:19.290986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.828 qpair failed and we were unable to recover it. 
00:37:46.828 [2024-12-06 01:15:19.291095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.828 [2024-12-06 01:15:19.291129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.828 qpair failed and we were unable to recover it. 00:37:46.828 [2024-12-06 01:15:19.291299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.828 [2024-12-06 01:15:19.291333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.828 qpair failed and we were unable to recover it. 00:37:46.828 [2024-12-06 01:15:19.291461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.828 [2024-12-06 01:15:19.291494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.828 qpair failed and we were unable to recover it. 00:37:46.828 [2024-12-06 01:15:19.291615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.828 [2024-12-06 01:15:19.291649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.828 qpair failed and we were unable to recover it. 00:37:46.828 [2024-12-06 01:15:19.291826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.828 [2024-12-06 01:15:19.291860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.828 qpair failed and we were unable to recover it. 00:37:46.828 [2024-12-06 01:15:19.292003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.828 [2024-12-06 01:15:19.292036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.828 qpair failed and we were unable to recover it. 00:37:46.828 [2024-12-06 01:15:19.292206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.828 [2024-12-06 01:15:19.292252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.828 qpair failed and we were unable to recover it. 00:37:46.828 [2024-12-06 01:15:19.292386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.828 [2024-12-06 01:15:19.292420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.828 qpair failed and we were unable to recover it. 00:37:46.828 [2024-12-06 01:15:19.292542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.828 [2024-12-06 01:15:19.292575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.828 qpair failed and we were unable to recover it. 00:37:46.828 [2024-12-06 01:15:19.292719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.828 [2024-12-06 01:15:19.292752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.828 qpair failed and we were unable to recover it. 
00:37:46.828 [2024-12-06 01:15:19.292882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.828 [2024-12-06 01:15:19.292916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.828 qpair failed and we were unable to recover it. 00:37:46.828 [2024-12-06 01:15:19.293048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.828 [2024-12-06 01:15:19.293081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.828 qpair failed and we were unable to recover it. 00:37:46.828 [2024-12-06 01:15:19.293249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.828 [2024-12-06 01:15:19.293283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.828 qpair failed and we were unable to recover it. 00:37:46.828 [2024-12-06 01:15:19.293447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.828 [2024-12-06 01:15:19.293480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.828 qpair failed and we were unable to recover it. 00:37:46.828 [2024-12-06 01:15:19.293514] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:46.828 [2024-12-06 01:15:19.293580] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:46.828 [2024-12-06 01:15:19.293608] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:46.828 [2024-12-06 01:15:19.293623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.828 [2024-12-06 01:15:19.293632] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:46.828 [2024-12-06 01:15:19.293655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.828 [2024-12-06 01:15:19.293657] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:37:46.828 qpair failed and we were unable to recover it. 00:37:46.828 [2024-12-06 01:15:19.293830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.828 [2024-12-06 01:15:19.293864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.828 qpair failed and we were unable to recover it. 00:37:46.828 [2024-12-06 01:15:19.293983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.828 [2024-12-06 01:15:19.294016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.828 qpair failed and we were unable to recover it. 00:37:46.828 [2024-12-06 01:15:19.294154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.828 [2024-12-06 01:15:19.294188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.828 qpair failed and we were unable to recover it.
00:37:46.828 [2024-12-06 01:15:19.294345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.828 [2024-12-06 01:15:19.294379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.828 qpair failed and we were unable to recover it. 00:37:46.828 [2024-12-06 01:15:19.294480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.828 [2024-12-06 01:15:19.294513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.828 qpair failed and we were unable to recover it. 00:37:46.828 [2024-12-06 01:15:19.294644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.828 [2024-12-06 01:15:19.294678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.828 qpair failed and we were unable to recover it. 00:37:46.828 [2024-12-06 01:15:19.294845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.829 [2024-12-06 01:15:19.294880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.829 qpair failed and we were unable to recover it. 00:37:46.829 [2024-12-06 01:15:19.294991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.829 [2024-12-06 01:15:19.295024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.829 qpair failed and we were unable to recover it. 00:37:46.829 [2024-12-06 01:15:19.295147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.829 [2024-12-06 01:15:19.295192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.829 qpair failed and we were unable to recover it. 00:37:46.829 [2024-12-06 01:15:19.295328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.829 [2024-12-06 01:15:19.295362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.829 qpair failed and we were unable to recover it. 00:37:46.829 [2024-12-06 01:15:19.295497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.829 [2024-12-06 01:15:19.295531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.829 qpair failed and we were unable to recover it. 00:37:46.829 [2024-12-06 01:15:19.295672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.829 [2024-12-06 01:15:19.295706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.829 qpair failed and we were unable to recover it. 00:37:46.829 [2024-12-06 01:15:19.295809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.829 [2024-12-06 01:15:19.295851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.829 qpair failed and we were unable to recover it. 
00:37:46.829 [2024-12-06 01:15:19.295992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.829 [2024-12-06 01:15:19.296026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.829 qpair failed and we were unable to recover it. 00:37:46.829 [2024-12-06 01:15:19.296161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.829 [2024-12-06 01:15:19.296195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.829 qpair failed and we were unable to recover it. 00:37:46.829 [2024-12-06 01:15:19.296337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.829 [2024-12-06 01:15:19.296381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.829 qpair failed and we were unable to recover it. 00:37:46.829 [2024-12-06 01:15:19.296368] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:37:46.829 [2024-12-06 01:15:19.296399] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:37:46.829 [2024-12-06 01:15:19.296445] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:37:46.829 [2024-12-06 01:15:19.296449] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:37:46.829 [2024-12-06 01:15:19.296493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.829 [2024-12-06 01:15:19.296527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.829 qpair failed and we were unable to recover it. 00:37:46.829 [2024-12-06 01:15:19.296643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.829 [2024-12-06 01:15:19.296677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.829 qpair failed and we were unable to recover it. 00:37:46.829 [2024-12-06 01:15:19.296784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.829 [2024-12-06 01:15:19.296854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.829 qpair failed and we were unable to recover it. 00:37:46.829 [2024-12-06 01:15:19.296961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.829 [2024-12-06 01:15:19.296995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.829 qpair failed and we were unable to recover it. 00:37:46.829 [2024-12-06 01:15:19.297147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.829 [2024-12-06 01:15:19.297207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.829 qpair failed and we were unable to recover it. 00:37:46.829 [2024-12-06 01:15:19.297334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.829 [2024-12-06 01:15:19.297370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.829 qpair failed and we were unable to recover it. 
00:37:46.829 [2024-12-06 01:15:19.297533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.829 [2024-12-06 01:15:19.297567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.829 qpair failed and we were unable to recover it. 00:37:46.829 [2024-12-06 01:15:19.297683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.829 [2024-12-06 01:15:19.297717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.829 qpair failed and we were unable to recover it. 00:37:46.829 [2024-12-06 01:15:19.297862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.829 [2024-12-06 01:15:19.297897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.829 qpair failed and we were unable to recover it. 00:37:46.829 [2024-12-06 01:15:19.298042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.829 [2024-12-06 01:15:19.298077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.829 qpair failed and we were unable to recover it. 00:37:46.829 [2024-12-06 01:15:19.298202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.829 [2024-12-06 01:15:19.298237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.829 qpair failed and we were unable to recover it. 00:37:46.829 [2024-12-06 01:15:19.298352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.829 [2024-12-06 01:15:19.298387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.829 qpair failed and we were unable to recover it. 00:37:46.829 [2024-12-06 01:15:19.298525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.829 [2024-12-06 01:15:19.298575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.829 qpair failed and we were unable to recover it. 00:37:46.829 [2024-12-06 01:15:19.298720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.829 [2024-12-06 01:15:19.298755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.829 qpair failed and we were unable to recover it. 00:37:46.829 [2024-12-06 01:15:19.298892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.829 [2024-12-06 01:15:19.298926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.829 qpair failed and we were unable to recover it. 00:37:46.829 [2024-12-06 01:15:19.299034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.829 [2024-12-06 01:15:19.299067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.829 qpair failed and we were unable to recover it. 
00:37:46.829 [2024-12-06 01:15:19.299191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.829 [2024-12-06 01:15:19.299224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.829 qpair failed and we were unable to recover it. 00:37:46.829 [2024-12-06 01:15:19.299337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.829 [2024-12-06 01:15:19.299371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.829 qpair failed and we were unable to recover it. 00:37:46.829 [2024-12-06 01:15:19.299481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.829 [2024-12-06 01:15:19.299514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.829 qpair failed and we were unable to recover it. 00:37:46.829 [2024-12-06 01:15:19.299626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.829 [2024-12-06 01:15:19.299671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.829 qpair failed and we were unable to recover it. 00:37:46.829 [2024-12-06 01:15:19.299824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.829 [2024-12-06 01:15:19.299860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.829 qpair failed and we were unable to recover it. 00:37:46.829 [2024-12-06 01:15:19.299973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.829 [2024-12-06 01:15:19.300010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.829 qpair failed and we were unable to recover it. 00:37:46.829 [2024-12-06 01:15:19.300124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.829 [2024-12-06 01:15:19.300159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.829 qpair failed and we were unable to recover it. 00:37:46.829 [2024-12-06 01:15:19.300266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.829 [2024-12-06 01:15:19.300301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.829 qpair failed and we were unable to recover it. 00:37:46.829 [2024-12-06 01:15:19.300435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.829 [2024-12-06 01:15:19.300470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.829 qpair failed and we were unable to recover it. 00:37:46.829 [2024-12-06 01:15:19.300622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.829 [2024-12-06 01:15:19.300662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.829 qpair failed and we were unable to recover it. 
00:37:46.829 [2024-12-06 01:15:19.300771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.829 [2024-12-06 01:15:19.300819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.829 qpair failed and we were unable to recover it. 00:37:46.830 [2024-12-06 01:15:19.300932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.830 [2024-12-06 01:15:19.300966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.830 qpair failed and we were unable to recover it. 00:37:46.830 [2024-12-06 01:15:19.301077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.830 [2024-12-06 01:15:19.301126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.830 qpair failed and we were unable to recover it. 00:37:46.830 [2024-12-06 01:15:19.301241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.830 [2024-12-06 01:15:19.301278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.830 qpair failed and we were unable to recover it. 00:37:46.830 [2024-12-06 01:15:19.301389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.830 [2024-12-06 01:15:19.301424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.830 qpair failed and we were unable to recover it. 00:37:46.830 [2024-12-06 01:15:19.301535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.830 [2024-12-06 01:15:19.301569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.830 qpair failed and we were unable to recover it. 00:37:46.830 [2024-12-06 01:15:19.301703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.830 [2024-12-06 01:15:19.301737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.830 qpair failed and we were unable to recover it. 00:37:46.830 [2024-12-06 01:15:19.301879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.830 [2024-12-06 01:15:19.301913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.830 qpair failed and we were unable to recover it. 00:37:46.830 [2024-12-06 01:15:19.302033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.830 [2024-12-06 01:15:19.302067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.830 qpair failed and we were unable to recover it. 00:37:46.830 [2024-12-06 01:15:19.302185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.830 [2024-12-06 01:15:19.302221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.830 qpair failed and we were unable to recover it. 
00:37:46.830 [2024-12-06 01:15:19.302349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.830 [2024-12-06 01:15:19.302387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.830 qpair failed and we were unable to recover it. 00:37:46.830 [2024-12-06 01:15:19.302503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.830 [2024-12-06 01:15:19.302539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.830 qpair failed and we were unable to recover it. 00:37:46.830 [2024-12-06 01:15:19.302694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.830 [2024-12-06 01:15:19.302728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.830 qpair failed and we were unable to recover it. 00:37:46.830 [2024-12-06 01:15:19.302889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.830 [2024-12-06 01:15:19.302924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.830 qpair failed and we were unable to recover it. 00:37:46.830 [2024-12-06 01:15:19.303028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.830 [2024-12-06 01:15:19.303062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.830 qpair failed and we were unable to recover it. 00:37:46.830 [2024-12-06 01:15:19.303200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.830 [2024-12-06 01:15:19.303234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.830 qpair failed and we were unable to recover it. 00:37:46.830 [2024-12-06 01:15:19.303341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.830 [2024-12-06 01:15:19.303376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.830 qpair failed and we were unable to recover it. 00:37:46.830 [2024-12-06 01:15:19.303514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.830 [2024-12-06 01:15:19.303548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.830 qpair failed and we were unable to recover it. 00:37:46.830 [2024-12-06 01:15:19.303675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.830 [2024-12-06 01:15:19.303709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.830 qpair failed and we were unable to recover it. 00:37:46.830 [2024-12-06 01:15:19.303856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.830 [2024-12-06 01:15:19.303891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.830 qpair failed and we were unable to recover it. 
00:37:46.830 [2024-12-06 01:15:19.304010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.830 [2024-12-06 01:15:19.304044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.830 qpair failed and we were unable to recover it. 00:37:46.830 [2024-12-06 01:15:19.304154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.830 [2024-12-06 01:15:19.304189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.830 qpair failed and we were unable to recover it. 00:37:46.830 [2024-12-06 01:15:19.304313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.830 [2024-12-06 01:15:19.304358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.830 qpair failed and we were unable to recover it. 00:37:46.830 [2024-12-06 01:15:19.304501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.830 [2024-12-06 01:15:19.304535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.830 qpair failed and we were unable to recover it. 00:37:46.830 [2024-12-06 01:15:19.304641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.830 [2024-12-06 01:15:19.304675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.830 qpair failed and we were unable to recover it. 00:37:46.830 [2024-12-06 01:15:19.304820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.830 [2024-12-06 01:15:19.304856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.830 qpair failed and we were unable to recover it. 00:37:46.830 [2024-12-06 01:15:19.304977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.830 [2024-12-06 01:15:19.305013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.830 qpair failed and we were unable to recover it. 00:37:46.830 [2024-12-06 01:15:19.305148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.830 [2024-12-06 01:15:19.305188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.830 qpair failed and we were unable to recover it. 00:37:46.830 [2024-12-06 01:15:19.305329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.830 [2024-12-06 01:15:19.305373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.830 qpair failed and we were unable to recover it. 00:37:46.830 [2024-12-06 01:15:19.305517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.830 [2024-12-06 01:15:19.305551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.830 qpair failed and we were unable to recover it. 
00:37:46.830 [2024-12-06 01:15:19.305655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.830 [2024-12-06 01:15:19.305688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.830 qpair failed and we were unable to recover it. 00:37:46.830 [2024-12-06 01:15:19.305801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.830 [2024-12-06 01:15:19.305845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.830 qpair failed and we were unable to recover it. 00:37:46.830 [2024-12-06 01:15:19.305951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.830 [2024-12-06 01:15:19.305987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.830 qpair failed and we were unable to recover it. 00:37:46.830 [2024-12-06 01:15:19.306121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.830 [2024-12-06 01:15:19.306157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.830 qpair failed and we were unable to recover it. 00:37:46.830 [2024-12-06 01:15:19.306301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.830 [2024-12-06 01:15:19.306335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.830 qpair failed and we were unable to recover it. 00:37:46.830 [2024-12-06 01:15:19.306489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.831 [2024-12-06 01:15:19.306523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.831 qpair failed and we were unable to recover it. 00:37:46.831 [2024-12-06 01:15:19.306631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.831 [2024-12-06 01:15:19.306665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.831 qpair failed and we were unable to recover it. 00:37:46.831 [2024-12-06 01:15:19.306847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.831 [2024-12-06 01:15:19.306898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.831 qpair failed and we were unable to recover it. 00:37:46.831 [2024-12-06 01:15:19.307015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.831 [2024-12-06 01:15:19.307050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.831 qpair failed and we were unable to recover it. 00:37:46.831 [2024-12-06 01:15:19.307203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.831 [2024-12-06 01:15:19.307249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.831 qpair failed and we were unable to recover it. 
00:37:46.831 [2024-12-06 01:15:19.307390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.831 [2024-12-06 01:15:19.307424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.831 qpair failed and we were unable to recover it. 00:37:46.831 [2024-12-06 01:15:19.307536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.831 [2024-12-06 01:15:19.307569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.831 qpair failed and we were unable to recover it. 00:37:46.831 [2024-12-06 01:15:19.307677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.831 [2024-12-06 01:15:19.307711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.831 qpair failed and we were unable to recover it. 00:37:46.831 [2024-12-06 01:15:19.307852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.831 [2024-12-06 01:15:19.307887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.831 qpair failed and we were unable to recover it. 00:37:46.831 [2024-12-06 01:15:19.308034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.831 [2024-12-06 01:15:19.308068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.831 qpair failed and we were unable to recover it. 00:37:46.831 [2024-12-06 01:15:19.308234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.831 [2024-12-06 01:15:19.308268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.831 qpair failed and we were unable to recover it. 00:37:46.831 [2024-12-06 01:15:19.308389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.831 [2024-12-06 01:15:19.308423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.831 qpair failed and we were unable to recover it. 00:37:46.831 [2024-12-06 01:15:19.308561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.831 [2024-12-06 01:15:19.308595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.831 qpair failed and we were unable to recover it. 00:37:46.831 [2024-12-06 01:15:19.308738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.831 [2024-12-06 01:15:19.308772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.831 qpair failed and we were unable to recover it. 00:37:46.831 [2024-12-06 01:15:19.308907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.831 [2024-12-06 01:15:19.308942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.831 qpair failed and we were unable to recover it. 
00:37:46.831 [2024-12-06 01:15:19.309040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.831 [2024-12-06 01:15:19.309074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.831 qpair failed and we were unable to recover it. 00:37:46.831 [2024-12-06 01:15:19.309216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.831 [2024-12-06 01:15:19.309250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.831 qpair failed and we were unable to recover it. 00:37:46.831 [2024-12-06 01:15:19.309363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.831 [2024-12-06 01:15:19.309397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.831 qpair failed and we were unable to recover it. 00:37:46.831 [2024-12-06 01:15:19.309546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.831 [2024-12-06 01:15:19.309580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.831 qpair failed and we were unable to recover it. 00:37:46.831 [2024-12-06 01:15:19.309691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.831 [2024-12-06 01:15:19.309725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.831 qpair failed and we were unable to recover it. 00:37:46.831 [2024-12-06 01:15:19.309860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.831 [2024-12-06 01:15:19.309894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.831 qpair failed and we were unable to recover it. 00:37:46.831 [2024-12-06 01:15:19.309996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.831 [2024-12-06 01:15:19.310029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.831 qpair failed and we were unable to recover it. 00:37:46.831 [2024-12-06 01:15:19.310151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.831 [2024-12-06 01:15:19.310194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.831 qpair failed and we were unable to recover it. 00:37:46.831 [2024-12-06 01:15:19.310356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.831 [2024-12-06 01:15:19.310389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.831 qpair failed and we were unable to recover it. 00:37:46.831 [2024-12-06 01:15:19.310545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.831 [2024-12-06 01:15:19.310578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.831 qpair failed and we were unable to recover it. 
00:37:46.831 [2024-12-06 01:15:19.310708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.831 [2024-12-06 01:15:19.310743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.831 qpair failed and we were unable to recover it. 00:37:46.831 [2024-12-06 01:15:19.310896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.831 [2024-12-06 01:15:19.310945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.831 qpair failed and we were unable to recover it. 00:37:46.831 [2024-12-06 01:15:19.311094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.831 [2024-12-06 01:15:19.311142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.831 qpair failed and we were unable to recover it. 00:37:46.831 [2024-12-06 01:15:19.311285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.831 [2024-12-06 01:15:19.311331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.831 qpair failed and we were unable to recover it. 00:37:46.831 [2024-12-06 01:15:19.311451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.831 [2024-12-06 01:15:19.311497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.831 qpair failed and we were unable to recover it. 00:37:46.831 [2024-12-06 01:15:19.311644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.831 [2024-12-06 01:15:19.311679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.831 qpair failed and we were unable to recover it. 00:37:46.831 [2024-12-06 01:15:19.311829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.831 [2024-12-06 01:15:19.311878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.831 qpair failed and we were unable to recover it. 00:37:46.831 [2024-12-06 01:15:19.312011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.831 [2024-12-06 01:15:19.312046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.831 qpair failed and we were unable to recover it. 00:37:46.831 [2024-12-06 01:15:19.312230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.831 [2024-12-06 01:15:19.312263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.831 qpair failed and we were unable to recover it. 00:37:46.831 [2024-12-06 01:15:19.312376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.831 [2024-12-06 01:15:19.312409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.831 qpair failed and we were unable to recover it. 
00:37:46.831 [2024-12-06 01:15:19.312512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.831 [2024-12-06 01:15:19.312546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.831 qpair failed and we were unable to recover it. 00:37:46.831 [2024-12-06 01:15:19.312653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.831 [2024-12-06 01:15:19.312687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.831 qpair failed and we were unable to recover it. 00:37:46.831 [2024-12-06 01:15:19.312855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.831 [2024-12-06 01:15:19.312892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.831 qpair failed and we were unable to recover it. 00:37:46.831 [2024-12-06 01:15:19.313011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.831 [2024-12-06 01:15:19.313046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.832 qpair failed and we were unable to recover it. 00:37:46.832 [2024-12-06 01:15:19.313166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.832 [2024-12-06 01:15:19.313201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.832 qpair failed and we were unable to recover it. 00:37:46.832 [2024-12-06 01:15:19.313333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.832 [2024-12-06 01:15:19.313367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.832 qpair failed and we were unable to recover it. 00:37:46.832 [2024-12-06 01:15:19.313504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.832 [2024-12-06 01:15:19.313539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.832 qpair failed and we were unable to recover it. 00:37:46.832 [2024-12-06 01:15:19.313641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.832 [2024-12-06 01:15:19.313687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.832 qpair failed and we were unable to recover it. 00:37:46.832 [2024-12-06 01:15:19.313828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.832 [2024-12-06 01:15:19.313863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.832 qpair failed and we were unable to recover it. 00:37:46.832 [2024-12-06 01:15:19.313971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.832 [2024-12-06 01:15:19.314010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.832 qpair failed and we were unable to recover it. 
00:37:46.832 [2024-12-06 01:15:19.314164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.832 [2024-12-06 01:15:19.314199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.832 qpair failed and we were unable to recover it. 00:37:46.832 [2024-12-06 01:15:19.314324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.832 [2024-12-06 01:15:19.314360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.832 qpair failed and we were unable to recover it. 00:37:46.832 [2024-12-06 01:15:19.314476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.832 [2024-12-06 01:15:19.314510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.832 qpair failed and we were unable to recover it. 00:37:46.832 [2024-12-06 01:15:19.314656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.832 [2024-12-06 01:15:19.314690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.832 qpair failed and we were unable to recover it. 00:37:46.832 [2024-12-06 01:15:19.314799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.832 [2024-12-06 01:15:19.314843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.832 qpair failed and we were unable to recover it. 00:37:46.832 [2024-12-06 01:15:19.314953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.832 [2024-12-06 01:15:19.314988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.832 qpair failed and we were unable to recover it. 00:37:46.832 [2024-12-06 01:15:19.315094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.832 [2024-12-06 01:15:19.315130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.832 qpair failed and we were unable to recover it. 00:37:46.832 [2024-12-06 01:15:19.315267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.832 [2024-12-06 01:15:19.315303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.832 qpair failed and we were unable to recover it. 00:37:46.832 [2024-12-06 01:15:19.315411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.832 [2024-12-06 01:15:19.315446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.832 qpair failed and we were unable to recover it. 00:37:46.832 [2024-12-06 01:15:19.315581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.832 [2024-12-06 01:15:19.315615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.832 qpair failed and we were unable to recover it. 
00:37:46.832 [2024-12-06 01:15:19.315747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.832 [2024-12-06 01:15:19.315780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.832 qpair failed and we were unable to recover it. 00:37:46.832 [2024-12-06 01:15:19.315941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.832 [2024-12-06 01:15:19.315976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.832 qpair failed and we were unable to recover it. 00:37:46.832 [2024-12-06 01:15:19.316088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.832 [2024-12-06 01:15:19.316123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.832 qpair failed and we were unable to recover it. 00:37:46.832 [2024-12-06 01:15:19.316255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.832 [2024-12-06 01:15:19.316292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.832 qpair failed and we were unable to recover it. 00:37:46.832 [2024-12-06 01:15:19.316398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.832 [2024-12-06 01:15:19.316432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.832 qpair failed and we were unable to recover it. 00:37:46.832 [2024-12-06 01:15:19.316535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.832 [2024-12-06 01:15:19.316570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.832 qpair failed and we were unable to recover it. 00:37:46.832 [2024-12-06 01:15:19.316679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.832 [2024-12-06 01:15:19.316712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.832 qpair failed and we were unable to recover it. 00:37:46.832 [2024-12-06 01:15:19.316851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.832 [2024-12-06 01:15:19.316886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.832 qpair failed and we were unable to recover it. 00:37:46.832 [2024-12-06 01:15:19.317021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.832 [2024-12-06 01:15:19.317055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.832 qpair failed and we were unable to recover it. 00:37:46.832 [2024-12-06 01:15:19.317186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.832 [2024-12-06 01:15:19.317219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.832 qpair failed and we were unable to recover it. 
00:37:46.832 [2024-12-06 01:15:19.317366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.832 [2024-12-06 01:15:19.317400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.832 qpair failed and we were unable to recover it. 00:37:46.832 [2024-12-06 01:15:19.317541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.832 [2024-12-06 01:15:19.317575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.832 qpair failed and we were unable to recover it. 00:37:46.832 [2024-12-06 01:15:19.317692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.832 [2024-12-06 01:15:19.317725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.832 qpair failed and we were unable to recover it. 00:37:46.832 [2024-12-06 01:15:19.317858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.832 [2024-12-06 01:15:19.317892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.832 qpair failed and we were unable to recover it. 00:37:46.832 [2024-12-06 01:15:19.318023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.832 [2024-12-06 01:15:19.318056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.832 qpair failed and we were unable to recover it. 00:37:46.832 [2024-12-06 01:15:19.318163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.832 [2024-12-06 01:15:19.318196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.832 qpair failed and we were unable to recover it. 00:37:46.832 [2024-12-06 01:15:19.318331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.832 [2024-12-06 01:15:19.318376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.832 qpair failed and we were unable to recover it. 00:37:46.832 [2024-12-06 01:15:19.318513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.832 [2024-12-06 01:15:19.318553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.832 qpair failed and we were unable to recover it. 00:37:46.832 [2024-12-06 01:15:19.318695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.832 [2024-12-06 01:15:19.318752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.832 qpair failed and we were unable to recover it. 00:37:46.832 [2024-12-06 01:15:19.318888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.832 [2024-12-06 01:15:19.318923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.832 qpair failed and we were unable to recover it. 
00:37:46.832 [2024-12-06 01:15:19.319059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.832 [2024-12-06 01:15:19.319092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.832 qpair failed and we were unable to recover it. 00:37:46.832 [2024-12-06 01:15:19.319219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.832 [2024-12-06 01:15:19.319253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.832 qpair failed and we were unable to recover it. 00:37:46.833 [2024-12-06 01:15:19.319413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.833 [2024-12-06 01:15:19.319448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.833 qpair failed and we were unable to recover it. 00:37:46.833 [2024-12-06 01:15:19.319557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.833 [2024-12-06 01:15:19.319601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.833 qpair failed and we were unable to recover it. 00:37:46.833 [2024-12-06 01:15:19.319737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.833 [2024-12-06 01:15:19.319771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.833 qpair failed and we were unable to recover it. 00:37:46.833 [2024-12-06 01:15:19.319889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.833 [2024-12-06 01:15:19.319923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.833 qpair failed and we were unable to recover it. 00:37:46.833 [2024-12-06 01:15:19.320058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.833 [2024-12-06 01:15:19.320091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.833 qpair failed and we were unable to recover it. 00:37:46.833 [2024-12-06 01:15:19.320198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.833 [2024-12-06 01:15:19.320232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.833 qpair failed and we were unable to recover it. 00:37:46.833 [2024-12-06 01:15:19.320386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.833 [2024-12-06 01:15:19.320420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.833 qpair failed and we were unable to recover it. 00:37:46.833 [2024-12-06 01:15:19.320554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.833 [2024-12-06 01:15:19.320587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.833 qpair failed and we were unable to recover it. 
00:37:46.833 [2024-12-06 01:15:19.320695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.833 [2024-12-06 01:15:19.320729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.833 qpair failed and we were unable to recover it. 00:37:46.833 [2024-12-06 01:15:19.320872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.833 [2024-12-06 01:15:19.320922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.833 qpair failed and we were unable to recover it. 00:37:46.833 [2024-12-06 01:15:19.321069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.833 [2024-12-06 01:15:19.321128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.833 qpair failed and we were unable to recover it. 00:37:46.833 [2024-12-06 01:15:19.321285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.833 [2024-12-06 01:15:19.321322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.833 qpair failed and we were unable to recover it. 00:37:46.833 [2024-12-06 01:15:19.321430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.833 [2024-12-06 01:15:19.321466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.833 qpair failed and we were unable to recover it. 00:37:46.833 [2024-12-06 01:15:19.321603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.833 [2024-12-06 01:15:19.321639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.833 qpair failed and we were unable to recover it. 00:37:46.833 [2024-12-06 01:15:19.321749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.833 [2024-12-06 01:15:19.321784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.833 qpair failed and we were unable to recover it. 00:37:46.833 [2024-12-06 01:15:19.321921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.833 [2024-12-06 01:15:19.321956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.833 qpair failed and we were unable to recover it. 00:37:46.833 [2024-12-06 01:15:19.322067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.833 [2024-12-06 01:15:19.322104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.833 qpair failed and we were unable to recover it. 00:37:46.833 [2024-12-06 01:15:19.322213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.833 [2024-12-06 01:15:19.322247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.833 qpair failed and we were unable to recover it. 
00:37:46.833 [2024-12-06 01:15:19.322382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.833 [2024-12-06 01:15:19.322416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.833 qpair failed and we were unable to recover it. 00:37:46.833 [2024-12-06 01:15:19.322553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.833 [2024-12-06 01:15:19.322587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.833 qpair failed and we were unable to recover it. 00:37:46.833 [2024-12-06 01:15:19.322721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.833 [2024-12-06 01:15:19.322764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.833 qpair failed and we were unable to recover it. 00:37:46.833 [2024-12-06 01:15:19.322896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.833 [2024-12-06 01:15:19.322933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.833 qpair failed and we were unable to recover it. 00:37:46.833 [2024-12-06 01:15:19.323045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.833 [2024-12-06 01:15:19.323080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.833 qpair failed and we were unable to recover it. 00:37:46.833 [2024-12-06 01:15:19.323223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.833 [2024-12-06 01:15:19.323258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.833 qpair failed and we were unable to recover it. 00:37:46.833 [2024-12-06 01:15:19.323418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.833 [2024-12-06 01:15:19.323453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.833 qpair failed and we were unable to recover it. 00:37:46.833 [2024-12-06 01:15:19.323573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.833 [2024-12-06 01:15:19.323607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.833 qpair failed and we were unable to recover it. 00:37:46.833 [2024-12-06 01:15:19.323751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.833 [2024-12-06 01:15:19.323786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.833 qpair failed and we were unable to recover it. 00:37:46.833 [2024-12-06 01:15:19.323937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.833 [2024-12-06 01:15:19.323972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.833 qpair failed and we were unable to recover it. 
00:37:46.833 [2024-12-06 01:15:19.324082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.833 [2024-12-06 01:15:19.324116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.833 qpair failed and we were unable to recover it. 00:37:46.833 [2024-12-06 01:15:19.324269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.833 [2024-12-06 01:15:19.324303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.833 qpair failed and we were unable to recover it. 00:37:46.833 [2024-12-06 01:15:19.324439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.833 [2024-12-06 01:15:19.324473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.833 qpair failed and we were unable to recover it. 00:37:46.833 [2024-12-06 01:15:19.324606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.833 [2024-12-06 01:15:19.324640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.833 qpair failed and we were unable to recover it. 00:37:46.833 [2024-12-06 01:15:19.324745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.833 [2024-12-06 01:15:19.324779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.833 qpair failed and we were unable to recover it. 00:37:46.833 [2024-12-06 01:15:19.324907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.833 [2024-12-06 01:15:19.324943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.833 qpair failed and we were unable to recover it. 00:37:46.833 [2024-12-06 01:15:19.325099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.833 [2024-12-06 01:15:19.325158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.833 qpair failed and we were unable to recover it. 00:37:46.833 [2024-12-06 01:15:19.325303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.833 [2024-12-06 01:15:19.325340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.833 qpair failed and we were unable to recover it. 00:37:46.833 [2024-12-06 01:15:19.325478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.833 [2024-12-06 01:15:19.325514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.833 qpair failed and we were unable to recover it. 00:37:46.833 [2024-12-06 01:15:19.325619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.833 [2024-12-06 01:15:19.325654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.833 qpair failed and we were unable to recover it. 
00:37:46.834 [2024-12-06 01:15:19.325795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.834 [2024-12-06 01:15:19.325847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.834 qpair failed and we were unable to recover it. 00:37:46.834 [2024-12-06 01:15:19.325963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.834 [2024-12-06 01:15:19.326000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.834 qpair failed and we were unable to recover it. 00:37:46.834 [2024-12-06 01:15:19.326114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.834 [2024-12-06 01:15:19.326149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.834 qpair failed and we were unable to recover it. 00:37:46.834 [2024-12-06 01:15:19.326265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.834 [2024-12-06 01:15:19.326299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.834 qpair failed and we were unable to recover it. 00:37:46.834 [2024-12-06 01:15:19.326435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.834 [2024-12-06 01:15:19.326470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.834 qpair failed and we were unable to recover it. 00:37:46.834 [2024-12-06 01:15:19.326576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.834 [2024-12-06 01:15:19.326610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.834 qpair failed and we were unable to recover it. 00:37:46.834 [2024-12-06 01:15:19.326731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.834 [2024-12-06 01:15:19.326765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.834 qpair failed and we were unable to recover it. 00:37:46.834 [2024-12-06 01:15:19.326945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.834 [2024-12-06 01:15:19.326980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.834 qpair failed and we were unable to recover it. 00:37:46.834 [2024-12-06 01:15:19.327098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.834 [2024-12-06 01:15:19.327144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.834 qpair failed and we were unable to recover it. 00:37:46.834 [2024-12-06 01:15:19.327264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.834 [2024-12-06 01:15:19.327306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.834 qpair failed and we were unable to recover it. 
00:37:46.834 [2024-12-06 01:15:19.327423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.834 [2024-12-06 01:15:19.327457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.834 qpair failed and we were unable to recover it. 00:37:46.834 [2024-12-06 01:15:19.327609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.834 [2024-12-06 01:15:19.327645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.834 qpair failed and we were unable to recover it. 00:37:46.834 [2024-12-06 01:15:19.327750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.834 [2024-12-06 01:15:19.327784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.834 qpair failed and we were unable to recover it. 00:37:46.834 [2024-12-06 01:15:19.327923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.834 [2024-12-06 01:15:19.327958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.834 qpair failed and we were unable to recover it. 00:37:46.834 [2024-12-06 01:15:19.328078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.834 [2024-12-06 01:15:19.328134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.834 qpair failed and we were unable to recover it. 00:37:46.834 [2024-12-06 01:15:19.328251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.834 [2024-12-06 01:15:19.328287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.834 qpair failed and we were unable to recover it. 00:37:46.834 [2024-12-06 01:15:19.328398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.834 [2024-12-06 01:15:19.328434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.834 qpair failed and we were unable to recover it. 00:37:46.834 [2024-12-06 01:15:19.328547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.834 [2024-12-06 01:15:19.328588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.834 qpair failed and we were unable to recover it. 00:37:46.834 [2024-12-06 01:15:19.328723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.834 [2024-12-06 01:15:19.328757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.834 qpair failed and we were unable to recover it. 00:37:46.834 [2024-12-06 01:15:19.328915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.834 [2024-12-06 01:15:19.328964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.834 qpair failed and we were unable to recover it. 
00:37:46.834 [2024-12-06 01:15:19.329143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.834 [2024-12-06 01:15:19.329179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.834 qpair failed and we were unable to recover it. 00:37:46.834 [2024-12-06 01:15:19.329300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.834 [2024-12-06 01:15:19.329346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.834 qpair failed and we were unable to recover it. 00:37:46.834 [2024-12-06 01:15:19.329454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.834 [2024-12-06 01:15:19.329489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.834 qpair failed and we were unable to recover it. 00:37:46.834 [2024-12-06 01:15:19.329600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.834 [2024-12-06 01:15:19.329636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.834 qpair failed and we were unable to recover it. 00:37:46.834 [2024-12-06 01:15:19.329738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.834 [2024-12-06 01:15:19.329773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.834 qpair failed and we were unable to recover it. 00:37:46.834 [2024-12-06 01:15:19.329895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.834 [2024-12-06 01:15:19.329930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.834 qpair failed and we were unable to recover it. 00:37:46.834 [2024-12-06 01:15:19.330053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.834 [2024-12-06 01:15:19.330088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.834 qpair failed and we were unable to recover it. 00:37:46.834 [2024-12-06 01:15:19.330213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.834 [2024-12-06 01:15:19.330247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.834 qpair failed and we were unable to recover it. 00:37:46.834 [2024-12-06 01:15:19.330387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.834 [2024-12-06 01:15:19.330424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.834 qpair failed and we were unable to recover it. 00:37:46.834 [2024-12-06 01:15:19.330567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.834 [2024-12-06 01:15:19.330601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.834 qpair failed and we were unable to recover it. 
00:37:46.834 [2024-12-06 01:15:19.330707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.834 [2024-12-06 01:15:19.330742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.834 qpair failed and we were unable to recover it. 00:37:46.834 [2024-12-06 01:15:19.330861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.834 [2024-12-06 01:15:19.330895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.834 qpair failed and we were unable to recover it. 00:37:46.834 [2024-12-06 01:15:19.331002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.834 [2024-12-06 01:15:19.331036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.834 qpair failed and we were unable to recover it. 00:37:46.834 [2024-12-06 01:15:19.331204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.834 [2024-12-06 01:15:19.331238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.834 qpair failed and we were unable to recover it. 00:37:46.834 [2024-12-06 01:15:19.331349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.834 [2024-12-06 01:15:19.331384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.835 qpair failed and we were unable to recover it. 00:37:46.835 [2024-12-06 01:15:19.331517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.835 [2024-12-06 01:15:19.331552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.835 qpair failed and we were unable to recover it. 00:37:46.835 [2024-12-06 01:15:19.331663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.835 [2024-12-06 01:15:19.331712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.835 qpair failed and we were unable to recover it. 00:37:46.835 [2024-12-06 01:15:19.331828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.835 [2024-12-06 01:15:19.331864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.835 qpair failed and we were unable to recover it. 00:37:46.835 [2024-12-06 01:15:19.331993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.835 [2024-12-06 01:15:19.332027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.835 qpair failed and we were unable to recover it. 00:37:46.835 [2024-12-06 01:15:19.332145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.835 [2024-12-06 01:15:19.332180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.835 qpair failed and we were unable to recover it. 
00:37:46.835 [2024-12-06 01:15:19.332290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.835 [2024-12-06 01:15:19.332324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.835 qpair failed and we were unable to recover it. 00:37:46.835 [2024-12-06 01:15:19.332464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.835 [2024-12-06 01:15:19.332498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.835 qpair failed and we were unable to recover it. 00:37:46.835 [2024-12-06 01:15:19.332627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.835 [2024-12-06 01:15:19.332660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.835 qpair failed and we were unable to recover it. 00:37:46.835 [2024-12-06 01:15:19.332767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.835 [2024-12-06 01:15:19.332800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.835 qpair failed and we were unable to recover it. 00:37:46.835 [2024-12-06 01:15:19.332970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.835 [2024-12-06 01:15:19.333019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.835 qpair failed and we were unable to recover it. 00:37:46.835 [2024-12-06 01:15:19.333161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.835 [2024-12-06 01:15:19.333208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.835 qpair failed and we were unable to recover it. 00:37:46.835 [2024-12-06 01:15:19.333332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.835 [2024-12-06 01:15:19.333371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.835 qpair failed and we were unable to recover it. 00:37:46.835 [2024-12-06 01:15:19.333477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.835 [2024-12-06 01:15:19.333512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.835 qpair failed and we were unable to recover it. 00:37:46.835 [2024-12-06 01:15:19.333620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.835 [2024-12-06 01:15:19.333654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.835 qpair failed and we were unable to recover it. 00:37:46.835 [2024-12-06 01:15:19.333764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.835 [2024-12-06 01:15:19.333799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.835 qpair failed and we were unable to recover it. 
00:37:46.835 [2024-12-06 01:15:19.333933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.835 [2024-12-06 01:15:19.333968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.835 qpair failed and we were unable to recover it. 00:37:46.835 [2024-12-06 01:15:19.334081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.835 [2024-12-06 01:15:19.334115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.835 qpair failed and we were unable to recover it. 00:37:46.835 [2024-12-06 01:15:19.334295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.835 [2024-12-06 01:15:19.334331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.835 qpair failed and we were unable to recover it. 00:37:46.835 [2024-12-06 01:15:19.334440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.835 [2024-12-06 01:15:19.334474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.835 qpair failed and we were unable to recover it. 00:37:46.835 [2024-12-06 01:15:19.334580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.835 [2024-12-06 01:15:19.334614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.835 qpair failed and we were unable to recover it. 00:37:46.835 [2024-12-06 01:15:19.334757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.835 [2024-12-06 01:15:19.334791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.835 qpair failed and we were unable to recover it. 00:37:46.835 [2024-12-06 01:15:19.334911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.835 [2024-12-06 01:15:19.334945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.835 qpair failed and we were unable to recover it. 00:37:46.835 [2024-12-06 01:15:19.335108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.835 [2024-12-06 01:15:19.335151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.835 qpair failed and we were unable to recover it. 00:37:46.835 [2024-12-06 01:15:19.335286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.835 [2024-12-06 01:15:19.335320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.835 qpair failed and we were unable to recover it. 00:37:46.835 [2024-12-06 01:15:19.335426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.835 [2024-12-06 01:15:19.335460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.835 qpair failed and we were unable to recover it. 
00:37:46.835 [2024-12-06 01:15:19.335615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.835 [2024-12-06 01:15:19.335649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.835 qpair failed and we were unable to recover it. 00:37:46.835 [2024-12-06 01:15:19.335762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.835 [2024-12-06 01:15:19.335797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.835 qpair failed and we were unable to recover it. 00:37:46.835 [2024-12-06 01:15:19.335942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.835 [2024-12-06 01:15:19.335976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.835 qpair failed and we were unable to recover it. 00:37:46.835 [2024-12-06 01:15:19.336086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.835 [2024-12-06 01:15:19.336125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.835 qpair failed and we were unable to recover it. 00:37:46.835 [2024-12-06 01:15:19.336263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.835 [2024-12-06 01:15:19.336297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.835 qpair failed and we were unable to recover it. 00:37:46.835 [2024-12-06 01:15:19.336440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.835 [2024-12-06 01:15:19.336475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.835 qpair failed and we were unable to recover it. 00:37:46.835 [2024-12-06 01:15:19.336611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.835 [2024-12-06 01:15:19.336646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.835 qpair failed and we were unable to recover it. 00:37:46.835 [2024-12-06 01:15:19.336783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.835 [2024-12-06 01:15:19.336834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.835 qpair failed and we were unable to recover it. 00:37:46.835 [2024-12-06 01:15:19.336939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.835 [2024-12-06 01:15:19.336974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.835 qpair failed and we were unable to recover it. 00:37:46.835 [2024-12-06 01:15:19.337078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.835 [2024-12-06 01:15:19.337119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.835 qpair failed and we were unable to recover it. 
00:37:46.835 [2024-12-06 01:15:19.337234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.835 [2024-12-06 01:15:19.337269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.835 qpair failed and we were unable to recover it. 00:37:46.835 [2024-12-06 01:15:19.337398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.835 [2024-12-06 01:15:19.337432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.835 qpair failed and we were unable to recover it. 00:37:46.835 [2024-12-06 01:15:19.337538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.836 [2024-12-06 01:15:19.337571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.836 qpair failed and we were unable to recover it. 00:37:46.836 [2024-12-06 01:15:19.337723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.836 [2024-12-06 01:15:19.337772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.836 qpair failed and we were unable to recover it. 00:37:46.836 [2024-12-06 01:15:19.337905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.836 [2024-12-06 01:15:19.337942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.836 qpair failed and we were unable to recover it. 00:37:46.836 [2024-12-06 01:15:19.338051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.836 [2024-12-06 01:15:19.338085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.836 qpair failed and we were unable to recover it. 00:37:46.836 [2024-12-06 01:15:19.338219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.836 [2024-12-06 01:15:19.338259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.836 qpair failed and we were unable to recover it. 00:37:46.836 [2024-12-06 01:15:19.338365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.836 [2024-12-06 01:15:19.338411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.836 qpair failed and we were unable to recover it. 00:37:46.836 [2024-12-06 01:15:19.338519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.836 [2024-12-06 01:15:19.338554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.836 qpair failed and we were unable to recover it. 00:37:46.836 [2024-12-06 01:15:19.338715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.836 [2024-12-06 01:15:19.338750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.836 qpair failed and we were unable to recover it. 
00:37:46.836 [2024-12-06 01:15:19.338926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.836 [2024-12-06 01:15:19.338975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.836 qpair failed and we were unable to recover it. 00:37:46.836 [2024-12-06 01:15:19.339089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.836 [2024-12-06 01:15:19.339133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.836 qpair failed and we were unable to recover it. 00:37:46.836 [2024-12-06 01:15:19.339247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.836 [2024-12-06 01:15:19.339282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.836 qpair failed and we were unable to recover it. 00:37:46.836 [2024-12-06 01:15:19.339413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.836 [2024-12-06 01:15:19.339447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.836 qpair failed and we were unable to recover it. 00:37:46.836 [2024-12-06 01:15:19.339578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.836 [2024-12-06 01:15:19.339611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.836 qpair failed and we were unable to recover it. 00:37:46.836 [2024-12-06 01:15:19.339715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.836 [2024-12-06 01:15:19.339749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.836 qpair failed and we were unable to recover it. 00:37:46.836 [2024-12-06 01:15:19.339872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.836 [2024-12-06 01:15:19.339909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.836 qpair failed and we were unable to recover it. 00:37:46.836 [2024-12-06 01:15:19.340021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.836 [2024-12-06 01:15:19.340055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.836 qpair failed and we were unable to recover it. 00:37:46.836 [2024-12-06 01:15:19.340173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.836 [2024-12-06 01:15:19.340210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.836 qpair failed and we were unable to recover it. 00:37:46.836 [2024-12-06 01:15:19.340323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.836 [2024-12-06 01:15:19.340358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.836 qpair failed and we were unable to recover it. 
00:37:46.836 [2024-12-06 01:15:19.340548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.836 [2024-12-06 01:15:19.340582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.836 qpair failed and we were unable to recover it. 00:37:46.836 [2024-12-06 01:15:19.340688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.836 [2024-12-06 01:15:19.340722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.836 qpair failed and we were unable to recover it. 00:37:46.836 [2024-12-06 01:15:19.340845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.836 [2024-12-06 01:15:19.340879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.836 qpair failed and we were unable to recover it. 00:37:46.836 [2024-12-06 01:15:19.340987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.836 [2024-12-06 01:15:19.341020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.836 qpair failed and we were unable to recover it. 00:37:46.836 [2024-12-06 01:15:19.341183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.836 [2024-12-06 01:15:19.341216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.836 qpair failed and we were unable to recover it. 00:37:46.836 [2024-12-06 01:15:19.341328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.836 [2024-12-06 01:15:19.341363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.836 qpair failed and we were unable to recover it. 00:37:46.836 [2024-12-06 01:15:19.341468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.836 [2024-12-06 01:15:19.341502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.836 qpair failed and we were unable to recover it. 00:37:46.836 [2024-12-06 01:15:19.341617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.836 [2024-12-06 01:15:19.341654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.836 qpair failed and we were unable to recover it. 00:37:46.836 [2024-12-06 01:15:19.341778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.836 [2024-12-06 01:15:19.341835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.836 qpair failed and we were unable to recover it. 00:37:46.836 [2024-12-06 01:15:19.341955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.836 [2024-12-06 01:15:19.341992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.836 qpair failed and we were unable to recover it. 
00:37:46.836 [2024-12-06 01:15:19.342104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.836 [2024-12-06 01:15:19.342147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.836 qpair failed and we were unable to recover it. 00:37:46.836 [2024-12-06 01:15:19.342253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.836 [2024-12-06 01:15:19.342288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.836 qpair failed and we were unable to recover it. 00:37:46.836 [2024-12-06 01:15:19.342397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.836 [2024-12-06 01:15:19.342431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.836 qpair failed and we were unable to recover it. 00:37:46.836 [2024-12-06 01:15:19.342550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.836 [2024-12-06 01:15:19.342584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.836 qpair failed and we were unable to recover it. 00:37:46.836 [2024-12-06 01:15:19.342695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.836 [2024-12-06 01:15:19.342729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.836 qpair failed and we were unable to recover it. 00:37:46.836 [2024-12-06 01:15:19.342846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.836 [2024-12-06 01:15:19.342880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.836 qpair failed and we were unable to recover it. 00:37:46.836 [2024-12-06 01:15:19.342988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.836 [2024-12-06 01:15:19.343021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.836 qpair failed and we were unable to recover it. 00:37:46.836 [2024-12-06 01:15:19.343164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.836 [2024-12-06 01:15:19.343198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.836 qpair failed and we were unable to recover it. 00:37:46.836 [2024-12-06 01:15:19.343300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.836 [2024-12-06 01:15:19.343333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.836 qpair failed and we were unable to recover it. 00:37:46.836 [2024-12-06 01:15:19.343466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.836 [2024-12-06 01:15:19.343501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.836 qpair failed and we were unable to recover it. 
00:37:46.836 [2024-12-06 01:15:19.343624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.837 [2024-12-06 01:15:19.343674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.837 qpair failed and we were unable to recover it. 00:37:46.837 [2024-12-06 01:15:19.343827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.837 [2024-12-06 01:15:19.343865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.837 qpair failed and we were unable to recover it. 00:37:46.837 [2024-12-06 01:15:19.343975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.837 [2024-12-06 01:15:19.344009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.837 qpair failed and we were unable to recover it. 00:37:46.837 [2024-12-06 01:15:19.344125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.837 [2024-12-06 01:15:19.344159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.837 qpair failed and we were unable to recover it. 00:37:46.837 [2024-12-06 01:15:19.344302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.837 [2024-12-06 01:15:19.344335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.837 qpair failed and we were unable to recover it. 00:37:46.837 [2024-12-06 01:15:19.344439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.837 [2024-12-06 01:15:19.344472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.837 qpair failed and we were unable to recover it. 00:37:46.837 [2024-12-06 01:15:19.344574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.837 [2024-12-06 01:15:19.344612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.837 qpair failed and we were unable to recover it. 00:37:46.837 [2024-12-06 01:15:19.344708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.837 [2024-12-06 01:15:19.344742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.837 qpair failed and we were unable to recover it. 00:37:46.837 [2024-12-06 01:15:19.344874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.837 [2024-12-06 01:15:19.344912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.837 qpair failed and we were unable to recover it. 00:37:46.837 [2024-12-06 01:15:19.345056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.837 [2024-12-06 01:15:19.345091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.837 qpair failed and we were unable to recover it. 
00:37:46.837 [2024-12-06 01:15:19.345241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.837 [2024-12-06 01:15:19.345276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.837 qpair failed and we were unable to recover it. 00:37:46.837 [2024-12-06 01:15:19.345381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.837 [2024-12-06 01:15:19.345416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.837 qpair failed and we were unable to recover it. 00:37:46.837 [2024-12-06 01:15:19.345546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.837 [2024-12-06 01:15:19.345581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.837 qpair failed and we were unable to recover it. 00:37:46.837 [2024-12-06 01:15:19.345711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.837 [2024-12-06 01:15:19.345745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.837 qpair failed and we were unable to recover it. 00:37:46.837 [2024-12-06 01:15:19.345861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.837 [2024-12-06 01:15:19.345895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.837 qpair failed and we were unable to recover it. 00:37:46.837 [2024-12-06 01:15:19.346032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.837 [2024-12-06 01:15:19.346066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.837 qpair failed and we were unable to recover it. 00:37:46.837 [2024-12-06 01:15:19.346185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.837 [2024-12-06 01:15:19.346219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.837 qpair failed and we were unable to recover it. 00:37:46.837 [2024-12-06 01:15:19.346318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.837 [2024-12-06 01:15:19.346351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.837 qpair failed and we were unable to recover it. 00:37:46.837 [2024-12-06 01:15:19.346452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.837 [2024-12-06 01:15:19.346485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.837 qpair failed and we were unable to recover it. 00:37:46.837 [2024-12-06 01:15:19.346594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.837 [2024-12-06 01:15:19.346629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.837 qpair failed and we were unable to recover it. 
00:37:46.837 [2024-12-06 01:15:19.346778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.837 [2024-12-06 01:15:19.346823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.837 qpair failed and we were unable to recover it. 00:37:46.837 [2024-12-06 01:15:19.346960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.837 [2024-12-06 01:15:19.346994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.837 qpair failed and we were unable to recover it. 00:37:46.837 [2024-12-06 01:15:19.347096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.837 [2024-12-06 01:15:19.347134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.837 qpair failed and we were unable to recover it. 00:37:46.837 [2024-12-06 01:15:19.347255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.837 [2024-12-06 01:15:19.347289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.837 qpair failed and we were unable to recover it. 00:37:46.837 [2024-12-06 01:15:19.347424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.837 [2024-12-06 01:15:19.347459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.837 qpair failed and we were unable to recover it. 00:37:46.837 [2024-12-06 01:15:19.347567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.837 [2024-12-06 01:15:19.347601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.837 qpair failed and we were unable to recover it. 00:37:46.837 [2024-12-06 01:15:19.347718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.837 [2024-12-06 01:15:19.347754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.837 qpair failed and we were unable to recover it. 00:37:46.837 [2024-12-06 01:15:19.347892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.837 [2024-12-06 01:15:19.347941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.837 qpair failed and we were unable to recover it. 00:37:46.837 [2024-12-06 01:15:19.348056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.837 [2024-12-06 01:15:19.348093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.837 qpair failed and we were unable to recover it. 00:37:46.837 [2024-12-06 01:15:19.348209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.837 [2024-12-06 01:15:19.348244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.837 qpair failed and we were unable to recover it. 
00:37:46.837 [2024-12-06 01:15:19.348357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.837 [2024-12-06 01:15:19.348392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.837 qpair failed and we were unable to recover it. 00:37:46.837 [2024-12-06 01:15:19.348496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.837 [2024-12-06 01:15:19.348530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.837 qpair failed and we were unable to recover it. 00:37:46.837 [2024-12-06 01:15:19.348645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.837 [2024-12-06 01:15:19.348680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.837 qpair failed and we were unable to recover it. 00:37:46.837 [2024-12-06 01:15:19.348832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.837 [2024-12-06 01:15:19.348869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.837 qpair failed and we were unable to recover it. 00:37:46.837 [2024-12-06 01:15:19.348982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.837 [2024-12-06 01:15:19.349017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.837 qpair failed and we were unable to recover it. 00:37:46.837 [2024-12-06 01:15:19.349152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.837 [2024-12-06 01:15:19.349186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.837 qpair failed and we were unable to recover it. 00:37:46.837 [2024-12-06 01:15:19.349317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.837 [2024-12-06 01:15:19.349351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.837 qpair failed and we were unable to recover it. 00:37:46.837 [2024-12-06 01:15:19.349459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.837 [2024-12-06 01:15:19.349493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.837 qpair failed and we were unable to recover it. 00:37:46.837 [2024-12-06 01:15:19.349603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.837 [2024-12-06 01:15:19.349640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.838 qpair failed and we were unable to recover it. 00:37:46.838 [2024-12-06 01:15:19.349812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.838 [2024-12-06 01:15:19.349869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.838 qpair failed and we were unable to recover it. 
00:37:46.838 [2024-12-06 01:15:19.350004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.838 [2024-12-06 01:15:19.350040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.838 qpair failed and we were unable to recover it. 00:37:46.838 [2024-12-06 01:15:19.350189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.838 [2024-12-06 01:15:19.350224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.838 qpair failed and we were unable to recover it. 00:37:46.838 [2024-12-06 01:15:19.350332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.838 [2024-12-06 01:15:19.350366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.838 qpair failed and we were unable to recover it. 00:37:46.838 [2024-12-06 01:15:19.350472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.838 [2024-12-06 01:15:19.350505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.838 qpair failed and we were unable to recover it. 00:37:46.838 [2024-12-06 01:15:19.350633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.838 [2024-12-06 01:15:19.350667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.838 qpair failed and we were unable to recover it. 00:37:46.838 [2024-12-06 01:15:19.350820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.838 [2024-12-06 01:15:19.350870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.838 qpair failed and we were unable to recover it. 00:37:46.838 [2024-12-06 01:15:19.351012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.838 [2024-12-06 01:15:19.351065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.838 qpair failed and we were unable to recover it. 00:37:46.838 [2024-12-06 01:15:19.351267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.838 [2024-12-06 01:15:19.351304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.838 qpair failed and we were unable to recover it. 00:37:46.838 [2024-12-06 01:15:19.351419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.838 [2024-12-06 01:15:19.351487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.838 qpair failed and we were unable to recover it. 00:37:46.838 [2024-12-06 01:15:19.351593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.838 [2024-12-06 01:15:19.351627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.838 qpair failed and we were unable to recover it. 
00:37:46.838 [2024-12-06 01:15:19.351760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.838 [2024-12-06 01:15:19.351794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.838 qpair failed and we were unable to recover it. 00:37:46.838 [2024-12-06 01:15:19.351922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.838 [2024-12-06 01:15:19.351957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.838 qpair failed and we were unable to recover it. 00:37:46.838 [2024-12-06 01:15:19.352067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.838 [2024-12-06 01:15:19.352101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.838 qpair failed and we were unable to recover it. 00:37:46.838 [2024-12-06 01:15:19.352243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.838 [2024-12-06 01:15:19.352277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.838 qpair failed and we were unable to recover it. 00:37:46.838 [2024-12-06 01:15:19.352384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.838 [2024-12-06 01:15:19.352418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.838 qpair failed and we were unable to recover it. 00:37:46.838 [2024-12-06 01:15:19.352547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.838 [2024-12-06 01:15:19.352580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.838 qpair failed and we were unable to recover it. 00:37:46.838 [2024-12-06 01:15:19.352706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.838 [2024-12-06 01:15:19.352755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.838 qpair failed and we were unable to recover it. 00:37:46.838 [2024-12-06 01:15:19.352947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.838 [2024-12-06 01:15:19.352985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.838 qpair failed and we were unable to recover it. 00:37:46.838 [2024-12-06 01:15:19.353105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.838 [2024-12-06 01:15:19.353142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.838 qpair failed and we were unable to recover it. 00:37:46.838 [2024-12-06 01:15:19.353293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.838 [2024-12-06 01:15:19.353328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.838 qpair failed and we were unable to recover it. 
00:37:46.838 [2024-12-06 01:15:19.353469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.838 [2024-12-06 01:15:19.353504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.838 qpair failed and we were unable to recover it. 00:37:46.838 [2024-12-06 01:15:19.353638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.838 [2024-12-06 01:15:19.353672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.838 qpair failed and we were unable to recover it. 00:37:46.838 [2024-12-06 01:15:19.353827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.838 [2024-12-06 01:15:19.353863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.838 qpair failed and we were unable to recover it. 00:37:46.838 [2024-12-06 01:15:19.353970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.838 [2024-12-06 01:15:19.354004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.838 qpair failed and we were unable to recover it. 00:37:46.838 [2024-12-06 01:15:19.354138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.838 [2024-12-06 01:15:19.354171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.838 qpair failed and we were unable to recover it. 00:37:46.838 [2024-12-06 01:15:19.354279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.838 [2024-12-06 01:15:19.354313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.838 qpair failed and we were unable to recover it. 00:37:46.838 [2024-12-06 01:15:19.354476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.838 [2024-12-06 01:15:19.354510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.838 qpair failed and we were unable to recover it. 00:37:46.838 [2024-12-06 01:15:19.354626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.838 [2024-12-06 01:15:19.354661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.838 qpair failed and we were unable to recover it. 00:37:46.838 [2024-12-06 01:15:19.354789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.838 [2024-12-06 01:15:19.354849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.838 qpair failed and we were unable to recover it. 00:37:46.838 [2024-12-06 01:15:19.354970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.838 [2024-12-06 01:15:19.355008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.838 qpair failed and we were unable to recover it. 
00:37:46.838 [2024-12-06 01:15:19.355144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.838 [2024-12-06 01:15:19.355179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.838 qpair failed and we were unable to recover it. 00:37:46.838 [2024-12-06 01:15:19.355312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.838 [2024-12-06 01:15:19.355347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.838 qpair failed and we were unable to recover it. 00:37:46.838 [2024-12-06 01:15:19.355496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.838 [2024-12-06 01:15:19.355545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.838 qpair failed and we were unable to recover it. 00:37:46.838 [2024-12-06 01:15:19.355692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.838 [2024-12-06 01:15:19.355729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.838 qpair failed and we were unable to recover it. 00:37:46.838 [2024-12-06 01:15:19.355870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.838 [2024-12-06 01:15:19.355918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.838 qpair failed and we were unable to recover it. 00:37:46.838 [2024-12-06 01:15:19.356059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.838 [2024-12-06 01:15:19.356096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.838 qpair failed and we were unable to recover it. 00:37:46.838 [2024-12-06 01:15:19.356212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.838 [2024-12-06 01:15:19.356246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.838 qpair failed and we were unable to recover it. 00:37:46.839 [2024-12-06 01:15:19.356408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.839 [2024-12-06 01:15:19.356442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.839 qpair failed and we were unable to recover it. 00:37:46.839 [2024-12-06 01:15:19.356543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.839 [2024-12-06 01:15:19.356579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.839 qpair failed and we were unable to recover it. 00:37:46.839 [2024-12-06 01:15:19.356686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.839 [2024-12-06 01:15:19.356721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.839 qpair failed and we were unable to recover it. 
00:37:46.839 [2024-12-06 01:15:19.356859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.839 [2024-12-06 01:15:19.356894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.839 qpair failed and we were unable to recover it. 00:37:46.839 [2024-12-06 01:15:19.356996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.839 [2024-12-06 01:15:19.357030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.839 qpair failed and we were unable to recover it. 00:37:46.839 [2024-12-06 01:15:19.357138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.839 [2024-12-06 01:15:19.357172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.839 qpair failed and we were unable to recover it. 00:37:46.839 [2024-12-06 01:15:19.357322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.839 [2024-12-06 01:15:19.357371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.839 qpair failed and we were unable to recover it. 00:37:46.839 [2024-12-06 01:15:19.357499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.839 [2024-12-06 01:15:19.357534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.839 qpair failed and we were unable to recover it. 00:37:46.839 [2024-12-06 01:15:19.357640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.839 [2024-12-06 01:15:19.357673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.839 qpair failed and we were unable to recover it. 00:37:46.839 [2024-12-06 01:15:19.357783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.839 [2024-12-06 01:15:19.357839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.839 qpair failed and we were unable to recover it. 00:37:46.839 [2024-12-06 01:15:19.357952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.839 [2024-12-06 01:15:19.357988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.839 qpair failed and we were unable to recover it. 00:37:46.839 [2024-12-06 01:15:19.358096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.839 [2024-12-06 01:15:19.358141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.839 qpair failed and we were unable to recover it. 00:37:46.839 [2024-12-06 01:15:19.358274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.839 [2024-12-06 01:15:19.358309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.839 qpair failed and we were unable to recover it. 
00:37:46.839 [2024-12-06 01:15:19.358427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.839 [2024-12-06 01:15:19.358466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.839 qpair failed and we were unable to recover it. 00:37:46.839 [2024-12-06 01:15:19.358605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.839 [2024-12-06 01:15:19.358641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.839 qpair failed and we were unable to recover it. 00:37:46.839 [2024-12-06 01:15:19.358744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.839 [2024-12-06 01:15:19.358778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.839 qpair failed and we were unable to recover it. 00:37:46.839 [2024-12-06 01:15:19.358910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.839 [2024-12-06 01:15:19.358946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.839 qpair failed and we were unable to recover it. 00:37:46.839 [2024-12-06 01:15:19.359056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.839 [2024-12-06 01:15:19.359091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.839 qpair failed and we were unable to recover it. 00:37:46.839 [2024-12-06 01:15:19.359201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.839 [2024-12-06 01:15:19.359235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.839 qpair failed and we were unable to recover it. 00:37:46.839 [2024-12-06 01:15:19.359338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.839 [2024-12-06 01:15:19.359372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.839 qpair failed and we were unable to recover it. 00:37:46.839 [2024-12-06 01:15:19.359509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.839 [2024-12-06 01:15:19.359543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.839 qpair failed and we were unable to recover it. 00:37:46.839 [2024-12-06 01:15:19.359670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.839 [2024-12-06 01:15:19.359718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.839 qpair failed and we were unable to recover it. 00:37:46.839 [2024-12-06 01:15:19.359861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.839 [2024-12-06 01:15:19.359897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.839 qpair failed and we were unable to recover it. 
00:37:46.839 [2024-12-06 01:15:19.360038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.839 [2024-12-06 01:15:19.360073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.839 qpair failed and we were unable to recover it. 00:37:46.839 [2024-12-06 01:15:19.360192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.839 [2024-12-06 01:15:19.360226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.839 qpair failed and we were unable to recover it. 00:37:46.839 [2024-12-06 01:15:19.360388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.839 [2024-12-06 01:15:19.360422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.839 qpair failed and we were unable to recover it. 00:37:46.839 [2024-12-06 01:15:19.360528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.839 [2024-12-06 01:15:19.360563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.839 qpair failed and we were unable to recover it. 00:37:46.839 [2024-12-06 01:15:19.360670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.839 [2024-12-06 01:15:19.360706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.839 qpair failed and we were unable to recover it. 00:37:46.839 [2024-12-06 01:15:19.360840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.839 [2024-12-06 01:15:19.360888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.839 qpair failed and we were unable to recover it. 00:37:46.839 [2024-12-06 01:15:19.361010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.839 [2024-12-06 01:15:19.361046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.839 qpair failed and we were unable to recover it. 00:37:46.839 [2024-12-06 01:15:19.361167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.839 [2024-12-06 01:15:19.361201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.839 qpair failed and we were unable to recover it. 00:37:46.839 [2024-12-06 01:15:19.361314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.839 [2024-12-06 01:15:19.361349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.839 qpair failed and we were unable to recover it. 00:37:46.839 [2024-12-06 01:15:19.361450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.839 [2024-12-06 01:15:19.361484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.839 qpair failed and we were unable to recover it. 
00:37:46.839 [2024-12-06 01:15:19.361589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.839 [2024-12-06 01:15:19.361624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.839 qpair failed and we were unable to recover it. 00:37:46.839 [2024-12-06 01:15:19.361737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.840 [2024-12-06 01:15:19.361771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.840 qpair failed and we were unable to recover it. 00:37:46.840 [2024-12-06 01:15:19.361891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.840 [2024-12-06 01:15:19.361925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.840 qpair failed and we were unable to recover it. 00:37:46.840 [2024-12-06 01:15:19.362042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.840 [2024-12-06 01:15:19.362077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.840 qpair failed and we were unable to recover it. 00:37:46.840 [2024-12-06 01:15:19.362197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.840 [2024-12-06 01:15:19.362232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.840 qpair failed and we were unable to recover it. 00:37:46.840 [2024-12-06 01:15:19.362341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.840 [2024-12-06 01:15:19.362374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.840 qpair failed and we were unable to recover it. 00:37:46.840 [2024-12-06 01:15:19.362505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.840 [2024-12-06 01:15:19.362538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.840 qpair failed and we were unable to recover it. 00:37:46.840 [2024-12-06 01:15:19.362641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.840 [2024-12-06 01:15:19.362675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.840 qpair failed and we were unable to recover it. 00:37:46.840 [2024-12-06 01:15:19.362779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.840 [2024-12-06 01:15:19.362828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.840 qpair failed and we were unable to recover it. 00:37:46.840 [2024-12-06 01:15:19.362935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.840 [2024-12-06 01:15:19.362969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.840 qpair failed and we were unable to recover it. 
00:37:46.840 [2024-12-06 01:15:19.363113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.840 [2024-12-06 01:15:19.363162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.840 qpair failed and we were unable to recover it. 00:37:46.840 [2024-12-06 01:15:19.363278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.840 [2024-12-06 01:15:19.363315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.840 qpair failed and we were unable to recover it. 00:37:46.840 [2024-12-06 01:15:19.363453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.840 [2024-12-06 01:15:19.363490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.840 qpair failed and we were unable to recover it. 00:37:46.840 [2024-12-06 01:15:19.363602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.840 [2024-12-06 01:15:19.363638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.840 qpair failed and we were unable to recover it. 00:37:46.840 [2024-12-06 01:15:19.363747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.840 [2024-12-06 01:15:19.363781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.840 qpair failed and we were unable to recover it. 00:37:46.840 [2024-12-06 01:15:19.363895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.840 [2024-12-06 01:15:19.363931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.840 qpair failed and we were unable to recover it. 00:37:46.840 [2024-12-06 01:15:19.364069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.840 [2024-12-06 01:15:19.364112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.840 qpair failed and we were unable to recover it. 00:37:46.840 [2024-12-06 01:15:19.364223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.840 [2024-12-06 01:15:19.364257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.840 qpair failed and we were unable to recover it. 00:37:46.840 [2024-12-06 01:15:19.364382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.840 [2024-12-06 01:15:19.364415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.840 qpair failed and we were unable to recover it. 00:37:46.840 [2024-12-06 01:15:19.364526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.840 [2024-12-06 01:15:19.364560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.840 qpair failed and we were unable to recover it. 
00:37:46.840 [2024-12-06 01:15:19.364695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.840 [2024-12-06 01:15:19.364728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.840 qpair failed and we were unable to recover it. 00:37:46.840 [2024-12-06 01:15:19.364877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.840 [2024-12-06 01:15:19.364915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.840 qpair failed and we were unable to recover it. 00:37:46.840 [2024-12-06 01:15:19.365026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.840 [2024-12-06 01:15:19.365061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.840 qpair failed and we were unable to recover it. 00:37:46.840 [2024-12-06 01:15:19.365179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.840 [2024-12-06 01:15:19.365213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.840 qpair failed and we were unable to recover it. 00:37:46.840 [2024-12-06 01:15:19.365334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.840 [2024-12-06 01:15:19.365368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.840 qpair failed and we were unable to recover it. 00:37:46.840 [2024-12-06 01:15:19.365504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.840 [2024-12-06 01:15:19.365538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.840 qpair failed and we were unable to recover it. 00:37:46.840 [2024-12-06 01:15:19.365669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.840 [2024-12-06 01:15:19.365703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.840 qpair failed and we were unable to recover it. 00:37:46.840 [2024-12-06 01:15:19.365836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.840 [2024-12-06 01:15:19.365870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.840 qpair failed and we were unable to recover it. 00:37:46.840 [2024-12-06 01:15:19.365979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.840 [2024-12-06 01:15:19.366015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.840 qpair failed and we were unable to recover it. 00:37:46.840 [2024-12-06 01:15:19.366160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.840 [2024-12-06 01:15:19.366195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.840 qpair failed and we were unable to recover it. 
00:37:46.840 [2024-12-06 01:15:19.366340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.840 [2024-12-06 01:15:19.366375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.840 qpair failed and we were unable to recover it. 00:37:46.840 [2024-12-06 01:15:19.366481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.840 [2024-12-06 01:15:19.366514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.840 qpair failed and we were unable to recover it. 00:37:46.840 [2024-12-06 01:15:19.366632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.840 [2024-12-06 01:15:19.366666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.840 qpair failed and we were unable to recover it. 00:37:46.840 [2024-12-06 01:15:19.366772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.840 [2024-12-06 01:15:19.366806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.840 qpair failed and we were unable to recover it. 00:37:46.840 [2024-12-06 01:15:19.366982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.840 [2024-12-06 01:15:19.367016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.840 qpair failed and we were unable to recover it. 00:37:46.840 [2024-12-06 01:15:19.367130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.840 [2024-12-06 01:15:19.367163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.840 qpair failed and we were unable to recover it. 00:37:46.840 [2024-12-06 01:15:19.367280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.840 [2024-12-06 01:15:19.367314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.840 qpair failed and we were unable to recover it. 00:37:46.840 [2024-12-06 01:15:19.367426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.840 [2024-12-06 01:15:19.367462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.840 qpair failed and we were unable to recover it. 00:37:46.840 [2024-12-06 01:15:19.367573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.840 [2024-12-06 01:15:19.367607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.840 qpair failed and we were unable to recover it. 00:37:46.840 [2024-12-06 01:15:19.367716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.841 [2024-12-06 01:15:19.367750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.841 qpair failed and we were unable to recover it. 
00:37:46.841 [2024-12-06 01:15:19.367861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.841 [2024-12-06 01:15:19.367896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.841 qpair failed and we were unable to recover it. 00:37:46.841 [2024-12-06 01:15:19.368009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.841 [2024-12-06 01:15:19.368044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.841 qpair failed and we were unable to recover it. 00:37:46.841 [2024-12-06 01:15:19.368158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.841 [2024-12-06 01:15:19.368191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.841 qpair failed and we were unable to recover it. 00:37:46.841 [2024-12-06 01:15:19.368306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.841 [2024-12-06 01:15:19.368341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.841 qpair failed and we were unable to recover it. 00:37:46.841 [2024-12-06 01:15:19.368454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.841 [2024-12-06 01:15:19.368487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.841 qpair failed and we were unable to recover it. 00:37:46.841 [2024-12-06 01:15:19.368592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.841 [2024-12-06 01:15:19.368625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.841 qpair failed and we were unable to recover it. 00:37:46.841 [2024-12-06 01:15:19.368729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.841 [2024-12-06 01:15:19.368763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.841 qpair failed and we were unable to recover it. 00:37:46.841 [2024-12-06 01:15:19.368897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.841 [2024-12-06 01:15:19.368931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.841 qpair failed and we were unable to recover it. 00:37:46.841 [2024-12-06 01:15:19.369059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.841 [2024-12-06 01:15:19.369112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.841 qpair failed and we were unable to recover it. 00:37:46.841 [2024-12-06 01:15:19.369253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.841 [2024-12-06 01:15:19.369288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.841 qpair failed and we were unable to recover it. 
00:37:46.841 [2024-12-06 01:15:19.369395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.841 [2024-12-06 01:15:19.369430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.841 qpair failed and we were unable to recover it. 00:37:46.841 [2024-12-06 01:15:19.369531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.841 [2024-12-06 01:15:19.369565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.841 qpair failed and we were unable to recover it. 00:37:46.841 [2024-12-06 01:15:19.369716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.841 [2024-12-06 01:15:19.369750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.841 qpair failed and we were unable to recover it. 00:37:46.841 [2024-12-06 01:15:19.369862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.841 [2024-12-06 01:15:19.369896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.841 qpair failed and we were unable to recover it. 00:37:46.841 [2024-12-06 01:15:19.370038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.841 [2024-12-06 01:15:19.370073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.841 qpair failed and we were unable to recover it. 00:37:46.841 [2024-12-06 01:15:19.370194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.841 [2024-12-06 01:15:19.370228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.841 qpair failed and we were unable to recover it. 00:37:46.841 [2024-12-06 01:15:19.370361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.841 [2024-12-06 01:15:19.370400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.841 qpair failed and we were unable to recover it. 00:37:46.841 [2024-12-06 01:15:19.370515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.841 [2024-12-06 01:15:19.370549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.841 qpair failed and we were unable to recover it. 00:37:46.841 [2024-12-06 01:15:19.370677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.841 [2024-12-06 01:15:19.370727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.841 qpair failed and we were unable to recover it. 00:37:46.841 [2024-12-06 01:15:19.370907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.841 [2024-12-06 01:15:19.370945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.841 qpair failed and we were unable to recover it. 
00:37:46.841 [2024-12-06 01:15:19.371057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.841 [2024-12-06 01:15:19.371104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.841 qpair failed and we were unable to recover it. 00:37:46.841 [2024-12-06 01:15:19.371229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.841 [2024-12-06 01:15:19.371264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.841 qpair failed and we were unable to recover it. 00:37:46.841 [2024-12-06 01:15:19.371401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.841 [2024-12-06 01:15:19.371447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.841 qpair failed and we were unable to recover it. 00:37:46.841 [2024-12-06 01:15:19.371565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.841 [2024-12-06 01:15:19.371601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.841 qpair failed and we were unable to recover it. 00:37:46.841 [2024-12-06 01:15:19.371734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.841 [2024-12-06 01:15:19.371783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.841 qpair failed and we were unable to recover it. 00:37:46.841 [2024-12-06 01:15:19.371920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.841 [2024-12-06 01:15:19.371956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.841 qpair failed and we were unable to recover it. 00:37:46.841 [2024-12-06 01:15:19.372065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.841 [2024-12-06 01:15:19.372103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.841 qpair failed and we were unable to recover it. 00:37:46.841 [2024-12-06 01:15:19.372203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.841 [2024-12-06 01:15:19.372237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.841 qpair failed and we were unable to recover it. 00:37:46.841 [2024-12-06 01:15:19.372397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.841 [2024-12-06 01:15:19.372431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.841 qpair failed and we were unable to recover it. 00:37:46.841 [2024-12-06 01:15:19.372540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.841 [2024-12-06 01:15:19.372576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.841 qpair failed and we were unable to recover it. 
00:37:46.841 [2024-12-06 01:15:19.372695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.841 [2024-12-06 01:15:19.372731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.841 qpair failed and we were unable to recover it. 00:37:46.841 [2024-12-06 01:15:19.372852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.841 [2024-12-06 01:15:19.372887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.841 qpair failed and we were unable to recover it. 00:37:46.841 [2024-12-06 01:15:19.373033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.841 [2024-12-06 01:15:19.373068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.841 qpair failed and we were unable to recover it. 00:37:46.841 [2024-12-06 01:15:19.373173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.841 [2024-12-06 01:15:19.373208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.841 qpair failed and we were unable to recover it. 00:37:46.841 [2024-12-06 01:15:19.373318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.842 [2024-12-06 01:15:19.373352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.842 qpair failed and we were unable to recover it. 00:37:46.842 [2024-12-06 01:15:19.373489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.842 [2024-12-06 01:15:19.373524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.842 qpair failed and we were unable to recover it. 00:37:46.842 [2024-12-06 01:15:19.373659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.842 [2024-12-06 01:15:19.373694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.842 qpair failed and we were unable to recover it. 00:37:46.842 [2024-12-06 01:15:19.373875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.842 [2024-12-06 01:15:19.373910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.842 qpair failed and we were unable to recover it. 00:37:46.842 [2024-12-06 01:15:19.374019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.842 [2024-12-06 01:15:19.374052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.842 qpair failed and we were unable to recover it. 00:37:46.842 [2024-12-06 01:15:19.374183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.842 [2024-12-06 01:15:19.374217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.842 qpair failed and we were unable to recover it. 
00:37:46.842 [2024-12-06 01:15:19.374318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.842 [2024-12-06 01:15:19.374351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.842 qpair failed and we were unable to recover it. 00:37:46.842 [2024-12-06 01:15:19.374504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.842 [2024-12-06 01:15:19.374540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.842 qpair failed and we were unable to recover it. 00:37:46.842 [2024-12-06 01:15:19.374675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.842 [2024-12-06 01:15:19.374725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.842 qpair failed and we were unable to recover it. 00:37:46.842 [2024-12-06 01:15:19.374885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.842 [2024-12-06 01:15:19.374934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.842 qpair failed and we were unable to recover it. 00:37:46.842 [2024-12-06 01:15:19.375065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.842 [2024-12-06 01:15:19.375104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.842 qpair failed and we were unable to recover it. 00:37:46.842 [2024-12-06 01:15:19.375254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.842 [2024-12-06 01:15:19.375289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.842 qpair failed and we were unable to recover it. 00:37:46.842 [2024-12-06 01:15:19.375406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.842 [2024-12-06 01:15:19.375439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.842 qpair failed and we were unable to recover it. 00:37:46.842 [2024-12-06 01:15:19.375574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.842 [2024-12-06 01:15:19.375607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.842 qpair failed and we were unable to recover it. 00:37:46.842 [2024-12-06 01:15:19.375715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.842 [2024-12-06 01:15:19.375748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.842 qpair failed and we were unable to recover it. 00:37:46.842 [2024-12-06 01:15:19.375870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.842 [2024-12-06 01:15:19.375904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.842 qpair failed and we were unable to recover it. 
00:37:46.842 [2024-12-06 01:15:19.376018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.842 [2024-12-06 01:15:19.376051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.842 qpair failed and we were unable to recover it. 00:37:46.842 [2024-12-06 01:15:19.376191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.842 [2024-12-06 01:15:19.376225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.842 qpair failed and we were unable to recover it. 00:37:46.842 [2024-12-06 01:15:19.376322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.842 [2024-12-06 01:15:19.376356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.842 qpair failed and we were unable to recover it. 00:37:46.842 [2024-12-06 01:15:19.376466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.842 [2024-12-06 01:15:19.376500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.842 qpair failed and we were unable to recover it. 00:37:46.842 [2024-12-06 01:15:19.376638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.842 [2024-12-06 01:15:19.376672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.842 qpair failed and we were unable to recover it. 00:37:46.842 [2024-12-06 01:15:19.376788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.842 [2024-12-06 01:15:19.376829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.842 qpair failed and we were unable to recover it. 00:37:46.842 [2024-12-06 01:15:19.376958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.842 [2024-12-06 01:15:19.376997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.842 qpair failed and we were unable to recover it. 00:37:46.842 [2024-12-06 01:15:19.377107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.842 [2024-12-06 01:15:19.377141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.842 qpair failed and we were unable to recover it. 00:37:46.842 [2024-12-06 01:15:19.377242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.842 [2024-12-06 01:15:19.377275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.842 qpair failed and we were unable to recover it. 00:37:46.842 [2024-12-06 01:15:19.377414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.842 [2024-12-06 01:15:19.377448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.842 qpair failed and we were unable to recover it. 
00:37:46.842 [2024-12-06 01:15:19.377560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.842 [2024-12-06 01:15:19.377594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.842 qpair failed and we were unable to recover it. 00:37:46.842 [2024-12-06 01:15:19.377721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.842 [2024-12-06 01:15:19.377755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.842 qpair failed and we were unable to recover it. 00:37:46.842 [2024-12-06 01:15:19.377891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.842 [2024-12-06 01:15:19.377925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.842 qpair failed and we were unable to recover it. 00:37:46.842 [2024-12-06 01:15:19.378037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.842 [2024-12-06 01:15:19.378071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.842 qpair failed and we were unable to recover it. 00:37:46.842 [2024-12-06 01:15:19.378227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.842 [2024-12-06 01:15:19.378260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.842 qpair failed and we were unable to recover it. 00:37:46.842 [2024-12-06 01:15:19.378371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.842 [2024-12-06 01:15:19.378405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.842 qpair failed and we were unable to recover it. 00:37:46.842 [2024-12-06 01:15:19.378563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.842 [2024-12-06 01:15:19.378596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.842 qpair failed and we were unable to recover it. 00:37:46.842 [2024-12-06 01:15:19.378696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.842 [2024-12-06 01:15:19.378729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.842 qpair failed and we were unable to recover it. 00:37:46.842 [2024-12-06 01:15:19.378854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.842 [2024-12-06 01:15:19.378887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.842 qpair failed and we were unable to recover it. 00:37:46.842 [2024-12-06 01:15:19.378992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.842 [2024-12-06 01:15:19.379028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.842 qpair failed and we were unable to recover it. 
00:37:46.842 [2024-12-06 01:15:19.379167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.842 [2024-12-06 01:15:19.379200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.842 qpair failed and we were unable to recover it. 00:37:46.842 [2024-12-06 01:15:19.379308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.842 [2024-12-06 01:15:19.379341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.842 qpair failed and we were unable to recover it. 00:37:46.842 [2024-12-06 01:15:19.379479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.843 [2024-12-06 01:15:19.379512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.843 qpair failed and we were unable to recover it. 00:37:46.843 [2024-12-06 01:15:19.379616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.843 [2024-12-06 01:15:19.379649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.843 qpair failed and we were unable to recover it. 00:37:46.843 [2024-12-06 01:15:19.379756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.843 [2024-12-06 01:15:19.379789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.843 qpair failed and we were unable to recover it. 00:37:46.843 [2024-12-06 01:15:19.379915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.843 [2024-12-06 01:15:19.379949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.843 qpair failed and we were unable to recover it. 00:37:46.843 [2024-12-06 01:15:19.380090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.843 [2024-12-06 01:15:19.380140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.843 qpair failed and we were unable to recover it. 00:37:46.843 [2024-12-06 01:15:19.380259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.843 [2024-12-06 01:15:19.380297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.843 qpair failed and we were unable to recover it. 00:37:46.843 [2024-12-06 01:15:19.380443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.843 [2024-12-06 01:15:19.380479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.843 qpair failed and we were unable to recover it. 00:37:46.843 [2024-12-06 01:15:19.380581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.843 [2024-12-06 01:15:19.380615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.843 qpair failed and we were unable to recover it. 
00:37:46.843 [2024-12-06 01:15:19.380724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.843 [2024-12-06 01:15:19.380757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.843 qpair failed and we were unable to recover it. 00:37:46.843 [2024-12-06 01:15:19.380919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.843 [2024-12-06 01:15:19.380968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.843 qpair failed and we were unable to recover it. 00:37:46.843 [2024-12-06 01:15:19.381118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.843 [2024-12-06 01:15:19.381155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.843 qpair failed and we were unable to recover it. 00:37:46.843 [2024-12-06 01:15:19.381291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.843 [2024-12-06 01:15:19.381338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.843 qpair failed and we were unable to recover it. 00:37:46.843 [2024-12-06 01:15:19.381469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.843 [2024-12-06 01:15:19.381504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.843 qpair failed and we were unable to recover it. 00:37:46.843 [2024-12-06 01:15:19.381617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.843 [2024-12-06 01:15:19.381654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.843 qpair failed and we were unable to recover it. 00:37:46.843 [2024-12-06 01:15:19.381791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.843 [2024-12-06 01:15:19.381836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.843 qpair failed and we were unable to recover it. 00:37:46.843 [2024-12-06 01:15:19.381997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.843 [2024-12-06 01:15:19.382039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.843 qpair failed and we were unable to recover it. 00:37:46.843 [2024-12-06 01:15:19.382175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.843 [2024-12-06 01:15:19.382209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.843 qpair failed and we were unable to recover it. 00:37:46.843 [2024-12-06 01:15:19.382357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.843 [2024-12-06 01:15:19.382392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.843 qpair failed and we were unable to recover it. 
00:37:46.843 [2024-12-06 01:15:19.382498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.843 [2024-12-06 01:15:19.382532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.843 qpair failed and we were unable to recover it. 00:37:46.843 [2024-12-06 01:15:19.382639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.843 [2024-12-06 01:15:19.382673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.843 qpair failed and we were unable to recover it. 00:37:46.843 [2024-12-06 01:15:19.382785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.843 [2024-12-06 01:15:19.382827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.843 qpair failed and we were unable to recover it. 00:37:46.843 [2024-12-06 01:15:19.382937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.843 [2024-12-06 01:15:19.382970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.843 qpair failed and we were unable to recover it. 00:37:46.843 [2024-12-06 01:15:19.383096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.843 [2024-12-06 01:15:19.383130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.843 qpair failed and we were unable to recover it. 00:37:46.843 [2024-12-06 01:15:19.383236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.843 [2024-12-06 01:15:19.383268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.843 qpair failed and we were unable to recover it. 00:37:46.843 [2024-12-06 01:15:19.383382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.843 [2024-12-06 01:15:19.383420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.843 qpair failed and we were unable to recover it. 00:37:46.843 [2024-12-06 01:15:19.383527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.843 [2024-12-06 01:15:19.383559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.843 qpair failed and we were unable to recover it. 00:37:46.843 [2024-12-06 01:15:19.383667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.843 [2024-12-06 01:15:19.383701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.843 qpair failed and we were unable to recover it. 00:37:46.843 [2024-12-06 01:15:19.383829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.843 [2024-12-06 01:15:19.383863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.843 qpair failed and we were unable to recover it. 
00:37:46.843 [2024-12-06 01:15:19.383998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.843 [2024-12-06 01:15:19.384031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.843 qpair failed and we were unable to recover it. 00:37:46.843 [2024-12-06 01:15:19.384169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.843 [2024-12-06 01:15:19.384203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.843 qpair failed and we were unable to recover it. 00:37:46.843 [2024-12-06 01:15:19.384310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.843 [2024-12-06 01:15:19.384344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.843 qpair failed and we were unable to recover it. 00:37:46.843 [2024-12-06 01:15:19.384479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.844 [2024-12-06 01:15:19.384513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.844 qpair failed and we were unable to recover it. 00:37:46.844 [2024-12-06 01:15:19.384649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.844 [2024-12-06 01:15:19.384682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.844 qpair failed and we were unable to recover it. 00:37:46.844 [2024-12-06 01:15:19.384794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.844 [2024-12-06 01:15:19.384837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.844 qpair failed and we were unable to recover it. 00:37:46.844 [2024-12-06 01:15:19.384957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.844 [2024-12-06 01:15:19.384994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.844 qpair failed and we were unable to recover it. 00:37:46.844 [2024-12-06 01:15:19.385131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.844 [2024-12-06 01:15:19.385165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.844 qpair failed and we were unable to recover it. 00:37:46.844 [2024-12-06 01:15:19.385271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.844 [2024-12-06 01:15:19.385305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.844 qpair failed and we were unable to recover it. 00:37:46.844 [2024-12-06 01:15:19.385425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.844 [2024-12-06 01:15:19.385459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.844 qpair failed and we were unable to recover it. 
00:37:46.844 [2024-12-06 01:15:19.385576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.844 [2024-12-06 01:15:19.385610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.844 qpair failed and we were unable to recover it. 00:37:46.844 [2024-12-06 01:15:19.385744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.844 [2024-12-06 01:15:19.385778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.844 qpair failed and we were unable to recover it. 00:37:46.844 [2024-12-06 01:15:19.385897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.844 [2024-12-06 01:15:19.385931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.844 qpair failed and we were unable to recover it. 00:37:46.844 [2024-12-06 01:15:19.386073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.844 [2024-12-06 01:15:19.386122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.844 qpair failed and we were unable to recover it. 00:37:46.844 [2024-12-06 01:15:19.386273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.844 [2024-12-06 01:15:19.386321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.844 qpair failed and we were unable to recover it. 00:37:46.844 [2024-12-06 01:15:19.386463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.844 [2024-12-06 01:15:19.386499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.844 qpair failed and we were unable to recover it. 00:37:46.844 [2024-12-06 01:15:19.386604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.844 [2024-12-06 01:15:19.386639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.844 qpair failed and we were unable to recover it. 00:37:46.844 [2024-12-06 01:15:19.386810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.844 [2024-12-06 01:15:19.386852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.844 qpair failed and we were unable to recover it. 00:37:46.844 [2024-12-06 01:15:19.386988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.844 [2024-12-06 01:15:19.387031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.844 qpair failed and we were unable to recover it. 00:37:46.844 [2024-12-06 01:15:19.387142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.844 [2024-12-06 01:15:19.387177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.844 qpair failed and we were unable to recover it. 
00:37:46.844 [2024-12-06 01:15:19.387291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.844 [2024-12-06 01:15:19.387324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.844 qpair failed and we were unable to recover it. 00:37:46.844 [2024-12-06 01:15:19.387458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.844 [2024-12-06 01:15:19.387492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.844 qpair failed and we were unable to recover it. 00:37:46.844 [2024-12-06 01:15:19.387628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.844 [2024-12-06 01:15:19.387661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.844 qpair failed and we were unable to recover it. 00:37:46.844 [2024-12-06 01:15:19.387782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.844 [2024-12-06 01:15:19.387834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.844 qpair failed and we were unable to recover it. 00:37:46.844 [2024-12-06 01:15:19.387987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.844 [2024-12-06 01:15:19.388023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.844 qpair failed and we were unable to recover it. 00:37:46.844 [2024-12-06 01:15:19.388170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.844 [2024-12-06 01:15:19.388206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.844 qpair failed and we were unable to recover it. 00:37:46.844 [2024-12-06 01:15:19.388308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.844 [2024-12-06 01:15:19.388342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.844 qpair failed and we were unable to recover it. 00:37:46.844 [2024-12-06 01:15:19.388454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.844 [2024-12-06 01:15:19.388488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.844 qpair failed and we were unable to recover it. 00:37:46.844 [2024-12-06 01:15:19.388621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.844 [2024-12-06 01:15:19.388656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.844 qpair failed and we were unable to recover it. 00:37:46.844 [2024-12-06 01:15:19.388784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.844 [2024-12-06 01:15:19.388827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.844 qpair failed and we were unable to recover it. 
00:37:46.844 [2024-12-06 01:15:19.388935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.844 [2024-12-06 01:15:19.388969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.844 qpair failed and we were unable to recover it. 00:37:46.844 [2024-12-06 01:15:19.389083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.844 [2024-12-06 01:15:19.389117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.844 qpair failed and we were unable to recover it. 00:37:46.844 [2024-12-06 01:15:19.389231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.844 [2024-12-06 01:15:19.389266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.844 qpair failed and we were unable to recover it. 00:37:46.844 [2024-12-06 01:15:19.389383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.844 [2024-12-06 01:15:19.389432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.844 qpair failed and we were unable to recover it. 00:37:46.844 [2024-12-06 01:15:19.389577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.844 [2024-12-06 01:15:19.389611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.844 qpair failed and we were unable to recover it. 00:37:46.844 [2024-12-06 01:15:19.389736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.844 [2024-12-06 01:15:19.389785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.844 qpair failed and we were unable to recover it. 00:37:46.844 [2024-12-06 01:15:19.389917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.844 [2024-12-06 01:15:19.389961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.844 qpair failed and we were unable to recover it. 00:37:46.844 [2024-12-06 01:15:19.390076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.844 [2024-12-06 01:15:19.390113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.844 qpair failed and we were unable to recover it. 00:37:46.844 [2024-12-06 01:15:19.390262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.844 [2024-12-06 01:15:19.390297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.844 qpair failed and we were unable to recover it. 00:37:46.844 [2024-12-06 01:15:19.390433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.844 [2024-12-06 01:15:19.390468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.844 qpair failed and we were unable to recover it. 
00:37:46.844 [2024-12-06 01:15:19.390682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.844 [2024-12-06 01:15:19.390717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.844 qpair failed and we were unable to recover it. 00:37:46.844 [2024-12-06 01:15:19.390830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.845 [2024-12-06 01:15:19.390864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.845 qpair failed and we were unable to recover it. 00:37:46.845 [2024-12-06 01:15:19.390963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.845 [2024-12-06 01:15:19.390997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.845 qpair failed and we were unable to recover it. 00:37:46.845 [2024-12-06 01:15:19.391106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.845 [2024-12-06 01:15:19.391141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.845 qpair failed and we were unable to recover it. 00:37:46.845 [2024-12-06 01:15:19.391266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.845 [2024-12-06 01:15:19.391302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.845 qpair failed and we were unable to recover it. 00:37:46.845 [2024-12-06 01:15:19.391443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.845 [2024-12-06 01:15:19.391479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.845 qpair failed and we were unable to recover it. 00:37:46.845 [2024-12-06 01:15:19.391617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.845 [2024-12-06 01:15:19.391652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.845 qpair failed and we were unable to recover it. 00:37:46.845 [2024-12-06 01:15:19.391769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.845 [2024-12-06 01:15:19.391806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.845 qpair failed and we were unable to recover it. 00:37:46.845 [2024-12-06 01:15:19.391931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.845 [2024-12-06 01:15:19.391966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.845 qpair failed and we were unable to recover it. 00:37:46.845 [2024-12-06 01:15:19.392080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.845 [2024-12-06 01:15:19.392114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.845 qpair failed and we were unable to recover it. 
00:37:46.845 [2024-12-06 01:15:19.392222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.845 [2024-12-06 01:15:19.392256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.845 qpair failed and we were unable to recover it. 00:37:46.845 [2024-12-06 01:15:19.392361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.845 [2024-12-06 01:15:19.392396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.845 qpair failed and we were unable to recover it. 00:37:46.845 [2024-12-06 01:15:19.392498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.845 [2024-12-06 01:15:19.392532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.845 qpair failed and we were unable to recover it. 00:37:46.845 [2024-12-06 01:15:19.392668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.845 [2024-12-06 01:15:19.392704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.845 qpair failed and we were unable to recover it. 00:37:46.845 [2024-12-06 01:15:19.392850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.845 [2024-12-06 01:15:19.392885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.845 qpair failed and we were unable to recover it. 00:37:46.845 [2024-12-06 01:15:19.393000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.845 [2024-12-06 01:15:19.393035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.845 qpair failed and we were unable to recover it. 00:37:46.845 [2024-12-06 01:15:19.393137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.845 [2024-12-06 01:15:19.393171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.845 qpair failed and we were unable to recover it. 00:37:46.845 [2024-12-06 01:15:19.393283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.845 [2024-12-06 01:15:19.393319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.845 qpair failed and we were unable to recover it. 00:37:46.845 [2024-12-06 01:15:19.393431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.845 [2024-12-06 01:15:19.393466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.845 qpair failed and we were unable to recover it. 00:37:46.845 [2024-12-06 01:15:19.393576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.845 [2024-12-06 01:15:19.393612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.845 qpair failed and we were unable to recover it. 
00:37:46.845 [2024-12-06 01:15:19.393750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.845 [2024-12-06 01:15:19.393785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.845 qpair failed and we were unable to recover it. 00:37:46.845 [2024-12-06 01:15:19.393932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.845 [2024-12-06 01:15:19.393967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.845 qpair failed and we were unable to recover it. 00:37:46.845 [2024-12-06 01:15:19.394069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.845 [2024-12-06 01:15:19.394103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.845 qpair failed and we were unable to recover it. 00:37:46.845 [2024-12-06 01:15:19.394261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.845 [2024-12-06 01:15:19.394305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.845 qpair failed and we were unable to recover it. 00:37:46.845 [2024-12-06 01:15:19.394441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.845 [2024-12-06 01:15:19.394481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.845 qpair failed and we were unable to recover it. 00:37:46.845 [2024-12-06 01:15:19.394613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.845 [2024-12-06 01:15:19.394649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.845 qpair failed and we were unable to recover it. 00:37:46.845 [2024-12-06 01:15:19.394780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.845 [2024-12-06 01:15:19.394821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.845 qpair failed and we were unable to recover it. 00:37:46.845 [2024-12-06 01:15:19.394960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.845 [2024-12-06 01:15:19.395009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.845 qpair failed and we were unable to recover it. 00:37:46.845 [2024-12-06 01:15:19.395155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.845 [2024-12-06 01:15:19.395191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.845 qpair failed and we were unable to recover it. 00:37:46.845 [2024-12-06 01:15:19.395318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.845 [2024-12-06 01:15:19.395352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.845 qpair failed and we were unable to recover it. 
00:37:46.845 [2024-12-06 01:15:19.395481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.845 [2024-12-06 01:15:19.395517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.845 qpair failed and we were unable to recover it. 00:37:46.845 [2024-12-06 01:15:19.395635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.845 [2024-12-06 01:15:19.395668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.845 qpair failed and we were unable to recover it. 00:37:46.845 [2024-12-06 01:15:19.395829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.845 [2024-12-06 01:15:19.395879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.845 qpair failed and we were unable to recover it. 00:37:46.845 [2024-12-06 01:15:19.396040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.845 [2024-12-06 01:15:19.396078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.845 qpair failed and we were unable to recover it. 00:37:46.845 [2024-12-06 01:15:19.396190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.845 [2024-12-06 01:15:19.396224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.845 qpair failed and we were unable to recover it. 00:37:46.845 [2024-12-06 01:15:19.396374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.845 [2024-12-06 01:15:19.396410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.846 qpair failed and we were unable to recover it. 00:37:46.846 [2024-12-06 01:15:19.396536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.846 [2024-12-06 01:15:19.396595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.846 qpair failed and we were unable to recover it. 00:37:46.846 [2024-12-06 01:15:19.396721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.846 [2024-12-06 01:15:19.396758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.846 qpair failed and we were unable to recover it. 00:37:46.846 [2024-12-06 01:15:19.396873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.846 [2024-12-06 01:15:19.396908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.846 qpair failed and we were unable to recover it. 00:37:46.846 [2024-12-06 01:15:19.397029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.846 [2024-12-06 01:15:19.397062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.846 qpair failed and we were unable to recover it. 
00:37:46.846 [2024-12-06 01:15:19.397167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.846 [2024-12-06 01:15:19.397199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.846 qpair failed and we were unable to recover it. 00:37:46.846 [2024-12-06 01:15:19.397337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.846 [2024-12-06 01:15:19.397371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.846 qpair failed and we were unable to recover it. 00:37:46.846 [2024-12-06 01:15:19.397485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.846 [2024-12-06 01:15:19.397521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.846 qpair failed and we were unable to recover it. 00:37:46.846 [2024-12-06 01:15:19.397662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.846 [2024-12-06 01:15:19.397697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.846 qpair failed and we were unable to recover it. 00:37:46.846 [2024-12-06 01:15:19.397850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.846 [2024-12-06 01:15:19.397899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.846 qpair failed and we were unable to recover it. 00:37:46.846 [2024-12-06 01:15:19.398020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.846 [2024-12-06 01:15:19.398058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.846 qpair failed and we were unable to recover it. 00:37:46.846 [2024-12-06 01:15:19.398174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.846 [2024-12-06 01:15:19.398209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.846 qpair failed and we were unable to recover it. 00:37:46.846 [2024-12-06 01:15:19.398319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.846 [2024-12-06 01:15:19.398353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.846 qpair failed and we were unable to recover it. 00:37:46.846 [2024-12-06 01:15:19.398460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.846 [2024-12-06 01:15:19.398495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.846 qpair failed and we were unable to recover it. 00:37:46.846 [2024-12-06 01:15:19.398633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.846 [2024-12-06 01:15:19.398666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.846 qpair failed and we were unable to recover it. 
00:37:46.846 [2024-12-06 01:15:19.398784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.846 [2024-12-06 01:15:19.398826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.846 qpair failed and we were unable to recover it. 00:37:46.846 [2024-12-06 01:15:19.398936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.846 [2024-12-06 01:15:19.398970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.846 qpair failed and we were unable to recover it. 00:37:46.846 [2024-12-06 01:15:19.399075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.846 [2024-12-06 01:15:19.399108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.846 qpair failed and we were unable to recover it. 00:37:46.846 [2024-12-06 01:15:19.399224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.846 [2024-12-06 01:15:19.399259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.846 qpair failed and we were unable to recover it. 00:37:46.846 [2024-12-06 01:15:19.399364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.846 [2024-12-06 01:15:19.399397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.846 qpair failed and we were unable to recover it. 00:37:46.846 [2024-12-06 01:15:19.399539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.846 [2024-12-06 01:15:19.399574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.846 qpair failed and we were unable to recover it. 00:37:46.846 [2024-12-06 01:15:19.399715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.846 [2024-12-06 01:15:19.399748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.846 qpair failed and we were unable to recover it. 00:37:46.846 [2024-12-06 01:15:19.399870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.846 [2024-12-06 01:15:19.399907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.846 qpair failed and we were unable to recover it. 00:37:46.846 [2024-12-06 01:15:19.400037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.846 [2024-12-06 01:15:19.400072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.846 qpair failed and we were unable to recover it. 00:37:46.846 [2024-12-06 01:15:19.400199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.846 [2024-12-06 01:15:19.400248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.846 qpair failed and we were unable to recover it. 
00:37:46.846 [2024-12-06 01:15:19.400367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.846 [2024-12-06 01:15:19.400404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.846 qpair failed and we were unable to recover it. 00:37:46.846 [2024-12-06 01:15:19.400543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.846 [2024-12-06 01:15:19.400578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.846 qpair failed and we were unable to recover it. 00:37:46.846 [2024-12-06 01:15:19.400708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.846 [2024-12-06 01:15:19.400743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.846 qpair failed and we were unable to recover it. 00:37:46.846 [2024-12-06 01:15:19.400858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.846 [2024-12-06 01:15:19.400893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.846 qpair failed and we were unable to recover it. 00:37:46.846 [2024-12-06 01:15:19.401027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.846 [2024-12-06 01:15:19.401060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.846 qpair failed and we were unable to recover it. 00:37:46.846 [2024-12-06 01:15:19.401170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.846 [2024-12-06 01:15:19.401202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.846 qpair failed and we were unable to recover it. 00:37:46.846 [2024-12-06 01:15:19.401337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.846 [2024-12-06 01:15:19.401371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.846 qpair failed and we were unable to recover it. 00:37:46.846 [2024-12-06 01:15:19.401505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.846 [2024-12-06 01:15:19.401537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.846 qpair failed and we were unable to recover it. 00:37:46.846 [2024-12-06 01:15:19.401648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.846 [2024-12-06 01:15:19.401684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.846 qpair failed and we were unable to recover it. 00:37:46.846 [2024-12-06 01:15:19.401800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.846 [2024-12-06 01:15:19.401844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.846 qpair failed and we were unable to recover it. 
00:37:46.846 [2024-12-06 01:15:19.401983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.846 [2024-12-06 01:15:19.402019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.846 qpair failed and we were unable to recover it. 00:37:46.846 [2024-12-06 01:15:19.402153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.846 [2024-12-06 01:15:19.402189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.846 qpair failed and we were unable to recover it. 00:37:46.846 [2024-12-06 01:15:19.402351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.846 [2024-12-06 01:15:19.402385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.846 qpair failed and we were unable to recover it. 00:37:46.846 [2024-12-06 01:15:19.402496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.846 [2024-12-06 01:15:19.402530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.846 qpair failed and we were unable to recover it. 00:37:46.847 [2024-12-06 01:15:19.402647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.847 [2024-12-06 01:15:19.402682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.847 qpair failed and we were unable to recover it. 00:37:46.847 [2024-12-06 01:15:19.402829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.847 [2024-12-06 01:15:19.402866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.847 qpair failed and we were unable to recover it. 00:37:46.847 [2024-12-06 01:15:19.402974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.847 [2024-12-06 01:15:19.403008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.847 qpair failed and we were unable to recover it. 00:37:46.847 [2024-12-06 01:15:19.403158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.847 [2024-12-06 01:15:19.403192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.847 qpair failed and we were unable to recover it. 00:37:46.847 [2024-12-06 01:15:19.403298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.847 [2024-12-06 01:15:19.403333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.847 qpair failed and we were unable to recover it. 00:37:46.847 [2024-12-06 01:15:19.403495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.847 [2024-12-06 01:15:19.403529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.847 qpair failed and we were unable to recover it. 
00:37:46.847 [2024-12-06 01:15:19.403636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.847 [2024-12-06 01:15:19.403670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.847 qpair failed and we were unable to recover it. 00:37:46.847 [2024-12-06 01:15:19.403810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.847 [2024-12-06 01:15:19.403869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.847 qpair failed and we were unable to recover it. 00:37:46.847 [2024-12-06 01:15:19.404014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.847 [2024-12-06 01:15:19.404050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.847 qpair failed and we were unable to recover it. 00:37:46.847 [2024-12-06 01:15:19.404180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.847 [2024-12-06 01:15:19.404214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.847 qpair failed and we were unable to recover it. 00:37:46.847 [2024-12-06 01:15:19.404368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.847 [2024-12-06 01:15:19.404402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.847 qpair failed and we were unable to recover it. 00:37:46.847 [2024-12-06 01:15:19.404512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.847 [2024-12-06 01:15:19.404545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.847 qpair failed and we were unable to recover it. 00:37:46.847 [2024-12-06 01:15:19.404771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.847 [2024-12-06 01:15:19.404809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.847 qpair failed and we were unable to recover it. 00:37:46.847 [2024-12-06 01:15:19.404964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.847 [2024-12-06 01:15:19.404999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.847 qpair failed and we were unable to recover it. 00:37:46.847 [2024-12-06 01:15:19.405170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.847 [2024-12-06 01:15:19.405204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.847 qpair failed and we were unable to recover it. 00:37:46.847 [2024-12-06 01:15:19.405334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.847 [2024-12-06 01:15:19.405368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.847 qpair failed and we were unable to recover it. 
00:37:46.847 [2024-12-06 01:15:19.405487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.847 [2024-12-06 01:15:19.405523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.847 qpair failed and we were unable to recover it. 00:37:46.847 [2024-12-06 01:15:19.405630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.847 [2024-12-06 01:15:19.405664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.847 qpair failed and we were unable to recover it. 00:37:46.847 [2024-12-06 01:15:19.405774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.847 [2024-12-06 01:15:19.405809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.847 qpair failed and we were unable to recover it. 00:37:46.847 [2024-12-06 01:15:19.405926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.847 [2024-12-06 01:15:19.405959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.847 qpair failed and we were unable to recover it. 00:37:46.847 [2024-12-06 01:15:19.406068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.847 [2024-12-06 01:15:19.406101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.847 qpair failed and we were unable to recover it. 00:37:46.847 [2024-12-06 01:15:19.406208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.847 [2024-12-06 01:15:19.406241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.847 qpair failed and we were unable to recover it. 00:37:46.847 [2024-12-06 01:15:19.406354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.847 [2024-12-06 01:15:19.406388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.847 qpair failed and we were unable to recover it. 00:37:46.847 [2024-12-06 01:15:19.406496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.847 [2024-12-06 01:15:19.406531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.847 qpair failed and we were unable to recover it. 00:37:46.847 [2024-12-06 01:15:19.406680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.847 [2024-12-06 01:15:19.406716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.847 qpair failed and we were unable to recover it. 00:37:46.847 [2024-12-06 01:15:19.406827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.847 [2024-12-06 01:15:19.406862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.847 qpair failed and we were unable to recover it. 
00:37:46.847 [2024-12-06 01:15:19.406968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.847 [2024-12-06 01:15:19.407003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.847 qpair failed and we were unable to recover it. 00:37:46.847 [2024-12-06 01:15:19.407110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.847 [2024-12-06 01:15:19.407145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.847 qpair failed and we were unable to recover it. 00:37:46.847 [2024-12-06 01:15:19.407282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.847 [2024-12-06 01:15:19.407316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.847 qpair failed and we were unable to recover it. 00:37:46.847 [2024-12-06 01:15:19.407425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.847 [2024-12-06 01:15:19.407465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.847 qpair failed and we were unable to recover it. 00:37:46.847 [2024-12-06 01:15:19.407573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.847 [2024-12-06 01:15:19.407607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.847 qpair failed and we were unable to recover it. 00:37:46.847 [2024-12-06 01:15:19.407737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.847 [2024-12-06 01:15:19.407786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.847 qpair failed and we were unable to recover it. 00:37:46.847 [2024-12-06 01:15:19.407918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.847 [2024-12-06 01:15:19.407956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.847 qpair failed and we were unable to recover it. 00:37:46.847 [2024-12-06 01:15:19.408065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.847 [2024-12-06 01:15:19.408100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.847 qpair failed and we were unable to recover it. 00:37:46.847 [2024-12-06 01:15:19.408235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.847 [2024-12-06 01:15:19.408270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.847 qpair failed and we were unable to recover it. 00:37:46.847 [2024-12-06 01:15:19.408404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.848 [2024-12-06 01:15:19.408439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.848 qpair failed and we were unable to recover it. 
00:37:46.848 [2024-12-06 01:15:19.408550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.848 [2024-12-06 01:15:19.408585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.848 qpair failed and we were unable to recover it. 00:37:46.848 [2024-12-06 01:15:19.408723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.848 [2024-12-06 01:15:19.408758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.848 qpair failed and we were unable to recover it. 00:37:46.848 [2024-12-06 01:15:19.408907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.848 [2024-12-06 01:15:19.408942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.848 qpair failed and we were unable to recover it. 00:37:46.848 [2024-12-06 01:15:19.409044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.848 [2024-12-06 01:15:19.409085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.848 qpair failed and we were unable to recover it. 00:37:46.848 [2024-12-06 01:15:19.409217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.848 [2024-12-06 01:15:19.409251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.848 qpair failed and we were unable to recover it. 00:37:46.848 [2024-12-06 01:15:19.409350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.848 [2024-12-06 01:15:19.409384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.848 qpair failed and we were unable to recover it. 00:37:46.848 [2024-12-06 01:15:19.409519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.848 [2024-12-06 01:15:19.409553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.848 qpair failed and we were unable to recover it. 00:37:46.848 [2024-12-06 01:15:19.409689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.848 [2024-12-06 01:15:19.409723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.848 qpair failed and we were unable to recover it. 00:37:46.848 [2024-12-06 01:15:19.409831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.848 [2024-12-06 01:15:19.409864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.848 qpair failed and we were unable to recover it. 00:37:46.848 [2024-12-06 01:15:19.409974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.848 [2024-12-06 01:15:19.410011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.848 qpair failed and we were unable to recover it. 
00:37:46.848 [2024-12-06 01:15:19.410133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.848 [2024-12-06 01:15:19.410181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.848 qpair failed and we were unable to recover it. 00:37:46.848 [2024-12-06 01:15:19.410321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.848 [2024-12-06 01:15:19.410356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.848 qpair failed and we were unable to recover it. 00:37:46.848 [2024-12-06 01:15:19.410463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.848 [2024-12-06 01:15:19.410496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.848 qpair failed and we were unable to recover it. 00:37:46.848 [2024-12-06 01:15:19.410626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.848 [2024-12-06 01:15:19.410660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.848 qpair failed and we were unable to recover it. 00:37:46.848 [2024-12-06 01:15:19.410768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.848 [2024-12-06 01:15:19.410801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.848 qpair failed and we were unable to recover it. 00:37:46.848 [2024-12-06 01:15:19.410964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.848 [2024-12-06 01:15:19.410997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.848 qpair failed and we were unable to recover it. 00:37:46.848 [2024-12-06 01:15:19.411215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.848 [2024-12-06 01:15:19.411251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.848 qpair failed and we were unable to recover it. 00:37:46.848 [2024-12-06 01:15:19.411363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.848 [2024-12-06 01:15:19.411399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.848 qpair failed and we were unable to recover it. 00:37:46.848 [2024-12-06 01:15:19.411507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.848 [2024-12-06 01:15:19.411543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.848 qpair failed and we were unable to recover it. 00:37:46.848 [2024-12-06 01:15:19.411656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.848 [2024-12-06 01:15:19.411691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.848 qpair failed and we were unable to recover it. 
00:37:46.848 [2024-12-06 01:15:19.411807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.848 [2024-12-06 01:15:19.411846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.848 qpair failed and we were unable to recover it. 00:37:46.848 [2024-12-06 01:15:19.411952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.848 [2024-12-06 01:15:19.411984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.848 qpair failed and we were unable to recover it. 00:37:46.848 [2024-12-06 01:15:19.412097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.848 [2024-12-06 01:15:19.412131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.848 qpair failed and we were unable to recover it. 00:37:46.848 [2024-12-06 01:15:19.412236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.848 [2024-12-06 01:15:19.412270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.848 qpair failed and we were unable to recover it. 00:37:46.848 [2024-12-06 01:15:19.412405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.848 [2024-12-06 01:15:19.412439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.848 qpair failed and we were unable to recover it. 00:37:46.848 [2024-12-06 01:15:19.412577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.848 [2024-12-06 01:15:19.412611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.848 qpair failed and we were unable to recover it. 00:37:46.848 [2024-12-06 01:15:19.412713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.848 [2024-12-06 01:15:19.412747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.848 qpair failed and we were unable to recover it. 00:37:46.848 [2024-12-06 01:15:19.412917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.848 [2024-12-06 01:15:19.412966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.848 qpair failed and we were unable to recover it. 00:37:46.848 [2024-12-06 01:15:19.413084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.848 [2024-12-06 01:15:19.413119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.848 qpair failed and we were unable to recover it. 00:37:46.848 [2024-12-06 01:15:19.413256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.848 [2024-12-06 01:15:19.413289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.848 qpair failed and we were unable to recover it. 
00:37:46.848 [2024-12-06 01:15:19.413397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.848 [2024-12-06 01:15:19.413432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.848 qpair failed and we were unable to recover it. 00:37:46.848 [2024-12-06 01:15:19.413563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.848 [2024-12-06 01:15:19.413596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.848 qpair failed and we were unable to recover it. 00:37:46.848 [2024-12-06 01:15:19.413697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.849 [2024-12-06 01:15:19.413730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.849 qpair failed and we were unable to recover it. 00:37:46.849 [2024-12-06 01:15:19.413839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.849 [2024-12-06 01:15:19.413880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.849 qpair failed and we were unable to recover it. 00:37:46.849 [2024-12-06 01:15:19.414024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.849 [2024-12-06 01:15:19.414063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.849 qpair failed and we were unable to recover it. 00:37:46.849 [2024-12-06 01:15:19.414201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.849 [2024-12-06 01:15:19.414237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.849 qpair failed and we were unable to recover it. 00:37:46.849 [2024-12-06 01:15:19.414345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.849 [2024-12-06 01:15:19.414380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.849 qpair failed and we were unable to recover it. 00:37:46.849 [2024-12-06 01:15:19.414491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.849 [2024-12-06 01:15:19.414526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.849 qpair failed and we were unable to recover it. 00:37:46.849 [2024-12-06 01:15:19.414662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.849 [2024-12-06 01:15:19.414697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.849 qpair failed and we were unable to recover it. 00:37:46.849 [2024-12-06 01:15:19.414810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.849 [2024-12-06 01:15:19.414851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.849 qpair failed and we were unable to recover it. 
00:37:46.849 [2024-12-06 01:15:19.414992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.849 [2024-12-06 01:15:19.415035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.849 qpair failed and we were unable to recover it. 00:37:46.849 [2024-12-06 01:15:19.415174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.849 [2024-12-06 01:15:19.415209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.849 qpair failed and we were unable to recover it. 00:37:46.849 [2024-12-06 01:15:19.415327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.849 [2024-12-06 01:15:19.415361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.849 qpair failed and we were unable to recover it. 00:37:46.849 [2024-12-06 01:15:19.415496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.849 [2024-12-06 01:15:19.415530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.849 qpair failed and we were unable to recover it. 00:37:46.849 [2024-12-06 01:15:19.415630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.849 [2024-12-06 01:15:19.415663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.849 qpair failed and we were unable to recover it. 00:37:46.849 [2024-12-06 01:15:19.415768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.849 [2024-12-06 01:15:19.415801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.849 qpair failed and we were unable to recover it. 00:37:46.849 [2024-12-06 01:15:19.415919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.849 [2024-12-06 01:15:19.415952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.849 qpair failed and we were unable to recover it. 00:37:46.849 [2024-12-06 01:15:19.416078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.849 [2024-12-06 01:15:19.416112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.849 qpair failed and we were unable to recover it. 00:37:46.849 [2024-12-06 01:15:19.416225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.849 [2024-12-06 01:15:19.416258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.849 qpair failed and we were unable to recover it. 00:37:46.849 [2024-12-06 01:15:19.416374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.849 [2024-12-06 01:15:19.416407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.849 qpair failed and we were unable to recover it. 
00:37:46.849 [2024-12-06 01:15:19.416524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.849 [2024-12-06 01:15:19.416558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.849 qpair failed and we were unable to recover it. 00:37:46.849 [2024-12-06 01:15:19.416666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.849 [2024-12-06 01:15:19.416698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.849 qpair failed and we were unable to recover it. 00:37:46.849 [2024-12-06 01:15:19.416830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.849 [2024-12-06 01:15:19.416863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.849 qpair failed and we were unable to recover it. 00:37:46.849 [2024-12-06 01:15:19.416974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.849 [2024-12-06 01:15:19.417008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.849 qpair failed and we were unable to recover it. 00:37:46.849 [2024-12-06 01:15:19.417152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.849 [2024-12-06 01:15:19.417201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.849 qpair failed and we were unable to recover it. 00:37:46.849 [2024-12-06 01:15:19.417334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.849 [2024-12-06 01:15:19.417385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.849 qpair failed and we were unable to recover it. 00:37:46.849 [2024-12-06 01:15:19.417550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.849 [2024-12-06 01:15:19.417586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.849 qpair failed and we were unable to recover it. 00:37:46.849 [2024-12-06 01:15:19.417800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.849 [2024-12-06 01:15:19.417844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.849 qpair failed and we were unable to recover it. 00:37:46.849 [2024-12-06 01:15:19.417971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.849 [2024-12-06 01:15:19.418029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.849 qpair failed and we were unable to recover it. 00:37:46.849 [2024-12-06 01:15:19.418166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.849 [2024-12-06 01:15:19.418203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.849 qpair failed and we were unable to recover it. 
00:37:46.849 [2024-12-06 01:15:19.418317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.849 [2024-12-06 01:15:19.418352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.849 qpair failed and we were unable to recover it. 00:37:46.849 [2024-12-06 01:15:19.418460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.849 [2024-12-06 01:15:19.418493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.849 qpair failed and we were unable to recover it. 00:37:46.849 [2024-12-06 01:15:19.418601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.849 [2024-12-06 01:15:19.418635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.849 qpair failed and we were unable to recover it. 00:37:46.849 [2024-12-06 01:15:19.418794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.849 [2024-12-06 01:15:19.418837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.849 qpair failed and we were unable to recover it. 00:37:46.849 [2024-12-06 01:15:19.418959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.849 [2024-12-06 01:15:19.419008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.849 qpair failed and we were unable to recover it. 00:37:46.849 [2024-12-06 01:15:19.419140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.849 [2024-12-06 01:15:19.419177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.849 qpair failed and we were unable to recover it. 00:37:46.849 [2024-12-06 01:15:19.419394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.849 [2024-12-06 01:15:19.419430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.849 qpair failed and we were unable to recover it. 00:37:46.849 [2024-12-06 01:15:19.419543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.849 [2024-12-06 01:15:19.419579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.850 qpair failed and we were unable to recover it. 00:37:46.850 [2024-12-06 01:15:19.419713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.850 [2024-12-06 01:15:19.419748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.850 qpair failed and we were unable to recover it. 00:37:46.850 [2024-12-06 01:15:19.419852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.850 [2024-12-06 01:15:19.419887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.850 qpair failed and we were unable to recover it. 
00:37:46.850 [2024-12-06 01:15:19.420028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.850 [2024-12-06 01:15:19.420075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.850 qpair failed and we were unable to recover it. 00:37:46.850 [2024-12-06 01:15:19.420213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.850 [2024-12-06 01:15:19.420247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.850 qpair failed and we were unable to recover it. 00:37:46.850 [2024-12-06 01:15:19.420386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.850 [2024-12-06 01:15:19.420419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.850 qpair failed and we were unable to recover it. 00:37:46.850 [2024-12-06 01:15:19.420521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.850 [2024-12-06 01:15:19.420559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.850 qpair failed and we were unable to recover it. 00:37:46.850 [2024-12-06 01:15:19.420667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.850 [2024-12-06 01:15:19.420701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.850 qpair failed and we were unable to recover it. 00:37:46.850 [2024-12-06 01:15:19.420803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.850 [2024-12-06 01:15:19.420847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.850 qpair failed and we were unable to recover it. 00:37:46.850 [2024-12-06 01:15:19.420985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.850 [2024-12-06 01:15:19.421027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.850 qpair failed and we were unable to recover it. 00:37:46.850 [2024-12-06 01:15:19.421131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.850 [2024-12-06 01:15:19.421164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.850 qpair failed and we were unable to recover it. 00:37:46.850 [2024-12-06 01:15:19.421273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.850 [2024-12-06 01:15:19.421307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.850 qpair failed and we were unable to recover it. 00:37:46.850 [2024-12-06 01:15:19.421438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.850 [2024-12-06 01:15:19.421471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.850 qpair failed and we were unable to recover it. 
00:37:46.850 [2024-12-06 01:15:19.421598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.850 [2024-12-06 01:15:19.421632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.850 qpair failed and we were unable to recover it. 00:37:46.850 [2024-12-06 01:15:19.421749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.850 [2024-12-06 01:15:19.421786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.850 qpair failed and we were unable to recover it. 00:37:46.850 [2024-12-06 01:15:19.421918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.850 [2024-12-06 01:15:19.421966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.850 qpair failed and we were unable to recover it. 00:37:46.850 [2024-12-06 01:15:19.422085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.850 [2024-12-06 01:15:19.422121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.850 qpair failed and we were unable to recover it. 00:37:46.850 [2024-12-06 01:15:19.422228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.850 [2024-12-06 01:15:19.422263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.850 qpair failed and we were unable to recover it. 00:37:46.850 [2024-12-06 01:15:19.422381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.850 [2024-12-06 01:15:19.422414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.850 qpair failed and we were unable to recover it. 00:37:46.850 [2024-12-06 01:15:19.422522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.850 [2024-12-06 01:15:19.422556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.850 qpair failed and we were unable to recover it. 00:37:46.850 [2024-12-06 01:15:19.422681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.850 [2024-12-06 01:15:19.422716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.850 qpair failed and we were unable to recover it. 00:37:46.850 [2024-12-06 01:15:19.422866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.850 [2024-12-06 01:15:19.422916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.850 qpair failed and we were unable to recover it. 00:37:46.850 [2024-12-06 01:15:19.423047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.850 [2024-12-06 01:15:19.423086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.850 qpair failed and we were unable to recover it. 
00:37:46.850 [2024-12-06 01:15:19.423225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.850 [2024-12-06 01:15:19.423261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.850 qpair failed and we were unable to recover it. 00:37:46.850 [2024-12-06 01:15:19.423401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.850 [2024-12-06 01:15:19.423435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.850 qpair failed and we were unable to recover it. 00:37:46.850 [2024-12-06 01:15:19.423650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.850 [2024-12-06 01:15:19.423685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.850 qpair failed and we were unable to recover it. 00:37:46.850 [2024-12-06 01:15:19.423795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.850 [2024-12-06 01:15:19.423836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.850 qpair failed and we were unable to recover it. 00:37:46.850 [2024-12-06 01:15:19.423945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.850 [2024-12-06 01:15:19.423979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.850 qpair failed and we were unable to recover it. 00:37:46.850 [2024-12-06 01:15:19.424095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.850 [2024-12-06 01:15:19.424129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.850 qpair failed and we were unable to recover it. 00:37:46.850 [2024-12-06 01:15:19.424230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.850 [2024-12-06 01:15:19.424264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.850 qpair failed and we were unable to recover it. 00:37:46.850 [2024-12-06 01:15:19.424373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.850 [2024-12-06 01:15:19.424407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.850 qpair failed and we were unable to recover it. 00:37:46.850 [2024-12-06 01:15:19.424536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.850 [2024-12-06 01:15:19.424571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.850 qpair failed and we were unable to recover it. 00:37:46.850 [2024-12-06 01:15:19.424721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.850 [2024-12-06 01:15:19.424770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.850 qpair failed and we were unable to recover it. 
00:37:46.850 [2024-12-06 01:15:19.424920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.850 [2024-12-06 01:15:19.424956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.850 qpair failed and we were unable to recover it. 00:37:46.850 [2024-12-06 01:15:19.425068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.850 [2024-12-06 01:15:19.425102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.850 qpair failed and we were unable to recover it. 00:37:46.850 [2024-12-06 01:15:19.425239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.850 [2024-12-06 01:15:19.425272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.850 qpair failed and we were unable to recover it. 00:37:46.850 [2024-12-06 01:15:19.425411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.850 [2024-12-06 01:15:19.425444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.850 qpair failed and we were unable to recover it. 00:37:46.850 [2024-12-06 01:15:19.425548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.850 [2024-12-06 01:15:19.425582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.850 qpair failed and we were unable to recover it. 00:37:46.850 [2024-12-06 01:15:19.425727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.850 [2024-12-06 01:15:19.425761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.850 qpair failed and we were unable to recover it. 00:37:46.850 [2024-12-06 01:15:19.425896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.850 [2024-12-06 01:15:19.425944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.850 qpair failed and we were unable to recover it. 00:37:46.850 [2024-12-06 01:15:19.426078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.850 [2024-12-06 01:15:19.426114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.850 qpair failed and we were unable to recover it. 00:37:46.851 [2024-12-06 01:15:19.426225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.851 [2024-12-06 01:15:19.426260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.851 qpair failed and we were unable to recover it. 00:37:46.851 [2024-12-06 01:15:19.426394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.851 [2024-12-06 01:15:19.426427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.851 qpair failed and we were unable to recover it. 
00:37:46.851 [2024-12-06 01:15:19.426591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.851 [2024-12-06 01:15:19.426624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.851 qpair failed and we were unable to recover it. 00:37:46.851 [2024-12-06 01:15:19.426739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.851 [2024-12-06 01:15:19.426789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.851 qpair failed and we were unable to recover it. 00:37:46.851 [2024-12-06 01:15:19.426912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.851 [2024-12-06 01:15:19.426948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.851 qpair failed and we were unable to recover it. 00:37:46.851 [2024-12-06 01:15:19.427053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.851 [2024-12-06 01:15:19.427091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.851 qpair failed and we were unable to recover it. 00:37:46.851 [2024-12-06 01:15:19.427198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.851 [2024-12-06 01:15:19.427232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.851 qpair failed and we were unable to recover it. 00:37:46.851 [2024-12-06 01:15:19.427342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.851 [2024-12-06 01:15:19.427377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.851 qpair failed and we were unable to recover it. 00:37:46.851 [2024-12-06 01:15:19.427483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.851 [2024-12-06 01:15:19.427516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.851 qpair failed and we were unable to recover it. 00:37:46.851 [2024-12-06 01:15:19.427644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.851 [2024-12-06 01:15:19.427678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.851 qpair failed and we were unable to recover it. 00:37:46.851 [2024-12-06 01:15:19.427790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.851 [2024-12-06 01:15:19.427837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.851 qpair failed and we were unable to recover it. 00:37:46.851 [2024-12-06 01:15:19.427987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.851 [2024-12-06 01:15:19.428022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.851 qpair failed and we were unable to recover it. 
00:37:46.851 [2024-12-06 01:15:19.428151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.851 [2024-12-06 01:15:19.428199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.851 qpair failed and we were unable to recover it. 00:37:46.851 [2024-12-06 01:15:19.428316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.851 [2024-12-06 01:15:19.428352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.851 qpair failed and we were unable to recover it. 00:37:46.851 [2024-12-06 01:15:19.428462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.851 [2024-12-06 01:15:19.428496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.851 qpair failed and we were unable to recover it. 00:37:46.851 [2024-12-06 01:15:19.428603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.851 [2024-12-06 01:15:19.428637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.851 qpair failed and we were unable to recover it. 00:37:46.851 [2024-12-06 01:15:19.428749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.851 [2024-12-06 01:15:19.428784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.851 qpair failed and we were unable to recover it. 00:37:46.851 [2024-12-06 01:15:19.428928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.851 [2024-12-06 01:15:19.428962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.851 qpair failed and we were unable to recover it. 00:37:46.851 [2024-12-06 01:15:19.429097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.851 [2024-12-06 01:15:19.429133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.851 qpair failed and we were unable to recover it. 00:37:46.851 [2024-12-06 01:15:19.429282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.851 [2024-12-06 01:15:19.429318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.851 qpair failed and we were unable to recover it. 00:37:46.851 [2024-12-06 01:15:19.429451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.851 [2024-12-06 01:15:19.429486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.851 qpair failed and we were unable to recover it. 00:37:46.851 [2024-12-06 01:15:19.429626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.851 [2024-12-06 01:15:19.429661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.851 qpair failed and we were unable to recover it. 
00:37:46.851 [2024-12-06 01:15:19.429793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.851 [2024-12-06 01:15:19.429838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.851 qpair failed and we were unable to recover it. 00:37:46.851 [2024-12-06 01:15:19.429971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.851 [2024-12-06 01:15:19.430006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.851 qpair failed and we were unable to recover it. 00:37:46.851 [2024-12-06 01:15:19.430114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.851 [2024-12-06 01:15:19.430149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.851 qpair failed and we were unable to recover it. 00:37:46.851 [2024-12-06 01:15:19.430258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.851 [2024-12-06 01:15:19.430292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.851 qpair failed and we were unable to recover it. 00:37:46.851 [2024-12-06 01:15:19.430402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.851 [2024-12-06 01:15:19.430441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.851 qpair failed and we were unable to recover it. 00:37:46.851 [2024-12-06 01:15:19.430542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.851 [2024-12-06 01:15:19.430577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.851 qpair failed and we were unable to recover it. 00:37:46.851 [2024-12-06 01:15:19.430730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.851 [2024-12-06 01:15:19.430779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.851 qpair failed and we were unable to recover it. 00:37:46.851 [2024-12-06 01:15:19.430920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.851 [2024-12-06 01:15:19.430967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.851 qpair failed and we were unable to recover it. 00:37:46.851 [2024-12-06 01:15:19.431089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.851 [2024-12-06 01:15:19.431125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.851 qpair failed and we were unable to recover it. 00:37:46.851 [2024-12-06 01:15:19.431264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.851 [2024-12-06 01:15:19.431297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.851 qpair failed and we were unable to recover it. 
00:37:46.851 [2024-12-06 01:15:19.431432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.851 [2024-12-06 01:15:19.431465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.851 qpair failed and we were unable to recover it. 00:37:46.851 [2024-12-06 01:15:19.431565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.851 [2024-12-06 01:15:19.431600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.851 qpair failed and we were unable to recover it. 00:37:46.851 [2024-12-06 01:15:19.431709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.851 [2024-12-06 01:15:19.431744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.851 qpair failed and we were unable to recover it. 00:37:46.851 [2024-12-06 01:15:19.431885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.851 [2024-12-06 01:15:19.431934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.851 qpair failed and we were unable to recover it. 00:37:46.851 [2024-12-06 01:15:19.432049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.851 [2024-12-06 01:15:19.432085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.851 qpair failed and we were unable to recover it. 00:37:46.851 [2024-12-06 01:15:19.432193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.851 [2024-12-06 01:15:19.432227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.851 qpair failed and we were unable to recover it. 00:37:46.851 [2024-12-06 01:15:19.432338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.851 [2024-12-06 01:15:19.432372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.851 qpair failed and we were unable to recover it. 00:37:46.851 [2024-12-06 01:15:19.432482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.851 [2024-12-06 01:15:19.432517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.851 qpair failed and we were unable to recover it. 00:37:46.851 [2024-12-06 01:15:19.432619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.851 [2024-12-06 01:15:19.432654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.851 qpair failed and we were unable to recover it. 00:37:46.852 [2024-12-06 01:15:19.432792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.852 [2024-12-06 01:15:19.432830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.852 qpair failed and we were unable to recover it. 
00:37:46.852 [2024-12-06 01:15:19.432936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.852 [2024-12-06 01:15:19.432969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.852 qpair failed and we were unable to recover it. 00:37:46.852 [2024-12-06 01:15:19.433080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.852 [2024-12-06 01:15:19.433112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.852 qpair failed and we were unable to recover it. 00:37:46.852 [2024-12-06 01:15:19.433240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.852 [2024-12-06 01:15:19.433288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.852 qpair failed and we were unable to recover it. 00:37:46.852 [2024-12-06 01:15:19.433410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.852 [2024-12-06 01:15:19.433453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.852 qpair failed and we were unable to recover it. 00:37:46.852 [2024-12-06 01:15:19.433565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.852 [2024-12-06 01:15:19.433601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.852 qpair failed and we were unable to recover it. 00:37:46.852 [2024-12-06 01:15:19.433710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.852 [2024-12-06 01:15:19.433746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.852 qpair failed and we were unable to recover it. 00:37:46.852 [2024-12-06 01:15:19.433860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.852 [2024-12-06 01:15:19.433894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.852 qpair failed and we were unable to recover it. 00:37:46.852 [2024-12-06 01:15:19.434000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.852 [2024-12-06 01:15:19.434033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.852 qpair failed and we were unable to recover it. 00:37:46.852 [2024-12-06 01:15:19.434166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.852 [2024-12-06 01:15:19.434199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.852 qpair failed and we were unable to recover it. 00:37:46.852 [2024-12-06 01:15:19.434304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.852 [2024-12-06 01:15:19.434337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.852 qpair failed and we were unable to recover it. 
00:37:46.852 [2024-12-06 01:15:19.434443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.852 [2024-12-06 01:15:19.434477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.852 qpair failed and we were unable to recover it. 00:37:46.852 [2024-12-06 01:15:19.434582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.852 [2024-12-06 01:15:19.434617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.852 qpair failed and we were unable to recover it. 00:37:46.852 [2024-12-06 01:15:19.434757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.852 [2024-12-06 01:15:19.434790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.852 qpair failed and we were unable to recover it. 00:37:46.852 [2024-12-06 01:15:19.434917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.852 [2024-12-06 01:15:19.434954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.852 qpair failed and we were unable to recover it. 00:37:46.852 [2024-12-06 01:15:19.435063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.852 [2024-12-06 01:15:19.435098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.852 qpair failed and we were unable to recover it. 00:37:46.852 [2024-12-06 01:15:19.435232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.852 [2024-12-06 01:15:19.435266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.852 qpair failed and we were unable to recover it. 00:37:46.852 [2024-12-06 01:15:19.435374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.852 [2024-12-06 01:15:19.435409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.852 qpair failed and we were unable to recover it. 00:37:46.852 [2024-12-06 01:15:19.435528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.852 [2024-12-06 01:15:19.435562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.852 qpair failed and we were unable to recover it. 00:37:46.852 [2024-12-06 01:15:19.435667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.852 [2024-12-06 01:15:19.435702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.852 qpair failed and we were unable to recover it. 00:37:46.852 [2024-12-06 01:15:19.435807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.852 [2024-12-06 01:15:19.435848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.852 qpair failed and we were unable to recover it. 
00:37:46.852 [2024-12-06 01:15:19.435981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.852 [2024-12-06 01:15:19.436017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.852 qpair failed and we were unable to recover it. 00:37:46.852 [2024-12-06 01:15:19.436128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.852 [2024-12-06 01:15:19.436161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.852 qpair failed and we were unable to recover it. 00:37:46.852 [2024-12-06 01:15:19.436273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.852 [2024-12-06 01:15:19.436307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.852 qpair failed and we were unable to recover it. 00:37:46.852 [2024-12-06 01:15:19.436412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.852 [2024-12-06 01:15:19.436447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.852 qpair failed and we were unable to recover it. 00:37:46.852 [2024-12-06 01:15:19.436582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.852 [2024-12-06 01:15:19.436616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.852 qpair failed and we were unable to recover it. 00:37:46.852 [2024-12-06 01:15:19.436740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.852 [2024-12-06 01:15:19.436788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.852 qpair failed and we were unable to recover it. 00:37:46.852 [2024-12-06 01:15:19.437020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.852 [2024-12-06 01:15:19.437056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.852 qpair failed and we were unable to recover it. 00:37:46.852 [2024-12-06 01:15:19.437160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.852 [2024-12-06 01:15:19.437195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.852 qpair failed and we were unable to recover it. 00:37:46.852 [2024-12-06 01:15:19.437330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.852 [2024-12-06 01:15:19.437365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.852 qpair failed and we were unable to recover it. 00:37:46.852 [2024-12-06 01:15:19.437525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.852 [2024-12-06 01:15:19.437559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.852 qpair failed and we were unable to recover it. 
00:37:46.852 [2024-12-06 01:15:19.437669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.852 [2024-12-06 01:15:19.437704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.852 qpair failed and we were unable to recover it. 00:37:46.852 [2024-12-06 01:15:19.437844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.852 [2024-12-06 01:15:19.437880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.852 qpair failed and we were unable to recover it. 00:37:46.852 [2024-12-06 01:15:19.438012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.852 [2024-12-06 01:15:19.438047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.852 qpair failed and we were unable to recover it. 00:37:46.852 [2024-12-06 01:15:19.438174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.852 [2024-12-06 01:15:19.438208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.852 qpair failed and we were unable to recover it. 00:37:46.852 [2024-12-06 01:15:19.438311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.852 [2024-12-06 01:15:19.438347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.853 qpair failed and we were unable to recover it. 00:37:46.853 [2024-12-06 01:15:19.438454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.853 [2024-12-06 01:15:19.438487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.853 qpair failed and we were unable to recover it. 00:37:46.853 [2024-12-06 01:15:19.438587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.853 [2024-12-06 01:15:19.438622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.853 qpair failed and we were unable to recover it. 00:37:46.853 [2024-12-06 01:15:19.438723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.853 [2024-12-06 01:15:19.438756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.853 qpair failed and we were unable to recover it. 00:37:46.853 [2024-12-06 01:15:19.438919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.853 [2024-12-06 01:15:19.438968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.853 qpair failed and we were unable to recover it. 00:37:46.853 [2024-12-06 01:15:19.439116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.853 [2024-12-06 01:15:19.439151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.853 qpair failed and we were unable to recover it. 
00:37:46.853 [2024-12-06 01:15:19.439255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.853 [2024-12-06 01:15:19.439289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.853 qpair failed and we were unable to recover it. 00:37:46.853 [2024-12-06 01:15:19.439392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.853 [2024-12-06 01:15:19.439426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.853 qpair failed and we were unable to recover it. 00:37:46.853 [2024-12-06 01:15:19.439565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.853 [2024-12-06 01:15:19.439600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.853 qpair failed and we were unable to recover it. 00:37:46.853 [2024-12-06 01:15:19.439753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.853 [2024-12-06 01:15:19.439808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.853 qpair failed and we were unable to recover it. 00:37:46.853 [2024-12-06 01:15:19.439934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.853 [2024-12-06 01:15:19.439969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.853 qpair failed and we were unable to recover it. 00:37:46.853 [2024-12-06 01:15:19.440076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.853 [2024-12-06 01:15:19.440109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.853 qpair failed and we were unable to recover it. 00:37:46.853 [2024-12-06 01:15:19.440224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.853 [2024-12-06 01:15:19.440265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.853 qpair failed and we were unable to recover it. 00:37:46.853 [2024-12-06 01:15:19.440370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.853 [2024-12-06 01:15:19.440404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.853 qpair failed and we were unable to recover it. 00:37:46.853 [2024-12-06 01:15:19.440515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.853 [2024-12-06 01:15:19.440548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.853 qpair failed and we were unable to recover it. 00:37:46.853 [2024-12-06 01:15:19.440663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.853 [2024-12-06 01:15:19.440697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.853 qpair failed and we were unable to recover it. 
00:37:46.853 [2024-12-06 01:15:19.440833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.853 [2024-12-06 01:15:19.440867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.853 qpair failed and we were unable to recover it. 00:37:46.853 [2024-12-06 01:15:19.440978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.853 [2024-12-06 01:15:19.441012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.853 qpair failed and we were unable to recover it. 00:37:46.853 [2024-12-06 01:15:19.441133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.853 [2024-12-06 01:15:19.441181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.853 qpair failed and we were unable to recover it. 00:37:46.853 [2024-12-06 01:15:19.441321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.853 [2024-12-06 01:15:19.441356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.853 qpair failed and we were unable to recover it. 00:37:46.853 [2024-12-06 01:15:19.441491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.853 [2024-12-06 01:15:19.441526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.853 qpair failed and we were unable to recover it. 00:37:46.853 [2024-12-06 01:15:19.441635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.853 [2024-12-06 01:15:19.441668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.853 qpair failed and we were unable to recover it. 00:37:46.853 [2024-12-06 01:15:19.441791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.853 [2024-12-06 01:15:19.441849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.853 qpair failed and we were unable to recover it. 00:37:46.853 [2024-12-06 01:15:19.441977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.853 [2024-12-06 01:15:19.442014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.853 qpair failed and we were unable to recover it. 00:37:46.853 [2024-12-06 01:15:19.442124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.853 [2024-12-06 01:15:19.442159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.853 qpair failed and we were unable to recover it. 00:37:46.853 [2024-12-06 01:15:19.442267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.853 [2024-12-06 01:15:19.442302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.853 qpair failed and we were unable to recover it. 
00:37:46.853 [2024-12-06 01:15:19.442407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.853 [2024-12-06 01:15:19.442442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.853 qpair failed and we were unable to recover it. 00:37:46.853 [2024-12-06 01:15:19.442560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.853 [2024-12-06 01:15:19.442596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.853 qpair failed and we were unable to recover it. 00:37:46.853 [2024-12-06 01:15:19.442721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.853 [2024-12-06 01:15:19.442769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.853 qpair failed and we were unable to recover it. 00:37:46.853 [2024-12-06 01:15:19.442892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.853 [2024-12-06 01:15:19.442929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.853 qpair failed and we were unable to recover it. 00:37:46.853 [2024-12-06 01:15:19.443067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.854 [2024-12-06 01:15:19.443101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.854 qpair failed and we were unable to recover it. 00:37:46.854 [2024-12-06 01:15:19.443210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.854 [2024-12-06 01:15:19.443244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.854 qpair failed and we were unable to recover it. 00:37:46.854 [2024-12-06 01:15:19.443378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.854 [2024-12-06 01:15:19.443412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.854 qpair failed and we were unable to recover it. 00:37:46.854 [2024-12-06 01:15:19.443514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.854 [2024-12-06 01:15:19.443548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.854 qpair failed and we were unable to recover it. 00:37:46.854 [2024-12-06 01:15:19.443695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.854 [2024-12-06 01:15:19.443743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.854 qpair failed and we were unable to recover it. 00:37:46.854 [2024-12-06 01:15:19.443870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.854 [2024-12-06 01:15:19.443908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.854 qpair failed and we were unable to recover it. 
00:37:46.854 [2024-12-06 01:15:19.444071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.854 [2024-12-06 01:15:19.444119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.854 qpair failed and we were unable to recover it. 00:37:46.854 [2024-12-06 01:15:19.444252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.854 [2024-12-06 01:15:19.444287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.854 qpair failed and we were unable to recover it. 00:37:46.854 [2024-12-06 01:15:19.444393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.854 [2024-12-06 01:15:19.444426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.854 qpair failed and we were unable to recover it. 00:37:46.854 [2024-12-06 01:15:19.444542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.854 [2024-12-06 01:15:19.444576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.854 qpair failed and we were unable to recover it. 00:37:46.854 [2024-12-06 01:15:19.444707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.854 [2024-12-06 01:15:19.444739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.854 qpair failed and we were unable to recover it. 00:37:46.854 [2024-12-06 01:15:19.444842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.854 [2024-12-06 01:15:19.444876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.854 qpair failed and we were unable to recover it. 00:37:46.854 [2024-12-06 01:15:19.445002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.854 [2024-12-06 01:15:19.445036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.854 qpair failed and we were unable to recover it. 00:37:46.854 [2024-12-06 01:15:19.445148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.854 [2024-12-06 01:15:19.445182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.854 qpair failed and we were unable to recover it. 00:37:46.854 [2024-12-06 01:15:19.445284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.854 [2024-12-06 01:15:19.445318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.854 qpair failed and we were unable to recover it. 00:37:46.854 [2024-12-06 01:15:19.445447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.854 [2024-12-06 01:15:19.445480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.854 qpair failed and we were unable to recover it. 
00:37:46.854 [2024-12-06 01:15:19.445621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.854 [2024-12-06 01:15:19.445655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.854 qpair failed and we were unable to recover it. 00:37:46.854 [2024-12-06 01:15:19.445757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.854 [2024-12-06 01:15:19.445791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.854 qpair failed and we were unable to recover it. 00:37:46.854 [2024-12-06 01:15:19.445915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.854 [2024-12-06 01:15:19.445964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.854 qpair failed and we were unable to recover it. 00:37:46.854 [2024-12-06 01:15:19.446087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.854 [2024-12-06 01:15:19.446131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.854 qpair failed and we were unable to recover it. 00:37:46.854 [2024-12-06 01:15:19.446244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.854 [2024-12-06 01:15:19.446280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.854 qpair failed and we were unable to recover it. 00:37:46.854 [2024-12-06 01:15:19.446401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.854 [2024-12-06 01:15:19.446435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.854 qpair failed and we were unable to recover it. 00:37:46.854 [2024-12-06 01:15:19.446545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.854 [2024-12-06 01:15:19.446587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.854 qpair failed and we were unable to recover it. 00:37:46.854 [2024-12-06 01:15:19.446733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.854 [2024-12-06 01:15:19.446766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.854 qpair failed and we were unable to recover it. 00:37:46.854 [2024-12-06 01:15:19.446880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.854 [2024-12-06 01:15:19.446915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.854 qpair failed and we were unable to recover it. 00:37:46.854 [2024-12-06 01:15:19.447031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.854 [2024-12-06 01:15:19.447070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.854 qpair failed and we were unable to recover it. 
00:37:46.854 [2024-12-06 01:15:19.447214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.854 [2024-12-06 01:15:19.447248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.854 qpair failed and we were unable to recover it. 00:37:46.854 [2024-12-06 01:15:19.447370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.854 [2024-12-06 01:15:19.447403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.854 qpair failed and we were unable to recover it. 00:37:46.854 [2024-12-06 01:15:19.447534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.854 [2024-12-06 01:15:19.447581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.854 qpair failed and we were unable to recover it. 00:37:46.854 [2024-12-06 01:15:19.447718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.854 [2024-12-06 01:15:19.447752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.854 qpair failed and we were unable to recover it. 00:37:46.854 [2024-12-06 01:15:19.447893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.854 [2024-12-06 01:15:19.447942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.854 qpair failed and we were unable to recover it. 00:37:46.854 [2024-12-06 01:15:19.448066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.854 [2024-12-06 01:15:19.448104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.854 qpair failed and we were unable to recover it. 00:37:46.854 [2024-12-06 01:15:19.448213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.854 [2024-12-06 01:15:19.448249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.854 qpair failed and we were unable to recover it. 00:37:46.854 [2024-12-06 01:15:19.448374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.854 [2024-12-06 01:15:19.448410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.854 qpair failed and we were unable to recover it. 00:37:46.854 [2024-12-06 01:15:19.448547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.854 [2024-12-06 01:15:19.448583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.854 qpair failed and we were unable to recover it. 00:37:46.854 [2024-12-06 01:15:19.448697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.854 [2024-12-06 01:15:19.448730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.854 qpair failed and we were unable to recover it. 
00:37:46.854 [2024-12-06 01:15:19.448850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.854 [2024-12-06 01:15:19.448886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.854 qpair failed and we were unable to recover it. 00:37:46.854 [2024-12-06 01:15:19.448994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.854 [2024-12-06 01:15:19.449027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.854 qpair failed and we were unable to recover it. 00:37:46.855 [2024-12-06 01:15:19.449167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.855 [2024-12-06 01:15:19.449200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.855 qpair failed and we were unable to recover it. 00:37:46.855 [2024-12-06 01:15:19.449312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.855 [2024-12-06 01:15:19.449346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.855 qpair failed and we were unable to recover it. 00:37:46.855 [2024-12-06 01:15:19.449457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.855 [2024-12-06 01:15:19.449505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.855 qpair failed and we were unable to recover it. 00:37:46.855 [2024-12-06 01:15:19.449610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.855 [2024-12-06 01:15:19.449645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.855 qpair failed and we were unable to recover it. 00:37:46.855 [2024-12-06 01:15:19.449758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.855 [2024-12-06 01:15:19.449793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.855 qpair failed and we were unable to recover it. 00:37:46.855 [2024-12-06 01:15:19.449941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.855 [2024-12-06 01:15:19.449974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.855 qpair failed and we were unable to recover it. 00:37:46.855 [2024-12-06 01:15:19.450087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.855 [2024-12-06 01:15:19.450121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.855 qpair failed and we were unable to recover it. 00:37:46.855 [2024-12-06 01:15:19.450236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.855 [2024-12-06 01:15:19.450270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.855 qpair failed and we were unable to recover it. 
00:37:46.855 [2024-12-06 01:15:19.450438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.855 [2024-12-06 01:15:19.450474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.855 qpair failed and we were unable to recover it. 00:37:46.855 [2024-12-06 01:15:19.450610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.855 [2024-12-06 01:15:19.450643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.855 qpair failed and we were unable to recover it. 00:37:46.855 [2024-12-06 01:15:19.450746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.855 [2024-12-06 01:15:19.450780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.855 qpair failed and we were unable to recover it. 00:37:46.855 [2024-12-06 01:15:19.450889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.855 [2024-12-06 01:15:19.450931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.855 qpair failed and we were unable to recover it. 00:37:46.855 [2024-12-06 01:15:19.451038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.855 [2024-12-06 01:15:19.451072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.855 qpair failed and we were unable to recover it. 00:37:46.855 [2024-12-06 01:15:19.451225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.855 [2024-12-06 01:15:19.451260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.855 qpair failed and we were unable to recover it. 00:37:46.855 [2024-12-06 01:15:19.451393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.855 [2024-12-06 01:15:19.451426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.855 qpair failed and we were unable to recover it. 00:37:46.855 [2024-12-06 01:15:19.451543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.855 [2024-12-06 01:15:19.451577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.855 qpair failed and we were unable to recover it. 00:37:46.855 [2024-12-06 01:15:19.451709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.855 [2024-12-06 01:15:19.451742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.855 qpair failed and we were unable to recover it. 00:37:46.855 [2024-12-06 01:15:19.451881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.855 [2024-12-06 01:15:19.451925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.855 qpair failed and we were unable to recover it. 
00:37:46.855 [2024-12-06 01:15:19.452041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.855 [2024-12-06 01:15:19.452084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.855 qpair failed and we were unable to recover it. 00:37:46.855 [2024-12-06 01:15:19.452234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.855 [2024-12-06 01:15:19.452268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.855 qpair failed and we were unable to recover it. 00:37:46.855 [2024-12-06 01:15:19.452381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.855 [2024-12-06 01:15:19.452414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.855 qpair failed and we were unable to recover it. 00:37:46.855 [2024-12-06 01:15:19.452558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.855 [2024-12-06 01:15:19.452598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.855 qpair failed and we were unable to recover it. 00:37:46.855 [2024-12-06 01:15:19.452736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.855 [2024-12-06 01:15:19.452771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.855 qpair failed and we were unable to recover it. 00:37:46.855 [2024-12-06 01:15:19.452898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.855 [2024-12-06 01:15:19.452931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.855 qpair failed and we were unable to recover it. 00:37:46.855 [2024-12-06 01:15:19.453065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.855 [2024-12-06 01:15:19.453106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.855 qpair failed and we were unable to recover it. 00:37:46.855 [2024-12-06 01:15:19.453240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.855 [2024-12-06 01:15:19.453274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.855 qpair failed and we were unable to recover it. 00:37:46.855 [2024-12-06 01:15:19.453377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.855 [2024-12-06 01:15:19.453411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.855 qpair failed and we were unable to recover it. 00:37:46.855 [2024-12-06 01:15:19.453542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.855 [2024-12-06 01:15:19.453575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.855 qpair failed and we were unable to recover it. 
00:37:46.855 [2024-12-06 01:15:19.453707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.855 [2024-12-06 01:15:19.453740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.855 qpair failed and we were unable to recover it. 00:37:46.855 [2024-12-06 01:15:19.453892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.855 [2024-12-06 01:15:19.453941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.855 qpair failed and we were unable to recover it. 00:37:46.855 [2024-12-06 01:15:19.454104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.855 [2024-12-06 01:15:19.454140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.855 qpair failed and we were unable to recover it. 00:37:46.855 [2024-12-06 01:15:19.454290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.855 [2024-12-06 01:15:19.454323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.855 qpair failed and we were unable to recover it. 00:37:46.855 [2024-12-06 01:15:19.454441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.855 [2024-12-06 01:15:19.454476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.855 qpair failed and we were unable to recover it. 00:37:46.855 [2024-12-06 01:15:19.454613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.855 [2024-12-06 01:15:19.454647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.855 qpair failed and we were unable to recover it. 00:37:46.855 [2024-12-06 01:15:19.454747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.855 [2024-12-06 01:15:19.454780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.855 qpair failed and we were unable to recover it. 00:37:46.855 [2024-12-06 01:15:19.454917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.855 [2024-12-06 01:15:19.454953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.855 qpair failed and we were unable to recover it. 00:37:46.855 [2024-12-06 01:15:19.455091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.855 [2024-12-06 01:15:19.455131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.855 qpair failed and we were unable to recover it. 00:37:46.855 [2024-12-06 01:15:19.455234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.855 [2024-12-06 01:15:19.455276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.856 qpair failed and we were unable to recover it. 
00:37:46.856 [2024-12-06 01:15:19.455414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.856 [2024-12-06 01:15:19.455447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.856 qpair failed and we were unable to recover it. 00:37:46.856 [2024-12-06 01:15:19.455551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.856 [2024-12-06 01:15:19.455584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.856 qpair failed and we were unable to recover it. 00:37:46.856 [2024-12-06 01:15:19.455716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.856 [2024-12-06 01:15:19.455749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.856 qpair failed and we were unable to recover it. 00:37:46.856 [2024-12-06 01:15:19.455859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.856 [2024-12-06 01:15:19.455895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.856 qpair failed and we were unable to recover it. 00:37:46.856 [2024-12-06 01:15:19.456029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.856 [2024-12-06 01:15:19.456063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.856 qpair failed and we were unable to recover it. 00:37:46.856 [2024-12-06 01:15:19.456207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.856 [2024-12-06 01:15:19.456242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.856 qpair failed and we were unable to recover it. 00:37:46.856 [2024-12-06 01:15:19.456386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.856 [2024-12-06 01:15:19.456420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.856 qpair failed and we were unable to recover it. 00:37:46.856 [2024-12-06 01:15:19.456582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.856 [2024-12-06 01:15:19.456616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.856 qpair failed and we were unable to recover it. 00:37:46.856 [2024-12-06 01:15:19.456734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.856 [2024-12-06 01:15:19.456768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.856 qpair failed and we were unable to recover it. 00:37:46.856 [2024-12-06 01:15:19.456895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.856 [2024-12-06 01:15:19.456938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.856 qpair failed and we were unable to recover it. 
00:37:46.856 [2024-12-06 01:15:19.457106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.856 [2024-12-06 01:15:19.457152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.856 qpair failed and we were unable to recover it. 00:37:46.856 [2024-12-06 01:15:19.457328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.856 [2024-12-06 01:15:19.457369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.856 qpair failed and we were unable to recover it. 00:37:46.856 [2024-12-06 01:15:19.457502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.856 [2024-12-06 01:15:19.457538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.856 qpair failed and we were unable to recover it. 00:37:46.856 [2024-12-06 01:15:19.457699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.856 [2024-12-06 01:15:19.457732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.856 qpair failed and we were unable to recover it. 00:37:46.856 [2024-12-06 01:15:19.457861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.856 [2024-12-06 01:15:19.457910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.856 qpair failed and we were unable to recover it. 00:37:46.856 [2024-12-06 01:15:19.458056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.856 [2024-12-06 01:15:19.458093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.856 qpair failed and we were unable to recover it. 00:37:46.856 [2024-12-06 01:15:19.458247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.856 [2024-12-06 01:15:19.458283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.856 qpair failed and we were unable to recover it. 00:37:46.856 [2024-12-06 01:15:19.458423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.856 [2024-12-06 01:15:19.458456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.856 qpair failed and we were unable to recover it. 00:37:46.856 [2024-12-06 01:15:19.458564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.856 [2024-12-06 01:15:19.458599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.856 qpair failed and we were unable to recover it. 00:37:46.856 [2024-12-06 01:15:19.458705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.856 [2024-12-06 01:15:19.458739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.856 qpair failed and we were unable to recover it. 
00:37:46.856 [2024-12-06 01:15:19.458902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.856 [2024-12-06 01:15:19.458951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.856 qpair failed and we were unable to recover it. 00:37:46.856 [2024-12-06 01:15:19.459094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.856 [2024-12-06 01:15:19.459132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.856 qpair failed and we were unable to recover it. 00:37:46.856 [2024-12-06 01:15:19.459261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.856 [2024-12-06 01:15:19.459296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.856 qpair failed and we were unable to recover it. 00:37:46.856 [2024-12-06 01:15:19.459467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.856 [2024-12-06 01:15:19.459510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.856 qpair failed and we were unable to recover it. 00:37:46.856 [2024-12-06 01:15:19.459653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.856 [2024-12-06 01:15:19.459687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.856 qpair failed and we were unable to recover it. 00:37:46.856 [2024-12-06 01:15:19.459808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.856 [2024-12-06 01:15:19.459851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.856 qpair failed and we were unable to recover it. 00:37:46.856 [2024-12-06 01:15:19.459989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.856 [2024-12-06 01:15:19.460023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.856 qpair failed and we were unable to recover it. 00:37:46.856 [2024-12-06 01:15:19.460136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.856 [2024-12-06 01:15:19.460169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.856 qpair failed and we were unable to recover it. 00:37:46.856 [2024-12-06 01:15:19.460323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.856 [2024-12-06 01:15:19.460356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.856 qpair failed and we were unable to recover it. 00:37:46.856 [2024-12-06 01:15:19.460467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.856 [2024-12-06 01:15:19.460501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.856 qpair failed and we were unable to recover it. 
00:37:46.856 [2024-12-06 01:15:19.460630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.856 [2024-12-06 01:15:19.460664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.856 qpair failed and we were unable to recover it. 00:37:46.856 [2024-12-06 01:15:19.460787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.856 [2024-12-06 01:15:19.460855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.856 qpair failed and we were unable to recover it. 00:37:46.856 [2024-12-06 01:15:19.460974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.856 [2024-12-06 01:15:19.461009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.856 qpair failed and we were unable to recover it. 00:37:46.856 [2024-12-06 01:15:19.461121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.856 [2024-12-06 01:15:19.461153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.856 qpair failed and we were unable to recover it. 00:37:46.856 [2024-12-06 01:15:19.461255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.856 [2024-12-06 01:15:19.461288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.856 qpair failed and we were unable to recover it. 00:37:46.856 [2024-12-06 01:15:19.461392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.856 [2024-12-06 01:15:19.461424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.856 qpair failed and we were unable to recover it. 00:37:46.856 [2024-12-06 01:15:19.461535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.856 [2024-12-06 01:15:19.461568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.856 qpair failed and we were unable to recover it. 00:37:46.856 [2024-12-06 01:15:19.461708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.857 [2024-12-06 01:15:19.461742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.857 qpair failed and we were unable to recover it. 00:37:46.857 [2024-12-06 01:15:19.461865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.857 [2024-12-06 01:15:19.461905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.857 qpair failed and we were unable to recover it. 00:37:46.857 [2024-12-06 01:15:19.462020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.857 [2024-12-06 01:15:19.462057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.857 qpair failed and we were unable to recover it. 
00:37:46.857 [2024-12-06 01:15:19.462196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.857 [2024-12-06 01:15:19.462231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.857 qpair failed and we were unable to recover it. 00:37:46.857 [2024-12-06 01:15:19.462389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.857 [2024-12-06 01:15:19.462424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.857 qpair failed and we were unable to recover it. 00:37:46.857 [2024-12-06 01:15:19.462529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.857 [2024-12-06 01:15:19.462563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.857 qpair failed and we were unable to recover it. 00:37:46.857 [2024-12-06 01:15:19.462696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.857 [2024-12-06 01:15:19.462732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.857 qpair failed and we were unable to recover it. 00:37:46.857 [2024-12-06 01:15:19.462881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.857 [2024-12-06 01:15:19.462915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.857 qpair failed and we were unable to recover it. 00:37:46.857 [2024-12-06 01:15:19.463017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.857 [2024-12-06 01:15:19.463050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.857 qpair failed and we were unable to recover it. 00:37:46.857 [2024-12-06 01:15:19.463165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.857 [2024-12-06 01:15:19.463200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.857 qpair failed and we were unable to recover it. 00:37:46.857 [2024-12-06 01:15:19.463332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.857 [2024-12-06 01:15:19.463365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.857 qpair failed and we were unable to recover it. 00:37:46.857 [2024-12-06 01:15:19.463465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.857 [2024-12-06 01:15:19.463499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.857 qpair failed and we were unable to recover it. 00:37:46.857 [2024-12-06 01:15:19.463629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.857 [2024-12-06 01:15:19.463662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.857 qpair failed and we were unable to recover it. 
[... the same three-line record (posix.c:1054:posix_sock_create: connect() failed, errno = 111 / nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: sock connection error / "qpair failed and we were unable to recover it.") repeats continuously for tqpair values 0x61500021ff00, 0x6150001ffe80, 0x6150001f2f00 and 0x615000210000, all with addr=10.0.0.2, port=4420, from 01:15:19.463770 through 01:15:19.495623 (console timestamps 00:37:46.857 to 00:37:47.130) ...]
00:37:47.130 [2024-12-06 01:15:19.495589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:47.130 [2024-12-06 01:15:19.495623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:47.130 qpair failed and we were unable to recover it.
00:37:47.130 [2024-12-06 01:15:19.495730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.130 [2024-12-06 01:15:19.495769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.130 qpair failed and we were unable to recover it. 00:37:47.130 [2024-12-06 01:15:19.495942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.130 [2024-12-06 01:15:19.495979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.130 qpair failed and we were unable to recover it. 00:37:47.130 [2024-12-06 01:15:19.496090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.130 [2024-12-06 01:15:19.496126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:47.130 qpair failed and we were unable to recover it. 00:37:47.130 [2024-12-06 01:15:19.496233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.130 [2024-12-06 01:15:19.496268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.130 qpair failed and we were unable to recover it. 00:37:47.130 [2024-12-06 01:15:19.496376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.130 [2024-12-06 01:15:19.496408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.130 qpair failed and we were unable to recover it. 00:37:47.130 [2024-12-06 01:15:19.496519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.130 [2024-12-06 01:15:19.496551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.130 qpair failed and we were unable to recover it. 00:37:47.130 [2024-12-06 01:15:19.496666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.130 [2024-12-06 01:15:19.496700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.130 qpair failed and we were unable to recover it. 00:37:47.130 [2024-12-06 01:15:19.496823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.130 [2024-12-06 01:15:19.496859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.130 qpair failed and we were unable to recover it. 00:37:47.130 [2024-12-06 01:15:19.496963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.130 [2024-12-06 01:15:19.496997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.130 qpair failed and we were unable to recover it. 00:37:47.130 [2024-12-06 01:15:19.497110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.130 [2024-12-06 01:15:19.497146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:47.130 qpair failed and we were unable to recover it. 
00:37:47.130 [2024-12-06 01:15:19.497254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.130 [2024-12-06 01:15:19.497301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.130 qpair failed and we were unable to recover it. 00:37:47.130 [2024-12-06 01:15:19.497438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.130 [2024-12-06 01:15:19.497471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.130 qpair failed and we were unable to recover it. 00:37:47.130 [2024-12-06 01:15:19.497574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.130 [2024-12-06 01:15:19.497607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.130 qpair failed and we were unable to recover it. 00:37:47.130 [2024-12-06 01:15:19.497719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.130 [2024-12-06 01:15:19.497753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.130 qpair failed and we were unable to recover it. 00:37:47.130 [2024-12-06 01:15:19.497886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.130 [2024-12-06 01:15:19.497935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.130 qpair failed and we were unable to recover it. 00:37:47.130 [2024-12-06 01:15:19.498080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.130 [2024-12-06 01:15:19.498116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.130 qpair failed and we were unable to recover it. 00:37:47.130 [2024-12-06 01:15:19.498226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.130 [2024-12-06 01:15:19.498259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.130 qpair failed and we were unable to recover it. 00:37:47.130 [2024-12-06 01:15:19.498366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.130 [2024-12-06 01:15:19.498398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.130 qpair failed and we were unable to recover it. 00:37:47.130 [2024-12-06 01:15:19.498504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.130 [2024-12-06 01:15:19.498536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.130 qpair failed and we were unable to recover it. 00:37:47.130 [2024-12-06 01:15:19.498673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.130 [2024-12-06 01:15:19.498705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.130 qpair failed and we were unable to recover it. 
00:37:47.130 [2024-12-06 01:15:19.498821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.130 [2024-12-06 01:15:19.498856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.130 qpair failed and we were unable to recover it. 00:37:47.130 [2024-12-06 01:15:19.498959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.130 [2024-12-06 01:15:19.498992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.130 qpair failed and we were unable to recover it. 00:37:47.130 [2024-12-06 01:15:19.499116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.130 [2024-12-06 01:15:19.499165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:47.130 qpair failed and we were unable to recover it. 00:37:47.130 [2024-12-06 01:15:19.499322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.130 [2024-12-06 01:15:19.499357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.130 qpair failed and we were unable to recover it. 00:37:47.130 [2024-12-06 01:15:19.499463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.130 [2024-12-06 01:15:19.499496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.130 qpair failed and we were unable to recover it. 00:37:47.130 [2024-12-06 01:15:19.499596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.130 [2024-12-06 01:15:19.499629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.130 qpair failed and we were unable to recover it. 00:37:47.130 [2024-12-06 01:15:19.499762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.130 [2024-12-06 01:15:19.499796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.130 qpair failed and we were unable to recover it. 00:37:47.130 [2024-12-06 01:15:19.499924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.130 [2024-12-06 01:15:19.499973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.130 qpair failed and we were unable to recover it. 00:37:47.130 [2024-12-06 01:15:19.500117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.130 [2024-12-06 01:15:19.500151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.130 qpair failed and we were unable to recover it. 00:37:47.130 [2024-12-06 01:15:19.500261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.130 [2024-12-06 01:15:19.500293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.130 qpair failed and we were unable to recover it. 
00:37:47.130 [2024-12-06 01:15:19.500431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.130 [2024-12-06 01:15:19.500464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.130 qpair failed and we were unable to recover it. 00:37:47.130 [2024-12-06 01:15:19.500575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.130 [2024-12-06 01:15:19.500608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.130 qpair failed and we were unable to recover it. 00:37:47.130 [2024-12-06 01:15:19.500769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.130 [2024-12-06 01:15:19.500801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.130 qpair failed and we were unable to recover it. 00:37:47.130 [2024-12-06 01:15:19.500945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.130 [2024-12-06 01:15:19.500977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.130 qpair failed and we were unable to recover it. 00:37:47.130 [2024-12-06 01:15:19.501097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.130 [2024-12-06 01:15:19.501137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.130 qpair failed and we were unable to recover it. 00:37:47.130 [2024-12-06 01:15:19.501250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.130 [2024-12-06 01:15:19.501284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.131 qpair failed and we were unable to recover it. 00:37:47.131 [2024-12-06 01:15:19.501395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.131 [2024-12-06 01:15:19.501429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.131 qpair failed and we were unable to recover it. 00:37:47.131 [2024-12-06 01:15:19.501543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.131 [2024-12-06 01:15:19.501577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.131 qpair failed and we were unable to recover it. 00:37:47.131 [2024-12-06 01:15:19.501689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.131 [2024-12-06 01:15:19.501723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.131 qpair failed and we were unable to recover it. 00:37:47.131 [2024-12-06 01:15:19.501863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.131 [2024-12-06 01:15:19.501898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.131 qpair failed and we were unable to recover it. 
00:37:47.131 [2024-12-06 01:15:19.502004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.131 [2024-12-06 01:15:19.502043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.131 qpair failed and we were unable to recover it. 00:37:47.131 [2024-12-06 01:15:19.502179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.131 [2024-12-06 01:15:19.502212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.131 qpair failed and we were unable to recover it. 00:37:47.131 [2024-12-06 01:15:19.502330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.131 [2024-12-06 01:15:19.502362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.131 qpair failed and we were unable to recover it. 00:37:47.131 [2024-12-06 01:15:19.502502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.131 [2024-12-06 01:15:19.502535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.131 qpair failed and we were unable to recover it. 00:37:47.131 [2024-12-06 01:15:19.502648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.131 [2024-12-06 01:15:19.502681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.131 qpair failed and we were unable to recover it. 00:37:47.131 [2024-12-06 01:15:19.502792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.131 [2024-12-06 01:15:19.502834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.131 qpair failed and we were unable to recover it. 00:37:47.131 [2024-12-06 01:15:19.502946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.131 [2024-12-06 01:15:19.502979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.131 qpair failed and we were unable to recover it. 00:37:47.131 [2024-12-06 01:15:19.503111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.131 [2024-12-06 01:15:19.503143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.131 qpair failed and we were unable to recover it. 00:37:47.131 [2024-12-06 01:15:19.503248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.131 [2024-12-06 01:15:19.503282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.131 qpair failed and we were unable to recover it. 00:37:47.131 [2024-12-06 01:15:19.503398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.131 [2024-12-06 01:15:19.503434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.131 qpair failed and we were unable to recover it. 
00:37:47.131 [2024-12-06 01:15:19.503542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.131 [2024-12-06 01:15:19.503577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.131 qpair failed and we were unable to recover it. 00:37:47.131 [2024-12-06 01:15:19.503743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.131 [2024-12-06 01:15:19.503777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.131 qpair failed and we were unable to recover it. 00:37:47.131 [2024-12-06 01:15:19.503918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.131 [2024-12-06 01:15:19.503953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.131 qpair failed and we were unable to recover it. 00:37:47.131 [2024-12-06 01:15:19.504063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.131 [2024-12-06 01:15:19.504097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.131 qpair failed and we were unable to recover it. 00:37:47.131 [2024-12-06 01:15:19.504234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.131 [2024-12-06 01:15:19.504268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.131 qpair failed and we were unable to recover it. 00:37:47.131 [2024-12-06 01:15:19.504378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.131 [2024-12-06 01:15:19.504413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.131 qpair failed and we were unable to recover it. 00:37:47.131 [2024-12-06 01:15:19.504550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.131 [2024-12-06 01:15:19.504583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.131 qpair failed and we were unable to recover it. 00:37:47.131 [2024-12-06 01:15:19.504705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.131 [2024-12-06 01:15:19.504754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.131 qpair failed and we were unable to recover it. 00:37:47.131 [2024-12-06 01:15:19.504905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.131 [2024-12-06 01:15:19.504942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.131 qpair failed and we were unable to recover it. 00:37:47.131 [2024-12-06 01:15:19.505119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.131 [2024-12-06 01:15:19.505167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:47.131 qpair failed and we were unable to recover it. 
00:37:47.131 [2024-12-06 01:15:19.505286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.131 [2024-12-06 01:15:19.505324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:47.131 qpair failed and we were unable to recover it. 00:37:47.131 [2024-12-06 01:15:19.505452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.131 [2024-12-06 01:15:19.505488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.131 qpair failed and we were unable to recover it. 00:37:47.131 [2024-12-06 01:15:19.505597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.131 [2024-12-06 01:15:19.505643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.131 qpair failed and we were unable to recover it. 00:37:47.131 [2024-12-06 01:15:19.505754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.131 [2024-12-06 01:15:19.505788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.131 qpair failed and we were unable to recover it. 00:37:47.131 [2024-12-06 01:15:19.505934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.131 [2024-12-06 01:15:19.505970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:47.131 qpair failed and we were unable to recover it. 00:37:47.131 [2024-12-06 01:15:19.506108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.131 [2024-12-06 01:15:19.506143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:47.131 qpair failed and we were unable to recover it. 00:37:47.131 [2024-12-06 01:15:19.506244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.131 [2024-12-06 01:15:19.506278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:47.131 qpair failed and we were unable to recover it. 00:37:47.131 [2024-12-06 01:15:19.506393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.131 [2024-12-06 01:15:19.506428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.131 qpair failed and we were unable to recover it. 00:37:47.131 [2024-12-06 01:15:19.506547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.131 [2024-12-06 01:15:19.506585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.131 qpair failed and we were unable to recover it. 00:37:47.131 [2024-12-06 01:15:19.506713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.131 [2024-12-06 01:15:19.506762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.131 qpair failed and we were unable to recover it. 
00:37:47.131 [2024-12-06 01:15:19.506895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.131 [2024-12-06 01:15:19.506931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.131 qpair failed and we were unable to recover it. 00:37:47.131 [2024-12-06 01:15:19.507042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.131 [2024-12-06 01:15:19.507077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.131 qpair failed and we were unable to recover it. 00:37:47.131 [2024-12-06 01:15:19.507212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.131 [2024-12-06 01:15:19.507245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.131 qpair failed and we were unable to recover it. 00:37:47.131 [2024-12-06 01:15:19.507388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.131 [2024-12-06 01:15:19.507425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:47.131 qpair failed and we were unable to recover it. 00:37:47.132 [2024-12-06 01:15:19.507569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.132 [2024-12-06 01:15:19.507604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.132 qpair failed and we were unable to recover it. 00:37:47.132 [2024-12-06 01:15:19.507736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.132 [2024-12-06 01:15:19.507771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.132 qpair failed and we were unable to recover it. 00:37:47.132 [2024-12-06 01:15:19.507887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.132 [2024-12-06 01:15:19.507922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.132 qpair failed and we were unable to recover it. 00:37:47.132 [2024-12-06 01:15:19.508054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.132 [2024-12-06 01:15:19.508087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.132 qpair failed and we were unable to recover it. 00:37:47.132 [2024-12-06 01:15:19.508203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.132 [2024-12-06 01:15:19.508237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.132 qpair failed and we were unable to recover it. 00:37:47.132 [2024-12-06 01:15:19.508348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.132 [2024-12-06 01:15:19.508383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.132 qpair failed and we were unable to recover it. 
00:37:47.132 [2024-12-06 01:15:19.508494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.132 [2024-12-06 01:15:19.508535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:47.132 qpair failed and we were unable to recover it. 00:37:47.132 [2024-12-06 01:15:19.508642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.132 [2024-12-06 01:15:19.508677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:47.132 qpair failed and we were unable to recover it. 00:37:47.132 [2024-12-06 01:15:19.508785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.132 [2024-12-06 01:15:19.508829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:47.132 qpair failed and we were unable to recover it. 00:37:47.132 [2024-12-06 01:15:19.508936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.132 [2024-12-06 01:15:19.508971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:47.132 qpair failed and we were unable to recover it. 00:37:47.132 [2024-12-06 01:15:19.509102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.132 [2024-12-06 01:15:19.509151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.132 qpair failed and we were unable to recover it. 00:37:47.132 [2024-12-06 01:15:19.509264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.132 [2024-12-06 01:15:19.509300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.132 qpair failed and we were unable to recover it. 00:37:47.132 [2024-12-06 01:15:19.509406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.132 [2024-12-06 01:15:19.509440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.132 qpair failed and we were unable to recover it. 00:37:47.132 [2024-12-06 01:15:19.509600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.132 [2024-12-06 01:15:19.509633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.132 qpair failed and we were unable to recover it. 00:37:47.132 [2024-12-06 01:15:19.509776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.132 [2024-12-06 01:15:19.509811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.132 qpair failed and we were unable to recover it. 00:37:47.132 [2024-12-06 01:15:19.509931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.132 [2024-12-06 01:15:19.509965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.132 qpair failed and we were unable to recover it. 
00:37:47.132 [2024-12-06 01:15:19.510103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.132 [2024-12-06 01:15:19.510139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:47.132 qpair failed and we were unable to recover it. 00:37:47.132 [2024-12-06 01:15:19.510279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.132 [2024-12-06 01:15:19.510317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.132 qpair failed and we were unable to recover it. 00:37:47.132 [2024-12-06 01:15:19.510428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.132 [2024-12-06 01:15:19.510462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.132 qpair failed and we were unable to recover it. 00:37:47.132 [2024-12-06 01:15:19.510578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.132 [2024-12-06 01:15:19.510612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.132 qpair failed and we were unable to recover it. 00:37:47.132 [2024-12-06 01:15:19.510756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.132 [2024-12-06 01:15:19.510791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:47.132 qpair failed and we were unable to recover it. 00:37:47.132 [2024-12-06 01:15:19.510909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.132 [2024-12-06 01:15:19.510944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:47.132 qpair failed and we were unable to recover it. 00:37:47.132 [2024-12-06 01:15:19.511055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.132 [2024-12-06 01:15:19.511090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:47.132 qpair failed and we were unable to recover it. 00:37:47.132 [2024-12-06 01:15:19.511229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.132 [2024-12-06 01:15:19.511264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:47.132 qpair failed and we were unable to recover it. 00:37:47.132 [2024-12-06 01:15:19.511400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.132 [2024-12-06 01:15:19.511435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:47.132 qpair failed and we were unable to recover it. 00:37:47.132 [2024-12-06 01:15:19.511540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.132 [2024-12-06 01:15:19.511575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:47.132 qpair failed and we were unable to recover it. 
00:37:47.132 [2024-12-06 01:15:19.511686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.132 [2024-12-06 01:15:19.511721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:47.132 qpair failed and we were unable to recover it. 00:37:47.132 [2024-12-06 01:15:19.511870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.132 [2024-12-06 01:15:19.511918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.132 qpair failed and we were unable to recover it. 00:37:47.132 [2024-12-06 01:15:19.512049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.132 [2024-12-06 01:15:19.512098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.132 qpair failed and we were unable to recover it. 00:37:47.132 [2024-12-06 01:15:19.512211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.132 [2024-12-06 01:15:19.512247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.132 qpair failed and we were unable to recover it. 00:37:47.132 [2024-12-06 01:15:19.512389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.132 [2024-12-06 01:15:19.512424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.132 qpair failed and we were unable to recover it. 00:37:47.132 [2024-12-06 01:15:19.512529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.132 [2024-12-06 01:15:19.512563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.132 qpair failed and we were unable to recover it. 00:37:47.132 [2024-12-06 01:15:19.512675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.132 [2024-12-06 01:15:19.512709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.132 qpair failed and we were unable to recover it. 00:37:47.132 [2024-12-06 01:15:19.512859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.132 [2024-12-06 01:15:19.512896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:47.132 qpair failed and we were unable to recover it. 00:37:47.132 [2024-12-06 01:15:19.513030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.132 [2024-12-06 01:15:19.513105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:47.132 qpair failed and we were unable to recover it. 00:37:47.132 [2024-12-06 01:15:19.513221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.132 [2024-12-06 01:15:19.513258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.132 qpair failed and we were unable to recover it. 
00:37:47.132 [2024-12-06 01:15:19.513391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.132 [2024-12-06 01:15:19.513424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.132 qpair failed and we were unable to recover it. 00:37:47.132 [2024-12-06 01:15:19.513590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.132 [2024-12-06 01:15:19.513623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.132 qpair failed and we were unable to recover it. 00:37:47.133 [2024-12-06 01:15:19.513737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.133 [2024-12-06 01:15:19.513771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.133 qpair failed and we were unable to recover it. 00:37:47.133 [2024-12-06 01:15:19.513919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.133 [2024-12-06 01:15:19.513955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.133 qpair failed and we were unable to recover it. 00:37:47.133 [2024-12-06 01:15:19.514097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.133 [2024-12-06 01:15:19.514132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.133 qpair failed and we were unable to recover it. 00:37:47.133 [2024-12-06 01:15:19.514271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.133 [2024-12-06 01:15:19.514305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.133 qpair failed and we were unable to recover it. 00:37:47.133 [2024-12-06 01:15:19.514420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.133 [2024-12-06 01:15:19.514454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.133 qpair failed and we were unable to recover it. 00:37:47.133 [2024-12-06 01:15:19.514604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.133 [2024-12-06 01:15:19.514652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.133 qpair failed and we were unable to recover it. 00:37:47.133 [2024-12-06 01:15:19.514787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.133 [2024-12-06 01:15:19.514843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:47.133 qpair failed and we were unable to recover it. 00:37:47.133 [2024-12-06 01:15:19.514990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.133 [2024-12-06 01:15:19.515026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:47.133 qpair failed and we were unable to recover it. 
00:37:47.133 [2024-12-06 01:15:19.515157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.133 [2024-12-06 01:15:19.515197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:47.133 qpair failed and we were unable to recover it. 00:37:47.133 [2024-12-06 01:15:19.515304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.133 [2024-12-06 01:15:19.515338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:47.133 qpair failed and we were unable to recover it. 00:37:47.133 [2024-12-06 01:15:19.515460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.133 [2024-12-06 01:15:19.515509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.133 qpair failed and we were unable to recover it. 00:37:47.133 [2024-12-06 01:15:19.515629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.133 [2024-12-06 01:15:19.515665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.133 qpair failed and we were unable to recover it. 00:37:47.133 [2024-12-06 01:15:19.515806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.133 [2024-12-06 01:15:19.515848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.133 qpair failed and we were unable to recover it. 00:37:47.133 [2024-12-06 01:15:19.515959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.133 [2024-12-06 01:15:19.515993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.133 qpair failed and we were unable to recover it. 00:37:47.133 [2024-12-06 01:15:19.516100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.133 [2024-12-06 01:15:19.516134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.133 qpair failed and we were unable to recover it. 00:37:47.133 [2024-12-06 01:15:19.516270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.133 [2024-12-06 01:15:19.516306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.133 qpair failed and we were unable to recover it. 00:37:47.133 [2024-12-06 01:15:19.516420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.133 [2024-12-06 01:15:19.516453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.133 qpair failed and we were unable to recover it. 00:37:47.133 [2024-12-06 01:15:19.516597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.133 [2024-12-06 01:15:19.516635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.133 qpair failed and we were unable to recover it. 
00:37:47.133 [2024-12-06 01:15:19.516745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.133 [2024-12-06 01:15:19.516779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.133 qpair failed and we were unable to recover it. 00:37:47.133 [2024-12-06 01:15:19.516924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.133 [2024-12-06 01:15:19.516958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.133 qpair failed and we were unable to recover it. 00:37:47.133 [2024-12-06 01:15:19.517065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.133 [2024-12-06 01:15:19.517101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.133 qpair failed and we were unable to recover it. 00:37:47.133 [2024-12-06 01:15:19.517234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.133 [2024-12-06 01:15:19.517267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.133 qpair failed and we were unable to recover it. 00:37:47.133 [2024-12-06 01:15:19.517400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.133 [2024-12-06 01:15:19.517433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.133 qpair failed and we were unable to recover it. 00:37:47.133 [2024-12-06 01:15:19.517541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.133 [2024-12-06 01:15:19.517576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.133 qpair failed and we were unable to recover it. 00:37:47.133 [2024-12-06 01:15:19.517685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.133 [2024-12-06 01:15:19.517719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.133 qpair failed and we were unable to recover it. 00:37:47.133 [2024-12-06 01:15:19.517829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.133 [2024-12-06 01:15:19.517864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.133 qpair failed and we were unable to recover it. 00:37:47.133 [2024-12-06 01:15:19.517974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.133 [2024-12-06 01:15:19.518008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.133 qpair failed and we were unable to recover it. 00:37:47.133 [2024-12-06 01:15:19.518112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.133 [2024-12-06 01:15:19.518146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.133 qpair failed and we were unable to recover it. 
00:37:47.133 [2024-12-06 01:15:19.518253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.133 [2024-12-06 01:15:19.518287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.133 qpair failed and we were unable to recover it. 00:37:47.133 [2024-12-06 01:15:19.518425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.133 [2024-12-06 01:15:19.518459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.133 qpair failed and we were unable to recover it. 00:37:47.133 [2024-12-06 01:15:19.518610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.133 [2024-12-06 01:15:19.518658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.133 qpair failed and we were unable to recover it. 00:37:47.133 [2024-12-06 01:15:19.518777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.133 [2024-12-06 01:15:19.518821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.133 qpair failed and we were unable to recover it. 00:37:47.133 [2024-12-06 01:15:19.518926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.134 [2024-12-06 01:15:19.518962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.134 qpair failed and we were unable to recover it. 00:37:47.134 [2024-12-06 01:15:19.519081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.134 [2024-12-06 01:15:19.519115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.134 qpair failed and we were unable to recover it. 00:37:47.134 [2024-12-06 01:15:19.519225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.134 [2024-12-06 01:15:19.519260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.134 qpair failed and we were unable to recover it. 00:37:47.134 [2024-12-06 01:15:19.519372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.134 [2024-12-06 01:15:19.519407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.134 qpair failed and we were unable to recover it. 00:37:47.134 [2024-12-06 01:15:19.519515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.134 [2024-12-06 01:15:19.519550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.134 qpair failed and we were unable to recover it. 00:37:47.134 [2024-12-06 01:15:19.519696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.134 [2024-12-06 01:15:19.519745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:47.134 qpair failed and we were unable to recover it. 
00:37:47.134 [2024-12-06 01:15:19.519892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.134 [2024-12-06 01:15:19.519940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.134 qpair failed and we were unable to recover it. 00:37:47.134 [2024-12-06 01:15:19.520058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.134 [2024-12-06 01:15:19.520093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.134 qpair failed and we were unable to recover it. 00:37:47.134 [2024-12-06 01:15:19.520229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.134 [2024-12-06 01:15:19.520263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.134 qpair failed and we were unable to recover it. 00:37:47.134 [2024-12-06 01:15:19.520375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.134 [2024-12-06 01:15:19.520409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.134 qpair failed and we were unable to recover it. 00:37:47.134 [2024-12-06 01:15:19.520547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.134 [2024-12-06 01:15:19.520582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.134 qpair failed and we were unable to recover it. 00:37:47.134 [2024-12-06 01:15:19.520714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.134 [2024-12-06 01:15:19.520750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.134 qpair failed and we were unable to recover it. 00:37:47.134 [2024-12-06 01:15:19.520876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.134 [2024-12-06 01:15:19.520911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.134 qpair failed and we were unable to recover it. 00:37:47.134 [2024-12-06 01:15:19.521045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.134 [2024-12-06 01:15:19.521079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.134 qpair failed and we were unable to recover it. 00:37:47.134 [2024-12-06 01:15:19.521194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.134 [2024-12-06 01:15:19.521228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.134 qpair failed and we were unable to recover it. 00:37:47.134 [2024-12-06 01:15:19.521337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.134 [2024-12-06 01:15:19.521371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.134 qpair failed and we were unable to recover it. 
00:37:47.134 [2024-12-06 01:15:19.521508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.134 [2024-12-06 01:15:19.521548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.134 qpair failed and we were unable to recover it. 00:37:47.134 [2024-12-06 01:15:19.521694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.134 [2024-12-06 01:15:19.521729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.134 qpair failed and we were unable to recover it. 00:37:47.134 [2024-12-06 01:15:19.521881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.134 [2024-12-06 01:15:19.521931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:47.134 qpair failed and we were unable to recover it. 00:37:47.134 [2024-12-06 01:15:19.522075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.134 [2024-12-06 01:15:19.522111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.134 qpair failed and we were unable to recover it. 00:37:47.134 [2024-12-06 01:15:19.522224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.134 [2024-12-06 01:15:19.522257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.134 qpair failed and we were unable to recover it. 00:37:47.134 [2024-12-06 01:15:19.522393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.134 [2024-12-06 01:15:19.522426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.134 qpair failed and we were unable to recover it. 00:37:47.134 [2024-12-06 01:15:19.522534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.134 [2024-12-06 01:15:19.522569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.134 qpair failed and we were unable to recover it. 00:37:47.134 [2024-12-06 01:15:19.522693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.134 [2024-12-06 01:15:19.522741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.134 qpair failed and we were unable to recover it. 00:37:47.134 [2024-12-06 01:15:19.522911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.134 [2024-12-06 01:15:19.522960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.134 qpair failed and we were unable to recover it. 00:37:47.134 [2024-12-06 01:15:19.523111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.134 [2024-12-06 01:15:19.523149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:47.134 qpair failed and we were unable to recover it. 
00:37:47.134 [2024-12-06 01:15:19.523261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.134 [2024-12-06 01:15:19.523297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:47.134 qpair failed and we were unable to recover it. 00:37:47.134 [2024-12-06 01:15:19.523435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.134 [2024-12-06 01:15:19.523471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:47.134 qpair failed and we were unable to recover it. 00:37:47.134 [2024-12-06 01:15:19.523588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.134 [2024-12-06 01:15:19.523623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:47.134 qpair failed and we were unable to recover it. 00:37:47.134 [2024-12-06 01:15:19.523833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.134 [2024-12-06 01:15:19.523881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.134 qpair failed and we were unable to recover it. 00:37:47.134 [2024-12-06 01:15:19.524008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.134 [2024-12-06 01:15:19.524043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.134 qpair failed and we were unable to recover it. 00:37:47.134 [2024-12-06 01:15:19.524161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.134 [2024-12-06 01:15:19.524194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.134 qpair failed and we were unable to recover it. 00:37:47.134 [2024-12-06 01:15:19.524306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.134 [2024-12-06 01:15:19.524339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.134 qpair failed and we were unable to recover it. 00:37:47.134 [2024-12-06 01:15:19.524458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.134 [2024-12-06 01:15:19.524491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.134 qpair failed and we were unable to recover it. 00:37:47.134 [2024-12-06 01:15:19.524601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.134 [2024-12-06 01:15:19.524633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.134 qpair failed and we were unable to recover it. 00:37:47.134 [2024-12-06 01:15:19.524765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.134 [2024-12-06 01:15:19.524798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.134 qpair failed and we were unable to recover it. 
00:37:47.134 [2024-12-06 01:15:19.524915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.134 [2024-12-06 01:15:19.524949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.134 qpair failed and we were unable to recover it. 00:37:47.134 [2024-12-06 01:15:19.525090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.134 [2024-12-06 01:15:19.525123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.134 qpair failed and we were unable to recover it. 00:37:47.134 [2024-12-06 01:15:19.525229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.135 [2024-12-06 01:15:19.525261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.135 qpair failed and we were unable to recover it. 00:37:47.135 [2024-12-06 01:15:19.525365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.135 [2024-12-06 01:15:19.525397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.135 qpair failed and we were unable to recover it. 00:37:47.135 [2024-12-06 01:15:19.525500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.135 [2024-12-06 01:15:19.525533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.135 qpair failed and we were unable to recover it. 00:37:47.135 [2024-12-06 01:15:19.525659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.135 [2024-12-06 01:15:19.525709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:47.135 qpair failed and we were unable to recover it. 00:37:47.135 [2024-12-06 01:15:19.525856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.135 [2024-12-06 01:15:19.525905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.135 qpair failed and we were unable to recover it. 00:37:47.135 [2024-12-06 01:15:19.526044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.135 [2024-12-06 01:15:19.526093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.135 qpair failed and we were unable to recover it. 00:37:47.135 [2024-12-06 01:15:19.526238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.135 [2024-12-06 01:15:19.526273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.135 qpair failed and we were unable to recover it. 00:37:47.135 [2024-12-06 01:15:19.526435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.135 [2024-12-06 01:15:19.526469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.135 qpair failed and we were unable to recover it. 
00:37:47.135 [2024-12-06 01:15:19.526604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.135 [2024-12-06 01:15:19.526637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.135 qpair failed and we were unable to recover it. 00:37:47.135 [2024-12-06 01:15:19.526774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.135 [2024-12-06 01:15:19.526809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.135 qpair failed and we were unable to recover it. 00:37:47.135 [2024-12-06 01:15:19.526932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.135 [2024-12-06 01:15:19.526966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.135 qpair failed and we were unable to recover it. 00:37:47.135 [2024-12-06 01:15:19.527071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.135 [2024-12-06 01:15:19.527105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.135 qpair failed and we were unable to recover it. 00:37:47.135 [2024-12-06 01:15:19.527225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.135 [2024-12-06 01:15:19.527258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.135 qpair failed and we were unable to recover it. 00:37:47.135 [2024-12-06 01:15:19.527396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.135 [2024-12-06 01:15:19.527429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.135 qpair failed and we were unable to recover it. 00:37:47.135 [2024-12-06 01:15:19.527535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.135 [2024-12-06 01:15:19.527568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.135 qpair failed and we were unable to recover it. 00:37:47.135 [2024-12-06 01:15:19.527708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.135 [2024-12-06 01:15:19.527742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.135 qpair failed and we were unable to recover it. 00:37:47.135 [2024-12-06 01:15:19.527884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.135 [2024-12-06 01:15:19.527933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:47.135 qpair failed and we were unable to recover it. 00:37:47.135 [2024-12-06 01:15:19.528080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.135 [2024-12-06 01:15:19.528117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:47.135 qpair failed and we were unable to recover it. 
00:37:47.135 [2024-12-06 01:15:19.528255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.135 [2024-12-06 01:15:19.528310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.135 qpair failed and we were unable to recover it. 00:37:47.135 [2024-12-06 01:15:19.528479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.135 [2024-12-06 01:15:19.528513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.135 qpair failed and we were unable to recover it. 00:37:47.135 [2024-12-06 01:15:19.528627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.135 [2024-12-06 01:15:19.528660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.135 qpair failed and we were unable to recover it. 00:37:47.135 [2024-12-06 01:15:19.528770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.135 [2024-12-06 01:15:19.528802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.135 qpair failed and we were unable to recover it. 00:37:47.135 [2024-12-06 01:15:19.528948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.135 [2024-12-06 01:15:19.528981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.135 qpair failed and we were unable to recover it. 00:37:47.135 [2024-12-06 01:15:19.529117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.135 [2024-12-06 01:15:19.529149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.135 qpair failed and we were unable to recover it. 00:37:47.135 [2024-12-06 01:15:19.529281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.135 [2024-12-06 01:15:19.529313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.135 qpair failed and we were unable to recover it. 00:37:47.135 [2024-12-06 01:15:19.529410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.135 [2024-12-06 01:15:19.529443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.135 qpair failed and we were unable to recover it. 00:37:47.135 [2024-12-06 01:15:19.529555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.135 [2024-12-06 01:15:19.529590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.135 qpair failed and we were unable to recover it. 00:37:47.135 [2024-12-06 01:15:19.529727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.135 [2024-12-06 01:15:19.529761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.135 qpair failed and we were unable to recover it. 
00:37:47.135 [2024-12-06 01:15:19.529883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.135 [2024-12-06 01:15:19.529916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.135 qpair failed and we were unable to recover it. 00:37:47.135 [2024-12-06 01:15:19.530032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.135 [2024-12-06 01:15:19.530065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.135 qpair failed and we were unable to recover it. 00:37:47.135 [2024-12-06 01:15:19.530168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.135 [2024-12-06 01:15:19.530202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.135 qpair failed and we were unable to recover it. 00:37:47.135 [2024-12-06 01:15:19.530330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.135 [2024-12-06 01:15:19.530364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.135 qpair failed and we were unable to recover it. 00:37:47.135 [2024-12-06 01:15:19.530503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.135 [2024-12-06 01:15:19.530536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.135 qpair failed and we were unable to recover it. 00:37:47.135 [2024-12-06 01:15:19.530672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.135 [2024-12-06 01:15:19.530704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.135 qpair failed and we were unable to recover it. 00:37:47.135 [2024-12-06 01:15:19.530811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.135 [2024-12-06 01:15:19.530858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.135 qpair failed and we were unable to recover it. 00:37:47.135 [2024-12-06 01:15:19.531030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.135 [2024-12-06 01:15:19.531080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.135 qpair failed and we were unable to recover it. 00:37:47.135 [2024-12-06 01:15:19.531229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.135 [2024-12-06 01:15:19.531265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.135 qpair failed and we were unable to recover it. 00:37:47.135 [2024-12-06 01:15:19.531416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.135 [2024-12-06 01:15:19.531451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.135 qpair failed and we were unable to recover it. 
00:37:47.135 [2024-12-06 01:15:19.531573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.136 [2024-12-06 01:15:19.531609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.136 qpair failed and we were unable to recover it. 00:37:47.136 [2024-12-06 01:15:19.531759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.136 [2024-12-06 01:15:19.531793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.136 qpair failed and we were unable to recover it. 00:37:47.136 [2024-12-06 01:15:19.531948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.136 [2024-12-06 01:15:19.531984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.136 qpair failed and we were unable to recover it. 00:37:47.136 [2024-12-06 01:15:19.532099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.136 [2024-12-06 01:15:19.532134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.136 qpair failed and we were unable to recover it. 00:37:47.136 [2024-12-06 01:15:19.532246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.136 [2024-12-06 01:15:19.532280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.136 qpair failed and we were unable to recover it. 00:37:47.136 [2024-12-06 01:15:19.532382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.136 [2024-12-06 01:15:19.532415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.136 qpair failed and we were unable to recover it. 00:37:47.136 [2024-12-06 01:15:19.532552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.136 [2024-12-06 01:15:19.532585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.136 qpair failed and we were unable to recover it. 00:37:47.136 [2024-12-06 01:15:19.532695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.136 [2024-12-06 01:15:19.532728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.136 qpair failed and we were unable to recover it. 00:37:47.136 [2024-12-06 01:15:19.532843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.136 [2024-12-06 01:15:19.532876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.136 qpair failed and we were unable to recover it. 00:37:47.136 [2024-12-06 01:15:19.532995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.136 [2024-12-06 01:15:19.533031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.136 qpair failed and we were unable to recover it. 
00:37:47.136 [2024-12-06 01:15:19.533181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.136 [2024-12-06 01:15:19.533215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.136 qpair failed and we were unable to recover it. 00:37:47.136 [2024-12-06 01:15:19.533324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.136 [2024-12-06 01:15:19.533358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.136 qpair failed and we were unable to recover it. 00:37:47.136 [2024-12-06 01:15:19.533489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.136 [2024-12-06 01:15:19.533524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.136 qpair failed and we were unable to recover it. 00:37:47.136 [2024-12-06 01:15:19.533635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.136 [2024-12-06 01:15:19.533669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.136 qpair failed and we were unable to recover it. 00:37:47.136 [2024-12-06 01:15:19.533805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.136 [2024-12-06 01:15:19.533861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.136 qpair failed and we were unable to recover it. 00:37:47.136 [2024-12-06 01:15:19.533981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.136 [2024-12-06 01:15:19.534016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.136 qpair failed and we were unable to recover it. 00:37:47.136 [2024-12-06 01:15:19.534147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.136 [2024-12-06 01:15:19.534197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:47.136 qpair failed and we were unable to recover it. 00:37:47.136 [2024-12-06 01:15:19.534325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.136 [2024-12-06 01:15:19.534362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:47.136 qpair failed and we were unable to recover it. 00:37:47.136 [2024-12-06 01:15:19.534500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.136 [2024-12-06 01:15:19.534535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:47.136 qpair failed and we were unable to recover it. 00:37:47.136 [2024-12-06 01:15:19.534648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.136 [2024-12-06 01:15:19.534683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.136 qpair failed and we were unable to recover it. 
00:37:47.136 [2024-12-06 01:15:19.534799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.136 [2024-12-06 01:15:19.534854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.136 qpair failed and we were unable to recover it. 00:37:47.136 [2024-12-06 01:15:19.534964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.136 [2024-12-06 01:15:19.534998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.136 qpair failed and we were unable to recover it. 00:37:47.136 [2024-12-06 01:15:19.535129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.136 [2024-12-06 01:15:19.535163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.136 qpair failed and we were unable to recover it. 00:37:47.136 [2024-12-06 01:15:19.535291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.136 [2024-12-06 01:15:19.535325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.136 qpair failed and we were unable to recover it. 00:37:47.136 [2024-12-06 01:15:19.535442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.136 [2024-12-06 01:15:19.535477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.136 qpair failed and we were unable to recover it. 00:37:47.136 [2024-12-06 01:15:19.535619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.136 [2024-12-06 01:15:19.535655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:47.136 qpair failed and we were unable to recover it. 00:37:47.136 [2024-12-06 01:15:19.535765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.136 [2024-12-06 01:15:19.535800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.136 qpair failed and we were unable to recover it. 00:37:47.136 [2024-12-06 01:15:19.535985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.136 [2024-12-06 01:15:19.536035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.136 qpair failed and we were unable to recover it. 00:37:47.136 [2024-12-06 01:15:19.536151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.136 [2024-12-06 01:15:19.536186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.136 qpair failed and we were unable to recover it. 00:37:47.136 [2024-12-06 01:15:19.536329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.136 [2024-12-06 01:15:19.536362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.136 qpair failed and we were unable to recover it. 
00:37:47.136 [2024-12-06 01:15:19.536475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.136 [2024-12-06 01:15:19.536508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.136 qpair failed and we were unable to recover it. 00:37:47.136 [2024-12-06 01:15:19.536681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.136 [2024-12-06 01:15:19.536715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.136 qpair failed and we were unable to recover it. 00:37:47.136 [2024-12-06 01:15:19.536829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.136 [2024-12-06 01:15:19.536876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.136 qpair failed and we were unable to recover it. 00:37:47.136 [2024-12-06 01:15:19.536990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.136 [2024-12-06 01:15:19.537025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.136 qpair failed and we were unable to recover it. 00:37:47.136 [2024-12-06 01:15:19.537148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.136 [2024-12-06 01:15:19.537181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.136 qpair failed and we were unable to recover it. 00:37:47.136 [2024-12-06 01:15:19.537292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.136 [2024-12-06 01:15:19.537326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.136 qpair failed and we were unable to recover it. 00:37:47.136 [2024-12-06 01:15:19.537459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.136 [2024-12-06 01:15:19.537493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.136 qpair failed and we were unable to recover it. 00:37:47.136 [2024-12-06 01:15:19.537629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.136 [2024-12-06 01:15:19.537662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.136 qpair failed and we were unable to recover it. 00:37:47.136 [2024-12-06 01:15:19.537795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.137 [2024-12-06 01:15:19.537860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:47.137 qpair failed and we were unable to recover it. 00:37:47.137 [2024-12-06 01:15:19.537981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.137 [2024-12-06 01:15:19.538016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.137 qpair failed and we were unable to recover it. 
00:37:47.137 [2024-12-06 01:15:19.538128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.137 [2024-12-06 01:15:19.538163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.137 qpair failed and we were unable to recover it. 00:37:47.137 [2024-12-06 01:15:19.538307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.137 [2024-12-06 01:15:19.538341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.137 qpair failed and we were unable to recover it. 00:37:47.137 [2024-12-06 01:15:19.538459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.137 [2024-12-06 01:15:19.538494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.137 qpair failed and we were unable to recover it. 00:37:47.137 [2024-12-06 01:15:19.538602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.137 [2024-12-06 01:15:19.538637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.137 qpair failed and we were unable to recover it. 00:37:47.137 [2024-12-06 01:15:19.538745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.137 [2024-12-06 01:15:19.538780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.137 qpair failed and we were unable to recover it. 00:37:47.137 [2024-12-06 01:15:19.538918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.137 [2024-12-06 01:15:19.538951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.137 qpair failed and we were unable to recover it. 00:37:47.137 [2024-12-06 01:15:19.539082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.137 [2024-12-06 01:15:19.539130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.137 qpair failed and we were unable to recover it. 00:37:47.137 [2024-12-06 01:15:19.539254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.137 [2024-12-06 01:15:19.539289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.137 qpair failed and we were unable to recover it. 00:37:47.137 [2024-12-06 01:15:19.539458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.137 [2024-12-06 01:15:19.539492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.137 qpair failed and we were unable to recover it. 00:37:47.137 [2024-12-06 01:15:19.539599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.137 [2024-12-06 01:15:19.539633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.137 qpair failed and we were unable to recover it. 
00:37:47.137 [2024-12-06 01:15:19.539746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.137 [2024-12-06 01:15:19.539779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.137 qpair failed and we were unable to recover it. 00:37:47.137 [2024-12-06 01:15:19.539916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.137 [2024-12-06 01:15:19.539965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:47.137 qpair failed and we were unable to recover it. 00:37:47.137 [2024-12-06 01:15:19.540087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.137 [2024-12-06 01:15:19.540122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.137 qpair failed and we were unable to recover it. 00:37:47.137 [2024-12-06 01:15:19.540261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.137 [2024-12-06 01:15:19.540295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.137 qpair failed and we were unable to recover it. 00:37:47.137 [2024-12-06 01:15:19.540434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.137 [2024-12-06 01:15:19.540467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.137 qpair failed and we were unable to recover it. 00:37:47.137 [2024-12-06 01:15:19.540581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.137 [2024-12-06 01:15:19.540614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.137 qpair failed and we were unable to recover it. 00:37:47.137 [2024-12-06 01:15:19.540780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.137 [2024-12-06 01:15:19.540833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.137 qpair failed and we were unable to recover it. 00:37:47.137 [2024-12-06 01:15:19.540955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.137 [2024-12-06 01:15:19.540991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.137 qpair failed and we were unable to recover it. 00:37:47.137 [2024-12-06 01:15:19.541127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.137 [2024-12-06 01:15:19.541161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.137 qpair failed and we were unable to recover it. 00:37:47.137 [2024-12-06 01:15:19.541270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.137 [2024-12-06 01:15:19.541304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.137 qpair failed and we were unable to recover it. 
00:37:47.137 [2024-12-06 01:15:19.541437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.137 [2024-12-06 01:15:19.541476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.137 qpair failed and we were unable to recover it. 00:37:47.137 [2024-12-06 01:15:19.541589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.137 [2024-12-06 01:15:19.541622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.137 qpair failed and we were unable to recover it. 00:37:47.137 [2024-12-06 01:15:19.541740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.137 [2024-12-06 01:15:19.541774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.137 qpair failed and we were unable to recover it. 00:37:47.137 [2024-12-06 01:15:19.541936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.137 [2024-12-06 01:15:19.541986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.137 qpair failed and we were unable to recover it. 00:37:47.137 [2024-12-06 01:15:19.542111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.137 [2024-12-06 01:15:19.542148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.137 qpair failed and we were unable to recover it. 00:37:47.137 [2024-12-06 01:15:19.542260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.137 [2024-12-06 01:15:19.542293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.137 qpair failed and we were unable to recover it. 00:37:47.137 [2024-12-06 01:15:19.542461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.137 [2024-12-06 01:15:19.542496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.137 qpair failed and we were unable to recover it. 00:37:47.137 [2024-12-06 01:15:19.542641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.137 [2024-12-06 01:15:19.542675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.137 qpair failed and we were unable to recover it. 00:37:47.137 [2024-12-06 01:15:19.542795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.137 [2024-12-06 01:15:19.542840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.137 qpair failed and we were unable to recover it. 00:37:47.137 [2024-12-06 01:15:19.542958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.137 [2024-12-06 01:15:19.542993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.137 qpair failed and we were unable to recover it. 
00:37:47.137 [2024-12-06 01:15:19.543159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.137 [2024-12-06 01:15:19.543208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:47.137 qpair failed and we were unable to recover it. 00:37:47.137 [2024-12-06 01:15:19.543356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.137 [2024-12-06 01:15:19.543393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:47.137 qpair failed and we were unable to recover it. 00:37:47.137 [2024-12-06 01:15:19.543530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.137 [2024-12-06 01:15:19.543565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:47.137 qpair failed and we were unable to recover it. 00:37:47.137 [2024-12-06 01:15:19.543674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.137 [2024-12-06 01:15:19.543709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.137 qpair failed and we were unable to recover it. 00:37:47.137 [2024-12-06 01:15:19.543843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.137 [2024-12-06 01:15:19.543878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.137 qpair failed and we were unable to recover it. 00:37:47.137 [2024-12-06 01:15:19.544003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.137 [2024-12-06 01:15:19.544037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.137 qpair failed and we were unable to recover it. 00:37:47.137 [2024-12-06 01:15:19.544185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.137 [2024-12-06 01:15:19.544220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.138 qpair failed and we were unable to recover it. 00:37:47.138 [2024-12-06 01:15:19.544337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.138 [2024-12-06 01:15:19.544375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.138 qpair failed and we were unable to recover it. 00:37:47.138 [2024-12-06 01:15:19.544502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.138 [2024-12-06 01:15:19.544551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.138 qpair failed and we were unable to recover it. 00:37:47.138 [2024-12-06 01:15:19.544672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.138 [2024-12-06 01:15:19.544710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:47.138 qpair failed and we were unable to recover it. 
00:37:47.138 [2024-12-06 01:15:19.544849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.138 [2024-12-06 01:15:19.544884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:47.138 qpair failed and we were unable to recover it. 00:37:47.138 [2024-12-06 01:15:19.545003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.138 [2024-12-06 01:15:19.545039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:47.138 qpair failed and we were unable to recover it. 00:37:47.138 [2024-12-06 01:15:19.545147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.138 [2024-12-06 01:15:19.545182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:47.138 qpair failed and we were unable to recover it. 00:37:47.138 [2024-12-06 01:15:19.545328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.138 [2024-12-06 01:15:19.545377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.138 qpair failed and we were unable to recover it. 00:37:47.138 [2024-12-06 01:15:19.545496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.138 [2024-12-06 01:15:19.545531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.138 qpair failed and we were unable to recover it. 00:37:47.138 [2024-12-06 01:15:19.545648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.138 [2024-12-06 01:15:19.545682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.138 qpair failed and we were unable to recover it. 00:37:47.138 [2024-12-06 01:15:19.545794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.138 [2024-12-06 01:15:19.545835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.138 qpair failed and we were unable to recover it. 00:37:47.138 [2024-12-06 01:15:19.545959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.138 [2024-12-06 01:15:19.545997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.138 qpair failed and we were unable to recover it. 00:37:47.138 [2024-12-06 01:15:19.546119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.138 [2024-12-06 01:15:19.546153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.138 qpair failed and we were unable to recover it. 00:37:47.138 [2024-12-06 01:15:19.546293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.138 [2024-12-06 01:15:19.546326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.138 qpair failed and we were unable to recover it. 
00:37:47.138 [2024-12-06 01:15:19.546464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.138 [2024-12-06 01:15:19.546498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.138 qpair failed and we were unable to recover it. 00:37:47.138 [2024-12-06 01:15:19.546632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.138 [2024-12-06 01:15:19.546665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.138 qpair failed and we were unable to recover it. 00:37:47.138 [2024-12-06 01:15:19.546791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.138 [2024-12-06 01:15:19.546832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.138 qpair failed and we were unable to recover it. 00:37:47.138 [2024-12-06 01:15:19.546950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.138 [2024-12-06 01:15:19.546984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.138 qpair failed and we were unable to recover it. 00:37:47.138 [2024-12-06 01:15:19.547093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.138 [2024-12-06 01:15:19.547127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.138 qpair failed and we were unable to recover it. 00:37:47.138 [2024-12-06 01:15:19.547240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.138 [2024-12-06 01:15:19.547275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.138 qpair failed and we were unable to recover it. 00:37:47.138 [2024-12-06 01:15:19.547411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.138 [2024-12-06 01:15:19.547445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.138 qpair failed and we were unable to recover it. 00:37:47.138 [2024-12-06 01:15:19.547578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.138 [2024-12-06 01:15:19.547612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.138 qpair failed and we were unable to recover it. 00:37:47.138 [2024-12-06 01:15:19.547713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.138 [2024-12-06 01:15:19.547747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.138 qpair failed and we were unable to recover it. 00:37:47.138 [2024-12-06 01:15:19.547897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.138 [2024-12-06 01:15:19.547933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.138 qpair failed and we were unable to recover it. 
00:37:47.138 [2024-12-06 01:15:19.548055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.138 [2024-12-06 01:15:19.548109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.138 qpair failed and we were unable to recover it. 00:37:47.138 [2024-12-06 01:15:19.548229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.138 [2024-12-06 01:15:19.548265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.138 qpair failed and we were unable to recover it. 00:37:47.138 [2024-12-06 01:15:19.548369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.138 [2024-12-06 01:15:19.548403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.138 qpair failed and we were unable to recover it. 00:37:47.138 [2024-12-06 01:15:19.548516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.138 [2024-12-06 01:15:19.548549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.138 qpair failed and we were unable to recover it. 00:37:47.138 [2024-12-06 01:15:19.548668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.138 [2024-12-06 01:15:19.548716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:47.138 qpair failed and we were unable to recover it. 00:37:47.138 [2024-12-06 01:15:19.548914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.138 [2024-12-06 01:15:19.548950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.138 qpair failed and we were unable to recover it. 00:37:47.138 [2024-12-06 01:15:19.549062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.138 [2024-12-06 01:15:19.549097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.138 qpair failed and we were unable to recover it. 00:37:47.138 [2024-12-06 01:15:19.549234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.138 [2024-12-06 01:15:19.549267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.138 qpair failed and we were unable to recover it. 00:37:47.138 [2024-12-06 01:15:19.549375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.138 [2024-12-06 01:15:19.549408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.138 qpair failed and we were unable to recover it. 00:37:47.138 [2024-12-06 01:15:19.549516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.138 [2024-12-06 01:15:19.549550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.138 qpair failed and we were unable to recover it. 
00:37:47.138 [2024-12-06 01:15:19.549683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.138 [2024-12-06 01:15:19.549716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.138 qpair failed and we were unable to recover it. 00:37:47.139 [2024-12-06 01:15:19.549827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.139 [2024-12-06 01:15:19.549862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.139 qpair failed and we were unable to recover it. 00:37:47.139 [2024-12-06 01:15:19.549973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.139 [2024-12-06 01:15:19.550008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.139 qpair failed and we were unable to recover it. 00:37:47.139 [2024-12-06 01:15:19.550147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.139 [2024-12-06 01:15:19.550183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.139 qpair failed and we were unable to recover it. 00:37:47.139 [2024-12-06 01:15:19.550328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.139 [2024-12-06 01:15:19.550361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.139 qpair failed and we were unable to recover it. 00:37:47.139 [2024-12-06 01:15:19.550469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.139 [2024-12-06 01:15:19.550502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.139 qpair failed and we were unable to recover it. 00:37:47.139 [2024-12-06 01:15:19.550609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.139 [2024-12-06 01:15:19.550648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.139 qpair failed and we were unable to recover it. 00:37:47.139 [2024-12-06 01:15:19.550755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.139 [2024-12-06 01:15:19.550787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.139 qpair failed and we were unable to recover it. 00:37:47.139 [2024-12-06 01:15:19.550937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.139 [2024-12-06 01:15:19.550972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.139 qpair failed and we were unable to recover it. 00:37:47.139 [2024-12-06 01:15:19.551087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.139 [2024-12-06 01:15:19.551122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.139 qpair failed and we were unable to recover it. 
00:37:47.139 [2024-12-06 01:15:19.551261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.139 [2024-12-06 01:15:19.551295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.139 qpair failed and we were unable to recover it. 00:37:47.139 [2024-12-06 01:15:19.551406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.139 [2024-12-06 01:15:19.551443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.139 qpair failed and we were unable to recover it. 00:37:47.139 [2024-12-06 01:15:19.551554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.139 [2024-12-06 01:15:19.551589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.139 qpair failed and we were unable to recover it. 00:37:47.139 [2024-12-06 01:15:19.551726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.139 [2024-12-06 01:15:19.551759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.139 qpair failed and we were unable to recover it. 00:37:47.139 [2024-12-06 01:15:19.551875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.139 [2024-12-06 01:15:19.551908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.139 qpair failed and we were unable to recover it. 00:37:47.139 [2024-12-06 01:15:19.552017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.139 [2024-12-06 01:15:19.552052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.139 qpair failed and we were unable to recover it. 00:37:47.139 [2024-12-06 01:15:19.552154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.139 [2024-12-06 01:15:19.552187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.139 qpair failed and we were unable to recover it. 00:37:47.139 [2024-12-06 01:15:19.552293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.139 [2024-12-06 01:15:19.552327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.139 qpair failed and we were unable to recover it. 00:37:47.139 [2024-12-06 01:15:19.552446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.139 [2024-12-06 01:15:19.552482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.139 qpair failed and we were unable to recover it. 00:37:47.139 [2024-12-06 01:15:19.552615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.139 [2024-12-06 01:15:19.552648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.139 qpair failed and we were unable to recover it. 
00:37:47.139 [2024-12-06 01:15:19.552787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.139 [2024-12-06 01:15:19.552827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.139 qpair failed and we were unable to recover it. 00:37:47.139 [2024-12-06 01:15:19.552964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.139 [2024-12-06 01:15:19.552999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.139 qpair failed and we were unable to recover it. 00:37:47.139 [2024-12-06 01:15:19.553121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.139 [2024-12-06 01:15:19.553166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.139 qpair failed and we were unable to recover it. 00:37:47.139 [2024-12-06 01:15:19.553294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.139 [2024-12-06 01:15:19.553350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.139 qpair failed and we were unable to recover it. 00:37:47.139 [2024-12-06 01:15:19.553462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.139 [2024-12-06 01:15:19.553497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.139 qpair failed and we were unable to recover it. 00:37:47.139 [2024-12-06 01:15:19.553606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.139 [2024-12-06 01:15:19.553639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.139 qpair failed and we were unable to recover it. 00:37:47.139 [2024-12-06 01:15:19.553780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.139 [2024-12-06 01:15:19.553835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.139 qpair failed and we were unable to recover it. 00:37:47.139 [2024-12-06 01:15:19.553961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.139 [2024-12-06 01:15:19.553998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.139 qpair failed and we were unable to recover it. 00:37:47.139 [2024-12-06 01:15:19.554112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.139 [2024-12-06 01:15:19.554146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.139 qpair failed and we were unable to recover it. 00:37:47.139 [2024-12-06 01:15:19.554249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.139 [2024-12-06 01:15:19.554283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.139 qpair failed and we were unable to recover it. 
00:37:47.139 [2024-12-06 01:15:19.554405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.139 [2024-12-06 01:15:19.554446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.139 qpair failed and we were unable to recover it. 00:37:47.139 [2024-12-06 01:15:19.554549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.139 [2024-12-06 01:15:19.554584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.139 qpair failed and we were unable to recover it. 00:37:47.139 [2024-12-06 01:15:19.554705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.139 [2024-12-06 01:15:19.554740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.139 qpair failed and we were unable to recover it. 00:37:47.139 [2024-12-06 01:15:19.554880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.139 [2024-12-06 01:15:19.554914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.139 qpair failed and we were unable to recover it. 00:37:47.139 [2024-12-06 01:15:19.555019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.139 [2024-12-06 01:15:19.555051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.139 qpair failed and we were unable to recover it. 00:37:47.139 [2024-12-06 01:15:19.555183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.139 [2024-12-06 01:15:19.555217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.139 qpair failed and we were unable to recover it. 00:37:47.139 [2024-12-06 01:15:19.555356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.139 [2024-12-06 01:15:19.555392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.139 qpair failed and we were unable to recover it. 00:37:47.139 [2024-12-06 01:15:19.555526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.139 [2024-12-06 01:15:19.555560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.139 qpair failed and we were unable to recover it. 00:37:47.139 [2024-12-06 01:15:19.555689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.139 [2024-12-06 01:15:19.555725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.139 qpair failed and we were unable to recover it. 00:37:47.139 [2024-12-06 01:15:19.555839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.140 [2024-12-06 01:15:19.555874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.140 qpair failed and we were unable to recover it. 
00:37:47.140 [2024-12-06 01:15:19.555984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.140 [2024-12-06 01:15:19.556018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.140 qpair failed and we were unable to recover it. 00:37:47.140 [2024-12-06 01:15:19.556151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.140 [2024-12-06 01:15:19.556183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.140 qpair failed and we were unable to recover it. 00:37:47.140 [2024-12-06 01:15:19.556295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.140 [2024-12-06 01:15:19.556328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.140 qpair failed and we were unable to recover it. 00:37:47.140 [2024-12-06 01:15:19.556435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.140 [2024-12-06 01:15:19.556469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.140 qpair failed and we were unable to recover it. 00:37:47.140 [2024-12-06 01:15:19.556604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.140 [2024-12-06 01:15:19.556637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.140 qpair failed and we were unable to recover it. 00:37:47.140 [2024-12-06 01:15:19.556759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.140 [2024-12-06 01:15:19.556807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.140 qpair failed and we were unable to recover it. 00:37:47.140 [2024-12-06 01:15:19.556953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.140 [2024-12-06 01:15:19.557002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:47.140 qpair failed and we were unable to recover it. 00:37:47.140 [2024-12-06 01:15:19.557111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.140 [2024-12-06 01:15:19.557147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.140 qpair failed and we were unable to recover it. 00:37:47.140 [2024-12-06 01:15:19.557295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.140 [2024-12-06 01:15:19.557329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.140 qpair failed and we were unable to recover it. 00:37:47.140 [2024-12-06 01:15:19.557444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.140 [2024-12-06 01:15:19.557477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.140 qpair failed and we were unable to recover it. 
00:37:47.140 [2024-12-06 01:15:19.557587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.140 [2024-12-06 01:15:19.557620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.140 qpair failed and we were unable to recover it. 00:37:47.140 [2024-12-06 01:15:19.557751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.140 [2024-12-06 01:15:19.557783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.140 qpair failed and we were unable to recover it. 00:37:47.140 [2024-12-06 01:15:19.557925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.140 [2024-12-06 01:15:19.557957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.140 qpair failed and we were unable to recover it. 00:37:47.140 [2024-12-06 01:15:19.558060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.140 [2024-12-06 01:15:19.558092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.140 qpair failed and we were unable to recover it. 00:37:47.140 [2024-12-06 01:15:19.558222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.140 [2024-12-06 01:15:19.558255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.140 qpair failed and we were unable to recover it. 00:37:47.140 [2024-12-06 01:15:19.558361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.140 [2024-12-06 01:15:19.558394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.140 qpair failed and we were unable to recover it. 00:37:47.140 [2024-12-06 01:15:19.558501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.140 [2024-12-06 01:15:19.558534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.140 qpair failed and we were unable to recover it. 00:37:47.140 [2024-12-06 01:15:19.558648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.140 [2024-12-06 01:15:19.558685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.140 qpair failed and we were unable to recover it. 00:37:47.140 [2024-12-06 01:15:19.558805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.140 [2024-12-06 01:15:19.558861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.140 qpair failed and we were unable to recover it. 00:37:47.140 [2024-12-06 01:15:19.558981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.140 [2024-12-06 01:15:19.559018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.140 qpair failed and we were unable to recover it. 
00:37:47.140 [2024-12-06 01:15:19.559126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.140 [2024-12-06 01:15:19.559161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.140 qpair failed and we were unable to recover it. 00:37:47.140 [2024-12-06 01:15:19.559319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.140 [2024-12-06 01:15:19.559353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.140 qpair failed and we were unable to recover it. 00:37:47.140 [2024-12-06 01:15:19.559453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.140 [2024-12-06 01:15:19.559487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.140 qpair failed and we were unable to recover it. 00:37:47.140 [2024-12-06 01:15:19.559597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.140 [2024-12-06 01:15:19.559633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.140 qpair failed and we were unable to recover it. 00:37:47.140 [2024-12-06 01:15:19.559749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.140 [2024-12-06 01:15:19.559785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.140 qpair failed and we were unable to recover it. 00:37:47.140 [2024-12-06 01:15:19.559916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.140 [2024-12-06 01:15:19.559965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:47.140 qpair failed and we were unable to recover it. 00:37:47.140 [2024-12-06 01:15:19.560110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.140 [2024-12-06 01:15:19.560145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.140 qpair failed and we were unable to recover it. 00:37:47.140 [2024-12-06 01:15:19.560253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.140 [2024-12-06 01:15:19.560285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.140 qpair failed and we were unable to recover it. 00:37:47.140 [2024-12-06 01:15:19.560443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.140 [2024-12-06 01:15:19.560476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.140 qpair failed and we were unable to recover it. 00:37:47.140 [2024-12-06 01:15:19.560578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.140 [2024-12-06 01:15:19.560611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.140 qpair failed and we were unable to recover it. 
00:37:47.140 [2024-12-06 01:15:19.560725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.140 [2024-12-06 01:15:19.560766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.140 qpair failed and we were unable to recover it. 00:37:47.140 [2024-12-06 01:15:19.560889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.140 [2024-12-06 01:15:19.560925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.140 qpair failed and we were unable to recover it. 00:37:47.140 [2024-12-06 01:15:19.561031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.140 [2024-12-06 01:15:19.561065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.140 qpair failed and we were unable to recover it. 00:37:47.140 [2024-12-06 01:15:19.561177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.140 [2024-12-06 01:15:19.561210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.140 qpair failed and we were unable to recover it. 00:37:47.140 [2024-12-06 01:15:19.561355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.140 [2024-12-06 01:15:19.561389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.140 qpair failed and we were unable to recover it. 00:37:47.140 [2024-12-06 01:15:19.561502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.140 [2024-12-06 01:15:19.561536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.140 qpair failed and we were unable to recover it. 00:37:47.140 [2024-12-06 01:15:19.561643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.140 [2024-12-06 01:15:19.561678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.140 qpair failed and we were unable to recover it. 00:37:47.140 [2024-12-06 01:15:19.561795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.141 [2024-12-06 01:15:19.561835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.141 qpair failed and we were unable to recover it. 00:37:47.141 [2024-12-06 01:15:19.561958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.141 [2024-12-06 01:15:19.562007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:47.141 qpair failed and we were unable to recover it. 00:37:47.141 [2024-12-06 01:15:19.562121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.141 [2024-12-06 01:15:19.562158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:47.141 qpair failed and we were unable to recover it. 
00:37:47.141 [2024-12-06 01:15:19.562311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.141 [2024-12-06 01:15:19.562359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.141 qpair failed and we were unable to recover it. 00:37:47.141 [2024-12-06 01:15:19.562479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.141 [2024-12-06 01:15:19.562515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.141 qpair failed and we were unable to recover it. 00:37:47.141 [2024-12-06 01:15:19.562631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.141 [2024-12-06 01:15:19.562666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.141 qpair failed and we were unable to recover it. 00:37:47.141 [2024-12-06 01:15:19.562803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.141 [2024-12-06 01:15:19.562846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.141 qpair failed and we were unable to recover it. 00:37:47.141 [2024-12-06 01:15:19.562968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.141 [2024-12-06 01:15:19.563002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.141 qpair failed and we were unable to recover it. 00:37:47.141 [2024-12-06 01:15:19.563111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.141 [2024-12-06 01:15:19.563144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.141 qpair failed and we were unable to recover it. 00:37:47.141 [2024-12-06 01:15:19.563307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.141 [2024-12-06 01:15:19.563341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.141 qpair failed and we were unable to recover it. 00:37:47.141 [2024-12-06 01:15:19.563489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.141 [2024-12-06 01:15:19.563538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.141 qpair failed and we were unable to recover it. 00:37:47.141 [2024-12-06 01:15:19.563649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.141 [2024-12-06 01:15:19.563685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.141 qpair failed and we were unable to recover it. 00:37:47.141 [2024-12-06 01:15:19.563812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.141 [2024-12-06 01:15:19.563870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:47.141 qpair failed and we were unable to recover it. 
00:37:47.141 [2024-12-06 01:15:19.564016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.141 [2024-12-06 01:15:19.564053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:47.141 qpair failed and we were unable to recover it. 00:37:47.141 [2024-12-06 01:15:19.564168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.141 [2024-12-06 01:15:19.564204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:47.141 qpair failed and we were unable to recover it. 00:37:47.141 [2024-12-06 01:15:19.564421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.141 [2024-12-06 01:15:19.564456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:47.141 qpair failed and we were unable to recover it. 00:37:47.141 [2024-12-06 01:15:19.564590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.141 [2024-12-06 01:15:19.564625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.141 qpair failed and we were unable to recover it. 00:37:47.141 [2024-12-06 01:15:19.564750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.141 [2024-12-06 01:15:19.564798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.141 qpair failed and we were unable to recover it. 00:37:47.141 [2024-12-06 01:15:19.564934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.141 [2024-12-06 01:15:19.564970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.141 qpair failed and we were unable to recover it. 00:37:47.141 [2024-12-06 01:15:19.565072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.141 [2024-12-06 01:15:19.565106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.141 qpair failed and we were unable to recover it. 00:37:47.141 [2024-12-06 01:15:19.565231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.141 [2024-12-06 01:15:19.565267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.141 qpair failed and we were unable to recover it. 00:37:47.141 [2024-12-06 01:15:19.565401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.141 [2024-12-06 01:15:19.565436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.141 qpair failed and we were unable to recover it. 00:37:47.141 [2024-12-06 01:15:19.565549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.141 [2024-12-06 01:15:19.565585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:47.141 qpair failed and we were unable to recover it. 
00:37:47.141 [2024-12-06 01:15:19.565701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.141 [2024-12-06 01:15:19.565737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:47.141 qpair failed and we were unable to recover it. 00:37:47.141 [2024-12-06 01:15:19.565874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.141 [2024-12-06 01:15:19.565909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:47.141 qpair failed and we were unable to recover it. 00:37:47.141 [2024-12-06 01:15:19.566023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.141 [2024-12-06 01:15:19.566058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:47.141 qpair failed and we were unable to recover it. 00:37:47.141 [2024-12-06 01:15:19.566165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.141 [2024-12-06 01:15:19.566200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:47.141 qpair failed and we were unable to recover it. 00:37:47.141 [2024-12-06 01:15:19.566320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.141 [2024-12-06 01:15:19.566368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.141 qpair failed and we were unable to recover it. 00:37:47.141 [2024-12-06 01:15:19.566491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.141 [2024-12-06 01:15:19.566528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.141 qpair failed and we were unable to recover it. 00:37:47.141 [2024-12-06 01:15:19.566685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.141 [2024-12-06 01:15:19.566733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.141 qpair failed and we were unable to recover it. 00:37:47.141 [2024-12-06 01:15:19.566862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.141 [2024-12-06 01:15:19.566897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:47.141 qpair failed and we were unable to recover it. 00:37:47.141 [2024-12-06 01:15:19.567030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.141 [2024-12-06 01:15:19.567067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.141 qpair failed and we were unable to recover it. 00:37:47.141 [2024-12-06 01:15:19.567174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.141 [2024-12-06 01:15:19.567208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.141 qpair failed and we were unable to recover it. 
00:37:47.141 [2024-12-06 01:15:19.567354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.141 [2024-12-06 01:15:19.567388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.141 qpair failed and we were unable to recover it. 00:37:47.141 [2024-12-06 01:15:19.567544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.141 [2024-12-06 01:15:19.567580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:47.141 qpair failed and we were unable to recover it. 00:37:47.141 [2024-12-06 01:15:19.567689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.141 [2024-12-06 01:15:19.567726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.141 qpair failed and we were unable to recover it. 00:37:47.141 [2024-12-06 01:15:19.567839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.141 [2024-12-06 01:15:19.567877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.141 qpair failed and we were unable to recover it. 00:37:47.141 [2024-12-06 01:15:19.568015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.141 [2024-12-06 01:15:19.568049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.141 qpair failed and we were unable to recover it. 00:37:47.141 [2024-12-06 01:15:19.568188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.142 [2024-12-06 01:15:19.568221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.142 qpair failed and we were unable to recover it. 00:37:47.142 [2024-12-06 01:15:19.568354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.142 [2024-12-06 01:15:19.568388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.142 qpair failed and we were unable to recover it. 00:37:47.142 [2024-12-06 01:15:19.568502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.142 [2024-12-06 01:15:19.568538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.142 qpair failed and we were unable to recover it. 00:37:47.142 [2024-12-06 01:15:19.568699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.142 [2024-12-06 01:15:19.568748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:47.142 qpair failed and we were unable to recover it. 00:37:47.142 [2024-12-06 01:15:19.568886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.142 [2024-12-06 01:15:19.568934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.142 qpair failed and we were unable to recover it. 
00:37:47.142 [2024-12-06 01:15:19.569074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.142 [2024-12-06 01:15:19.569111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.142 qpair failed and we were unable to recover it. 00:37:47.142 [2024-12-06 01:15:19.569234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.142 [2024-12-06 01:15:19.569269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.142 qpair failed and we were unable to recover it. 00:37:47.142 [2024-12-06 01:15:19.569381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.142 [2024-12-06 01:15:19.569416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.142 qpair failed and we were unable to recover it. 00:37:47.142 [2024-12-06 01:15:19.569548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.142 [2024-12-06 01:15:19.569582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.142 qpair failed and we were unable to recover it. 00:37:47.142 [2024-12-06 01:15:19.569696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.142 [2024-12-06 01:15:19.569731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.142 qpair failed and we were unable to recover it. 00:37:47.142 [2024-12-06 01:15:19.569876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.142 [2024-12-06 01:15:19.569915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:47.142 qpair failed and we were unable to recover it. 00:37:47.142 [2024-12-06 01:15:19.570031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.142 [2024-12-06 01:15:19.570067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.142 qpair failed and we were unable to recover it. 00:37:47.142 [2024-12-06 01:15:19.570174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.142 [2024-12-06 01:15:19.570207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.142 qpair failed and we were unable to recover it. 00:37:47.142 [2024-12-06 01:15:19.570316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.142 [2024-12-06 01:15:19.570348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.142 qpair failed and we were unable to recover it. 00:37:47.142 [2024-12-06 01:15:19.570484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.142 [2024-12-06 01:15:19.570517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.142 qpair failed and we were unable to recover it. 
00:37:47.142 [2024-12-06 01:15:19.570619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.142 [2024-12-06 01:15:19.570652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.142 qpair failed and we were unable to recover it. 00:37:47.142 [2024-12-06 01:15:19.570793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.142 [2024-12-06 01:15:19.570835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.142 qpair failed and we were unable to recover it. 00:37:47.142 [2024-12-06 01:15:19.570949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.142 [2024-12-06 01:15:19.570985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:47.142 qpair failed and we were unable to recover it. 00:37:47.142 [2024-12-06 01:15:19.571110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.142 [2024-12-06 01:15:19.571160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.142 qpair failed and we were unable to recover it. 00:37:47.142 [2024-12-06 01:15:19.571276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.142 [2024-12-06 01:15:19.571309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.142 qpair failed and we were unable to recover it. 00:37:47.142 [2024-12-06 01:15:19.571419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.142 [2024-12-06 01:15:19.571452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.142 qpair failed and we were unable to recover it. 00:37:47.142 [2024-12-06 01:15:19.571549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.142 [2024-12-06 01:15:19.571582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.142 qpair failed and we were unable to recover it. 00:37:47.142 [2024-12-06 01:15:19.571691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.142 [2024-12-06 01:15:19.571729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.142 qpair failed and we were unable to recover it. 00:37:47.142 [2024-12-06 01:15:19.571845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.142 [2024-12-06 01:15:19.571881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.142 qpair failed and we were unable to recover it. 00:37:47.142 [2024-12-06 01:15:19.571988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.142 [2024-12-06 01:15:19.572023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.142 qpair failed and we were unable to recover it. 
00:37:47.142 [2024-12-06 01:15:19.572137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.142 [2024-12-06 01:15:19.572174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.142 qpair failed and we were unable to recover it. 00:37:47.142 [2024-12-06 01:15:19.572331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.142 [2024-12-06 01:15:19.572366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.142 qpair failed and we were unable to recover it. 00:37:47.142 [2024-12-06 01:15:19.572503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.142 [2024-12-06 01:15:19.572540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:47.142 qpair failed and we were unable to recover it. 00:37:47.142 [2024-12-06 01:15:19.572646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.142 [2024-12-06 01:15:19.572681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:47.142 qpair failed and we were unable to recover it. 00:37:47.142 [2024-12-06 01:15:19.572794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.142 [2024-12-06 01:15:19.572835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:47.142 qpair failed and we were unable to recover it. 00:37:47.142 [2024-12-06 01:15:19.572950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.142 [2024-12-06 01:15:19.572985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:47.142 qpair failed and we were unable to recover it. 00:37:47.142 [2024-12-06 01:15:19.573093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.142 [2024-12-06 01:15:19.573129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.142 qpair failed and we were unable to recover it. 00:37:47.142 [2024-12-06 01:15:19.573238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.142 [2024-12-06 01:15:19.573272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.142 qpair failed and we were unable to recover it. 00:37:47.142 [2024-12-06 01:15:19.573386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.142 [2024-12-06 01:15:19.573420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.142 qpair failed and we were unable to recover it. 00:37:47.142 [2024-12-06 01:15:19.573556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.142 [2024-12-06 01:15:19.573590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.142 qpair failed and we were unable to recover it. 
00:37:47.143 [2024-12-06 01:15:19.573752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.143 [2024-12-06 01:15:19.573801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.143 qpair failed and we were unable to recover it. 00:37:47.143 [2024-12-06 01:15:19.573942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.143 [2024-12-06 01:15:19.573979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.143 qpair failed and we were unable to recover it. 00:37:47.143 [2024-12-06 01:15:19.574087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.143 [2024-12-06 01:15:19.574121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.143 qpair failed and we were unable to recover it. 00:37:47.143 [2024-12-06 01:15:19.574222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.143 [2024-12-06 01:15:19.574256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.143 qpair failed and we were unable to recover it. 00:37:47.143 [2024-12-06 01:15:19.574390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.143 [2024-12-06 01:15:19.574422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.143 qpair failed and we were unable to recover it. 00:37:47.143 [2024-12-06 01:15:19.574531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.143 [2024-12-06 01:15:19.574567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:47.143 qpair failed and we were unable to recover it. 00:37:47.143 [2024-12-06 01:15:19.574673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.143 [2024-12-06 01:15:19.574708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.143 qpair failed and we were unable to recover it. 00:37:47.143 [2024-12-06 01:15:19.574849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.143 [2024-12-06 01:15:19.574883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.143 qpair failed and we were unable to recover it. 00:37:47.143 [2024-12-06 01:15:19.574989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.143 [2024-12-06 01:15:19.575024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.143 qpair failed and we were unable to recover it. 00:37:47.143 [2024-12-06 01:15:19.575158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.143 [2024-12-06 01:15:19.575192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.143 qpair failed and we were unable to recover it. 
00:37:47.143 [2024-12-06 01:15:19.575300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.143 [2024-12-06 01:15:19.575334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.143 qpair failed and we were unable to recover it. 00:37:47.143 [2024-12-06 01:15:19.575441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.143 [2024-12-06 01:15:19.575476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.143 qpair failed and we were unable to recover it. 00:37:47.143 [2024-12-06 01:15:19.575592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.143 [2024-12-06 01:15:19.575629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:47.143 qpair failed and we were unable to recover it. 00:37:47.143 [2024-12-06 01:15:19.575737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.143 [2024-12-06 01:15:19.575772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:47.143 qpair failed and we were unable to recover it. 00:37:47.143 [2024-12-06 01:15:19.575898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.143 [2024-12-06 01:15:19.575934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.143 qpair failed and we were unable to recover it. 00:37:47.143 [2024-12-06 01:15:19.576063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.143 [2024-12-06 01:15:19.576098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.143 qpair failed and we were unable to recover it. 00:37:47.143 [2024-12-06 01:15:19.576208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.143 [2024-12-06 01:15:19.576243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.143 qpair failed and we were unable to recover it. 00:37:47.143 [2024-12-06 01:15:19.576356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.143 [2024-12-06 01:15:19.576390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.143 qpair failed and we were unable to recover it. 00:37:47.143 [2024-12-06 01:15:19.576505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.143 [2024-12-06 01:15:19.576540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.143 qpair failed and we were unable to recover it. 00:37:47.143 [2024-12-06 01:15:19.576671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.143 [2024-12-06 01:15:19.576705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.143 qpair failed and we were unable to recover it. 
00:37:47.143 [2024-12-06 01:15:19.576836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.143 [2024-12-06 01:15:19.576885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:47.143 qpair failed and we were unable to recover it. 00:37:47.143 [2024-12-06 01:15:19.577007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.143 [2024-12-06 01:15:19.577044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:47.143 qpair failed and we were unable to recover it. 00:37:47.143 [2024-12-06 01:15:19.577164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.143 [2024-12-06 01:15:19.577200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:47.143 qpair failed and we were unable to recover it. 00:37:47.143 [2024-12-06 01:15:19.577331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.143 [2024-12-06 01:15:19.577366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:47.143 qpair failed and we were unable to recover it. 00:37:47.143 [2024-12-06 01:15:19.577502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.143 [2024-12-06 01:15:19.577537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.143 qpair failed and we were unable to recover it. 00:37:47.143 [2024-12-06 01:15:19.577645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.143 [2024-12-06 01:15:19.577679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.143 qpair failed and we were unable to recover it. 00:37:47.143 [2024-12-06 01:15:19.577784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.143 [2024-12-06 01:15:19.577824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.143 qpair failed and we were unable to recover it. 00:37:47.143 [2024-12-06 01:15:19.577956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.143 [2024-12-06 01:15:19.577996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.143 qpair failed and we were unable to recover it. 00:37:47.143 [2024-12-06 01:15:19.578107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.143 [2024-12-06 01:15:19.578141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.143 qpair failed and we were unable to recover it. 00:37:47.143 [2024-12-06 01:15:19.578247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.143 [2024-12-06 01:15:19.578281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.143 qpair failed and we were unable to recover it. 
00:37:47.143 [2024-12-06 01:15:19.578390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.143 [2024-12-06 01:15:19.578424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.143 qpair failed and we were unable to recover it. 00:37:47.143 [2024-12-06 01:15:19.578554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.143 [2024-12-06 01:15:19.578615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.143 qpair failed and we were unable to recover it. 00:37:47.143 [2024-12-06 01:15:19.578857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.143 [2024-12-06 01:15:19.578906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.143 qpair failed and we were unable to recover it. 00:37:47.143 [2024-12-06 01:15:19.579026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.143 [2024-12-06 01:15:19.579063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:47.143 qpair failed and we were unable to recover it. 00:37:47.143 [2024-12-06 01:15:19.579200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.143 [2024-12-06 01:15:19.579235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:47.143 qpair failed and we were unable to recover it. 00:37:47.143 [2024-12-06 01:15:19.579339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.143 [2024-12-06 01:15:19.579374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:47.143 qpair failed and we were unable to recover it. 00:37:47.143 [2024-12-06 01:15:19.579516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.143 [2024-12-06 01:15:19.579551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:47.143 qpair failed and we were unable to recover it. 00:37:47.143 [2024-12-06 01:15:19.579667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.143 [2024-12-06 01:15:19.579702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.143 qpair failed and we were unable to recover it. 00:37:47.143 [2024-12-06 01:15:19.579846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.144 [2024-12-06 01:15:19.579880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.144 qpair failed and we were unable to recover it. 00:37:47.144 [2024-12-06 01:15:19.579987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.144 [2024-12-06 01:15:19.580021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.144 qpair failed and we were unable to recover it. 
00:37:47.144 [2024-12-06 01:15:19.580129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.144 [2024-12-06 01:15:19.580163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.144 qpair failed and we were unable to recover it. 00:37:47.144 [2024-12-06 01:15:19.580300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.144 [2024-12-06 01:15:19.580334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.144 qpair failed and we were unable to recover it. 00:37:47.144 [2024-12-06 01:15:19.580442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.144 [2024-12-06 01:15:19.580476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.144 qpair failed and we were unable to recover it. 00:37:47.144 [2024-12-06 01:15:19.580618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.144 [2024-12-06 01:15:19.580652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.144 qpair failed and we were unable to recover it. 00:37:47.144 [2024-12-06 01:15:19.580773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.144 [2024-12-06 01:15:19.580830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.144 qpair failed and we were unable to recover it. 00:37:47.144 [2024-12-06 01:15:19.580955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.144 [2024-12-06 01:15:19.581003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.144 qpair failed and we were unable to recover it. 00:37:47.144 [2024-12-06 01:15:19.581127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.144 [2024-12-06 01:15:19.581163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:47.144 qpair failed and we were unable to recover it. 00:37:47.144 [2024-12-06 01:15:19.581274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.144 [2024-12-06 01:15:19.581308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:47.144 qpair failed and we were unable to recover it. 00:37:47.144 [2024-12-06 01:15:19.581481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.144 [2024-12-06 01:15:19.581518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:47.144 qpair failed and we were unable to recover it. 00:37:47.144 [2024-12-06 01:15:19.581629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.144 [2024-12-06 01:15:19.581675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:47.144 qpair failed and we were unable to recover it. 
00:37:47.144 [2024-12-06 01:15:19.581790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.144 [2024-12-06 01:15:19.581833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:47.144 qpair failed and we were unable to recover it. 00:37:47.144 [2024-12-06 01:15:19.581955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.144 [2024-12-06 01:15:19.582004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.144 qpair failed and we were unable to recover it. 00:37:47.144 [2024-12-06 01:15:19.582151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.144 [2024-12-06 01:15:19.582188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.144 qpair failed and we were unable to recover it. 00:37:47.144 [2024-12-06 01:15:19.582324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.144 [2024-12-06 01:15:19.582359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.144 qpair failed and we were unable to recover it. 00:37:47.144 [2024-12-06 01:15:19.582480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.144 [2024-12-06 01:15:19.582514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.144 qpair failed and we were unable to recover it. 00:37:47.144 [2024-12-06 01:15:19.582634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.144 [2024-12-06 01:15:19.582681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.144 qpair failed and we were unable to recover it. 00:37:47.144 [2024-12-06 01:15:19.582834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.144 [2024-12-06 01:15:19.582871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.144 qpair failed and we were unable to recover it. 00:37:47.144 [2024-12-06 01:15:19.583004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.144 [2024-12-06 01:15:19.583040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:47.144 qpair failed and we were unable to recover it. 00:37:47.144 [2024-12-06 01:15:19.583180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.144 [2024-12-06 01:15:19.583214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:47.144 qpair failed and we were unable to recover it. 00:37:47.144 [2024-12-06 01:15:19.583361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.144 [2024-12-06 01:15:19.583410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.144 qpair failed and we were unable to recover it. 
00:37:47.144 [2024-12-06 01:15:19.583525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.144 [2024-12-06 01:15:19.583561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.144 qpair failed and we were unable to recover it. 00:37:47.144 [2024-12-06 01:15:19.583676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.144 [2024-12-06 01:15:19.583711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.144 qpair failed and we were unable to recover it. 00:37:47.144 [2024-12-06 01:15:19.583810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.144 [2024-12-06 01:15:19.583851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.144 qpair failed and we were unable to recover it. 00:37:47.144 [2024-12-06 01:15:19.583986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.144 [2024-12-06 01:15:19.584020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.144 qpair failed and we were unable to recover it. 00:37:47.144 [2024-12-06 01:15:19.584133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.144 [2024-12-06 01:15:19.584169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.144 qpair failed and we were unable to recover it. 00:37:47.144 [2024-12-06 01:15:19.584324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.144 [2024-12-06 01:15:19.584359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.144 qpair failed and we were unable to recover it. 00:37:47.144 [2024-12-06 01:15:19.584510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.144 [2024-12-06 01:15:19.584558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.144 qpair failed and we were unable to recover it. 00:37:47.144 [2024-12-06 01:15:19.584673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.144 [2024-12-06 01:15:19.584713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.144 qpair failed and we were unable to recover it. 00:37:47.144 [2024-12-06 01:15:19.584835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.144 [2024-12-06 01:15:19.584893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:47.144 qpair failed and we were unable to recover it. 00:37:47.144 [2024-12-06 01:15:19.585004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.144 [2024-12-06 01:15:19.585039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.144 qpair failed and we were unable to recover it. 
00:37:47.144 [2024-12-06 01:15:19.585147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.144 [2024-12-06 01:15:19.585181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.144 qpair failed and we were unable to recover it. 00:37:47.144 [2024-12-06 01:15:19.585318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.144 [2024-12-06 01:15:19.585352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.144 qpair failed and we were unable to recover it. 00:37:47.144 [2024-12-06 01:15:19.585497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.144 [2024-12-06 01:15:19.585531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.144 qpair failed and we were unable to recover it. 00:37:47.144 [2024-12-06 01:15:19.585630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.144 [2024-12-06 01:15:19.585664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.144 qpair failed and we were unable to recover it. 00:37:47.144 [2024-12-06 01:15:19.585802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.144 [2024-12-06 01:15:19.585845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.144 qpair failed and we were unable to recover it. 00:37:47.144 [2024-12-06 01:15:19.585958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.144 [2024-12-06 01:15:19.585993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:47.144 qpair failed and we were unable to recover it. 00:37:47.144 [2024-12-06 01:15:19.586109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.145 [2024-12-06 01:15:19.586147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.145 qpair failed and we were unable to recover it. 00:37:47.145 [2024-12-06 01:15:19.586288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.145 [2024-12-06 01:15:19.586323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.145 qpair failed and we were unable to recover it. 00:37:47.145 [2024-12-06 01:15:19.586455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.145 [2024-12-06 01:15:19.586489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.145 qpair failed and we were unable to recover it. 00:37:47.145 [2024-12-06 01:15:19.586602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.145 [2024-12-06 01:15:19.586637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.145 qpair failed and we were unable to recover it. 
00:37:47.145 [2024-12-06 01:15:19.586744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.145 [2024-12-06 01:15:19.586779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.145 qpair failed and we were unable to recover it. 00:37:47.145 [2024-12-06 01:15:19.586901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.145 [2024-12-06 01:15:19.586937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.145 qpair failed and we were unable to recover it. 00:37:47.145 [2024-12-06 01:15:19.587049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.145 [2024-12-06 01:15:19.587083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.145 qpair failed and we were unable to recover it. 00:37:47.145 [2024-12-06 01:15:19.587196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.145 [2024-12-06 01:15:19.587230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.145 qpair failed and we were unable to recover it. 00:37:47.145 [2024-12-06 01:15:19.587366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.145 [2024-12-06 01:15:19.587412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.145 qpair failed and we were unable to recover it. 00:37:47.145 [2024-12-06 01:15:19.587519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.145 [2024-12-06 01:15:19.587553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.145 qpair failed and we were unable to recover it. 00:37:47.145 [2024-12-06 01:15:19.587701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.145 [2024-12-06 01:15:19.587735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.145 qpair failed and we were unable to recover it. 00:37:47.145 [2024-12-06 01:15:19.587839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.145 [2024-12-06 01:15:19.587874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.145 qpair failed and we were unable to recover it. 00:37:47.145 [2024-12-06 01:15:19.588010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.145 [2024-12-06 01:15:19.588044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.145 qpair failed and we were unable to recover it. 00:37:47.145 [2024-12-06 01:15:19.588148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.145 [2024-12-06 01:15:19.588183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.145 qpair failed and we were unable to recover it. 
00:37:47.145 [2024-12-06 01:15:19.588345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.145 [2024-12-06 01:15:19.588379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.145 qpair failed and we were unable to recover it. 00:37:47.145 [2024-12-06 01:15:19.588488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.145 [2024-12-06 01:15:19.588522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.145 qpair failed and we were unable to recover it. 00:37:47.145 [2024-12-06 01:15:19.588628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.145 [2024-12-06 01:15:19.588663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.145 qpair failed and we were unable to recover it. 00:37:47.145 [2024-12-06 01:15:19.588794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.145 [2024-12-06 01:15:19.588841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.145 qpair failed and we were unable to recover it. 00:37:47.145 [2024-12-06 01:15:19.588960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.145 [2024-12-06 01:15:19.588994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.145 qpair failed and we were unable to recover it. 00:37:47.145 [2024-12-06 01:15:19.589132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.145 [2024-12-06 01:15:19.589166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.145 qpair failed and we were unable to recover it. 00:37:47.145 [2024-12-06 01:15:19.589275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.145 [2024-12-06 01:15:19.589309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.145 qpair failed and we were unable to recover it. 00:37:47.145 [2024-12-06 01:15:19.589419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.145 [2024-12-06 01:15:19.589453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.145 qpair failed and we were unable to recover it. 00:37:47.145 [2024-12-06 01:15:19.589564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.145 [2024-12-06 01:15:19.589597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.145 qpair failed and we were unable to recover it. 00:37:47.145 [2024-12-06 01:15:19.589729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.145 [2024-12-06 01:15:19.589764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.145 qpair failed and we were unable to recover it. 
00:37:47.145 [2024-12-06 01:15:19.589882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.145 [2024-12-06 01:15:19.589916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.145 qpair failed and we were unable to recover it. 00:37:47.145 [2024-12-06 01:15:19.590022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.145 [2024-12-06 01:15:19.590056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.145 qpair failed and we were unable to recover it. 00:37:47.145 [2024-12-06 01:15:19.590176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.145 [2024-12-06 01:15:19.590224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.145 qpair failed and we were unable to recover it. 00:37:47.145 [2024-12-06 01:15:19.590363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.145 [2024-12-06 01:15:19.590400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.145 qpair failed and we were unable to recover it. 00:37:47.145 [2024-12-06 01:15:19.590535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.145 [2024-12-06 01:15:19.590571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.145 qpair failed and we were unable to recover it. 00:37:47.145 [2024-12-06 01:15:19.590692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.145 [2024-12-06 01:15:19.590727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.145 qpair failed and we were unable to recover it. 00:37:47.145 [2024-12-06 01:15:19.590844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.145 [2024-12-06 01:15:19.590879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.145 qpair failed and we were unable to recover it. 00:37:47.145 [2024-12-06 01:15:19.590990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.145 [2024-12-06 01:15:19.591033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.145 qpair failed and we were unable to recover it. 00:37:47.145 [2024-12-06 01:15:19.591145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.145 [2024-12-06 01:15:19.591180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.145 qpair failed and we were unable to recover it. 00:37:47.145 [2024-12-06 01:15:19.591315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.145 [2024-12-06 01:15:19.591349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.145 qpair failed and we were unable to recover it. 
00:37:47.145 [2024-12-06 01:15:19.591468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.145 [2024-12-06 01:15:19.591516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.145 qpair failed and we were unable to recover it. 00:37:47.145 [2024-12-06 01:15:19.591631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.145 [2024-12-06 01:15:19.591667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.145 qpair failed and we were unable to recover it. 00:37:47.145 [2024-12-06 01:15:19.591790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.145 [2024-12-06 01:15:19.591850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:47.145 qpair failed and we were unable to recover it. 00:37:47.145 [2024-12-06 01:15:19.591971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.145 [2024-12-06 01:15:19.592009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:47.145 qpair failed and we were unable to recover it. 00:37:47.145 [2024-12-06 01:15:19.592121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.145 [2024-12-06 01:15:19.592156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:47.146 qpair failed and we were unable to recover it. 00:37:47.146 [2024-12-06 01:15:19.592272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.146 [2024-12-06 01:15:19.592306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:47.146 qpair failed and we were unable to recover it. 00:37:47.146 [2024-12-06 01:15:19.592418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.146 [2024-12-06 01:15:19.592453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:47.146 qpair failed and we were unable to recover it. 00:37:47.146 [2024-12-06 01:15:19.592550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.146 [2024-12-06 01:15:19.592585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:47.146 qpair failed and we were unable to recover it. 00:37:47.146 [2024-12-06 01:15:19.592715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.146 [2024-12-06 01:15:19.592750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.146 qpair failed and we were unable to recover it. 00:37:47.146 [2024-12-06 01:15:19.592901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.146 [2024-12-06 01:15:19.592937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.146 qpair failed and we were unable to recover it. 
00:37:47.146 [2024-12-06 01:15:19.593061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.146 [2024-12-06 01:15:19.593108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.146 qpair failed and we were unable to recover it. 00:37:47.146 [2024-12-06 01:15:19.593235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.146 [2024-12-06 01:15:19.593271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.146 qpair failed and we were unable to recover it. 00:37:47.146 [2024-12-06 01:15:19.593407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.146 [2024-12-06 01:15:19.593440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.146 qpair failed and we were unable to recover it. 00:37:47.146 [2024-12-06 01:15:19.593546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.146 [2024-12-06 01:15:19.593579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.146 qpair failed and we were unable to recover it. 00:37:47.146 [2024-12-06 01:15:19.593695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.146 [2024-12-06 01:15:19.593731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:47.146 qpair failed and we were unable to recover it. 00:37:47.146 [2024-12-06 01:15:19.593897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.146 [2024-12-06 01:15:19.593946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.146 qpair failed and we were unable to recover it. 00:37:47.146 [2024-12-06 01:15:19.594072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.146 [2024-12-06 01:15:19.594110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.146 qpair failed and we were unable to recover it. 00:37:47.146 [2024-12-06 01:15:19.594223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.146 [2024-12-06 01:15:19.594258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.146 qpair failed and we were unable to recover it. 00:37:47.146 [2024-12-06 01:15:19.594367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.146 [2024-12-06 01:15:19.594411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.146 qpair failed and we were unable to recover it. 00:37:47.146 [2024-12-06 01:15:19.594551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.146 [2024-12-06 01:15:19.594587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.146 qpair failed and we were unable to recover it. 
00:37:47.146 [2024-12-06 01:15:19.594742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.146 [2024-12-06 01:15:19.594790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.146 qpair failed and we were unable to recover it. 00:37:47.146 [2024-12-06 01:15:19.594956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.146 [2024-12-06 01:15:19.595004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:47.146 qpair failed and we were unable to recover it. 00:37:47.146 [2024-12-06 01:15:19.595147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.146 [2024-12-06 01:15:19.595182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.146 qpair failed and we were unable to recover it. 00:37:47.146 [2024-12-06 01:15:19.595320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.146 [2024-12-06 01:15:19.595355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.146 qpair failed and we were unable to recover it. 00:37:47.146 [2024-12-06 01:15:19.595499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.146 [2024-12-06 01:15:19.595534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.146 qpair failed and we were unable to recover it. 00:37:47.146 [2024-12-06 01:15:19.595639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.146 [2024-12-06 01:15:19.595672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.146 qpair failed and we were unable to recover it. 00:37:47.146 [2024-12-06 01:15:19.595783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.146 [2024-12-06 01:15:19.595826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.146 qpair failed and we were unable to recover it. 00:37:47.146 [2024-12-06 01:15:19.595961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.146 [2024-12-06 01:15:19.596009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.146 qpair failed and we were unable to recover it. 00:37:47.146 [2024-12-06 01:15:19.596127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.146 [2024-12-06 01:15:19.596164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:47.146 qpair failed and we were unable to recover it. 00:37:47.146 [2024-12-06 01:15:19.596389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.146 [2024-12-06 01:15:19.596426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:47.146 qpair failed and we were unable to recover it. 
00:37:47.146 [2024-12-06 01:15:19.596561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.146 [2024-12-06 01:15:19.596596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:47.146 qpair failed and we were unable to recover it. 00:37:47.146 [2024-12-06 01:15:19.596698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.146 [2024-12-06 01:15:19.596733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:47.146 qpair failed and we were unable to recover it. 00:37:47.146 [2024-12-06 01:15:19.596843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.146 [2024-12-06 01:15:19.596878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:47.146 qpair failed and we were unable to recover it. 00:37:47.146 [2024-12-06 01:15:19.597015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.146 [2024-12-06 01:15:19.597049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:47.146 qpair failed and we were unable to recover it. 00:37:47.146 [2024-12-06 01:15:19.597195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.146 [2024-12-06 01:15:19.597230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:47.146 qpair failed and we were unable to recover it. 00:37:47.146 [2024-12-06 01:15:19.597384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.146 [2024-12-06 01:15:19.597433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.146 qpair failed and we were unable to recover it. 00:37:47.146 [2024-12-06 01:15:19.597554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.146 [2024-12-06 01:15:19.597590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.146 qpair failed and we were unable to recover it. 00:37:47.146 [2024-12-06 01:15:19.597731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.146 [2024-12-06 01:15:19.597771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.146 qpair failed and we were unable to recover it. 00:37:47.146 [2024-12-06 01:15:19.597896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.146 [2024-12-06 01:15:19.597930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.146 qpair failed and we were unable to recover it. 00:37:47.146 [2024-12-06 01:15:19.598039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.146 [2024-12-06 01:15:19.598073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.146 qpair failed and we were unable to recover it. 
00:37:47.146 [2024-12-06 01:15:19.598207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.146 [2024-12-06 01:15:19.598241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.146 qpair failed and we were unable to recover it. 00:37:47.146 [2024-12-06 01:15:19.598376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.146 [2024-12-06 01:15:19.598411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:47.146 qpair failed and we were unable to recover it. 00:37:47.146 [2024-12-06 01:15:19.598541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.146 [2024-12-06 01:15:19.598576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:47.147 qpair failed and we were unable to recover it. 00:37:47.147 [2024-12-06 01:15:19.598677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.147 [2024-12-06 01:15:19.598711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:47.147 qpair failed and we were unable to recover it. 00:37:47.147 [2024-12-06 01:15:19.598846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.147 [2024-12-06 01:15:19.598881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:47.147 qpair failed and we were unable to recover it. 00:37:47.147 [2024-12-06 01:15:19.599013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.147 [2024-12-06 01:15:19.599062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.147 qpair failed and we were unable to recover it. 00:37:47.147 [2024-12-06 01:15:19.599178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.147 [2024-12-06 01:15:19.599214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.147 qpair failed and we were unable to recover it. 00:37:47.147 [2024-12-06 01:15:19.599331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.147 [2024-12-06 01:15:19.599367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.147 qpair failed and we were unable to recover it. 00:37:47.147 [2024-12-06 01:15:19.599503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.147 [2024-12-06 01:15:19.599538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.147 qpair failed and we were unable to recover it. 00:37:47.147 [2024-12-06 01:15:19.599703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.147 [2024-12-06 01:15:19.599737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.147 qpair failed and we were unable to recover it. 
00:37:47.147 [2024-12-06 01:15:19.599853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.147 [2024-12-06 01:15:19.599901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.147 qpair failed and we were unable to recover it. 00:37:47.147 [2024-12-06 01:15:19.600053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.147 [2024-12-06 01:15:19.600091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.147 qpair failed and we were unable to recover it. 00:37:47.147 [2024-12-06 01:15:19.600242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.147 [2024-12-06 01:15:19.600278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.147 qpair failed and we were unable to recover it. 00:37:47.147 [2024-12-06 01:15:19.600385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.147 [2024-12-06 01:15:19.600420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.147 qpair failed and we were unable to recover it. 00:37:47.147 [2024-12-06 01:15:19.600538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.147 [2024-12-06 01:15:19.600573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.147 qpair failed and we were unable to recover it. 00:37:47.147 [2024-12-06 01:15:19.600684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.147 [2024-12-06 01:15:19.600725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.147 qpair failed and we were unable to recover it. 00:37:47.147 [2024-12-06 01:15:19.600854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.147 [2024-12-06 01:15:19.600903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:47.147 qpair failed and we were unable to recover it. 00:37:47.147 [2024-12-06 01:15:19.601072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.147 [2024-12-06 01:15:19.601109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:47.147 qpair failed and we were unable to recover it. 00:37:47.147 [2024-12-06 01:15:19.601225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.147 [2024-12-06 01:15:19.601261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.147 qpair failed and we were unable to recover it. 00:37:47.147 [2024-12-06 01:15:19.601369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.147 [2024-12-06 01:15:19.601403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.147 qpair failed and we were unable to recover it. 
00:37:47.147 [2024-12-06 01:15:19.601515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.147 [2024-12-06 01:15:19.601549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.147 qpair failed and we were unable to recover it. 00:37:47.147 [2024-12-06 01:15:19.601659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.147 [2024-12-06 01:15:19.601693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.147 qpair failed and we were unable to recover it. 00:37:47.147 [2024-12-06 01:15:19.601807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.147 [2024-12-06 01:15:19.601852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:47.147 qpair failed and we were unable to recover it. 00:37:47.147 [2024-12-06 01:15:19.601960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.147 [2024-12-06 01:15:19.601995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:47.147 qpair failed and we were unable to recover it. 00:37:47.147 [2024-12-06 01:15:19.602140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.147 [2024-12-06 01:15:19.602175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:47.147 qpair failed and we were unable to recover it. 00:37:47.147 [2024-12-06 01:15:19.602311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.147 [2024-12-06 01:15:19.602346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:47.147 qpair failed and we were unable to recover it. 00:37:47.147 [2024-12-06 01:15:19.602450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.147 [2024-12-06 01:15:19.602486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.147 qpair failed and we were unable to recover it. 00:37:47.147 [2024-12-06 01:15:19.602593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.147 [2024-12-06 01:15:19.602627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.147 qpair failed and we were unable to recover it. 00:37:47.147 [2024-12-06 01:15:19.602740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.147 [2024-12-06 01:15:19.602776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.147 qpair failed and we were unable to recover it. 00:37:47.147 A controller has encountered a failure and is being reset. 00:37:47.147 [2024-12-06 01:15:19.602914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.147 [2024-12-06 01:15:19.602949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:47.147 qpair failed and we were unable to recover it. 
00:37:47.147 [2024-12-06 01:15:19.603065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.147 [2024-12-06 01:15:19.603100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:47.147 qpair failed and we were unable to recover it. 00:37:47.147 [2024-12-06 01:15:19.603245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.147 [2024-12-06 01:15:19.603280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.147 qpair failed and we were unable to recover it. 00:37:47.147 [2024-12-06 01:15:19.603389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.147 [2024-12-06 01:15:19.603424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.147 qpair failed and we were unable to recover it. 00:37:47.147 [2024-12-06 01:15:19.603567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.147 [2024-12-06 01:15:19.603601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.147 qpair failed and we were unable to recover it. 00:37:47.147 [2024-12-06 01:15:19.603724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.147 [2024-12-06 01:15:19.603772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.147 qpair failed and we were unable to recover it. 00:37:47.147 [2024-12-06 01:15:19.603888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.147 [2024-12-06 01:15:19.603924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:47.147 qpair failed and we were unable to recover it. 00:37:47.147 [2024-12-06 01:15:19.604038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.147 [2024-12-06 01:15:19.604075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.147 qpair failed and we were unable to recover it. 00:37:47.148 qpair failed and we were unable to recover it. 
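Errno 111 in the connect() failures above is ECONNREFUSED: while the target application is being restarted, nothing is listening on 10.0.0.2:4420, so the kernel refuses each TCP connection and the host driver keeps retrying every qpair, logging the same posix_sock_create / nvme_tcp_qpair_connect_sock pair per attempt (only the tqpair address changes). The mapping can be checked on most Linux hosts with a one-off lookup that is illustrative only and not part of the test:

    grep -w ECONNREFUSED /usr/include/asm-generic/errno.h
    # expected output: #define ECONNREFUSED  111  /* Connection refused */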
00:37:47.148 [2024-12-06 01:15:19.604277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.148 [2024-12-06 01:15:19.604321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:47.148 [2024-12-06 01:15:19.604350] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2780 is same with the state(6) to be set 00:37:47.148 [2024-12-06 01:15:19.604393] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2780 (9): Bad file descriptor 00:37:47.148 [2024-12-06 01:15:19.604424] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:37:47.148 [2024-12-06 01:15:19.604451] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:37:47.148 [2024-12-06 01:15:19.604481] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:37:47.148 Unable to reset the controller. 00:37:47.405 01:15:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:47.405 01:15:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0 00:37:47.405 01:15:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:37:47.405 01:15:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:37:47.405 01:15:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:47.405 01:15:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:47.405 01:15:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:37:47.405 01:15:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:47.405 01:15:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:47.405 Malloc0 00:37:47.405 01:15:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:47.405 01:15:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:37:47.405 01:15:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:47.405 01:15:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:47.405 [2024-12-06 01:15:20.056476] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:47.405 01:15:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:47.405 01:15:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 
00:37:47.405 01:15:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:47.405 01:15:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:47.405 01:15:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:47.405 01:15:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:37:47.405 01:15:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:47.405 01:15:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:47.405 01:15:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:47.405 01:15:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:37:47.405 01:15:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:47.405 01:15:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:47.405 [2024-12-06 01:15:20.086785] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:47.405 01:15:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:47.405 01:15:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:37:47.405 01:15:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:47.405 01:15:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:47.405 01:15:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:47.405 01:15:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 3142764 00:37:47.969 Controller properly reset. 00:37:53.228 Initializing NVMe Controllers 00:37:53.228 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:37:53.228 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:37:53.228 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:37:53.228 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:37:53.228 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:37:53.228 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:37:53.228 Initialization complete. Launching workers. 
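The target side that the tc2 case just stood up is driven entirely through rpc_cmd. Written as direct rpc.py invocations, the provisioning sequence visible in the trace above looks roughly like the sketch below; the scripts/rpc.py path and the default /var/tmp/spdk.sock RPC socket are assumptions based on a stock SPDK tree, everything else mirrors the logged commands:

    # sketch only -- same arguments as the rpc_cmd calls traced above
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0          # 64 MB malloc bdev, 512 B blocks
    ./scripts/rpc.py nvmf_create_transport -t tcp -o
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    ./scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

Once the listener on 10.0.0.2:4420 is back, the host reconnects, which is what the "Controller properly reset." line above reflects.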
00:37:53.228 Starting thread on core 1 00:37:53.228 Starting thread on core 2 00:37:53.228 Starting thread on core 3 00:37:53.228 Starting thread on core 0 00:37:53.228 01:15:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:37:53.228 00:37:53.228 real 0m11.651s 00:37:53.228 user 0m35.696s 00:37:53.228 sys 0m7.709s 00:37:53.228 01:15:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:53.228 01:15:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:53.228 ************************************ 00:37:53.228 END TEST nvmf_target_disconnect_tc2 00:37:53.228 ************************************ 00:37:53.228 01:15:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:37:53.228 01:15:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:37:53.228 01:15:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:37:53.228 01:15:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:37:53.228 01:15:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@121 -- # sync 00:37:53.228 01:15:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:37:53.228 01:15:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set +e 00:37:53.228 01:15:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:37:53.228 01:15:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:37:53.228 rmmod nvme_tcp 00:37:53.228 rmmod nvme_fabrics 00:37:53.228 rmmod nvme_keyring 00:37:53.228 01:15:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:37:53.228 01:15:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@128 -- # set -e 00:37:53.228 01:15:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@129 -- # return 0 00:37:53.228 01:15:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@517 -- # '[' -n 3143177 ']' 00:37:53.228 01:15:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@518 -- # killprocess 3143177 00:37:53.228 01:15:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # '[' -z 3143177 ']' 00:37:53.228 01:15:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # kill -0 3143177 00:37:53.228 01:15:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # uname 00:37:53.228 01:15:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:53.228 01:15:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3143177 00:37:53.228 01:15:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_4 00:37:53.228 01:15:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_4 = sudo ']' 00:37:53.228 01:15:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3143177' 00:37:53.228 killing process with pid 3143177 00:37:53.228 01:15:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
common/autotest_common.sh@973 -- # kill 3143177 00:37:53.228 01:15:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@978 -- # wait 3143177 00:37:54.159 01:15:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:37:54.159 01:15:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:37:54.159 01:15:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:37:54.159 01:15:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # iptr 00:37:54.159 01:15:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:37:54.159 01:15:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:37:54.159 01:15:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:37:54.159 01:15:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:37:54.159 01:15:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:37:54.159 01:15:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:54.159 01:15:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:54.159 01:15:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:56.060 01:15:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:37:56.060 00:37:56.060 real 0m17.632s 00:37:56.060 user 1m4.163s 00:37:56.060 sys 0m10.327s 00:37:56.060 01:15:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:56.060 01:15:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:37:56.060 ************************************ 00:37:56.060 END TEST nvmf_target_disconnect 00:37:56.060 ************************************ 00:37:56.060 01:15:28 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:37:56.060 00:37:56.060 real 7m38.790s 00:37:56.060 user 19m53.398s 00:37:56.060 sys 1m31.860s 00:37:56.060 01:15:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:56.060 01:15:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:37:56.060 ************************************ 00:37:56.060 END TEST nvmf_host 00:37:56.060 ************************************ 00:37:56.060 01:15:28 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]] 00:37:56.060 01:15:28 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 0 -eq 0 ]] 00:37:56.060 01:15:28 nvmf_tcp -- nvmf/nvmf.sh@20 -- # run_test nvmf_target_core_interrupt_mode /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:37:56.060 01:15:28 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:37:56.060 01:15:28 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:56.060 01:15:28 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:37:56.060 ************************************ 00:37:56.060 START TEST nvmf_target_core_interrupt_mode 00:37:56.060 ************************************ 00:37:56.060 01:15:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1129 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:37:56.323 * Looking for test storage... 00:37:56.323 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:37:56.323 01:15:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:37:56.324 01:15:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1711 -- # lcov --version 00:37:56.324 01:15:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:37:56.324 01:15:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:37:56.324 01:15:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:56.324 01:15:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:56.324 01:15:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:56.324 01:15:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # IFS=.-: 00:37:56.324 01:15:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # read -ra ver1 00:37:56.324 01:15:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # IFS=.-: 00:37:56.324 01:15:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # read -ra ver2 00:37:56.324 01:15:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@338 -- # local 'op=<' 00:37:56.324 01:15:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@340 -- # ver1_l=2 00:37:56.324 01:15:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@341 -- # ver2_l=1 00:37:56.324 01:15:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:56.324 01:15:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@344 -- # case "$op" in 00:37:56.324 01:15:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@345 -- # : 1 00:37:56.324 01:15:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:56.324 01:15:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:37:56.324 01:15:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # decimal 1 00:37:56.324 01:15:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=1 00:37:56.324 01:15:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:56.324 01:15:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 1 00:37:56.324 01:15:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # ver1[v]=1 00:37:56.324 01:15:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # decimal 2 00:37:56.324 01:15:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=2 00:37:56.324 01:15:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:56.324 01:15:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 2 00:37:56.324 01:15:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # ver2[v]=2 00:37:56.324 01:15:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:56.324 01:15:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:56.324 01:15:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # return 0 00:37:56.324 01:15:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:56.324 01:15:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:37:56.324 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:56.324 --rc genhtml_branch_coverage=1 00:37:56.324 --rc genhtml_function_coverage=1 00:37:56.324 --rc genhtml_legend=1 00:37:56.324 --rc geninfo_all_blocks=1 00:37:56.324 --rc geninfo_unexecuted_blocks=1 00:37:56.324 00:37:56.324 ' 00:37:56.324 01:15:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:37:56.324 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:56.324 --rc genhtml_branch_coverage=1 00:37:56.324 --rc genhtml_function_coverage=1 00:37:56.324 --rc genhtml_legend=1 00:37:56.324 --rc geninfo_all_blocks=1 00:37:56.324 --rc geninfo_unexecuted_blocks=1 00:37:56.324 00:37:56.324 ' 00:37:56.324 01:15:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:37:56.324 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:56.324 --rc genhtml_branch_coverage=1 00:37:56.324 --rc genhtml_function_coverage=1 00:37:56.324 --rc genhtml_legend=1 00:37:56.324 --rc geninfo_all_blocks=1 00:37:56.324 --rc geninfo_unexecuted_blocks=1 00:37:56.324 00:37:56.324 ' 00:37:56.324 01:15:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:37:56.324 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:56.324 --rc genhtml_branch_coverage=1 00:37:56.324 --rc genhtml_function_coverage=1 00:37:56.324 --rc genhtml_legend=1 00:37:56.324 --rc geninfo_all_blocks=1 00:37:56.324 --rc geninfo_unexecuted_blocks=1 00:37:56.324 00:37:56.324 ' 00:37:56.324 01:15:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:37:56.324 01:15:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:37:56.324 01:15:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:56.324 01:15:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # uname -s 00:37:56.324 01:15:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:56.324 01:15:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:56.324 01:15:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:56.324 01:15:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:56.324 01:15:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:56.324 01:15:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:56.324 01:15:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:56.324 01:15:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:56.324 01:15:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:56.324 01:15:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:56.324 01:15:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:37:56.324 01:15:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:37:56.324 01:15:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:56.324 01:15:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:56.324 01:15:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:56.324 01:15:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:56.324 01:15:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:56.324 01:15:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@15 -- # shopt -s extglob 00:37:56.324 01:15:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:56.324 01:15:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:56.324 01:15:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:56.324 01:15:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:56.324 01:15:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:56.324 01:15:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:56.324 01:15:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@5 -- # export PATH 00:37:56.324 01:15:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:56.324 01:15:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@51 -- # : 0 00:37:56.324 01:15:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:56.324 01:15:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:56.324 01:15:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:56.324 01:15:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:56.324 01:15:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:56.324 01:15:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:37:56.324 01:15:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:37:56.324 01:15:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:56.324 01:15:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:56.324 01:15:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:56.324 01:15:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:37:56.324 01:15:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:37:56.324 01:15:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:37:56.325 01:15:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:37:56.325 01:15:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- 
common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:37:56.325 01:15:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:56.325 01:15:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:37:56.325 ************************************ 00:37:56.325 START TEST nvmf_abort 00:37:56.325 ************************************ 00:37:56.325 01:15:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:37:56.325 * Looking for test storage... 00:37:56.325 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:37:56.325 01:15:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:37:56.325 01:15:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1711 -- # lcov --version 00:37:56.325 01:15:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:37:56.325 01:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:37:56.325 01:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:56.325 01:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:56.325 01:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:56.325 01:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:37:56.325 01:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:37:56.325 01:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:37:56.325 01:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:37:56.325 01:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:37:56.325 01:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:37:56.325 01:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:37:56.325 01:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:56.325 01:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:37:56.325 01:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:37:56.325 01:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:56.325 01:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:37:56.325 01:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:37:56.325 01:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:37:56.325 01:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:56.325 01:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:37:56.325 01:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:37:56.325 01:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:37:56.325 01:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:37:56.325 01:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:56.584 01:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:37:56.584 01:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:37:56.584 01:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:56.584 01:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:56.584 01:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:37:56.584 01:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:56.584 01:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:37:56.584 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:56.584 --rc genhtml_branch_coverage=1 00:37:56.584 --rc genhtml_function_coverage=1 00:37:56.584 --rc genhtml_legend=1 00:37:56.584 --rc geninfo_all_blocks=1 00:37:56.584 --rc geninfo_unexecuted_blocks=1 00:37:56.584 00:37:56.584 ' 00:37:56.584 01:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:37:56.584 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:56.584 --rc genhtml_branch_coverage=1 00:37:56.584 --rc genhtml_function_coverage=1 00:37:56.584 --rc genhtml_legend=1 00:37:56.584 --rc geninfo_all_blocks=1 00:37:56.584 --rc geninfo_unexecuted_blocks=1 00:37:56.584 00:37:56.584 ' 00:37:56.584 01:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:37:56.584 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:56.584 --rc genhtml_branch_coverage=1 00:37:56.584 --rc genhtml_function_coverage=1 00:37:56.584 --rc genhtml_legend=1 00:37:56.584 --rc geninfo_all_blocks=1 00:37:56.584 --rc geninfo_unexecuted_blocks=1 00:37:56.584 00:37:56.584 ' 00:37:56.584 01:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:37:56.584 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:56.584 --rc genhtml_branch_coverage=1 00:37:56.584 --rc genhtml_function_coverage=1 00:37:56.584 --rc genhtml_legend=1 00:37:56.584 --rc geninfo_all_blocks=1 00:37:56.584 --rc geninfo_unexecuted_blocks=1 00:37:56.584 00:37:56.584 ' 00:37:56.584 01:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:56.584 01:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:37:56.584 01:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:56.584 01:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:56.584 01:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:56.584 01:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:56.584 01:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:56.584 01:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:56.584 01:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:56.584 01:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:56.584 01:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:56.584 01:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:56.584 01:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:37:56.584 01:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:37:56.584 01:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:56.585 01:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:56.585 01:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:56.585 01:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:56.585 01:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:56.585 01:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:37:56.585 01:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:56.585 01:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:56.585 01:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:56.585 01:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:56.585 01:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:56.585 01:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:56.585 01:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:37:56.585 01:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:56.585 01:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:37:56.585 01:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:56.585 01:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:56.585 01:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:56.585 01:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:56.585 01:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:56.585 01:15:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:37:56.585 01:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:37:56.585 01:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:56.585 01:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:56.585 01:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:56.585 01:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:37:56.585 01:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:37:56.585 01:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:37:56.585 01:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:37:56.585 01:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:56.585 01:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:37:56.585 01:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:37:56.585 01:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:37:56.585 01:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:56.585 01:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:56.585 01:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:56.585 01:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:37:56.585 01:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:37:56.585 01:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:37:56.585 01:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:37:58.493 01:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:58.493 01:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:37:58.493 01:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:37:58.493 01:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:37:58.493 01:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:37:58.493 01:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:37:58.493 01:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:37:58.493 01:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:37:58.493 01:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:37:58.493 01:15:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:37:58.493 01:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:37:58.493 01:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:37:58.493 01:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:37:58.493 01:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:37:58.493 01:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:37:58.493 01:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:58.493 01:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:58.493 01:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:58.493 01:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:58.493 01:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:58.493 01:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:58.493 01:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:58.493 01:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:37:58.493 01:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:58.493 01:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:58.493 01:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:58.493 01:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:58.493 01:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:37:58.493 01:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:37:58.493 01:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:37:58.493 01:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:37:58.493 01:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:37:58.493 01:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:37:58.493 01:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:58.493 01:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:37:58.493 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:37:58.493 01:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 
00:37:58.493 01:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:58.493 01:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:58.493 01:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:58.493 01:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:58.493 01:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:58.493 01:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:37:58.493 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:37:58.493 01:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:58.493 01:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:58.494 01:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:58.494 01:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:58.494 01:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:58.494 01:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:37:58.494 01:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:37:58.494 01:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:37:58.494 01:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:58.494 01:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:58.494 01:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:58.494 01:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:58.494 01:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:58.494 01:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:58.494 01:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:58.494 01:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:37:58.494 Found net devices under 0000:0a:00.0: cvl_0_0 00:37:58.494 01:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:58.494 01:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:58.494 01:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:58.494 01:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:58.494 01:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for 
net_dev in "${!pci_net_devs[@]}" 00:37:58.494 01:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:58.494 01:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:58.494 01:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:58.494 01:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:37:58.494 Found net devices under 0000:0a:00.1: cvl_0_1 00:37:58.494 01:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:58.494 01:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:37:58.494 01:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:37:58.494 01:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:37:58.494 01:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:37:58.494 01:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:37:58.494 01:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:37:58.494 01:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:58.494 01:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:58.494 01:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:37:58.494 01:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:37:58.494 01:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:37:58.494 01:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:37:58.494 01:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:37:58.494 01:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:37:58.494 01:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:37:58.494 01:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:58.494 01:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:37:58.494 01:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:37:58.494 01:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:37:58.494 01:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:37:58.494 01:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:37:58.494 01:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@278 -- # ip 
netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:37:58.494 01:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:37:58.494 01:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:37:58.752 01:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:37:58.752 01:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:37:58.752 01:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:37:58.752 01:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:37:58.752 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:37:58.752 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.305 ms 00:37:58.752 00:37:58.752 --- 10.0.0.2 ping statistics --- 00:37:58.752 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:58.752 rtt min/avg/max/mdev = 0.305/0.305/0.305/0.000 ms 00:37:58.752 01:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:37:58.752 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:37:58.752 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.077 ms 00:37:58.752 00:37:58.752 --- 10.0.0.1 ping statistics --- 00:37:58.752 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:58.752 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:37:58.752 01:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:58.752 01:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:37:58.752 01:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:37:58.752 01:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:58.752 01:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:37:58.752 01:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:37:58.752 01:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:58.753 01:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:37:58.753 01:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:37:58.753 01:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:37:58.753 01:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:37:58.753 01:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:58.753 01:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:37:58.753 01:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@509 -- # 
nvmfpid=3146125 00:37:58.753 01:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:37:58.753 01:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 3146125 00:37:58.753 01:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 3146125 ']' 00:37:58.753 01:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:58.753 01:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:58.753 01:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:58.753 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:58.753 01:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:58.753 01:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:37:58.753 [2024-12-06 01:15:31.355203] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:37:58.753 [2024-12-06 01:15:31.357892] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 24.03.0 initialization... 00:37:58.753 [2024-12-06 01:15:31.358009] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:59.012 [2024-12-06 01:15:31.502719] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:37:59.012 [2024-12-06 01:15:31.628605] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:59.012 [2024-12-06 01:15:31.628694] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:59.012 [2024-12-06 01:15:31.628718] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:59.012 [2024-12-06 01:15:31.628736] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:59.012 [2024-12-06 01:15:31.628754] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:37:59.012 [2024-12-06 01:15:31.631194] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:37:59.012 [2024-12-06 01:15:31.631237] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:37:59.012 [2024-12-06 01:15:31.631246] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:37:59.591 [2024-12-06 01:15:32.007657] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:37:59.591 [2024-12-06 01:15:32.008798] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:37:59.591 [2024-12-06 01:15:32.009703] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
00:37:59.591 [2024-12-06 01:15:32.010033] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:37:59.850 01:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:59.850 01:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:37:59.850 01:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:37:59.850 01:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:37:59.850 01:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:37:59.850 01:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:59.850 01:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:37:59.850 01:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:59.850 01:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:37:59.850 [2024-12-06 01:15:32.336307] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:59.850 01:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:59.850 01:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:37:59.850 01:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:59.850 01:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:37:59.850 Malloc0 00:37:59.850 01:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:59.850 01:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:37:59.850 01:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:59.850 01:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:37:59.850 Delay0 00:37:59.850 01:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:59.850 01:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:37:59.850 01:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:59.850 01:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:37:59.850 01:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:59.850 01:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:37:59.850 01:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 
00:37:59.850 01:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:37:59.850 01:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:59.850 01:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:37:59.850 01:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:59.850 01:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:37:59.850 [2024-12-06 01:15:32.464534] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:59.850 01:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:59.850 01:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:37:59.850 01:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:59.850 01:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:37:59.850 01:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:59.850 01:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:38:00.108 [2024-12-06 01:15:32.593382] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:38:02.639 Initializing NVMe Controllers 00:38:02.639 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:38:02.639 controller IO queue size 128 less than required 00:38:02.639 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:38:02.639 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:38:02.639 Initialization complete. Launching workers. 
00:38:02.639 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 23061 00:38:02.639 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 23118, failed to submit 66 00:38:02.639 success 23061, unsuccessful 57, failed 0 00:38:02.639 01:15:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:38:02.639 01:15:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:02.639 01:15:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:38:02.639 01:15:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:02.639 01:15:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:38:02.639 01:15:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:38:02.639 01:15:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:38:02.639 01:15:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:38:02.639 01:15:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:38:02.639 01:15:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:38:02.639 01:15:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:38:02.639 01:15:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:38:02.639 rmmod nvme_tcp 00:38:02.639 rmmod nvme_fabrics 00:38:02.639 rmmod nvme_keyring 00:38:02.639 01:15:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:38:02.639 01:15:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:38:02.639 01:15:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:38:02.639 01:15:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 3146125 ']' 00:38:02.639 01:15:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 3146125 00:38:02.639 01:15:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 3146125 ']' 00:38:02.639 01:15:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 3146125 00:38:02.639 01:15:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:38:02.639 01:15:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:02.639 01:15:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3146125 00:38:02.639 01:15:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:38:02.639 01:15:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:38:02.639 01:15:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3146125' 00:38:02.639 killing process with pid 3146125 
00:38:02.639 01:15:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@973 -- # kill 3146125 00:38:02.639 01:15:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@978 -- # wait 3146125 00:38:03.577 01:15:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:38:03.577 01:15:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:38:03.577 01:15:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:38:03.577 01:15:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:38:03.577 01:15:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:38:03.577 01:15:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:38:03.577 01:15:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:38:03.577 01:15:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:38:03.577 01:15:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:38:03.577 01:15:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:03.577 01:15:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:03.577 01:15:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:06.116 01:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:38:06.116 00:38:06.116 real 0m9.315s 00:38:06.116 user 0m11.544s 00:38:06.116 sys 0m3.130s 00:38:06.116 01:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:06.116 01:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:38:06.116 ************************************ 00:38:06.116 END TEST nvmf_abort 00:38:06.116 ************************************ 00:38:06.116 01:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:38:06.116 01:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:38:06.116 01:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:06.116 01:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:38:06.116 ************************************ 00:38:06.116 START TEST nvmf_ns_hotplug_stress 00:38:06.116 ************************************ 00:38:06.116 01:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:38:06.116 * Looking for test storage... 
00:38:06.116 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:38:06.116 01:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:38:06.116 01:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lcov --version 00:38:06.116 01:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:38:06.116 01:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:38:06.116 01:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:06.116 01:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:06.116 01:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:06.116 01:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:38:06.116 01:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:38:06.116 01:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:38:06.116 01:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:38:06.116 01:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:38:06.116 01:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:38:06.116 01:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:38:06.116 01:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:06.116 01:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:38:06.116 01:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:38:06.116 01:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:06.116 01:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:38:06.116 01:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:38:06.116 01:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:38:06.116 01:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:06.116 01:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:38:06.116 01:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:38:06.116 01:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:38:06.116 01:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:38:06.116 01:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:06.116 01:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:38:06.117 01:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:38:06.117 01:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:06.117 01:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:06.117 01:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:38:06.117 01:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:06.117 01:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:38:06.117 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:06.117 --rc genhtml_branch_coverage=1 00:38:06.117 --rc genhtml_function_coverage=1 00:38:06.117 --rc genhtml_legend=1 00:38:06.117 --rc geninfo_all_blocks=1 00:38:06.117 --rc geninfo_unexecuted_blocks=1 00:38:06.117 00:38:06.117 ' 00:38:06.117 01:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:38:06.117 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:06.117 --rc genhtml_branch_coverage=1 00:38:06.117 --rc genhtml_function_coverage=1 00:38:06.117 --rc genhtml_legend=1 00:38:06.117 --rc geninfo_all_blocks=1 00:38:06.117 --rc geninfo_unexecuted_blocks=1 00:38:06.117 00:38:06.117 ' 00:38:06.117 01:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:38:06.117 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:06.117 --rc genhtml_branch_coverage=1 00:38:06.117 --rc genhtml_function_coverage=1 00:38:06.117 --rc genhtml_legend=1 00:38:06.117 --rc geninfo_all_blocks=1 00:38:06.117 --rc geninfo_unexecuted_blocks=1 00:38:06.117 00:38:06.117 ' 00:38:06.117 01:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:38:06.117 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:06.117 --rc genhtml_branch_coverage=1 00:38:06.117 --rc genhtml_function_coverage=1 
00:38:06.117 --rc genhtml_legend=1 00:38:06.117 --rc geninfo_all_blocks=1 00:38:06.117 --rc geninfo_unexecuted_blocks=1 00:38:06.117 00:38:06.117 ' 00:38:06.117 01:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:06.117 01:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:38:06.117 01:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:06.117 01:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:06.117 01:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:06.117 01:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:06.117 01:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:06.117 01:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:06.117 01:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:06.117 01:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:06.117 01:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:06.117 01:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:06.117 01:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:38:06.117 01:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:38:06.117 01:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:06.117 01:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:06.117 01:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:06.117 01:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:06.117 01:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:06.117 01:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:38:06.117 01:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:06.117 01:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:06.117 01:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 
00:38:06.117 01:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:06.117 01:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:06.117 01:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:06.117 01:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:38:06.117 01:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:06.117 01:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:38:06.117 01:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:38:06.117 01:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:38:06.117 01:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:06.117 01:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:06.117 01:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:06.117 01:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:38:06.117 01:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:38:06.117 01:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:38:06.117 01:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:38:06.117 01:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:38:06.117 01:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:38:06.117 01:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:38:06.117 01:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:38:06.117 01:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:06.117 01:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:38:06.117 01:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:38:06.117 01:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:38:06.117 01:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:06.117 01:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:06.117 01:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:06.117 01:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:38:06.117 01:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:38:06.117 01:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:38:06.117 01:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:38:08.027 01:15:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:38:08.027 01:15:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:38:08.027 01:15:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:38:08.027 01:15:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:38:08.027 01:15:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:38:08.027 01:15:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:38:08.027 01:15:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:38:08.027 01:15:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:38:08.027 01:15:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:38:08.027 01:15:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:38:08.027 01:15:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:38:08.027 01:15:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:38:08.027 01:15:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:38:08.027 01:15:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:38:08.027 01:15:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:38:08.027 01:15:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:38:08.027 01:15:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:38:08.027 01:15:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:38:08.027 01:15:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:38:08.027 01:15:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:38:08.027 01:15:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:38:08.027 01:15:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:38:08.027 01:15:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:38:08.027 01:15:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:38:08.027 01:15:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:38:08.027 01:15:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:38:08.027 01:15:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:38:08.027 01:15:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:38:08.027 01:15:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:38:08.027 01:15:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:38:08.027 01:15:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:38:08.027 01:15:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:38:08.027 01:15:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:38:08.027 01:15:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:08.027 01:15:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:38:08.027 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:38:08.027 01:15:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:08.027 01:15:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:08.027 01:15:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:08.027 01:15:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:08.027 01:15:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:08.027 01:15:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:08.027 01:15:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:38:08.027 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:38:08.027 01:15:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:08.027 01:15:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:08.027 01:15:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:08.027 01:15:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:08.027 01:15:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:08.027 01:15:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:38:08.027 01:15:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:38:08.027 01:15:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:38:08.027 01:15:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:08.027 01:15:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:08.027 01:15:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:08.027 01:15:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:08.027 01:15:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:08.027 
01:15:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:08.027 01:15:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:08.027 01:15:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:38:08.027 Found net devices under 0000:0a:00.0: cvl_0_0 00:38:08.027 01:15:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:08.027 01:15:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:08.027 01:15:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:08.027 01:15:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:08.027 01:15:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:08.027 01:15:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:08.027 01:15:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:08.027 01:15:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:08.027 01:15:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:38:08.027 Found net devices under 0000:0a:00.1: cvl_0_1 00:38:08.027 01:15:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:08.027 01:15:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:38:08.027 01:15:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:38:08.027 01:15:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:38:08.027 01:15:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:38:08.027 01:15:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:38:08.028 01:15:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:38:08.028 01:15:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:38:08.028 01:15:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:38:08.028 01:15:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:38:08.028 01:15:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:38:08.028 01:15:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:38:08.028 01:15:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:38:08.028 01:15:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:38:08.028 01:15:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:38:08.028 01:15:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:38:08.028 01:15:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:38:08.028 01:15:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:38:08.028 01:15:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:38:08.028 01:15:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:38:08.028 01:15:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:38:08.028 01:15:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:38:08.028 01:15:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:38:08.028 01:15:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:38:08.028 01:15:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:38:08.028 01:15:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:38:08.028 01:15:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:38:08.028 01:15:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:38:08.028 01:15:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:38:08.028 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:38:08.028 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.259 ms 00:38:08.028 00:38:08.028 --- 10.0.0.2 ping statistics --- 00:38:08.028 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:08.028 rtt min/avg/max/mdev = 0.259/0.259/0.259/0.000 ms 00:38:08.028 01:15:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:38:08.028 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:38:08.028 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.085 ms 00:38:08.028 00:38:08.028 --- 10.0.0.1 ping statistics --- 00:38:08.028 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:08.028 rtt min/avg/max/mdev = 0.085/0.085/0.085/0.000 ms 00:38:08.028 01:15:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:38:08.028 01:15:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:38:08.028 01:15:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:38:08.028 01:15:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:38:08.028 01:15:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:38:08.028 01:15:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:38:08.028 01:15:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:38:08.028 01:15:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:38:08.028 01:15:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:38:08.028 01:15:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:38:08.028 01:15:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:38:08.028 01:15:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:38:08.028 01:15:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:38:08.028 01:15:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=3148604 00:38:08.028 01:15:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:38:08.028 01:15:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 3148604 00:38:08.028 01:15:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 3148604 ']' 00:38:08.028 01:15:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:08.028 01:15:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:08.028 01:15:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:08.028 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:38:08.028 01:15:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:08.028 01:15:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:38:08.028 [2024-12-06 01:15:40.536456] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:38:08.028 [2024-12-06 01:15:40.539223] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 24.03.0 initialization... 00:38:08.028 [2024-12-06 01:15:40.539352] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:08.028 [2024-12-06 01:15:40.697786] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:38:08.287 [2024-12-06 01:15:40.836477] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:08.287 [2024-12-06 01:15:40.836553] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:08.287 [2024-12-06 01:15:40.836583] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:08.287 [2024-12-06 01:15:40.836606] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:08.287 [2024-12-06 01:15:40.836627] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:38:08.287 [2024-12-06 01:15:40.839226] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:38:08.287 [2024-12-06 01:15:40.839309] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:38:08.287 [2024-12-06 01:15:40.839317] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:38:08.547 [2024-12-06 01:15:41.213631] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:38:08.548 [2024-12-06 01:15:41.214556] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:38:08.548 [2024-12-06 01:15:41.215452] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:38:08.548 [2024-12-06 01:15:41.215773] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
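A minimal standalone sketch of the interrupt-mode target launch traced above, for readability only. The binary path, flags, and namespace name are copied verbatim from the trace; the background launch and the readiness wait are simplified stand-ins for the nvmfappstart/waitforlisten helpers the test scripts actually use.
# launch nvmf_tgt inside the test namespace with interrupt mode enabled (flags as traced)
ip netns exec cvl_0_0_ns_spdk \
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
  -i 0 -e 0xFFFF --interrupt-mode -m 0xE &
nvmfpid=$!
# the test scripts then block in waitforlisten "$nvmfpid" until the target
# accepts RPCs on /var/tmp/spdk.sock, which is when the notices above appear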
00:38:09.120 01:15:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:09.120 01:15:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:38:09.120 01:15:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:38:09.120 01:15:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:38:09.120 01:15:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:38:09.120 01:15:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:09.120 01:15:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:38:09.120 01:15:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:38:09.120 [2024-12-06 01:15:41.808396] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:09.379 01:15:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:38:09.637 01:15:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:38:09.896 [2024-12-06 01:15:42.424764] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:09.896 01:15:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:38:10.154 01:15:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:38:10.414 Malloc0 00:38:10.414 01:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:38:10.672 Delay0 00:38:10.672 01:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:10.931 01:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:38:11.190 NULL1 00:38:11.190 01:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 
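A minimal sketch of the RPC setup sequence traced above from ns_hotplug_stress.sh, written out as plain rpc.py calls so the configuration under test is easy to follow. Every argument is copied from the trace; the $rpc shorthand is introduced here for brevity only and is not part of the captured run.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10   # allow up to 10 namespaces
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
$rpc bdev_malloc_create 32 512 -b Malloc0                                              # 32 MB malloc bdev, 512-byte blocks
$rpc bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000  # ~1 s artificial latency
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
$rpc bdev_null_create NULL1 1000 512                                                   # 1000 MB null bdev, resized by the stress loop
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1
# the stress loop that follows repeatedly removes/re-adds namespace 1 and grows NULL1
# (bdev_null_resize NULL1 1001, 1002, ...) while spdk_nvme_perf runs against 10.0.0.2:4420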
00:38:11.449 01:15:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=3149032 00:38:11.449 01:15:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:38:11.449 01:15:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3149032 00:38:11.449 01:15:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:11.708 01:15:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:11.965 01:15:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:38:11.965 01:15:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:38:12.223 true 00:38:12.223 01:15:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3149032 00:38:12.223 01:15:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:12.788 01:15:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:12.788 01:15:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:38:12.788 01:15:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:38:13.048 true 00:38:13.048 01:15:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3149032 00:38:13.048 01:15:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:13.617 01:15:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:13.617 01:15:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:38:13.617 01:15:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:38:13.875 true 00:38:13.875 01:15:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@44 -- # kill -0 3149032 00:38:13.875 01:15:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:14.810 Read completed with error (sct=0, sc=11) 00:38:15.068 01:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:15.326 01:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:38:15.326 01:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:38:15.585 true 00:38:15.585 01:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3149032 00:38:15.585 01:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:15.843 01:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:16.102 01:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:38:16.102 01:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:38:16.360 true 00:38:16.360 01:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3149032 00:38:16.360 01:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:16.619 01:15:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:16.877 01:15:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:38:16.877 01:15:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:38:17.135 true 00:38:17.135 01:15:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3149032 00:38:17.136 01:15:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:18.074 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:38:18.074 01:15:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:18.074 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:38:18.333 01:15:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:38:18.333 01:15:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:38:18.591 true 00:38:18.591 01:15:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3149032 00:38:18.591 01:15:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:18.850 01:15:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:19.108 01:15:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:38:19.108 01:15:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:38:19.367 true 00:38:19.367 01:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3149032 00:38:19.367 01:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:19.626 01:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:19.884 01:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:38:19.884 01:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:38:20.142 true 00:38:20.142 01:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3149032 00:38:20.142 01:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:21.517 01:15:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:21.517 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:38:21.517 01:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:38:21.518 01:15:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:38:21.776 true 00:38:21.776 01:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3149032 00:38:21.776 01:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:22.034 01:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:22.293 01:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:38:22.293 01:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:38:22.561 true 00:38:22.562 01:15:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3149032 00:38:22.562 01:15:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:23.499 01:15:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:23.499 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:38:23.499 01:15:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:38:23.499 01:15:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:38:23.757 true 00:38:23.757 01:15:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3149032 00:38:23.757 01:15:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:24.324 01:15:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:24.324 01:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:38:24.324 01:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:38:24.582 true 00:38:24.582 01:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3149032 00:38:24.582 01:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:25.521 01:15:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:25.521 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:38:25.779 01:15:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:38:25.779 01:15:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:38:26.036 true 00:38:26.037 01:15:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3149032 00:38:26.037 01:15:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:26.295 01:15:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:26.553 01:15:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:38:26.553 01:15:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:38:26.811 true 00:38:26.811 01:15:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3149032 00:38:26.811 01:15:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:27.069 01:15:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:27.328 01:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:38:27.328 01:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:38:27.587 true 00:38:27.845 01:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3149032 00:38:27.845 01:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:28.780 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:38:28.780 01:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:29.039 01:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:38:29.039 01:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:38:29.298 true 00:38:29.298 01:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3149032 00:38:29.298 01:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:29.557 01:16:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:29.815 01:16:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:38:29.815 01:16:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:38:30.073 true 00:38:30.073 01:16:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3149032 00:38:30.073 01:16:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:30.330 01:16:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:30.589 01:16:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:38:30.589 01:16:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:38:30.847 true 00:38:30.847 01:16:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3149032 00:38:30.847 01:16:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:31.784 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:38:31.784 01:16:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:32.042 01:16:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:38:32.042 01:16:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:38:32.300 true 00:38:32.300 
01:16:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3149032 00:38:32.300 01:16:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:32.558 01:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:32.816 01:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:38:32.816 01:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:38:33.382 true 00:38:33.382 01:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3149032 00:38:33.382 01:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:33.949 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:38:33.949 01:16:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:34.206 01:16:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:38:34.206 01:16:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:38:34.463 true 00:38:34.463 01:16:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3149032 00:38:34.463 01:16:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:34.720 01:16:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:34.978 01:16:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:38:34.978 01:16:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:38:35.235 true 00:38:35.235 01:16:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3149032 00:38:35.235 01:16:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:35.510 01:16:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:36.078 01:16:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:38:36.078 01:16:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:38:36.078 true 00:38:36.078 01:16:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3149032 00:38:36.078 01:16:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:37.026 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:38:37.026 01:16:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:37.284 01:16:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:38:37.284 01:16:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:38:37.543 true 00:38:37.543 01:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3149032 00:38:37.543 01:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:38.151 01:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:38.151 01:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:38:38.151 01:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:38:38.474 true 00:38:38.474 01:16:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3149032 00:38:38.474 01:16:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:38.756 01:16:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:39.014 01:16:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:38:39.014 01:16:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:38:39.272 true 00:38:39.272 01:16:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3149032 00:38:39.272 01:16:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:40.209 01:16:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:40.467 01:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:38:40.467 01:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:38:40.725 true 00:38:40.725 01:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3149032 00:38:40.725 01:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:40.983 01:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:41.241 01:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:38:41.241 01:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:38:41.499 true 00:38:41.499 01:16:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3149032 00:38:41.499 01:16:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:41.757 01:16:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:41.757 Initializing NVMe Controllers 00:38:41.757 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:38:41.757 Controller IO queue size 128, less than required. 00:38:41.757 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:38:41.757 Controller IO queue size 128, less than required. 00:38:41.757 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:38:41.757 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:38:41.757 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:38:41.757 Initialization complete. 
Launching workers. 00:38:41.757 ======================================================== 00:38:41.757 Latency(us) 00:38:41.757 Device Information : IOPS MiB/s Average min max 00:38:41.757 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 354.26 0.17 134390.35 3449.60 1016400.12 00:38:41.757 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 6195.27 3.03 20597.45 3518.29 394408.61 00:38:41.757 ======================================================== 00:38:41.757 Total : 6549.53 3.20 26752.41 3449.60 1016400.12 00:38:41.757 00:38:42.016 01:16:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:38:42.016 01:16:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:38:42.275 true 00:38:42.533 01:16:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3149032 00:38:42.534 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (3149032) - No such process 00:38:42.534 01:16:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 3149032 00:38:42.534 01:16:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:42.791 01:16:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:38:43.049 01:16:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:38:43.049 01:16:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:38:43.049 01:16:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:38:43.049 01:16:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:38:43.049 01:16:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:38:43.306 null0 00:38:43.306 01:16:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:38:43.306 01:16:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:38:43.306 01:16:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:38:43.563 null1 00:38:43.563 01:16:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:38:43.563 01:16:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:38:43.563 01:16:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:38:43.821 null2 00:38:43.821 01:16:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:38:43.821 01:16:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:38:43.821 01:16:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:38:44.079 null3 00:38:44.079 01:16:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:38:44.079 01:16:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:38:44.079 01:16:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:38:44.338 null4 00:38:44.338 01:16:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:38:44.338 01:16:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:38:44.338 01:16:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:38:44.597 null5 00:38:44.597 01:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:38:44.597 01:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:38:44.597 01:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:38:44.856 null6 00:38:44.856 01:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:38:44.856 01:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:38:44.856 01:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:38:45.115 null7 00:38:45.115 01:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:38:45.115 01:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:38:45.115 01:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:38:45.115 01:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:38:45.115 01:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
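From this point the trace launches eight background add_remove workers, one per null bdev, and later waits for all of them. A sketch of that phase, reconstructed from the sh@58-64 launch lines and the sh@14-18 body of add_remove, using the same $RPC and $NQN shorthands as the earlier sketch (variable names follow the trace; the exact script text may differ):

    nthreads=8
    pids=()

    # First pass (sh@59-60): create the eight null bdevs null0 .. null7,
    # each of size 100 with a 4096-byte block size, as traced above.
    for ((i = 0; i < nthreads; i++)); do
        $RPC bdev_null_create "null$i" 100 4096
    done

    # Worker body (sh@14-18): hot-add the bdev under a fixed namespace ID,
    # then remove it again, ten times in a row.
    add_remove() {
        local nsid=$1 bdev=$2
        for ((i = 0; i < 10; i++)); do
            $RPC nvmf_subsystem_add_ns -n "$nsid" $NQN "$bdev"
            $RPC nvmf_subsystem_remove_ns $NQN "$nsid"
        done
    }

    # Second pass (sh@62-64): one background worker per bdev,
    # pairing nsid 1..8 with null0..null7.
    for ((i = 0; i < nthreads; i++)); do
        add_remove "$((i + 1))" "null$i" &
        pids+=($!)
    done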
00:38:45.115 01:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:38:45.115 01:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:38:45.115 01:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:38:45.115 01:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:38:45.115 01:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:38:45.115 01:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:45.115 01:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:38:45.115 01:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:38:45.115 01:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:38:45.115 01:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:38:45.115 01:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:38:45.115 01:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:38:45.116 01:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:38:45.116 01:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:45.116 01:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:38:45.116 01:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:38:45.116 01:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:38:45.116 01:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:38:45.116 01:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:38:45.116 01:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:38:45.116 01:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:38:45.116 01:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:45.116 01:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:38:45.116 01:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:38:45.116 01:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:38:45.116 01:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:38:45.116 01:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:38:45.116 01:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:38:45.116 01:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:38:45.116 01:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:45.116 01:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:38:45.116 01:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:38:45.116 01:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:38:45.116 01:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:38:45.116 01:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:38:45.116 01:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:38:45.116 01:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:38:45.116 01:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:45.116 01:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:38:45.116 01:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:38:45.116 01:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:38:45.116 01:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:38:45.116 01:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:38:45.116 01:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:38:45.116 01:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:38:45.116 01:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:45.116 01:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:38:45.116 01:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
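Once all eight workers are running, the driver simply blocks on their PIDs; the wait entry below (sh@66) lists the eight worker PIDs, and the interleaved add_ns/remove_ns entries that follow are those workers racing each other. Under the same assumptions as the sketch above, that step is just:

    wait "${pids[@]}"   # block until every add_remove worker finishes its ten cycles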
00:38:45.116 01:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:38:45.116 01:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:38:45.116 01:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:38:45.116 01:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:38:45.116 01:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:38:45.116 01:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:45.116 01:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:38:45.116 01:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:38:45.116 01:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:38:45.116 01:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:38:45.116 01:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:38:45.116 01:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:38:45.116 01:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:38:45.116 01:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 3153035 3153036 3153038 3153040 3153042 3153044 3153046 3153048 00:38:45.116 01:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:45.116 01:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:38:45.375 01:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:38:45.375 01:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:38:45.375 01:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:38:45.375 01:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:38:45.375 01:16:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:38:45.375 01:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:38:45.375 01:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:38:45.375 01:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:45.633 01:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:45.633 01:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:45.633 01:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:38:45.633 01:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:45.633 01:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:45.633 01:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:38:45.633 01:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:45.633 01:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:45.633 01:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:38:45.633 01:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:45.633 01:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:45.633 01:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:38:45.633 01:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:45.633 01:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:45.633 01:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 
nqn.2016-06.io.spdk:cnode1 null4 00:38:45.633 01:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:45.633 01:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:45.633 01:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:38:45.633 01:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:45.633 01:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:45.633 01:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:38:45.633 01:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:45.633 01:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:45.633 01:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:38:46.200 01:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:38:46.200 01:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:38:46.201 01:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:38:46.201 01:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:38:46.201 01:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:38:46.201 01:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:46.201 01:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:38:46.201 01:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 
00:38:46.460 01:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:46.460 01:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:46.460 01:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:38:46.460 01:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:46.460 01:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:46.460 01:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:38:46.460 01:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:46.460 01:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:46.460 01:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:38:46.460 01:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:46.460 01:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:46.460 01:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:38:46.460 01:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:46.460 01:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:46.460 01:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:38:46.460 01:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:46.460 01:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:46.460 01:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:38:46.460 01:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:46.460 01:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:46.460 01:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:38:46.460 01:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:46.460 01:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:46.460 01:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:38:46.719 01:16:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:38:46.719 01:16:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:38:46.719 01:16:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:38:46.719 01:16:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:46.719 01:16:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:38:46.719 01:16:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:38:46.719 01:16:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:38:46.719 01:16:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:38:46.977 01:16:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:46.977 01:16:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:46.977 01:16:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:38:46.977 01:16:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:46.977 01:16:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:46.977 01:16:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:38:46.977 01:16:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:46.977 01:16:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:46.977 01:16:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:38:46.977 01:16:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:46.977 01:16:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:46.977 01:16:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:38:46.977 01:16:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:46.977 01:16:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:46.977 01:16:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:38:46.977 01:16:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:46.978 01:16:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:46.978 01:16:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:38:46.978 01:16:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:46.978 01:16:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:46.978 01:16:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:38:46.978 01:16:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:46.978 01:16:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:46.978 01:16:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:38:47.237 01:16:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:38:47.237 01:16:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:38:47.237 01:16:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:47.237 01:16:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:38:47.237 01:16:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:38:47.237 01:16:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:38:47.237 01:16:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:38:47.237 01:16:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:38:47.495 01:16:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:47.495 01:16:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:47.495 01:16:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:38:47.495 01:16:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:47.495 01:16:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:47.495 01:16:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:38:47.495 01:16:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:47.495 01:16:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:47.495 01:16:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:38:47.495 01:16:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:47.496 01:16:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:47.496 01:16:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:47.496 01:16:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:47.496 01:16:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:38:47.496 01:16:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:38:47.496 01:16:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:47.496 01:16:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:47.496 01:16:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:47.496 01:16:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:47.496 01:16:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:38:47.496 01:16:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:38:47.496 01:16:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:47.496 01:16:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:47.496 01:16:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:38:47.754 01:16:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:38:47.754 01:16:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:38:47.754 01:16:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:47.754 01:16:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:38:47.754 01:16:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 
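For reference, a minimal sketch of the namespace hotplug loop that the xtrace above is stepping through (target/ns_hotplug_stress.sh lines 16-18). It is reconstructed from the trace alone: the backgrounded launches and the wait barriers are assumptions inferred from the out-of-order namespace numbers in the log, not the script's actual source.

    #!/usr/bin/env bash
    # Sketch of the add/remove namespace stress loop seen in the trace above.
    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1

    for (( i = 0; i < 10; ++i )); do
        # line 17: attach namespaces 1-8, each backed by a null bdev (null0..null7)
        for n in {1..8}; do
            "$rpc_py" nvmf_subsystem_add_ns -n "$n" "$nqn" "null$((n - 1))" &   # '&' is an assumption
        done
        wait
        # line 18: detach the same namespaces again, exercising namespace hot remove
        for n in {1..8}; do
            "$rpc_py" nvmf_subsystem_remove_ns "$nqn" "$n" &                    # '&' is an assumption
        done
        wait
    done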
00:38:47.754 01:16:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:38:47.754 01:16:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:38:47.754 01:16:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:38:48.322 01:16:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:48.322 01:16:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:48.322 01:16:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:38:48.322 01:16:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:48.322 01:16:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:48.322 01:16:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:38:48.322 01:16:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:48.322 01:16:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:48.322 01:16:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:48.322 01:16:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:38:48.322 01:16:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:48.322 01:16:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:38:48.322 01:16:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:48.322 01:16:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:48.322 01:16:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:48.322 01:16:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:48.322 01:16:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:38:48.322 01:16:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:38:48.322 01:16:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:48.322 01:16:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:48.322 01:16:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:38:48.322 01:16:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:48.322 01:16:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:48.322 01:16:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:38:48.322 01:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:38:48.580 01:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:38:48.580 01:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:38:48.580 01:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:38:48.580 01:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:48.580 01:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:38:48.581 01:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:38:48.581 01:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:38:48.839 01:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:48.839 01:16:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:48.839 01:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:38:48.839 01:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:48.839 01:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:48.839 01:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:38:48.839 01:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:48.839 01:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:48.839 01:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:38:48.839 01:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:48.839 01:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:48.839 01:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:38:48.840 01:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:48.840 01:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:48.840 01:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:38:48.840 01:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:48.840 01:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:48.840 01:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:38:48.840 01:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:48.840 01:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:48.840 01:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:38:48.840 01:16:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:48.840 01:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:48.840 01:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:38:49.099 01:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:38:49.099 01:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:38:49.099 01:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:38:49.099 01:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:38:49.099 01:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:49.099 01:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:38:49.099 01:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:38:49.099 01:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:38:49.358 01:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:49.358 01:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:49.358 01:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:38:49.358 01:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:49.358 01:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:49.358 01:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:38:49.358 01:16:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:49.358 01:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:49.358 01:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:38:49.358 01:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:49.358 01:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:49.358 01:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:38:49.358 01:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:49.358 01:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:49.358 01:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:38:49.358 01:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:49.358 01:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:49.358 01:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:38:49.358 01:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:49.358 01:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:49.358 01:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:38:49.358 01:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:49.358 01:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:49.358 01:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:38:49.616 01:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:38:49.616 01:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:38:49.616 01:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:38:49.616 01:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:49.616 01:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:38:49.616 01:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:38:49.616 01:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:38:49.616 01:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:38:49.875 01:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:49.875 01:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:49.875 01:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:38:49.875 01:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:49.875 01:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:49.875 01:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:38:49.875 01:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:49.875 01:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:49.875 01:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:38:49.875 01:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:49.875 01:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:49.875 01:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:38:49.875 01:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:49.875 01:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:49.875 01:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:38:49.875 01:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:49.875 01:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:49.875 01:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:38:49.875 01:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:49.875 01:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:49.875 01:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:38:49.875 01:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:49.875 01:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:49.875 01:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:38:50.134 01:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:38:50.134 01:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:38:50.392 01:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:38:50.392 01:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:38:50.392 01:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:50.392 01:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:38:50.392 01:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:38:50.392 01:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:38:50.650 01:16:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:50.650 01:16:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:50.650 01:16:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:38:50.650 01:16:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:50.650 01:16:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:50.650 01:16:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:38:50.650 01:16:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:50.650 01:16:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:50.650 01:16:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:38:50.650 01:16:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:50.650 01:16:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:50.650 01:16:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:38:50.650 01:16:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:50.650 01:16:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:50.650 01:16:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:38:50.650 01:16:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:50.650 01:16:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:50.650 01:16:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:38:50.650 01:16:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:50.650 01:16:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:50.650 01:16:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:38:50.650 01:16:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:50.650 01:16:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:50.650 01:16:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:38:50.909 01:16:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:38:50.909 01:16:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:50.909 01:16:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:38:50.909 01:16:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:38:50.909 01:16:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:38:50.909 01:16:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:38:50.909 01:16:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:38:50.909 01:16:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:38:51.167 01:16:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:51.167 01:16:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:51.167 01:16:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:51.167 01:16:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:51.167 01:16:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:51.167 01:16:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:51.167 01:16:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:51.167 01:16:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:51.167 01:16:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:51.167 01:16:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:51.167 01:16:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:51.167 01:16:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:51.167 01:16:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:51.167 01:16:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:51.167 01:16:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:51.167 01:16:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:51.167 01:16:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:38:51.167 01:16:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:38:51.167 01:16:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:38:51.167 01:16:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:38:51.167 01:16:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:38:51.167 01:16:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:38:51.167 01:16:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:38:51.167 01:16:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:38:51.167 rmmod nvme_tcp 00:38:51.167 rmmod nvme_fabrics 00:38:51.167 rmmod nvme_keyring 00:38:51.167 01:16:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:38:51.167 01:16:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:38:51.167 01:16:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:38:51.167 01:16:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 3148604 ']' 
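The teardown traced just above and continuing below is driven by nvmftestfini in nvmf/common.sh. A rough outline of that sequence follows, with simplified retries; the nvmfpid variable name and the exact piping of the iptables commands are assumptions, so read this as a sketch rather than the library source.

    # Outline of the teardown visible in the trace (nvmf/common.sh), simplified.
    nvmfcleanup() {
        sync
        set +e
        for i in {1..20}; do                           # retry: kernel module may still be busy
            modprobe -v -r nvme-tcp && modprobe -v -r nvme-fabrics && break
            sleep 1                                    # assumption: pause between retries
        done
        set -e
    }

    nvmftestfini() {
        nvmfcleanup
        [[ -n "$nvmfpid" ]] && killprocess "$nvmfpid"  # stop the target app (pid 3148604 in this run)
        # drop SPDK_NVMF iptables rules and the spdk network namespace (assumed piping)
        iptables-save | grep -v SPDK_NVMF | iptables-restore
        remove_spdk_ns
    }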
00:38:51.167 01:16:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 3148604 00:38:51.167 01:16:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 3148604 ']' 00:38:51.167 01:16:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 3148604 00:38:51.167 01:16:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname 00:38:51.167 01:16:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:51.167 01:16:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3148604 00:38:51.426 01:16:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:38:51.426 01:16:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:38:51.426 01:16:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3148604' 00:38:51.426 killing process with pid 3148604 00:38:51.426 01:16:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 3148604 00:38:51.426 01:16:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 3148604 00:38:52.799 01:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:38:52.800 01:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:38:52.800 01:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:38:52.800 01:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:38:52.800 01:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:38:52.800 01:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore 00:38:52.800 01:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:38:52.800 01:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:38:52.800 01:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:38:52.800 01:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:52.800 01:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:52.800 01:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:54.701 01:16:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:38:54.701 00:38:54.701 real 0m48.923s 00:38:54.701 user 3m20.503s 00:38:54.701 sys 0m22.009s 00:38:54.701 01:16:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:54.701 01:16:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:38:54.701 ************************************ 00:38:54.701 END TEST nvmf_ns_hotplug_stress 00:38:54.701 ************************************ 00:38:54.701 01:16:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:38:54.701 01:16:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:38:54.701 01:16:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:54.701 01:16:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:38:54.701 ************************************ 00:38:54.701 START TEST nvmf_delete_subsystem 00:38:54.701 ************************************ 00:38:54.701 01:16:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:38:54.701 * Looking for test storage... 00:38:54.701 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:38:54.701 01:16:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:38:54.701 01:16:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lcov --version 00:38:54.701 01:16:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:38:54.701 01:16:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:38:54.701 01:16:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:54.701 01:16:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:54.701 01:16:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:54.701 01:16:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:38:54.701 01:16:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:38:54.701 01:16:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:38:54.701 01:16:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:38:54.701 01:16:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:38:54.701 01:16:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:38:54.701 01:16:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:38:54.701 01:16:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:54.701 01:16:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:38:54.701 01:16:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:38:54.701 01:16:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:54.701 01:16:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:38:54.701 01:16:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:38:54.701 01:16:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:38:54.701 01:16:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:54.701 01:16:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:38:54.701 01:16:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:38:54.701 01:16:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:38:54.701 01:16:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:38:54.701 01:16:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:54.701 01:16:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:38:54.701 01:16:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:38:54.701 01:16:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:54.701 01:16:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:54.701 01:16:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:38:54.701 01:16:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:54.701 01:16:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:38:54.701 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:54.701 --rc genhtml_branch_coverage=1 00:38:54.701 --rc genhtml_function_coverage=1 00:38:54.701 --rc genhtml_legend=1 00:38:54.701 --rc geninfo_all_blocks=1 00:38:54.701 --rc geninfo_unexecuted_blocks=1 00:38:54.701 00:38:54.701 ' 00:38:54.701 01:16:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:38:54.701 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:54.701 --rc genhtml_branch_coverage=1 00:38:54.701 --rc genhtml_function_coverage=1 00:38:54.701 --rc genhtml_legend=1 00:38:54.701 --rc geninfo_all_blocks=1 00:38:54.701 --rc geninfo_unexecuted_blocks=1 00:38:54.701 00:38:54.701 ' 00:38:54.701 01:16:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:38:54.701 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:54.701 --rc genhtml_branch_coverage=1 00:38:54.701 --rc 
genhtml_function_coverage=1 00:38:54.701 --rc genhtml_legend=1 00:38:54.701 --rc geninfo_all_blocks=1 00:38:54.701 --rc geninfo_unexecuted_blocks=1 00:38:54.701 00:38:54.701 ' 00:38:54.701 01:16:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:38:54.701 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:54.701 --rc genhtml_branch_coverage=1 00:38:54.701 --rc genhtml_function_coverage=1 00:38:54.702 --rc genhtml_legend=1 00:38:54.702 --rc geninfo_all_blocks=1 00:38:54.702 --rc geninfo_unexecuted_blocks=1 00:38:54.702 00:38:54.702 ' 00:38:54.702 01:16:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:54.702 01:16:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:38:54.702 01:16:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:54.702 01:16:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:54.702 01:16:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:54.702 01:16:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:54.702 01:16:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:54.702 01:16:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:54.702 01:16:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:54.702 01:16:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:54.702 01:16:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:54.702 01:16:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:54.702 01:16:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:38:54.702 01:16:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:38:54.702 01:16:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:54.702 01:16:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:54.702 01:16:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:54.702 01:16:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:54.702 01:16:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:54.702 01:16:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:38:54.702 
01:16:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:54.702 01:16:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:54.702 01:16:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:54.702 01:16:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:54.702 01:16:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:54.702 01:16:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:54.702 01:16:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:38:54.702 01:16:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:54.702 01:16:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:38:54.702 01:16:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:38:54.702 01:16:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:38:54.702 01:16:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:54.702 01:16:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:54.702 01:16:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:54.702 01:16:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:38:54.702 01:16:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:38:54.702 01:16:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:38:54.702 01:16:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:38:54.702 01:16:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:38:54.702 01:16:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:38:54.702 01:16:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:38:54.702 01:16:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:54.702 01:16:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:38:54.702 01:16:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:38:54.702 01:16:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:38:54.702 01:16:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:54.702 01:16:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:54.702 01:16:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:54.702 01:16:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:38:54.702 01:16:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:38:54.702 01:16:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:38:54.702 01:16:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:38:57.234 01:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:38:57.234 01:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:38:57.234 01:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:38:57.234 01:16:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:38:57.234 01:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:38:57.234 01:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:38:57.234 01:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:38:57.234 01:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:38:57.234 01:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:38:57.234 01:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:38:57.234 01:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:38:57.234 01:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:38:57.234 01:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:38:57.234 01:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:38:57.234 01:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:38:57.234 01:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:38:57.234 01:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:38:57.234 01:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:38:57.234 01:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:38:57.234 01:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:38:57.234 01:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:38:57.234 01:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:38:57.234 01:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:38:57.234 01:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:38:57.234 01:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:38:57.234 01:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:38:57.234 01:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:38:57.234 01:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:38:57.234 01:16:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:38:57.234 01:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:38:57.234 01:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:38:57.234 01:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:38:57.234 01:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:38:57.234 01:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:57.234 01:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:38:57.234 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:38:57.234 01:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:57.234 01:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:57.234 01:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:57.234 01:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:57.234 01:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:57.234 01:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:57.234 01:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:38:57.234 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:38:57.234 01:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:57.234 01:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:57.234 01:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:57.234 01:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:57.234 01:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:57.234 01:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:38:57.234 01:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:38:57.235 01:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:38:57.235 01:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:57.235 01:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:57.235 01:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:57.235 01:16:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:57.235 01:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:57.235 01:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:57.235 01:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:57.235 01:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:38:57.235 Found net devices under 0000:0a:00.0: cvl_0_0 00:38:57.235 01:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:57.235 01:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:57.235 01:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:57.235 01:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:57.235 01:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:57.235 01:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:57.235 01:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:57.235 01:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:57.235 01:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:38:57.235 Found net devices under 0000:0a:00.1: cvl_0_1 00:38:57.235 01:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:57.235 01:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:38:57.235 01:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:38:57.235 01:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:38:57.235 01:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:38:57.235 01:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:38:57.235 01:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:38:57.235 01:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:38:57.235 01:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:38:57.235 01:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:38:57.235 01:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@256 -- # (( 2 > 1 )) 00:38:57.235 01:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:38:57.235 01:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:38:57.235 01:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:38:57.235 01:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:38:57.235 01:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:38:57.235 01:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:38:57.235 01:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:38:57.235 01:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:38:57.235 01:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:38:57.235 01:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:38:57.235 01:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:38:57.235 01:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:38:57.235 01:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:38:57.235 01:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:38:57.235 01:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:38:57.235 01:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:38:57.235 01:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:38:57.235 01:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:38:57.235 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:38:57.235 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.219 ms 00:38:57.235 00:38:57.235 --- 10.0.0.2 ping statistics --- 00:38:57.235 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:57.235 rtt min/avg/max/mdev = 0.219/0.219/0.219/0.000 ms 00:38:57.235 01:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:38:57.235 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:38:57.235 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.048 ms 00:38:57.235 00:38:57.235 --- 10.0.0.1 ping statistics --- 00:38:57.235 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:57.235 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:38:57.235 01:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:38:57.235 01:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:38:57.235 01:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:38:57.235 01:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:38:57.235 01:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:38:57.235 01:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:38:57.235 01:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:38:57.235 01:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:38:57.235 01:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:38:57.235 01:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:38:57.235 01:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:38:57.235 01:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:38:57.235 01:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:38:57.235 01:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=3155985 00:38:57.235 01:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:38:57.235 01:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 3155985 00:38:57.235 01:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 3155985 ']' 00:38:57.235 01:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:57.235 01:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:57.235 01:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:57.235 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
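[editor note] For readers reproducing this topology by hand, the namespace plumbing traced above reduces to roughly the sketch below. The interface names (cvl_0_0 / cvl_0_1), the 10.0.0.x addresses and the namespace name are simply what this particular run detected and chose, so treat them as placeholders for your own hardware.
# sketch of the target/initiator split performed by nvmftestinit (names from this run)
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk        # target-side port moves into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1              # initiator side stays in the root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
ping -c 1 10.0.0.2                                             # root ns -> target ns
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1               # target ns -> root ns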
00:38:57.235 01:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:57.235 01:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:38:57.235 [2024-12-06 01:16:29.614150] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:38:57.235 [2024-12-06 01:16:29.617260] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 24.03.0 initialization... 00:38:57.235 [2024-12-06 01:16:29.617374] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:57.235 [2024-12-06 01:16:29.782537] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:38:57.235 [2024-12-06 01:16:29.918962] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:57.235 [2024-12-06 01:16:29.919047] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:57.235 [2024-12-06 01:16:29.919087] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:57.235 [2024-12-06 01:16:29.919108] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:57.235 [2024-12-06 01:16:29.919143] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:38:57.235 [2024-12-06 01:16:29.921694] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:57.235 [2024-12-06 01:16:29.921700] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:38:57.802 [2024-12-06 01:16:30.295660] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:38:57.802 [2024-12-06 01:16:30.296490] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:38:57.802 [2024-12-06 01:16:30.296800] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
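[editor note] The nvmf_tgt invocation buried in the trace above, pulled out for readability; the absolute workspace path is shortened here, everything else is as logged.
ip netns exec cvl_0_0_ns_spdk \
    ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 &
nvmfpid=$!    # this run got pid 3155985
# -m 0x3 limits the target to cores 0-1, -e 0xFFFF enables all tracepoint groups,
# and --interrupt-mode is what makes the reactors and poll groups event driven,
# hence the "Set spdk_thread (...) to intr mode" notices above.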
00:38:58.062 01:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:58.062 01:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:38:58.062 01:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:38:58.062 01:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:38:58.062 01:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:38:58.062 01:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:58.062 01:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:38:58.062 01:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:58.062 01:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:38:58.062 [2024-12-06 01:16:30.650796] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:58.062 01:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:58.062 01:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:38:58.062 01:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:58.062 01:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:38:58.062 01:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:58.062 01:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:38:58.062 01:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:58.062 01:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:38:58.062 [2024-12-06 01:16:30.671095] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:58.062 01:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:58.062 01:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:38:58.062 01:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:58.062 01:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:38:58.062 NULL1 00:38:58.062 01:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:58.062 01:16:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:38:58.062 01:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:58.062 01:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:38:58.062 Delay0 00:38:58.062 01:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:58.062 01:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:58.062 01:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:58.062 01:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:38:58.062 01:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:58.062 01:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=3156188 00:38:58.062 01:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:38:58.062 01:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:38:58.408 [2024-12-06 01:16:30.816730] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
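[editor note] The rpc_cmd calls traced above map onto plain scripts/rpc.py invocations; a condensed equivalent of the setup phase (socket path, NQN and arguments exactly as logged, comments added here) would be roughly:
rpc=scripts/rpc.py        # talks to /var/tmp/spdk.sock by default
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$rpc bdev_null_create NULL1 1000 512          # 1000 MB null bdev, 512-byte blocks
$rpc bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
# Delay0 adds roughly 1 s of artificial latency to every I/O, so the perf run
# started right after this still has requests queued when the subsystem is
# deleted -- the condition this test is meant to exercise.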
00:39:00.315 01:16:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:39:00.315 01:16:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:00.315 01:16:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:39:00.315 Read completed with error (sct=0, sc=8) 00:39:00.315 Write completed with error (sct=0, sc=8) 00:39:00.315 starting I/O failed: -6 00:39:00.315 Read completed with error (sct=0, sc=8) 00:39:00.315 Write completed with error (sct=0, sc=8) 00:39:00.315 Read completed with error (sct=0, sc=8) 00:39:00.315 Read completed with error (sct=0, sc=8) 00:39:00.315 starting I/O failed: -6 00:39:00.315 Read completed with error (sct=0, sc=8) 00:39:00.315 Read completed with error (sct=0, sc=8) 00:39:00.315 Read completed with error (sct=0, sc=8) 00:39:00.315 Write completed with error (sct=0, sc=8) 00:39:00.315 starting I/O failed: -6 00:39:00.315 Write completed with error (sct=0, sc=8) 00:39:00.315 Read completed with error (sct=0, sc=8) 00:39:00.315 Read completed with error (sct=0, sc=8) 00:39:00.315 Read completed with error (sct=0, sc=8) 00:39:00.315 starting I/O failed: -6 00:39:00.315 Read completed with error (sct=0, sc=8) 00:39:00.315 Read completed with error (sct=0, sc=8) 00:39:00.315 Read completed with error (sct=0, sc=8) 00:39:00.315 Write completed with error (sct=0, sc=8) 00:39:00.315 starting I/O failed: -6 00:39:00.315 Read completed with error (sct=0, sc=8) 00:39:00.315 Read completed with error (sct=0, sc=8) 00:39:00.315 Write completed with error (sct=0, sc=8) 00:39:00.315 Read completed with error (sct=0, sc=8) 00:39:00.315 starting I/O failed: -6 00:39:00.315 Write completed with error (sct=0, sc=8) 00:39:00.315 Read completed with error (sct=0, sc=8) 00:39:00.315 Write completed with error (sct=0, sc=8) 00:39:00.315 Read completed with error (sct=0, sc=8) 00:39:00.315 starting I/O failed: -6 00:39:00.315 Write completed with error (sct=0, sc=8) 00:39:00.315 Write completed with error (sct=0, sc=8) 00:39:00.315 Write completed with error (sct=0, sc=8) 00:39:00.315 Read completed with error (sct=0, sc=8) 00:39:00.315 starting I/O failed: -6 00:39:00.315 Read completed with error (sct=0, sc=8) 00:39:00.315 Read completed with error (sct=0, sc=8) 00:39:00.315 Write completed with error (sct=0, sc=8) 00:39:00.315 Read completed with error (sct=0, sc=8) 00:39:00.315 starting I/O failed: -6 00:39:00.315 Write completed with error (sct=0, sc=8) 00:39:00.315 Read completed with error (sct=0, sc=8) 00:39:00.315 Write completed with error (sct=0, sc=8) 00:39:00.315 [2024-12-06 01:16:32.947308] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000016600 is same with the state(6) to be set 00:39:00.315 Read completed with error (sct=0, sc=8) 00:39:00.315 Read completed with error (sct=0, sc=8) 00:39:00.315 Read completed with error (sct=0, sc=8) 00:39:00.315 Write completed with error (sct=0, sc=8) 00:39:00.315 Read completed with error (sct=0, sc=8) 00:39:00.315 Read completed with error (sct=0, sc=8) 00:39:00.315 Read completed with error (sct=0, sc=8) 00:39:00.315 Read completed with error (sct=0, sc=8) 00:39:00.315 Write completed with error (sct=0, sc=8) 00:39:00.315 Write completed with error (sct=0, sc=8) 00:39:00.315 Write completed with error (sct=0, sc=8) 00:39:00.315 Read completed with error 
(sct=0, sc=8) 00:39:00.315 Read completed with error (sct=0, sc=8) 00:39:00.315 Read completed with error (sct=0, sc=8) 00:39:00.315 Write completed with error (sct=0, sc=8) 00:39:00.315 Write completed with error (sct=0, sc=8) 00:39:00.315 Read completed with error (sct=0, sc=8) 00:39:00.315 Write completed with error (sct=0, sc=8) 00:39:00.315 Read completed with error (sct=0, sc=8) 00:39:00.315 Read completed with error (sct=0, sc=8) 00:39:00.315 Read completed with error (sct=0, sc=8) 00:39:00.315 Read completed with error (sct=0, sc=8) 00:39:00.315 Read completed with error (sct=0, sc=8) 00:39:00.315 Read completed with error (sct=0, sc=8) 00:39:00.315 Read completed with error (sct=0, sc=8) 00:39:00.315 Write completed with error (sct=0, sc=8) 00:39:00.315 Write completed with error (sct=0, sc=8) 00:39:00.315 Read completed with error (sct=0, sc=8) 00:39:00.315 Read completed with error (sct=0, sc=8) 00:39:00.315 Read completed with error (sct=0, sc=8) 00:39:00.315 Write completed with error (sct=0, sc=8) 00:39:00.315 Write completed with error (sct=0, sc=8) 00:39:00.315 Read completed with error (sct=0, sc=8) 00:39:00.315 Read completed with error (sct=0, sc=8) 00:39:00.315 Read completed with error (sct=0, sc=8) 00:39:00.315 Write completed with error (sct=0, sc=8) 00:39:00.315 Read completed with error (sct=0, sc=8) 00:39:00.315 Write completed with error (sct=0, sc=8) 00:39:00.315 Read completed with error (sct=0, sc=8) 00:39:00.315 Read completed with error (sct=0, sc=8) 00:39:00.315 Read completed with error (sct=0, sc=8) 00:39:00.315 Read completed with error (sct=0, sc=8) 00:39:00.315 Read completed with error (sct=0, sc=8) 00:39:00.315 Write completed with error (sct=0, sc=8) 00:39:00.315 Write completed with error (sct=0, sc=8) 00:39:00.315 Read completed with error (sct=0, sc=8) 00:39:00.315 [2024-12-06 01:16:32.950130] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000016b00 is same with the state(6) to be set 00:39:00.315 Read completed with error (sct=0, sc=8) 00:39:00.315 Write completed with error (sct=0, sc=8) 00:39:00.315 starting I/O failed: -6 00:39:00.315 Read completed with error (sct=0, sc=8) 00:39:00.315 Read completed with error (sct=0, sc=8) 00:39:00.315 Read completed with error (sct=0, sc=8) 00:39:00.315 Read completed with error (sct=0, sc=8) 00:39:00.315 starting I/O failed: -6 00:39:00.315 Write completed with error (sct=0, sc=8) 00:39:00.315 Read completed with error (sct=0, sc=8) 00:39:00.315 Read completed with error (sct=0, sc=8) 00:39:00.315 Read completed with error (sct=0, sc=8) 00:39:00.315 starting I/O failed: -6 00:39:00.315 Write completed with error (sct=0, sc=8) 00:39:00.315 Read completed with error (sct=0, sc=8) 00:39:00.315 Write completed with error (sct=0, sc=8) 00:39:00.315 Read completed with error (sct=0, sc=8) 00:39:00.315 starting I/O failed: -6 00:39:00.315 Write completed with error (sct=0, sc=8) 00:39:00.315 Read completed with error (sct=0, sc=8) 00:39:00.315 Read completed with error (sct=0, sc=8) 00:39:00.315 Read completed with error (sct=0, sc=8) 00:39:00.315 starting I/O failed: -6 00:39:00.315 Write completed with error (sct=0, sc=8) 00:39:00.315 Read completed with error (sct=0, sc=8) 00:39:00.315 Read completed with error (sct=0, sc=8) 00:39:00.315 Write completed with error (sct=0, sc=8) 00:39:00.315 starting I/O failed: -6 00:39:00.315 Write completed with error (sct=0, sc=8) 00:39:00.315 Write completed with error (sct=0, sc=8) 00:39:00.315 Write completed with error (sct=0, 
sc=8) 00:39:00.315 Read completed with error (sct=0, sc=8) 00:39:00.315 starting I/O failed: -6 00:39:00.315 Read completed with error (sct=0, sc=8) 00:39:00.315 Read completed with error (sct=0, sc=8) 00:39:00.315 Read completed with error (sct=0, sc=8) 00:39:00.315 Read completed with error (sct=0, sc=8) 00:39:00.315 starting I/O failed: -6 00:39:00.315 Read completed with error (sct=0, sc=8) 00:39:00.315 Read completed with error (sct=0, sc=8) 00:39:00.315 Read completed with error (sct=0, sc=8) 00:39:00.315 Write completed with error (sct=0, sc=8) 00:39:00.315 starting I/O failed: -6 00:39:00.315 Read completed with error (sct=0, sc=8) 00:39:00.315 Read completed with error (sct=0, sc=8) 00:39:00.315 Read completed with error (sct=0, sc=8) 00:39:00.315 Write completed with error (sct=0, sc=8) 00:39:00.315 starting I/O failed: -6 00:39:00.315 Read completed with error (sct=0, sc=8) 00:39:00.315 Read completed with error (sct=0, sc=8) 00:39:00.315 Read completed with error (sct=0, sc=8) 00:39:00.315 Read completed with error (sct=0, sc=8) 00:39:00.315 starting I/O failed: -6 00:39:00.315 Read completed with error (sct=0, sc=8) 00:39:00.315 Read completed with error (sct=0, sc=8) 00:39:00.315 [2024-12-06 01:16:32.951027] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500001fe80 is same with the state(6) to be set 00:39:00.315 Read completed with error (sct=0, sc=8) 00:39:00.315 Read completed with error (sct=0, sc=8) 00:39:00.315 Read completed with error (sct=0, sc=8) 00:39:00.315 Read completed with error (sct=0, sc=8) 00:39:00.315 Read completed with error (sct=0, sc=8) 00:39:00.315 Write completed with error (sct=0, sc=8) 00:39:00.315 Read completed with error (sct=0, sc=8) 00:39:00.315 Write completed with error (sct=0, sc=8) 00:39:00.315 Write completed with error (sct=0, sc=8) 00:39:00.315 Write completed with error (sct=0, sc=8) 00:39:00.315 Read completed with error (sct=0, sc=8) 00:39:00.315 Read completed with error (sct=0, sc=8) 00:39:00.315 Read completed with error (sct=0, sc=8) 00:39:00.315 Read completed with error (sct=0, sc=8) 00:39:00.315 Write completed with error (sct=0, sc=8) 00:39:00.315 Read completed with error (sct=0, sc=8) 00:39:00.315 Write completed with error (sct=0, sc=8) 00:39:00.315 Read completed with error (sct=0, sc=8) 00:39:00.315 Read completed with error (sct=0, sc=8) 00:39:00.315 Read completed with error (sct=0, sc=8) 00:39:00.315 Read completed with error (sct=0, sc=8) 00:39:00.315 Read completed with error (sct=0, sc=8) 00:39:00.316 Write completed with error (sct=0, sc=8) 00:39:00.316 Write completed with error (sct=0, sc=8) 00:39:00.316 Read completed with error (sct=0, sc=8) 00:39:00.316 Read completed with error (sct=0, sc=8) 00:39:00.316 Read completed with error (sct=0, sc=8) 00:39:00.316 Read completed with error (sct=0, sc=8) 00:39:00.316 Read completed with error (sct=0, sc=8) 00:39:00.316 Read completed with error (sct=0, sc=8) 00:39:00.316 Write completed with error (sct=0, sc=8) 00:39:00.316 Read completed with error (sct=0, sc=8) 00:39:00.316 Write completed with error (sct=0, sc=8) 00:39:00.316 Write completed with error (sct=0, sc=8) 00:39:00.316 Read completed with error (sct=0, sc=8) 00:39:00.316 Read completed with error (sct=0, sc=8) 00:39:00.316 Read completed with error (sct=0, sc=8) 00:39:00.316 Write completed with error (sct=0, sc=8) 00:39:00.316 Write completed with error (sct=0, sc=8) 00:39:00.316 Write completed with error (sct=0, sc=8) 00:39:00.316 Write completed with error 
(sct=0, sc=8) 00:39:00.316 Write completed with error (sct=0, sc=8) 00:39:00.316 Write completed with error (sct=0, sc=8) 00:39:00.316 Read completed with error (sct=0, sc=8) 00:39:00.316 Read completed with error (sct=0, sc=8) 00:39:00.316 Read completed with error (sct=0, sc=8) 00:39:00.316 Write completed with error (sct=0, sc=8) 00:39:00.316 Read completed with error (sct=0, sc=8) 00:39:00.316 Read completed with error (sct=0, sc=8) 00:39:00.316 Write completed with error (sct=0, sc=8) 00:39:00.316 Read completed with error (sct=0, sc=8) 00:39:00.316 Read completed with error (sct=0, sc=8) 00:39:00.316 Read completed with error (sct=0, sc=8) 00:39:00.316 Read completed with error (sct=0, sc=8) 00:39:00.316 Read completed with error (sct=0, sc=8) 00:39:01.254 [2024-12-06 01:16:33.916383] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000015c00 is same with the state(6) to be set 00:39:01.254 Read completed with error (sct=0, sc=8) 00:39:01.254 Write completed with error (sct=0, sc=8) 00:39:01.254 Write completed with error (sct=0, sc=8) 00:39:01.254 Write completed with error (sct=0, sc=8) 00:39:01.254 Read completed with error (sct=0, sc=8) 00:39:01.254 Read completed with error (sct=0, sc=8) 00:39:01.254 Read completed with error (sct=0, sc=8) 00:39:01.254 Read completed with error (sct=0, sc=8) 00:39:01.254 Write completed with error (sct=0, sc=8) 00:39:01.254 Read completed with error (sct=0, sc=8) 00:39:01.254 Read completed with error (sct=0, sc=8) 00:39:01.254 Read completed with error (sct=0, sc=8) 00:39:01.254 Read completed with error (sct=0, sc=8) 00:39:01.254 Read completed with error (sct=0, sc=8) 00:39:01.254 Read completed with error (sct=0, sc=8) 00:39:01.254 [2024-12-06 01:16:33.953549] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000016380 is same with the state(6) to be set 00:39:01.254 Read completed with error (sct=0, sc=8) 00:39:01.254 Write completed with error (sct=0, sc=8) 00:39:01.254 Write completed with error (sct=0, sc=8) 00:39:01.254 Read completed with error (sct=0, sc=8) 00:39:01.254 Read completed with error (sct=0, sc=8) 00:39:01.254 Read completed with error (sct=0, sc=8) 00:39:01.254 Read completed with error (sct=0, sc=8) 00:39:01.254 Read completed with error (sct=0, sc=8) 00:39:01.254 Read completed with error (sct=0, sc=8) 00:39:01.254 Read completed with error (sct=0, sc=8) 00:39:01.254 Write completed with error (sct=0, sc=8) 00:39:01.254 Read completed with error (sct=0, sc=8) 00:39:01.254 Read completed with error (sct=0, sc=8) 00:39:01.254 Write completed with error (sct=0, sc=8) 00:39:01.254 [2024-12-06 01:16:33.954306] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000016880 is same with the state(6) to be set 00:39:01.254 Read completed with error (sct=0, sc=8) 00:39:01.254 Read completed with error (sct=0, sc=8) 00:39:01.254 Read completed with error (sct=0, sc=8) 00:39:01.254 Write completed with error (sct=0, sc=8) 00:39:01.254 Write completed with error (sct=0, sc=8) 00:39:01.254 Read completed with error (sct=0, sc=8) 00:39:01.254 Read completed with error (sct=0, sc=8) 00:39:01.254 Read completed with error (sct=0, sc=8) 00:39:01.254 Write completed with error (sct=0, sc=8) 00:39:01.254 Read completed with error (sct=0, sc=8) 00:39:01.254 Read completed with error (sct=0, sc=8) 00:39:01.254 Read completed with error (sct=0, sc=8) 00:39:01.254 Read completed with error (sct=0, sc=8) 00:39:01.254 Read completed with 
error (sct=0, sc=8) 00:39:01.254 Read completed with error (sct=0, sc=8) 00:39:01.254 Write completed with error (sct=0, sc=8) 00:39:01.254 Read completed with error (sct=0, sc=8) 00:39:01.254 Read completed with error (sct=0, sc=8) 00:39:01.254 Read completed with error (sct=0, sc=8) 00:39:01.254 Write completed with error (sct=0, sc=8) 00:39:01.254 Read completed with error (sct=0, sc=8) 00:39:01.254 Read completed with error (sct=0, sc=8) 00:39:01.254 Read completed with error (sct=0, sc=8) 00:39:01.254 [2024-12-06 01:16:33.955871] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000020100 is same with the state(6) to be set 00:39:01.254 Read completed with error (sct=0, sc=8) 00:39:01.254 Write completed with error (sct=0, sc=8) 00:39:01.254 Read completed with error (sct=0, sc=8) 00:39:01.254 Read completed with error (sct=0, sc=8) 00:39:01.254 Read completed with error (sct=0, sc=8) 00:39:01.254 Read completed with error (sct=0, sc=8) 00:39:01.255 Read completed with error (sct=0, sc=8) 00:39:01.255 Read completed with error (sct=0, sc=8) 00:39:01.255 Read completed with error (sct=0, sc=8) 00:39:01.255 Read completed with error (sct=0, sc=8) 00:39:01.255 Write completed with error (sct=0, sc=8) 00:39:01.255 Read completed with error (sct=0, sc=8) 00:39:01.255 Write completed with error (sct=0, sc=8) 00:39:01.255 Read completed with error (sct=0, sc=8) 00:39:01.255 Read completed with error (sct=0, sc=8) 00:39:01.255 Write completed with error (sct=0, sc=8) 00:39:01.255 Read completed with error (sct=0, sc=8) 00:39:01.255 Read completed with error (sct=0, sc=8) 00:39:01.255 Write completed with error (sct=0, sc=8) 00:39:01.255 Read completed with error (sct=0, sc=8) 00:39:01.255 Write completed with error (sct=0, sc=8) 00:39:01.255 Write completed with error (sct=0, sc=8) 00:39:01.255 [2024-12-06 01:16:33.956188] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000020600 is same with the state(6) to be set 00:39:01.255 01:16:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:01.255 01:16:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:39:01.255 01:16:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 3156188 00:39:01.255 01:16:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:39:01.255 Initializing NVMe Controllers 00:39:01.255 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:39:01.255 Controller IO queue size 128, less than required. 00:39:01.255 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:39:01.255 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:39:01.255 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:39:01.255 Initialization complete. Launching workers. 
00:39:01.255 ======================================================== 00:39:01.255 Latency(us) 00:39:01.255 Device Information : IOPS MiB/s Average min max 00:39:01.255 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 150.69 0.07 947845.19 2824.50 1047821.25 00:39:01.255 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 166.56 0.08 904199.17 696.38 1016966.02 00:39:01.255 ======================================================== 00:39:01.255 Total : 317.25 0.15 924931.03 696.38 1047821.25 00:39:01.255 00:39:01.255 [2024-12-06 01:16:33.961191] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000015c00 (9): Bad file descriptor 00:39:01.255 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:39:01.821 01:16:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:39:01.821 01:16:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 3156188 00:39:01.821 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (3156188) - No such process 00:39:01.821 01:16:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 3156188 00:39:01.821 01:16:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0 00:39:01.821 01:16:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 3156188 00:39:01.821 01:16:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait 00:39:01.821 01:16:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:39:01.821 01:16:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait 00:39:01.821 01:16:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:39:01.821 01:16:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 3156188 00:39:01.821 01:16:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1 00:39:01.821 01:16:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:39:01.821 01:16:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:39:01.821 01:16:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:39:01.821 01:16:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:39:01.821 01:16:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:01.821 01:16:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:39:01.821 01:16:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:01.821 
01:16:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:39:01.821 01:16:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:01.821 01:16:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:39:01.821 [2024-12-06 01:16:34.479055] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:01.821 01:16:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:01.821 01:16:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:01.821 01:16:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:01.821 01:16:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:39:01.821 01:16:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:01.821 01:16:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=3156599 00:39:01.821 01:16:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:39:01.821 01:16:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:39:01.821 01:16:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3156599 00:39:01.821 01:16:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:39:02.080 [2024-12-06 01:16:34.589542] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
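[editor note] What follows in the trace is the second I/O run and the wait loop from delete_subsystem.sh (around lines 54-60); an approximate reconstruction, with the loop structure inferred from the traced line numbers rather than copied from the script, looks like this:
./build/bin/spdk_nvme_perf -c 0xC \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
    -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 &
perf_pid=$!
delay=0
while kill -0 "$perf_pid" 2>/dev/null; do      # still running?
    (( delay++ > 20 )) && break                # give up after ~10 s of 0.5 s polls
    sleep 0.5
done
# The "kill: (pid) - No such process" message in the trace below is the expected
# sign that perf has exited; the script then waits on the pid and tears the
# target down via nvmftestfini.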
00:39:02.338 01:16:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:39:02.338 01:16:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3156599 00:39:02.338 01:16:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:39:02.903 01:16:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:39:02.903 01:16:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3156599 00:39:02.903 01:16:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:39:03.470 01:16:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:39:03.470 01:16:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3156599 00:39:03.470 01:16:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:39:04.035 01:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:39:04.035 01:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3156599 00:39:04.035 01:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:39:04.598 01:16:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:39:04.598 01:16:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3156599 00:39:04.598 01:16:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:39:04.855 01:16:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:39:04.855 01:16:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3156599 00:39:04.855 01:16:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:39:05.113 Initializing NVMe Controllers 00:39:05.113 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:39:05.113 Controller IO queue size 128, less than required. 00:39:05.113 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:39:05.113 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:39:05.113 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:39:05.113 Initialization complete. Launching workers. 
00:39:05.113 ======================================================== 00:39:05.113 Latency(us) 00:39:05.113 Device Information : IOPS MiB/s Average min max 00:39:05.113 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1005840.69 1000311.70 1013909.96 00:39:05.113 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1005857.53 1000239.36 1014935.09 00:39:05.113 ======================================================== 00:39:05.113 Total : 256.00 0.12 1005849.11 1000239.36 1014935.09 00:39:05.113 00:39:05.371 01:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:39:05.371 01:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3156599 00:39:05.371 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (3156599) - No such process 00:39:05.371 01:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 3156599 00:39:05.371 01:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:39:05.371 01:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:39:05.371 01:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:39:05.371 01:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:39:05.371 01:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:39:05.371 01:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:39:05.371 01:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:39:05.371 01:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:39:05.371 rmmod nvme_tcp 00:39:05.371 rmmod nvme_fabrics 00:39:05.371 rmmod nvme_keyring 00:39:05.371 01:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:39:05.371 01:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:39:05.371 01:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:39:05.371 01:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 3155985 ']' 00:39:05.372 01:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 3155985 00:39:05.372 01:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 3155985 ']' 00:39:05.372 01:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 3155985 00:39:05.630 01:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname 00:39:05.630 01:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:39:05.630 01:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3155985 00:39:05.630 01:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:39:05.630 01:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:39:05.630 01:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3155985' 00:39:05.630 killing process with pid 3155985 00:39:05.630 01:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 3155985 00:39:05.630 01:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 3155985 00:39:07.003 01:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:39:07.003 01:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:39:07.003 01:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:39:07.003 01:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:39:07.003 01:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:39:07.003 01:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:39:07.003 01:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:39:07.003 01:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:39:07.003 01:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:39:07.003 01:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:07.003 01:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:07.003 01:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:08.909 01:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:39:08.910 00:39:08.910 real 0m14.103s 00:39:08.910 user 0m26.313s 00:39:08.910 sys 0m4.043s 00:39:08.910 01:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:08.910 01:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:39:08.910 ************************************ 00:39:08.910 END TEST nvmf_delete_subsystem 00:39:08.910 ************************************ 00:39:08.910 01:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:39:08.910 01:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:39:08.910 01:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:39:08.910 01:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:39:08.910 ************************************ 00:39:08.910 START TEST nvmf_host_management 00:39:08.910 ************************************ 00:39:08.910 01:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:39:08.910 * Looking for test storage... 00:39:08.910 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:39:08.910 01:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:39:08.910 01:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1711 -- # lcov --version 00:39:08.910 01:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:39:08.910 01:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:39:08.910 01:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:08.910 01:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:08.910 01:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:08.910 01:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:39:08.910 01:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:39:08.910 01:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:39:08.910 01:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:39:08.910 01:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:39:08.910 01:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:39:08.910 01:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:39:08.910 01:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:39:08.910 01:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:39:08.910 01:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:39:08.910 01:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:08.910 01:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:39:08.910 01:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:39:08.910 01:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:39:08.910 01:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:08.910 01:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:39:08.910 01:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:39:08.910 01:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:39:08.910 01:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:39:08.910 01:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:08.910 01:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:39:08.910 01:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:39:08.910 01:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:08.910 01:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:39:08.910 01:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:39:08.910 01:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:08.910 01:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:39:08.910 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:08.910 --rc genhtml_branch_coverage=1 00:39:08.910 --rc genhtml_function_coverage=1 00:39:08.910 --rc genhtml_legend=1 00:39:08.910 --rc geninfo_all_blocks=1 00:39:08.910 --rc geninfo_unexecuted_blocks=1 00:39:08.910 00:39:08.910 ' 00:39:08.910 01:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:39:08.910 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:08.910 --rc genhtml_branch_coverage=1 00:39:08.910 --rc genhtml_function_coverage=1 00:39:08.910 --rc genhtml_legend=1 00:39:08.910 --rc geninfo_all_blocks=1 00:39:08.910 --rc geninfo_unexecuted_blocks=1 00:39:08.910 00:39:08.910 ' 00:39:08.910 01:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:39:08.910 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:08.910 --rc genhtml_branch_coverage=1 00:39:08.910 --rc genhtml_function_coverage=1 00:39:08.910 --rc genhtml_legend=1 00:39:08.910 --rc geninfo_all_blocks=1 00:39:08.910 --rc geninfo_unexecuted_blocks=1 00:39:08.910 00:39:08.910 ' 00:39:08.910 01:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:39:08.910 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:08.910 --rc genhtml_branch_coverage=1 00:39:08.910 --rc genhtml_function_coverage=1 00:39:08.910 --rc genhtml_legend=1 
00:39:08.910 --rc geninfo_all_blocks=1 00:39:08.910 --rc geninfo_unexecuted_blocks=1 00:39:08.910 00:39:08.910 ' 00:39:08.910 01:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:08.910 01:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:39:08.910 01:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:08.910 01:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:08.910 01:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:08.910 01:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:08.910 01:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:08.910 01:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:08.910 01:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:08.910 01:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:08.910 01:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:08.910 01:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:08.910 01:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:39:08.910 01:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:39:08.910 01:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:08.910 01:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:08.910 01:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:39:08.910 01:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:08.911 01:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:08.911 01:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:39:08.911 01:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:08.911 01:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:08.911 01:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:08.911 01:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:08.911 01:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:08.911 01:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:08.911 01:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:39:08.911 01:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:08.911 01:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:39:08.911 01:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:39:08.911 01:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:39:08.911 01:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:08.911 01:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:08.911 01:16:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:08.911 01:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:39:08.911 01:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:39:08.911 01:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:39:08.911 01:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:39:08.911 01:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:39:08.911 01:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:39:08.911 01:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:39:08.911 01:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:39:08.911 01:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:39:08.911 01:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:39:08.911 01:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:39:08.911 01:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:39:08.911 01:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:39:08.911 01:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:08.911 01:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:08.911 01:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:08.911 01:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:39:08.911 01:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:39:08.911 01:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:39:08.911 01:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:39:10.813 01:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:39:10.813 01:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:39:10.813 01:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:39:10.813 01:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:39:10.813 01:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:39:10.813 01:16:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:39:10.813 01:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:39:10.813 01:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:39:10.813 01:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:39:10.813 01:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:39:10.813 01:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:39:10.813 01:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:39:10.813 01:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:39:10.813 01:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:39:10.813 01:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:39:10.813 01:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:39:10.813 01:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:39:10.813 01:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:39:10.813 01:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:39:10.813 01:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:39:10.813 01:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:39:10.813 01:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:39:10.813 01:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:39:10.813 01:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:39:10.813 01:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:39:10.813 01:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:39:10.813 01:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:39:10.813 01:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:39:10.813 01:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:39:10.813 01:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:39:10.813 01:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management 
-- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:39:10.813 01:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:39:10.813 01:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:39:10.813 01:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:10.813 01:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:39:10.813 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:39:10.813 01:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:10.813 01:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:10.813 01:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:10.813 01:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:10.813 01:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:10.813 01:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:10.813 01:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:39:10.813 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:39:10.813 01:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:10.813 01:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:10.813 01:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:10.813 01:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:10.813 01:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:10.813 01:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:39:10.813 01:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:39:10.813 01:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:39:10.813 01:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:10.813 01:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:10.813 01:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:39:10.813 01:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:10.813 01:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:39:10.813 01:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 
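[annotation] This block is nvmf/common.sh discovering the two Intel E810 ports (device ID 0x159b) and mapping each PCI function to its kernel net device through sysfs. A stand-alone sketch of the same lookup; the lspci filter is my own substitution for the harness's cached PCI scan:

    # List E810 functions (vendor 0x8086, device 0x159b) and their bound net interfaces.
    for pci in $(lspci -Dn -d 8086:159b | awk '{print $1}'); do
        for netdir in /sys/bus/pci/devices/"$pci"/net/*; do
            [ -e "$netdir" ] || continue      # skip ports with no netdev bound
            echo "Found net device under $pci: ${netdir##*/}"
        done
    done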
00:39:10.813 01:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:10.813 01:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:39:10.813 Found net devices under 0000:0a:00.0: cvl_0_0 00:39:10.813 01:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:10.813 01:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:10.813 01:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:10.813 01:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:39:10.813 01:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:10.813 01:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:39:10.813 01:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:39:10.813 01:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:10.813 01:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:39:10.813 Found net devices under 0000:0a:00.1: cvl_0_1 00:39:10.813 01:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:10.813 01:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:39:10.813 01:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:39:10.813 01:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:39:10.813 01:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:39:10.813 01:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:39:10.813 01:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:39:10.813 01:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:39:10.813 01:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:39:10.813 01:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:39:10.813 01:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:39:10.813 01:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:39:10.813 01:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:39:10.813 01:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:39:10.813 01:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:39:10.813 01:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:39:10.813 01:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:39:10.814 01:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:39:10.814 01:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:39:10.814 01:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:39:10.814 01:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:39:10.814 01:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:39:10.814 01:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:39:10.814 01:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:39:10.814 01:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:39:11.073 01:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:39:11.073 01:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:39:11.073 01:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:39:11.073 01:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:39:11.073 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:39:11.073 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.160 ms 00:39:11.073 00:39:11.073 --- 10.0.0.2 ping statistics --- 00:39:11.073 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:11.073 rtt min/avg/max/mdev = 0.160/0.160/0.160/0.000 ms 00:39:11.073 01:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:39:11.073 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:39:11.073 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.061 ms 00:39:11.073 00:39:11.073 --- 10.0.0.1 ping statistics --- 00:39:11.073 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:11.073 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:39:11.073 01:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:39:11.073 01:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:39:11.073 01:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:39:11.073 01:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:39:11.073 01:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:39:11.073 01:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:39:11.073 01:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:39:11.073 01:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:39:11.073 01:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:39:11.073 01:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:39:11.073 01:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:39:11.073 01:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:39:11.073 01:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:39:11.073 01:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:39:11.073 01:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:39:11.073 01:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=3159066 00:39:11.073 01:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E 00:39:11.073 01:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 3159066 00:39:11.073 01:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 3159066 ']' 00:39:11.073 01:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:11.073 01:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:11.073 01:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:39:11.073 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:11.073 01:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:11.073 01:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:39:11.073 [2024-12-06 01:16:43.677207] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:39:11.073 [2024-12-06 01:16:43.679683] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 24.03.0 initialization... 00:39:11.073 [2024-12-06 01:16:43.679786] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:11.333 [2024-12-06 01:16:43.820486] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:39:11.333 [2024-12-06 01:16:43.943250] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:39:11.333 [2024-12-06 01:16:43.943343] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:39:11.333 [2024-12-06 01:16:43.943369] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:39:11.333 [2024-12-06 01:16:43.943387] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:39:11.333 [2024-12-06 01:16:43.943407] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:39:11.333 [2024-12-06 01:16:43.948852] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:39:11.333 [2024-12-06 01:16:43.948941] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:39:11.333 [2024-12-06 01:16:43.948981] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:39:11.333 [2024-12-06 01:16:43.948994] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:39:11.901 [2024-12-06 01:16:44.308292] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:39:11.901 [2024-12-06 01:16:44.309509] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:39:11.901 [2024-12-06 01:16:44.310686] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:39:11.901 [2024-12-06 01:16:44.311578] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:39:11.901 [2024-12-06 01:16:44.311915] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
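[annotation] At this point nvmfappstart has launched nvmf_tgt inside the cvl_0_0_ns_spdk namespace with --interrupt-mode and core mask 0x1E, and waitforlisten blocks until the RPC socket answers; the reactor and spdk_thread notices above confirm cores 1-4 came up in interrupt mode. A simplified approximation of that start-and-wait step (the polling loop stands in for the harness's waitforlisten and is an assumption):

    # Start the target in the test namespace; mask 0x1E selects cores 1-4.
    ip netns exec cvl_0_0_ns_spdk \
        ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E &
    nvmfpid=$!
    # Wait until the RPC server on /var/tmp/spdk.sock is ready (simplified waitforlisten).
    until ./scripts/rpc.py -s /var/tmp/spdk.sock framework_wait_init >/dev/null 2>&1; do
        kill -0 "$nvmfpid" || { echo "nvmf_tgt died during startup" >&2; exit 1; }
        sleep 0.5
    done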
00:39:12.160 01:16:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:12.160 01:16:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:39:12.160 01:16:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:39:12.160 01:16:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:39:12.160 01:16:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:39:12.160 01:16:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:39:12.160 01:16:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:39:12.160 01:16:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:12.160 01:16:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:39:12.160 [2024-12-06 01:16:44.654050] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:12.160 01:16:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:12.160 01:16:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:39:12.160 01:16:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:39:12.160 01:16:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:39:12.160 01:16:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:39:12.160 01:16:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:39:12.160 01:16:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:39:12.160 01:16:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:12.160 01:16:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:39:12.160 Malloc0 00:39:12.160 [2024-12-06 01:16:44.778342] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:12.160 01:16:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:12.160 01:16:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:39:12.160 01:16:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:39:12.160 01:16:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:39:12.160 01:16:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=3159239 00:39:12.160 01:16:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 3159239 /var/tmp/bdevperf.sock 00:39:12.160 01:16:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 3159239 ']' 00:39:12.160 01:16:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:39:12.160 01:16:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:39:12.160 01:16:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:39:12.160 01:16:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:12.160 01:16:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:39:12.160 01:16:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:39:12.160 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:39:12.160 01:16:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:39:12.161 01:16:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:12.161 01:16:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:39:12.161 01:16:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:39:12.161 01:16:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:39:12.161 { 00:39:12.161 "params": { 00:39:12.161 "name": "Nvme$subsystem", 00:39:12.161 "trtype": "$TEST_TRANSPORT", 00:39:12.161 "traddr": "$NVMF_FIRST_TARGET_IP", 00:39:12.161 "adrfam": "ipv4", 00:39:12.161 "trsvcid": "$NVMF_PORT", 00:39:12.161 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:39:12.161 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:39:12.161 "hdgst": ${hdgst:-false}, 00:39:12.161 "ddgst": ${ddgst:-false} 00:39:12.161 }, 00:39:12.161 "method": "bdev_nvme_attach_controller" 00:39:12.161 } 00:39:12.161 EOF 00:39:12.161 )") 00:39:12.161 01:16:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:39:12.161 01:16:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 
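[annotation] gen_nvmf_target_json expands the heredoc above into bdev_nvme_attach_controller parameters for Nvme0 and feeds the result to bdevperf through process substitution (the --json /dev/fd/63 in the traced command line). A roughly equivalent invocation written out by hand; the outer subsystems/bdev wrapper is assumed from SPDK's usual JSON config shape, while the params mirror the printf output that follows:

    ./build/examples/bdevperf -r /var/tmp/bdevperf.sock -q 64 -o 65536 -w verify -t 10 \
        --json <(cat <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_nvme_attach_controller",
              "params": {
                "name": "Nvme0",
                "trtype": "tcp",
                "traddr": "10.0.0.2",
                "adrfam": "ipv4",
                "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode0",
                "hostnqn": "nqn.2016-06.io.spdk:host0",
                "hdgst": false,
                "ddgst": false
              }
            }
          ]
        }
      ]
    }
    EOF
    )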
00:39:12.161 01:16:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:39:12.161 01:16:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:39:12.161 "params": { 00:39:12.161 "name": "Nvme0", 00:39:12.161 "trtype": "tcp", 00:39:12.161 "traddr": "10.0.0.2", 00:39:12.161 "adrfam": "ipv4", 00:39:12.161 "trsvcid": "4420", 00:39:12.161 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:39:12.161 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:39:12.161 "hdgst": false, 00:39:12.161 "ddgst": false 00:39:12.161 }, 00:39:12.161 "method": "bdev_nvme_attach_controller" 00:39:12.161 }' 00:39:12.421 [2024-12-06 01:16:44.900471] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 24.03.0 initialization... 00:39:12.421 [2024-12-06 01:16:44.900597] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3159239 ] 00:39:12.421 [2024-12-06 01:16:45.036778] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:12.678 [2024-12-06 01:16:45.164604] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:13.243 Running I/O for 10 seconds... 00:39:13.243 01:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:13.243 01:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:39:13.243 01:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:39:13.243 01:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:13.243 01:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:39:13.243 01:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:13.243 01:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:39:13.243 01:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:39:13.243 01:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:39:13.243 01:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:39:13.243 01:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:39:13.243 01:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:39:13.243 01:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:39:13.243 01:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:39:13.243 01:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:39:13.243 01:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:39:13.243 01:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:13.243 01:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:39:13.243 01:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:13.243 01:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=67 00:39:13.243 01:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 67 -ge 100 ']' 00:39:13.243 01:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:39:13.502 01:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:39:13.502 01:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:39:13.502 01:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:39:13.502 01:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:39:13.502 01:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:13.502 01:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:39:13.502 01:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:13.502 01:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=451 00:39:13.502 01:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 451 -ge 100 ']' 00:39:13.502 01:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:39:13.502 01:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@60 -- # break 00:39:13.502 01:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:39:13.502 01:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:39:13.502 01:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:13.502 01:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:39:13.502 [2024-12-06 01:16:46.198697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:69120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.502 [2024-12-06 01:16:46.198768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:13.502 [2024-12-06 01:16:46.198832] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:69248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.502 [2024-12-06 01:16:46.198858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:13.502 [2024-12-06 01:16:46.198885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:69376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.502 [2024-12-06 01:16:46.198908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:13.502 [2024-12-06 01:16:46.198932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:69504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.502 [2024-12-06 01:16:46.198953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:13.502 [2024-12-06 01:16:46.198977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:69632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.502 [2024-12-06 01:16:46.198999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:13.502 [2024-12-06 01:16:46.199053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:69760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.502 [2024-12-06 01:16:46.199076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:13.502 [2024-12-06 01:16:46.199110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:69888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.502 [2024-12-06 01:16:46.199147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:13.502 [2024-12-06 01:16:46.199172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:70016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.502 [2024-12-06 01:16:46.199192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:13.502 [2024-12-06 01:16:46.199215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:70144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.502 [2024-12-06 01:16:46.199236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:13.502 [2024-12-06 01:16:46.199258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:70272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.502 [2024-12-06 01:16:46.199278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:13.502 [2024-12-06 01:16:46.199301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:70400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.502 [2024-12-06 01:16:46.199321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:13.502 [2024-12-06 01:16:46.199345] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:70528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.502 [2024-12-06 01:16:46.199365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:13.502 [2024-12-06 01:16:46.199388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:70656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.502 [2024-12-06 01:16:46.199408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:13.502 [2024-12-06 01:16:46.199431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:70784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.502 [2024-12-06 01:16:46.199452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:13.502 [2024-12-06 01:16:46.199475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:70912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.502 [2024-12-06 01:16:46.199496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:13.502 [2024-12-06 01:16:46.199519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:71040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.502 [2024-12-06 01:16:46.199539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:13.502 [2024-12-06 01:16:46.199562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:71168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.502 [2024-12-06 01:16:46.199583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:13.502 [2024-12-06 01:16:46.199605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:71296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.502 [2024-12-06 01:16:46.199630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:13.502 [2024-12-06 01:16:46.199653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:71424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.502 [2024-12-06 01:16:46.199674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:13.502 [2024-12-06 01:16:46.199697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:71552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.502 [2024-12-06 01:16:46.199717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:13.502 [2024-12-06 01:16:46.199739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:71680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.502 [2024-12-06 01:16:46.199759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:13.502 [2024-12-06 01:16:46.199782] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:71808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.502 [2024-12-06 01:16:46.199811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:13.502 [2024-12-06 01:16:46.199861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:71936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.502 [2024-12-06 01:16:46.199883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:13.502 [2024-12-06 01:16:46.199907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:72064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.502 [2024-12-06 01:16:46.199928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:13.503 [2024-12-06 01:16:46.199951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:72192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.503 [2024-12-06 01:16:46.199972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:13.503 [2024-12-06 01:16:46.199995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:72320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.503 [2024-12-06 01:16:46.200016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:13.503 [2024-12-06 01:16:46.200039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:72448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.503 [2024-12-06 01:16:46.200060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:13.503 [2024-12-06 01:16:46.200084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:72576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.503 [2024-12-06 01:16:46.200111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:13.503 [2024-12-06 01:16:46.200148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:72704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.503 [2024-12-06 01:16:46.200176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:13.503 [2024-12-06 01:16:46.200199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:72832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.503 [2024-12-06 01:16:46.200219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:13.503 [2024-12-06 01:16:46.200246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:72960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.503 [2024-12-06 01:16:46.200267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:13.503 [2024-12-06 01:16:46.200290] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:73088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.503 [2024-12-06 01:16:46.200310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:13.503 [2024-12-06 01:16:46.200333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:73216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.503 [2024-12-06 01:16:46.200354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:13.503 [2024-12-06 01:16:46.200377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:73344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.503 [2024-12-06 01:16:46.200397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:13.503 [2024-12-06 01:16:46.200419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:73472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.503 [2024-12-06 01:16:46.200439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:13.503 [2024-12-06 01:16:46.200462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:73600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.503 [2024-12-06 01:16:46.200482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:13.503 [2024-12-06 01:16:46.200505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:65536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.503 [2024-12-06 01:16:46.200525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:13.503 [2024-12-06 01:16:46.200548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:65664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.503 [2024-12-06 01:16:46.200568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:13.503 [2024-12-06 01:16:46.200591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:65792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.503 [2024-12-06 01:16:46.200611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:13.503 [2024-12-06 01:16:46.200634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:65920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.503 [2024-12-06 01:16:46.200654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:13.503 [2024-12-06 01:16:46.200676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:66048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.503 [2024-12-06 01:16:46.200697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:13.503 [2024-12-06 01:16:46.200720] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:66176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.503 [2024-12-06 01:16:46.200740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:13.503 [2024-12-06 01:16:46.200763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:66304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.503 [2024-12-06 01:16:46.200786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:13.503 [2024-12-06 01:16:46.200835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:66432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.503 [2024-12-06 01:16:46.200858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:13.503 [2024-12-06 01:16:46.200881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:66560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.503 [2024-12-06 01:16:46.200902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:13.503 [2024-12-06 01:16:46.200926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:66688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.503 [2024-12-06 01:16:46.200946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:13.503 [2024-12-06 01:16:46.200969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:66816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.503 [2024-12-06 01:16:46.200990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:13.503 [2024-12-06 01:16:46.201014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:66944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.503 [2024-12-06 01:16:46.201035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:13.503 [2024-12-06 01:16:46.201059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:67072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.503 [2024-12-06 01:16:46.201080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:13.503 [2024-12-06 01:16:46.201114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:67200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.503 [2024-12-06 01:16:46.201150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:13.503 [2024-12-06 01:16:46.201181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:67328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.503 [2024-12-06 01:16:46.201201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:13.503 [2024-12-06 01:16:46.201224] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:67456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.503 [2024-12-06 01:16:46.201244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:13.503 [2024-12-06 01:16:46.201267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:67584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.503 [2024-12-06 01:16:46.201287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:13.503 [2024-12-06 01:16:46.201310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:67712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.503 [2024-12-06 01:16:46.201330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:13.503 [2024-12-06 01:16:46.201352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:67840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.503 [2024-12-06 01:16:46.201372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:13.503 [2024-12-06 01:16:46.201399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:67968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.503 [2024-12-06 01:16:46.201420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:13.503 [2024-12-06 01:16:46.201442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:68096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.503 [2024-12-06 01:16:46.201463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:13.503 [2024-12-06 01:16:46.201485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:68224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.503 [2024-12-06 01:16:46.201506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:13.503 [2024-12-06 01:16:46.201528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:68352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.503 [2024-12-06 01:16:46.201549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:13.503 [2024-12-06 01:16:46.201572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:68480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.503 [2024-12-06 01:16:46.201592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:13.503 [2024-12-06 01:16:46.201615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:68608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.503 [2024-12-06 01:16:46.201635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:13.503 [2024-12-06 01:16:46.201658] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:68736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.503 [2024-12-06 01:16:46.201678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:13.504 [2024-12-06 01:16:46.201716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:68864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.504 [2024-12-06 01:16:46.201738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:13.504 [2024-12-06 01:16:46.201762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:68992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.504 [2024-12-06 01:16:46.201783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:13.504 [2024-12-06 01:16:46.202215] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:39:13.504 [2024-12-06 01:16:46.202247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:13.504 [2024-12-06 01:16:46.202272] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:39:13.504 01:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:13.504 [2024-12-06 01:16:46.202292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:13.504 [2024-12-06 01:16:46.202314] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:39:13.504 [2024-12-06 01:16:46.202334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:13.504 [2024-12-06 01:16:46.202362] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:39:13.504 [2024-12-06 01:16:46.202383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:13.504 [2024-12-06 01:16:46.202401] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:39:13.504 01:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:39:13.504 01:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:13.504 01:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:39:13.504 [2024-12-06 01:16:46.203617] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:39:13.504 task offset: 69120 on job bdev=Nvme0n1 fails 00:39:13.504 00:39:13.504 Latency(us) 00:39:13.504 [2024-12-06T00:16:46.213Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:13.504 Job: Nvme0n1 
(Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:39:13.504 Job: Nvme0n1 ended in about 0.40 seconds with error 00:39:13.504 Verification LBA range: start 0x0 length 0x400 00:39:13.504 Nvme0n1 : 0.40 1282.78 80.17 160.35 0.00 42997.09 4344.79 41166.32 00:39:13.504 [2024-12-06T00:16:46.213Z] =================================================================================================================== 00:39:13.504 [2024-12-06T00:16:46.213Z] Total : 1282.78 80.17 160.35 0.00 42997.09 4344.79 41166.32 00:39:13.504 [2024-12-06 01:16:46.208576] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:39:13.504 [2024-12-06 01:16:46.208634] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:39:13.761 01:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:13.761 01:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:39:13.761 [2024-12-06 01:16:46.215257] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 00:39:14.702 01:16:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 3159239 00:39:14.702 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (3159239) - No such process 00:39:14.702 01:16:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # true 00:39:14.702 01:16:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:39:14.702 01:16:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:39:14.702 01:16:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:39:14.702 01:16:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:39:14.702 01:16:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:39:14.702 01:16:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:39:14.702 01:16:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:39:14.702 { 00:39:14.702 "params": { 00:39:14.702 "name": "Nvme$subsystem", 00:39:14.702 "trtype": "$TEST_TRANSPORT", 00:39:14.702 "traddr": "$NVMF_FIRST_TARGET_IP", 00:39:14.702 "adrfam": "ipv4", 00:39:14.702 "trsvcid": "$NVMF_PORT", 00:39:14.702 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:39:14.702 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:39:14.702 "hdgst": ${hdgst:-false}, 00:39:14.702 "ddgst": ${ddgst:-false} 00:39:14.702 }, 00:39:14.702 "method": "bdev_nvme_attach_controller" 00:39:14.702 } 00:39:14.702 EOF 00:39:14.702 )") 00:39:14.702 01:16:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:39:14.702 01:16:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
nvmf/common.sh@584 -- # jq . 00:39:14.702 01:16:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:39:14.702 01:16:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:39:14.702 "params": { 00:39:14.702 "name": "Nvme0", 00:39:14.702 "trtype": "tcp", 00:39:14.702 "traddr": "10.0.0.2", 00:39:14.702 "adrfam": "ipv4", 00:39:14.702 "trsvcid": "4420", 00:39:14.702 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:39:14.702 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:39:14.702 "hdgst": false, 00:39:14.702 "ddgst": false 00:39:14.702 }, 00:39:14.702 "method": "bdev_nvme_attach_controller" 00:39:14.702 }' 00:39:14.702 [2024-12-06 01:16:47.298087] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 24.03.0 initialization... 00:39:14.702 [2024-12-06 01:16:47.298217] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3159517 ] 00:39:14.961 [2024-12-06 01:16:47.433973] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:14.962 [2024-12-06 01:16:47.563449] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:15.528 Running I/O for 1 seconds... 00:39:16.464 1344.00 IOPS, 84.00 MiB/s 00:39:16.464 Latency(us) 00:39:16.464 [2024-12-06T00:16:49.173Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:16.464 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:39:16.464 Verification LBA range: start 0x0 length 0x400 00:39:16.464 Nvme0n1 : 1.01 1395.65 87.23 0.00 0.00 45073.46 10534.31 39807.05 00:39:16.464 [2024-12-06T00:16:49.173Z] =================================================================================================================== 00:39:16.464 [2024-12-06T00:16:49.173Z] Total : 1395.65 87.23 0.00 0.00 45073.46 10534.31 39807.05 00:39:17.403 01:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:39:17.403 01:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:39:17.403 01:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:39:17.403 01:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:39:17.403 01:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:39:17.403 01:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:39:17.403 01:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:39:17.403 01:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:39:17.403 01:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:39:17.403 01:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:39:17.403 01:16:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:39:17.403 rmmod nvme_tcp 00:39:17.403 rmmod nvme_fabrics 00:39:17.403 rmmod nvme_keyring 00:39:17.403 01:16:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:39:17.403 01:16:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:39:17.403 01:16:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:39:17.403 01:16:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 3159066 ']' 00:39:17.403 01:16:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 3159066 00:39:17.403 01:16:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 3159066 ']' 00:39:17.403 01:16:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 3159066 00:39:17.403 01:16:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:39:17.403 01:16:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:39:17.403 01:16:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3159066 00:39:17.403 01:16:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:39:17.403 01:16:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:39:17.403 01:16:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3159066' 00:39:17.403 killing process with pid 3159066 00:39:17.403 01:16:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 3159066 00:39:17.403 01:16:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 3159066 00:39:18.779 [2024-12-06 01:16:51.355910] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:39:18.779 01:16:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:39:18.779 01:16:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:39:18.779 01:16:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:39:18.779 01:16:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:39:18.779 01:16:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:39:18.779 01:16:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:39:18.779 01:16:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:39:18.779 01:16:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:39:18.779 01:16:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:39:18.779 01:16:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:18.779 01:16:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:18.779 01:16:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:21.318 01:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:39:21.318 01:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:39:21.318 00:39:21.318 real 0m12.133s 00:39:21.318 user 0m27.038s 00:39:21.318 sys 0m4.793s 00:39:21.318 01:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:21.318 01:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:39:21.318 ************************************ 00:39:21.319 END TEST nvmf_host_management 00:39:21.319 ************************************ 00:39:21.319 01:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:39:21.319 01:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:39:21.319 01:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:21.319 01:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:39:21.319 ************************************ 00:39:21.319 START TEST nvmf_lvol 00:39:21.319 ************************************ 00:39:21.319 01:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:39:21.319 * Looking for test storage... 
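The nvmf_host_management run above drives everything through SPDK's RPC layer: bdevperf attaches Nvme0n1 over TCP, host_management.sh polls bdev_get_iostat until at least 100 reads have completed (67 on the first poll, 451 on the second), then removes the host from the subsystem so every queued command completes with "ABORTED - SQ DELETION", and re-adds the host so the controller can reset. A minimal sketch of that sequence, assuming rpc.py and jq are available and reusing the socket, bdev name, and NQNs from the log (the wait_for_reads helper is illustrative, not part of the test script):

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
SOCK=/var/tmp/bdevperf.sock

wait_for_reads() {
    # Poll the bdevperf application (not the target) for completed reads on Nvme0n1.
    local i=10 reads
    while ((i--)); do
        reads=$("$RPC" -s "$SOCK" bdev_get_iostat -b Nvme0n1 | jq -r '.bdevs[0].num_read_ops')
        ((reads >= 100)) && return 0
        sleep 0.25
    done
    return 1
}

wait_for_reads
# Dropping the host aborts its outstanding I/O (the ABORTED - SQ DELETION completions above)...
"$RPC" nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
# ...and re-adding it lets the initiator reconnect and reset the controller.
"$RPC" nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0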
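When the test restarts bdevperf after killing the first instance, the configuration never touches disk: gen_nvmf_target_json emits the bdev_nvme_attach_controller entry shown in the printf output, and bdevperf reads it through --json /dev/fd/62. A rough stand-alone equivalent using process substitution is sketched below; the "subsystems" wrapper is the standard SPDK JSON config layout filled in by hand here, not a verbatim copy of what gen_nvmf_target_json produces.

BDEVPERF=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf

# Run a 1-second verify workload (queue depth 64, 64 KiB I/O) against the target
# at 10.0.0.2:4420, the same parameters host_management.sh passes above.
"$BDEVPERF" --json <(cat << 'JSON'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
JSON
) -q 64 -o 65536 -w verify -t 1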
00:39:21.319 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:39:21.319 01:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:39:21.319 01:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1711 -- # lcov --version 00:39:21.319 01:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:39:21.319 01:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:39:21.319 01:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:21.319 01:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:21.319 01:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:21.319 01:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:39:21.319 01:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:39:21.319 01:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:39:21.319 01:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:39:21.319 01:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:39:21.319 01:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:39:21.319 01:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:39:21.319 01:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:39:21.319 01:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:39:21.319 01:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:39:21.319 01:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:21.319 01:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:39:21.319 01:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:39:21.319 01:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:39:21.319 01:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:21.319 01:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:39:21.319 01:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:39:21.319 01:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:39:21.319 01:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:39:21.319 01:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:21.319 01:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:39:21.319 01:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:39:21.319 01:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:21.319 01:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:39:21.319 01:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:39:21.319 01:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:21.319 01:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:39:21.319 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:21.319 --rc genhtml_branch_coverage=1 00:39:21.319 --rc genhtml_function_coverage=1 00:39:21.319 --rc genhtml_legend=1 00:39:21.319 --rc geninfo_all_blocks=1 00:39:21.319 --rc geninfo_unexecuted_blocks=1 00:39:21.319 00:39:21.319 ' 00:39:21.319 01:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:39:21.319 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:21.319 --rc genhtml_branch_coverage=1 00:39:21.319 --rc genhtml_function_coverage=1 00:39:21.319 --rc genhtml_legend=1 00:39:21.319 --rc geninfo_all_blocks=1 00:39:21.319 --rc geninfo_unexecuted_blocks=1 00:39:21.319 00:39:21.319 ' 00:39:21.319 01:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:39:21.319 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:21.319 --rc genhtml_branch_coverage=1 00:39:21.319 --rc genhtml_function_coverage=1 00:39:21.319 --rc genhtml_legend=1 00:39:21.319 --rc geninfo_all_blocks=1 00:39:21.319 --rc geninfo_unexecuted_blocks=1 00:39:21.319 00:39:21.319 ' 00:39:21.319 01:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:39:21.319 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:21.319 --rc genhtml_branch_coverage=1 00:39:21.319 --rc genhtml_function_coverage=1 00:39:21.319 --rc genhtml_legend=1 00:39:21.319 --rc geninfo_all_blocks=1 00:39:21.319 --rc geninfo_unexecuted_blocks=1 00:39:21.319 00:39:21.319 ' 00:39:21.319 01:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:21.319 01:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:39:21.319 01:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:21.319 01:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:21.319 01:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:21.319 01:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:21.319 01:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:21.319 01:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:21.319 01:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:21.319 01:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:21.319 01:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:21.319 01:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:21.319 01:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:39:21.319 01:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:39:21.319 01:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:21.319 01:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:21.319 01:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:39:21.319 01:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:21.319 01:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:21.319 01:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:39:21.319 01:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:21.319 01:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:21.319 01:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:21.320 01:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:21.320 01:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:21.320 01:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:21.320 01:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:39:21.320 01:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:21.320 01:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:39:21.320 01:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:39:21.320 01:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:39:21.320 01:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:21.320 01:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:21.320 01:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:21.320 01:16:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:39:21.320 01:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:39:21.320 01:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:39:21.320 01:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:39:21.320 01:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:39:21.320 01:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:39:21.320 01:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:39:21.320 01:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:39:21.320 01:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:39:21.320 01:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:39:21.320 01:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:39:21.320 01:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:39:21.320 01:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:39:21.320 01:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:39:21.320 01:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:39:21.320 01:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:39:21.320 01:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:21.320 01:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:21.320 01:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:21.320 01:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:39:21.320 01:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:39:21.320 01:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:39:21.320 01:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:39:23.219 01:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:39:23.219 01:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:39:23.219 01:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:39:23.219 01:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:39:23.219 01:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:39:23.219 01:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
nvmf/common.sh@317 -- # pci_drivers=() 00:39:23.219 01:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:39:23.219 01:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:39:23.219 01:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:39:23.219 01:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:39:23.219 01:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:39:23.219 01:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:39:23.219 01:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:39:23.219 01:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:39:23.219 01:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:39:23.219 01:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:39:23.219 01:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:39:23.219 01:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:39:23.220 01:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:39:23.220 01:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:39:23.220 01:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:39:23.220 01:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:39:23.220 01:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:39:23.220 01:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:39:23.220 01:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:39:23.220 01:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:39:23.220 01:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:39:23.220 01:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:39:23.220 01:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:39:23.220 01:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:39:23.220 01:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:39:23.220 01:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:39:23.220 01:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:39:23.220 01:16:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:23.220 01:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:39:23.220 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:39:23.220 01:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:23.220 01:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:23.220 01:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:23.220 01:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:23.220 01:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:23.220 01:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:23.220 01:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:39:23.220 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:39:23.220 01:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:23.220 01:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:23.220 01:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:23.220 01:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:23.220 01:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:23.220 01:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:39:23.220 01:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:39:23.220 01:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:39:23.220 01:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:23.220 01:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:23.220 01:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:39:23.220 01:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:23.220 01:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:39:23.220 01:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:39:23.220 01:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:23.220 01:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:39:23.220 Found net devices under 0000:0a:00.0: cvl_0_0 00:39:23.220 01:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:23.220 01:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for 
pci in "${pci_devs[@]}" 00:39:23.220 01:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:23.220 01:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:39:23.220 01:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:23.220 01:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:39:23.220 01:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:39:23.220 01:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:23.220 01:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:39:23.220 Found net devices under 0000:0a:00.1: cvl_0_1 00:39:23.220 01:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:23.220 01:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:39:23.220 01:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:39:23.220 01:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:39:23.220 01:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:39:23.220 01:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:39:23.220 01:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:39:23.220 01:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:39:23.220 01:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:39:23.220 01:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:39:23.220 01:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:39:23.220 01:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:39:23.220 01:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:39:23.220 01:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:39:23.220 01:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:39:23.220 01:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:39:23.220 01:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:39:23.220 01:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:39:23.220 01:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:39:23.220 01:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:39:23.220 
01:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:39:23.220 01:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:39:23.220 01:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:39:23.220 01:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:39:23.220 01:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:39:23.220 01:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:39:23.220 01:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:39:23.220 01:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:39:23.220 01:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:39:23.220 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:39:23.220 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.378 ms 00:39:23.220 00:39:23.220 --- 10.0.0.2 ping statistics --- 00:39:23.220 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:23.220 rtt min/avg/max/mdev = 0.378/0.378/0.378/0.000 ms 00:39:23.220 01:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:39:23.220 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:39:23.220 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.139 ms 00:39:23.220 00:39:23.220 --- 10.0.0.1 ping statistics --- 00:39:23.220 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:23.220 rtt min/avg/max/mdev = 0.139/0.139/0.139/0.000 ms 00:39:23.220 01:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:39:23.220 01:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:39:23.220 01:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:39:23.220 01:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:39:23.220 01:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:39:23.220 01:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:39:23.220 01:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:39:23.220 01:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:39:23.220 01:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:39:23.220 01:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:39:23.220 01:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:39:23.220 01:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:39:23.220 01:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:39:23.220 01:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=3161975 00:39:23.220 01:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x7 00:39:23.220 01:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 3161975 00:39:23.220 01:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 3161975 ']' 00:39:23.220 01:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:23.220 01:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:23.220 01:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:23.220 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:23.220 01:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:23.220 01:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:39:23.479 [2024-12-06 01:16:55.941371] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
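[Reader aid, not part of the captured log] The nvmf_tcp_init trace above moves one E810 port (cvl_0_0) into a dedicated network namespace to act as the target side, while its sibling port (cvl_0_1) stays in the root namespace as the initiator side. A minimal sketch of that setup, reconstructed from the ip/iptables commands as they appear in the trace (interface names and addresses exactly as logged; this is a summary, not the nvmf/common.sh helper itself):

```bash
# Target port goes into its own namespace; initiator port stays in the root ns.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk

# Initiator side: 10.0.0.1 on the host.  Target side: 10.0.0.2 inside the ns.
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0

ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up

# Open TCP port 4420 on the initiator-side interface; the comment tag is what the
# cleanup path later matches (iptables-save | grep -v SPDK_NVMF | iptables-restore).
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'

# Connectivity check in both directions before the target is started.
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
```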
00:39:23.479 [2024-12-06 01:16:55.944072] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 24.03.0 initialization... 00:39:23.479 [2024-12-06 01:16:55.944189] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:23.479 [2024-12-06 01:16:56.097902] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:39:23.738 [2024-12-06 01:16:56.239121] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:39:23.738 [2024-12-06 01:16:56.239209] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:39:23.739 [2024-12-06 01:16:56.239237] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:39:23.739 [2024-12-06 01:16:56.239259] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:39:23.739 [2024-12-06 01:16:56.239281] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:39:23.739 [2024-12-06 01:16:56.241936] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:39:23.739 [2024-12-06 01:16:56.241988] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:23.739 [2024-12-06 01:16:56.241997] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:39:23.996 [2024-12-06 01:16:56.614865] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:39:23.996 [2024-12-06 01:16:56.615965] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:39:23.996 [2024-12-06 01:16:56.616757] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:39:23.996 [2024-12-06 01:16:56.617087] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
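[Reader aid, not part of the captured log] The nvmf_lvol body traced below drives everything through rpc.py against the interrupt-mode target started above. As an aid to reading the xtrace, here is a condensed reconstruction of that RPC sequence (run-specific UUIDs that the log shows being captured are replaced by shell variables); it summarizes the trace and is not the target/nvmf_lvol.sh script verbatim:

```bash
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py   # path as logged

$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512                               # Malloc0
$rpc bdev_malloc_create 64 512                               # Malloc1
$rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'

lvs=$($rpc bdev_lvol_create_lvstore raid0 lvs)               # b4205ed8-... in this run
lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 20)              # LVOL_BDEV_INIT_SIZE=20

$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

# 10 s of 4K random writes from the initiator side while the lvol is reshaped underneath.
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
    -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 &
perf_pid=$!
sleep 1

snap=$($rpc bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)
$rpc bdev_lvol_resize "$lvol" 30                             # LVOL_BDEV_FINAL_SIZE=30
clone=$($rpc bdev_lvol_clone "$snap" MY_CLONE)
$rpc bdev_lvol_inflate "$clone"
wait "$perf_pid"

# Teardown
$rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
$rpc bdev_lvol_delete "$lvol"
$rpc bdev_lvol_delete_lvstore -u "$lvs"
```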
00:39:24.254 01:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:24.254 01:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:39:24.254 01:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:39:24.254 01:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:39:24.254 01:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:39:24.254 01:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:39:24.254 01:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:39:24.513 [2024-12-06 01:16:57.187125] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:24.772 01:16:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:39:25.031 01:16:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:39:25.031 01:16:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:39:25.291 01:16:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:39:25.291 01:16:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:39:25.549 01:16:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:39:25.809 01:16:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=b4205ed8-342a-4682-89c9-dfd9d31c7817 00:39:25.809 01:16:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u b4205ed8-342a-4682-89c9-dfd9d31c7817 lvol 20 00:39:26.378 01:16:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=6c73c40d-51b4-415b-8d63-a3b62cacb33d 00:39:26.379 01:16:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:39:26.638 01:16:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 6c73c40d-51b4-415b-8d63-a3b62cacb33d 00:39:26.895 01:16:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:39:27.153 [2024-12-06 01:16:59.623383] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:39:27.153 01:16:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:39:27.416 01:16:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=3162412 00:39:27.416 01:16:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:39:27.416 01:16:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:39:28.434 01:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 6c73c40d-51b4-415b-8d63-a3b62cacb33d MY_SNAPSHOT 00:39:28.691 01:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=270e3912-1576-4bd8-ab41-93b8830b3b15 00:39:28.692 01:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 6c73c40d-51b4-415b-8d63-a3b62cacb33d 30 00:39:28.949 01:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 270e3912-1576-4bd8-ab41-93b8830b3b15 MY_CLONE 00:39:29.206 01:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=6bc12160-1976-4290-9b3f-c5f3ef89748a 00:39:29.206 01:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 6bc12160-1976-4290-9b3f-c5f3ef89748a 00:39:30.139 01:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 3162412 00:39:38.259 Initializing NVMe Controllers 00:39:38.259 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:39:38.259 Controller IO queue size 128, less than required. 00:39:38.259 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:39:38.259 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:39:38.259 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:39:38.259 Initialization complete. Launching workers. 
00:39:38.259 ======================================================== 00:39:38.259 Latency(us) 00:39:38.259 Device Information : IOPS MiB/s Average min max 00:39:38.259 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 8272.00 32.31 15478.36 594.21 180575.78 00:39:38.259 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 8109.60 31.68 15789.17 3719.83 218576.13 00:39:38.259 ======================================================== 00:39:38.259 Total : 16381.60 63.99 15632.23 594.21 218576.13 00:39:38.259 00:39:38.259 01:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:39:38.259 01:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 6c73c40d-51b4-415b-8d63-a3b62cacb33d 00:39:38.259 01:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u b4205ed8-342a-4682-89c9-dfd9d31c7817 00:39:38.518 01:17:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:39:38.518 01:17:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:39:38.518 01:17:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:39:38.518 01:17:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:39:38.518 01:17:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:39:38.518 01:17:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:39:38.518 01:17:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:39:38.518 01:17:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:39:38.518 01:17:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:39:38.518 rmmod nvme_tcp 00:39:38.777 rmmod nvme_fabrics 00:39:38.778 rmmod nvme_keyring 00:39:38.778 01:17:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:39:38.778 01:17:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:39:38.778 01:17:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:39:38.778 01:17:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 3161975 ']' 00:39:38.778 01:17:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 3161975 00:39:38.778 01:17:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 3161975 ']' 00:39:38.778 01:17:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 3161975 00:39:38.778 01:17:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:39:38.778 01:17:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:39:38.778 01:17:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3161975 00:39:38.778 01:17:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:39:38.778 01:17:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:39:38.778 01:17:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3161975' 00:39:38.778 killing process with pid 3161975 00:39:38.778 01:17:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 3161975 00:39:38.778 01:17:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 3161975 00:39:40.160 01:17:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:39:40.160 01:17:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:39:40.160 01:17:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:39:40.160 01:17:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:39:40.160 01:17:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:39:40.160 01:17:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:39:40.160 01:17:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:39:40.160 01:17:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:39:40.160 01:17:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:39:40.161 01:17:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:40.161 01:17:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:40.161 01:17:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:42.694 01:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:39:42.694 00:39:42.694 real 0m21.287s 00:39:42.694 user 0m59.201s 00:39:42.694 sys 0m7.357s 00:39:42.694 01:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:42.694 01:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:39:42.694 ************************************ 00:39:42.694 END TEST nvmf_lvol 00:39:42.694 ************************************ 00:39:42.694 01:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:39:42.694 01:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:39:42.694 01:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:42.694 01:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:39:42.694 ************************************ 00:39:42.694 START TEST nvmf_lvs_grow 00:39:42.694 
************************************ 00:39:42.694 01:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:39:42.694 * Looking for test storage... 00:39:42.694 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:39:42.694 01:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:39:42.694 01:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lcov --version 00:39:42.694 01:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:39:42.694 01:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:39:42.694 01:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:42.694 01:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:42.694 01:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:42.694 01:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:39:42.694 01:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:39:42.694 01:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:39:42.694 01:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:39:42.694 01:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:39:42.694 01:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:39:42.694 01:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:39:42.694 01:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:39:42.694 01:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:39:42.694 01:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:39:42.694 01:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:42.694 01:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:39:42.694 01:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:39:42.694 01:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:39:42.694 01:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:42.694 01:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:39:42.694 01:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:39:42.694 01:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:39:42.694 01:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:39:42.694 01:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:42.694 01:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:39:42.694 01:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:39:42.694 01:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:42.694 01:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:39:42.694 01:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:39:42.694 01:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:42.694 01:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:39:42.694 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:42.694 --rc genhtml_branch_coverage=1 00:39:42.695 --rc genhtml_function_coverage=1 00:39:42.695 --rc genhtml_legend=1 00:39:42.695 --rc geninfo_all_blocks=1 00:39:42.695 --rc geninfo_unexecuted_blocks=1 00:39:42.695 00:39:42.695 ' 00:39:42.695 01:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:39:42.695 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:42.695 --rc genhtml_branch_coverage=1 00:39:42.695 --rc genhtml_function_coverage=1 00:39:42.695 --rc genhtml_legend=1 00:39:42.695 --rc geninfo_all_blocks=1 00:39:42.695 --rc geninfo_unexecuted_blocks=1 00:39:42.695 00:39:42.695 ' 00:39:42.695 01:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:39:42.695 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:42.695 --rc genhtml_branch_coverage=1 00:39:42.695 --rc genhtml_function_coverage=1 00:39:42.695 --rc genhtml_legend=1 00:39:42.695 --rc geninfo_all_blocks=1 00:39:42.695 --rc geninfo_unexecuted_blocks=1 00:39:42.695 00:39:42.695 ' 00:39:42.695 01:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:39:42.695 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:42.695 --rc genhtml_branch_coverage=1 00:39:42.695 --rc genhtml_function_coverage=1 00:39:42.695 --rc genhtml_legend=1 00:39:42.695 --rc geninfo_all_blocks=1 00:39:42.695 --rc geninfo_unexecuted_blocks=1 00:39:42.695 00:39:42.695 ' 00:39:42.695 01:17:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:42.695 01:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:39:42.695 01:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:42.695 01:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:42.695 01:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:42.695 01:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:42.695 01:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:42.695 01:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:42.695 01:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:42.695 01:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:42.695 01:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:42.695 01:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:42.695 01:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:39:42.695 01:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:39:42.695 01:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:42.695 01:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:42.695 01:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:39:42.695 01:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:42.695 01:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:42.695 01:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:39:42.695 01:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:42.695 01:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:42.695 01:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:42.695 01:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:42.695 01:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:42.695 01:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:42.695 01:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:39:42.695 01:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:42.695 01:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:39:42.695 01:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:39:42.695 01:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:39:42.695 01:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:42.695 01:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:42.695 01:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
00:39:42.695 01:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:39:42.695 01:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:39:42.695 01:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:39:42.695 01:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:39:42.695 01:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:39:42.695 01:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:39:42.695 01:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:39:42.695 01:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:39:42.695 01:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:39:42.695 01:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:39:42.695 01:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:39:42.695 01:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:39:42.695 01:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:39:42.695 01:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:42.695 01:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:42.695 01:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:42.695 01:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:39:42.695 01:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:39:42.695 01:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:39:42.695 01:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:39:44.592 01:17:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:39:44.592 01:17:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:39:44.592 01:17:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:39:44.592 01:17:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:39:44.592 01:17:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:39:44.592 01:17:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:39:44.592 01:17:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:39:44.592 01:17:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:39:44.592 01:17:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:39:44.592 01:17:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:39:44.592 01:17:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:39:44.592 01:17:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:39:44.592 01:17:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:39:44.592 01:17:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:39:44.593 01:17:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:39:44.593 01:17:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:39:44.593 01:17:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:39:44.593 01:17:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:39:44.593 01:17:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:39:44.593 01:17:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:39:44.593 01:17:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:39:44.593 01:17:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:39:44.593 01:17:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:39:44.593 01:17:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:39:44.593 01:17:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:39:44.593 01:17:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:39:44.593 01:17:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:39:44.593 01:17:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:39:44.593 01:17:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:39:44.593 01:17:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:39:44.593 01:17:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:39:44.593 01:17:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:39:44.593 01:17:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:39:44.593 01:17:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 
00:39:44.593 01:17:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:39:44.593 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:39:44.593 01:17:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:44.593 01:17:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:44.593 01:17:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:44.593 01:17:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:44.593 01:17:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:44.593 01:17:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:44.593 01:17:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:39:44.593 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:39:44.593 01:17:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:44.593 01:17:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:44.593 01:17:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:44.593 01:17:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:44.593 01:17:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:44.593 01:17:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:39:44.593 01:17:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:39:44.593 01:17:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:39:44.593 01:17:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:44.593 01:17:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:44.593 01:17:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:39:44.593 01:17:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:44.593 01:17:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:39:44.593 01:17:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:39:44.593 01:17:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:44.593 01:17:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:39:44.593 Found net devices under 0000:0a:00.0: cvl_0_0 00:39:44.593 01:17:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:44.593 01:17:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for 
pci in "${pci_devs[@]}" 00:39:44.593 01:17:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:44.593 01:17:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:39:44.593 01:17:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:44.593 01:17:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:39:44.593 01:17:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:39:44.593 01:17:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:44.593 01:17:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:39:44.593 Found net devices under 0000:0a:00.1: cvl_0_1 00:39:44.593 01:17:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:44.593 01:17:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:39:44.593 01:17:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:39:44.593 01:17:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:39:44.593 01:17:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:39:44.593 01:17:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:39:44.593 01:17:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:39:44.593 01:17:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:39:44.593 01:17:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:39:44.593 01:17:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:39:44.593 01:17:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:39:44.593 01:17:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:39:44.593 01:17:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:39:44.593 01:17:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:39:44.593 01:17:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:39:44.593 01:17:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:39:44.593 01:17:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:39:44.593 01:17:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:39:44.593 01:17:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:39:44.593 01:17:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:39:44.593 01:17:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:39:44.593 01:17:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:39:44.593 01:17:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:39:44.593 01:17:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:39:44.593 01:17:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:39:44.593 01:17:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:39:44.593 01:17:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:39:44.593 01:17:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:39:44.593 01:17:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:39:44.593 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:39:44.593 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.303 ms 00:39:44.593 00:39:44.593 --- 10.0.0.2 ping statistics --- 00:39:44.593 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:44.593 rtt min/avg/max/mdev = 0.303/0.303/0.303/0.000 ms 00:39:44.593 01:17:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:39:44.593 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:39:44.593 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.074 ms 00:39:44.593 00:39:44.593 --- 10.0.0.1 ping statistics --- 00:39:44.593 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:44.593 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:39:44.593 01:17:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:39:44.593 01:17:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:39:44.593 01:17:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:39:44.593 01:17:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:39:44.594 01:17:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:39:44.594 01:17:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:39:44.594 01:17:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:39:44.594 01:17:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:39:44.594 01:17:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:39:44.594 01:17:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:39:44.594 01:17:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:39:44.594 01:17:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:39:44.594 01:17:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:39:44.594 01:17:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=3165866 00:39:44.594 01:17:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:39:44.594 01:17:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 3165866 00:39:44.594 01:17:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 3165866 ']' 00:39:44.594 01:17:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:44.594 01:17:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:44.594 01:17:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:44.594 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:44.594 01:17:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:44.594 01:17:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:39:44.594 [2024-12-06 01:17:17.268747] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
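Condensing the nvmf_tcp_init trace above: one port of the NIC pair discovered earlier (cvl_0_0) is moved into a private network namespace and becomes the target side, while its peer port (cvl_0_1) stays in the host namespace as the initiator side, so the NVMe/TCP traffic in this suite crosses a real link rather than loopback. A rough shell sketch of the traced steps follows; the interface names, addresses and namespace name are the ones printed above, and this is a summary of the trace, not the full nvmf/common.sh logic.

  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk                        # target-side namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # move the target NIC into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator IP stays on the host
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target IP inside the namespace
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
  ping -c 1 10.0.0.2                                  # host -> namespaced target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # namespaced target -> host

The two pings are effectively the gate for the rest of the file: only after both succeed does nvmf_tcp_init return 0, and the target (nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1, nvmfpid 3165866) is then started inside the same namespace.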
00:39:44.594 [2024-12-06 01:17:17.271389] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 24.03.0 initialization... 00:39:44.594 [2024-12-06 01:17:17.271499] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:44.851 [2024-12-06 01:17:17.412181] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:44.851 [2024-12-06 01:17:17.540357] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:39:44.851 [2024-12-06 01:17:17.540442] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:39:44.851 [2024-12-06 01:17:17.540471] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:39:44.851 [2024-12-06 01:17:17.540492] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:39:44.851 [2024-12-06 01:17:17.540513] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:39:44.851 [2024-12-06 01:17:17.542143] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:45.417 [2024-12-06 01:17:17.913912] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:39:45.417 [2024-12-06 01:17:17.914304] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:39:45.675 01:17:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:45.675 01:17:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:39:45.675 01:17:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:39:45.675 01:17:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:39:45.675 01:17:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:39:45.675 01:17:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:39:45.675 01:17:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:39:45.934 [2024-12-06 01:17:18.539168] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:45.934 01:17:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:39:45.934 01:17:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:39:45.934 01:17:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:45.934 01:17:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:39:45.934 ************************************ 00:39:45.934 START TEST lvs_grow_clean 00:39:45.934 ************************************ 00:39:45.934 01:17:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # 
lvs_grow 00:39:45.934 01:17:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:39:45.934 01:17:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:39:45.934 01:17:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:39:45.934 01:17:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:39:45.934 01:17:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:39:45.934 01:17:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:39:45.934 01:17:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:39:45.934 01:17:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:39:45.934 01:17:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:39:46.194 01:17:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:39:46.194 01:17:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:39:46.452 01:17:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=f858f83e-1db7-4146-9454-83a3dd5dc7b9 00:39:46.453 01:17:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f858f83e-1db7-4146-9454-83a3dd5dc7b9 00:39:46.453 01:17:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:39:47.018 01:17:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:39:47.018 01:17:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:39:47.018 01:17:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u f858f83e-1db7-4146-9454-83a3dd5dc7b9 lvol 150 00:39:47.276 01:17:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=e992668f-c57f-4078-a906-44e8b3d691b9 00:39:47.276 01:17:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:39:47.276 01:17:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:39:47.536 [2024-12-06 01:17:20.003052] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:39:47.536 [2024-12-06 01:17:20.003198] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:39:47.536 true 00:39:47.536 01:17:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f858f83e-1db7-4146-9454-83a3dd5dc7b9 00:39:47.536 01:17:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:39:47.795 01:17:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:39:47.795 01:17:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:39:48.055 01:17:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 e992668f-c57f-4078-a906-44e8b3d691b9 00:39:48.313 01:17:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:39:48.572 [2024-12-06 01:17:21.151439] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:48.572 01:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:39:48.831 01:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3166365 00:39:48.831 01:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:39:48.831 01:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:39:48.831 01:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3166365 /var/tmp/bdevperf.sock 00:39:48.831 01:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 3166365 ']' 00:39:48.831 01:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/bdevperf.sock 00:39:48.831 01:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:48.831 01:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:39:48.831 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:39:48.831 01:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:48.831 01:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:39:48.831 [2024-12-06 01:17:21.522246] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 24.03.0 initialization... 00:39:48.831 [2024-12-06 01:17:21.522386] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3166365 ] 00:39:49.089 [2024-12-06 01:17:21.657992] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:49.089 [2024-12-06 01:17:21.779789] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:39:50.026 01:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:50.026 01:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:39:50.026 01:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:39:50.285 Nvme0n1 00:39:50.285 01:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:39:50.544 [ 00:39:50.544 { 00:39:50.545 "name": "Nvme0n1", 00:39:50.545 "aliases": [ 00:39:50.545 "e992668f-c57f-4078-a906-44e8b3d691b9" 00:39:50.545 ], 00:39:50.545 "product_name": "NVMe disk", 00:39:50.545 "block_size": 4096, 00:39:50.545 "num_blocks": 38912, 00:39:50.545 "uuid": "e992668f-c57f-4078-a906-44e8b3d691b9", 00:39:50.545 "numa_id": 0, 00:39:50.545 "assigned_rate_limits": { 00:39:50.545 "rw_ios_per_sec": 0, 00:39:50.545 "rw_mbytes_per_sec": 0, 00:39:50.545 "r_mbytes_per_sec": 0, 00:39:50.545 "w_mbytes_per_sec": 0 00:39:50.545 }, 00:39:50.545 "claimed": false, 00:39:50.545 "zoned": false, 00:39:50.545 "supported_io_types": { 00:39:50.545 "read": true, 00:39:50.545 "write": true, 00:39:50.545 "unmap": true, 00:39:50.545 "flush": true, 00:39:50.545 "reset": true, 00:39:50.545 "nvme_admin": true, 00:39:50.545 "nvme_io": true, 00:39:50.545 "nvme_io_md": false, 00:39:50.545 "write_zeroes": true, 00:39:50.545 "zcopy": false, 00:39:50.545 "get_zone_info": false, 00:39:50.545 "zone_management": false, 00:39:50.545 "zone_append": false, 00:39:50.545 "compare": true, 00:39:50.545 "compare_and_write": true, 00:39:50.545 "abort": true, 00:39:50.545 "seek_hole": false, 00:39:50.545 "seek_data": false, 00:39:50.545 "copy": true, 
00:39:50.545 "nvme_iov_md": false 00:39:50.545 }, 00:39:50.545 "memory_domains": [ 00:39:50.545 { 00:39:50.545 "dma_device_id": "system", 00:39:50.545 "dma_device_type": 1 00:39:50.545 } 00:39:50.545 ], 00:39:50.545 "driver_specific": { 00:39:50.545 "nvme": [ 00:39:50.545 { 00:39:50.545 "trid": { 00:39:50.545 "trtype": "TCP", 00:39:50.545 "adrfam": "IPv4", 00:39:50.545 "traddr": "10.0.0.2", 00:39:50.545 "trsvcid": "4420", 00:39:50.545 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:39:50.545 }, 00:39:50.545 "ctrlr_data": { 00:39:50.545 "cntlid": 1, 00:39:50.545 "vendor_id": "0x8086", 00:39:50.545 "model_number": "SPDK bdev Controller", 00:39:50.545 "serial_number": "SPDK0", 00:39:50.545 "firmware_revision": "25.01", 00:39:50.545 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:39:50.545 "oacs": { 00:39:50.545 "security": 0, 00:39:50.545 "format": 0, 00:39:50.545 "firmware": 0, 00:39:50.545 "ns_manage": 0 00:39:50.545 }, 00:39:50.545 "multi_ctrlr": true, 00:39:50.545 "ana_reporting": false 00:39:50.545 }, 00:39:50.545 "vs": { 00:39:50.545 "nvme_version": "1.3" 00:39:50.545 }, 00:39:50.545 "ns_data": { 00:39:50.545 "id": 1, 00:39:50.545 "can_share": true 00:39:50.545 } 00:39:50.545 } 00:39:50.545 ], 00:39:50.545 "mp_policy": "active_passive" 00:39:50.545 } 00:39:50.545 } 00:39:50.545 ] 00:39:50.545 01:17:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=3166620 00:39:50.545 01:17:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:39:50.545 01:17:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:39:50.545 Running I/O for 10 seconds... 
00:39:51.922 Latency(us) 00:39:51.922 [2024-12-06T00:17:24.631Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:51.922 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:39:51.922 Nvme0n1 : 1.00 10575.00 41.31 0.00 0.00 0.00 0.00 0.00 00:39:51.922 [2024-12-06T00:17:24.631Z] =================================================================================================================== 00:39:51.922 [2024-12-06T00:17:24.631Z] Total : 10575.00 41.31 0.00 0.00 0.00 0.00 0.00 00:39:51.922 00:39:52.491 01:17:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u f858f83e-1db7-4146-9454-83a3dd5dc7b9 00:39:52.749 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:39:52.749 Nvme0n1 : 2.00 10748.50 41.99 0.00 0.00 0.00 0.00 0.00 00:39:52.749 [2024-12-06T00:17:25.458Z] =================================================================================================================== 00:39:52.749 [2024-12-06T00:17:25.458Z] Total : 10748.50 41.99 0.00 0.00 0.00 0.00 0.00 00:39:52.749 00:39:52.749 true 00:39:52.749 01:17:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f858f83e-1db7-4146-9454-83a3dd5dc7b9 00:39:52.749 01:17:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:39:53.315 01:17:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:39:53.315 01:17:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:39:53.315 01:17:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 3166620 00:39:53.573 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:39:53.573 Nvme0n1 : 3.00 10785.33 42.13 0.00 0.00 0.00 0.00 0.00 00:39:53.573 [2024-12-06T00:17:26.282Z] =================================================================================================================== 00:39:53.573 [2024-12-06T00:17:26.282Z] Total : 10785.33 42.13 0.00 0.00 0.00 0.00 0.00 00:39:53.573 00:39:54.950 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:39:54.951 Nvme0n1 : 4.00 10835.25 42.33 0.00 0.00 0.00 0.00 0.00 00:39:54.951 [2024-12-06T00:17:27.660Z] =================================================================================================================== 00:39:54.951 [2024-12-06T00:17:27.660Z] Total : 10835.25 42.33 0.00 0.00 0.00 0.00 0.00 00:39:54.951 00:39:55.890 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:39:55.890 Nvme0n1 : 5.00 10852.60 42.39 0.00 0.00 0.00 0.00 0.00 00:39:55.890 [2024-12-06T00:17:28.599Z] =================================================================================================================== 00:39:55.890 [2024-12-06T00:17:28.599Z] Total : 10852.60 42.39 0.00 0.00 0.00 0.00 0.00 00:39:55.890 00:39:56.829 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:39:56.829 Nvme0n1 : 6.00 10853.67 42.40 0.00 0.00 0.00 0.00 0.00 00:39:56.829 [2024-12-06T00:17:29.538Z] 
=================================================================================================================== 00:39:56.829 [2024-12-06T00:17:29.538Z] Total : 10853.67 42.40 0.00 0.00 0.00 0.00 0.00 00:39:56.829 00:39:57.769 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:39:57.769 Nvme0n1 : 7.00 10859.14 42.42 0.00 0.00 0.00 0.00 0.00 00:39:57.769 [2024-12-06T00:17:30.478Z] =================================================================================================================== 00:39:57.769 [2024-12-06T00:17:30.478Z] Total : 10859.14 42.42 0.00 0.00 0.00 0.00 0.00 00:39:57.769 00:39:58.707 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:39:58.707 Nvme0n1 : 8.00 10882.88 42.51 0.00 0.00 0.00 0.00 0.00 00:39:58.707 [2024-12-06T00:17:31.416Z] =================================================================================================================== 00:39:58.707 [2024-12-06T00:17:31.416Z] Total : 10882.88 42.51 0.00 0.00 0.00 0.00 0.00 00:39:58.707 00:39:59.697 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:39:59.698 Nvme0n1 : 9.00 10901.33 42.58 0.00 0.00 0.00 0.00 0.00 00:39:59.698 [2024-12-06T00:17:32.407Z] =================================================================================================================== 00:39:59.698 [2024-12-06T00:17:32.407Z] Total : 10901.33 42.58 0.00 0.00 0.00 0.00 0.00 00:39:59.698 00:40:00.688 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:40:00.688 Nvme0n1 : 10.00 10928.80 42.69 0.00 0.00 0.00 0.00 0.00 00:40:00.688 [2024-12-06T00:17:33.397Z] =================================================================================================================== 00:40:00.688 [2024-12-06T00:17:33.397Z] Total : 10928.80 42.69 0.00 0.00 0.00 0.00 0.00 00:40:00.688 00:40:00.688 00:40:00.688 Latency(us) 00:40:00.688 [2024-12-06T00:17:33.397Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:00.688 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:40:00.688 Nvme0n1 : 10.01 10929.14 42.69 0.00 0.00 11704.62 5461.33 26408.58 00:40:00.688 [2024-12-06T00:17:33.397Z] =================================================================================================================== 00:40:00.688 [2024-12-06T00:17:33.397Z] Total : 10929.14 42.69 0.00 0.00 11704.62 5461.33 26408.58 00:40:00.688 { 00:40:00.688 "results": [ 00:40:00.688 { 00:40:00.688 "job": "Nvme0n1", 00:40:00.688 "core_mask": "0x2", 00:40:00.688 "workload": "randwrite", 00:40:00.688 "status": "finished", 00:40:00.688 "queue_depth": 128, 00:40:00.688 "io_size": 4096, 00:40:00.688 "runtime": 10.011399, 00:40:00.688 "iops": 10929.14187118104, 00:40:00.688 "mibps": 42.69196043430094, 00:40:00.688 "io_failed": 0, 00:40:00.689 "io_timeout": 0, 00:40:00.689 "avg_latency_us": 11704.621006014422, 00:40:00.689 "min_latency_us": 5461.333333333333, 00:40:00.689 "max_latency_us": 26408.58074074074 00:40:00.689 } 00:40:00.689 ], 00:40:00.689 "core_count": 1 00:40:00.689 } 00:40:00.689 01:17:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3166365 00:40:00.689 01:17:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 3166365 ']' 00:40:00.689 01:17:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 3166365 
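The point of the clean run above is that the grow happens while bdevperf keeps writing: the per-second IOPS rows carry straight through the expansion issued around the 2 second mark without a drop-out. Condensed from the traced commands (same rpc.py / $SPDK / $lvs shorthand as before), the online grow is:

  truncate -s 400M $SPDK/test/nvmf/target/aio_bdev    # backing file 200M -> 400M
  rpc.py bdev_aio_rescan aio_bdev                     # aio_bdev resized: 51200 -> 102400 4K blocks
  rpc.py bdev_lvol_grow_lvstore -u $lvs               # lvstore claims the new space online
  rpc.py bdev_lvol_get_lvstores -u $lvs | jq -r '.[0].total_data_clusters'   # 49 -> 99

total_data_clusters moving from 49 to 99 is the pass condition checked at nvmf_lvs_grow.sh line 62 in the trace; the truncate and rescan were already issued during setup, and only the grow_lvstore call lands inside the I/O window.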
00:40:00.689 01:17:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:40:00.689 01:17:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:40:00.689 01:17:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3166365 00:40:00.689 01:17:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:40:00.689 01:17:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:40:00.689 01:17:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3166365' 00:40:00.689 killing process with pid 3166365 00:40:00.689 01:17:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 3166365 00:40:00.689 Received shutdown signal, test time was about 10.000000 seconds 00:40:00.689 00:40:00.689 Latency(us) 00:40:00.689 [2024-12-06T00:17:33.398Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:00.689 [2024-12-06T00:17:33.398Z] =================================================================================================================== 00:40:00.689 [2024-12-06T00:17:33.398Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:40:00.689 01:17:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 3166365 00:40:01.621 01:17:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:40:01.880 01:17:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:40:02.139 01:17:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f858f83e-1db7-4146-9454-83a3dd5dc7b9 00:40:02.139 01:17:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:40:02.706 01:17:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:40:02.706 01:17:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:40:02.706 01:17:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:40:02.964 [2024-12-06 01:17:35.459144] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:40:02.964 01:17:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f858f83e-1db7-4146-9454-83a3dd5dc7b9 
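The sequence just traced, together with the reload that follows, is the persistence half of the clean case: the listener and subsystem are removed, free_clusters is read back (61, consistent with the 150M lvol pinning 38 of the 99 clusters), the base AIO bdev is hot-removed so the lvstore closes with it, a further bdev_lvol_get_lvstores is expected to fail, and the same file is then re-attached so the grown geometry has to reappear intact. A condensed sketch with the same shorthand as before (the NOT wrapper in the log is a helper asserting that the wrapped command fails):

  rpc.py bdev_lvol_get_lvstores -u $lvs | jq -r '.[0].free_clusters'   # 61: the lvol pins 38 of the 99 clusters
  rpc.py bdev_aio_delete aio_bdev               # hot-remove the base bdev; the lvstore closes with it
  rpc.py bdev_lvol_get_lvstores -u $lvs         # must now fail with -19 "No such device"
  rpc.py bdev_aio_create $SPDK/test/nvmf/target/aio_bdev aio_bdev 4096   # re-attach the same 400M file
  rpc.py bdev_get_bdevs -b $lvol -t 2000        # the lvol reappears from on-disk lvstore metadata
  rpc.py bdev_lvol_get_lvstores -u $lvs | jq -r '.[0].free_clusters'       # still 61
  rpc.py bdev_lvol_get_lvstores -u $lvs | jq -r '.[0].total_data_clusters' # still 99

Both post-reload checks (free_clusters == 61, total_data_clusters == 99) appear further down in the trace before the lvol, lvstore and AIO bdev are finally deleted and lvs_grow_clean is marked passed.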
00:40:02.964 01:17:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:40:02.964 01:17:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f858f83e-1db7-4146-9454-83a3dd5dc7b9 00:40:02.964 01:17:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:40:02.964 01:17:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:40:02.964 01:17:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:40:02.964 01:17:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:40:02.964 01:17:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:40:02.964 01:17:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:40:02.964 01:17:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:40:02.964 01:17:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:40:02.964 01:17:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f858f83e-1db7-4146-9454-83a3dd5dc7b9 00:40:03.221 request: 00:40:03.221 { 00:40:03.221 "uuid": "f858f83e-1db7-4146-9454-83a3dd5dc7b9", 00:40:03.221 "method": "bdev_lvol_get_lvstores", 00:40:03.221 "req_id": 1 00:40:03.221 } 00:40:03.221 Got JSON-RPC error response 00:40:03.221 response: 00:40:03.221 { 00:40:03.221 "code": -19, 00:40:03.221 "message": "No such device" 00:40:03.221 } 00:40:03.221 01:17:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:40:03.221 01:17:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:40:03.221 01:17:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:40:03.221 01:17:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:40:03.221 01:17:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:40:03.478 aio_bdev 00:40:03.478 01:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 
e992668f-c57f-4078-a906-44e8b3d691b9 00:40:03.478 01:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=e992668f-c57f-4078-a906-44e8b3d691b9 00:40:03.478 01:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:40:03.478 01:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:40:03.478 01:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:40:03.478 01:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:40:03.478 01:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:40:03.736 01:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b e992668f-c57f-4078-a906-44e8b3d691b9 -t 2000 00:40:03.993 [ 00:40:03.993 { 00:40:03.993 "name": "e992668f-c57f-4078-a906-44e8b3d691b9", 00:40:03.993 "aliases": [ 00:40:03.993 "lvs/lvol" 00:40:03.993 ], 00:40:03.993 "product_name": "Logical Volume", 00:40:03.993 "block_size": 4096, 00:40:03.993 "num_blocks": 38912, 00:40:03.993 "uuid": "e992668f-c57f-4078-a906-44e8b3d691b9", 00:40:03.993 "assigned_rate_limits": { 00:40:03.993 "rw_ios_per_sec": 0, 00:40:03.993 "rw_mbytes_per_sec": 0, 00:40:03.993 "r_mbytes_per_sec": 0, 00:40:03.993 "w_mbytes_per_sec": 0 00:40:03.993 }, 00:40:03.993 "claimed": false, 00:40:03.993 "zoned": false, 00:40:03.993 "supported_io_types": { 00:40:03.993 "read": true, 00:40:03.993 "write": true, 00:40:03.993 "unmap": true, 00:40:03.993 "flush": false, 00:40:03.993 "reset": true, 00:40:03.993 "nvme_admin": false, 00:40:03.993 "nvme_io": false, 00:40:03.993 "nvme_io_md": false, 00:40:03.993 "write_zeroes": true, 00:40:03.993 "zcopy": false, 00:40:03.993 "get_zone_info": false, 00:40:03.993 "zone_management": false, 00:40:03.993 "zone_append": false, 00:40:03.993 "compare": false, 00:40:03.993 "compare_and_write": false, 00:40:03.993 "abort": false, 00:40:03.993 "seek_hole": true, 00:40:03.993 "seek_data": true, 00:40:03.993 "copy": false, 00:40:03.993 "nvme_iov_md": false 00:40:03.993 }, 00:40:03.993 "driver_specific": { 00:40:03.993 "lvol": { 00:40:03.993 "lvol_store_uuid": "f858f83e-1db7-4146-9454-83a3dd5dc7b9", 00:40:03.993 "base_bdev": "aio_bdev", 00:40:03.993 "thin_provision": false, 00:40:03.993 "num_allocated_clusters": 38, 00:40:03.993 "snapshot": false, 00:40:03.993 "clone": false, 00:40:03.993 "esnap_clone": false 00:40:03.993 } 00:40:03.993 } 00:40:03.993 } 00:40:03.993 ] 00:40:03.993 01:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:40:03.993 01:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f858f83e-1db7-4146-9454-83a3dd5dc7b9 00:40:03.993 01:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:40:04.251 01:17:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:40:04.511 01:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f858f83e-1db7-4146-9454-83a3dd5dc7b9 00:40:04.511 01:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:40:04.770 01:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:40:04.770 01:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete e992668f-c57f-4078-a906-44e8b3d691b9 00:40:05.027 01:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u f858f83e-1db7-4146-9454-83a3dd5dc7b9 00:40:05.284 01:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:40:05.540 01:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:40:05.540 00:40:05.540 real 0m19.643s 00:40:05.540 user 0m19.372s 00:40:05.540 sys 0m1.957s 00:40:05.540 01:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:40:05.540 01:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:40:05.540 ************************************ 00:40:05.540 END TEST lvs_grow_clean 00:40:05.540 ************************************ 00:40:05.540 01:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:40:05.540 01:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:40:05.540 01:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:40:05.540 01:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:40:05.797 ************************************ 00:40:05.797 START TEST lvs_grow_dirty 00:40:05.797 ************************************ 00:40:05.797 01:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:40:05.797 01:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:40:05.797 01:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:40:05.797 01:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:40:05.797 01:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:40:05.797 01:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:40:05.797 01:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:40:05.797 01:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:40:05.797 01:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:40:05.797 01:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:40:06.054 01:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:40:06.054 01:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:40:06.312 01:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=fd81bded-2817-468d-8879-9b6223ce1734 00:40:06.312 01:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fd81bded-2817-468d-8879-9b6223ce1734 00:40:06.312 01:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:40:06.570 01:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:40:06.570 01:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:40:06.570 01:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u fd81bded-2817-468d-8879-9b6223ce1734 lvol 150 00:40:06.829 01:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=1cc38398-2077-44e0-ae2a-886bd3ae9819 00:40:06.829 01:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:40:06.829 01:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:40:07.090 [2024-12-06 01:17:39.743023] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:40:07.090 [2024-12-06 01:17:39.743162] 
vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:40:07.090 true 00:40:07.090 01:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fd81bded-2817-468d-8879-9b6223ce1734 00:40:07.090 01:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:40:07.348 01:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:40:07.348 01:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:40:07.607 01:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 1cc38398-2077-44e0-ae2a-886bd3ae9819 00:40:08.173 01:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:40:08.173 [2024-12-06 01:17:40.847407] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:40:08.173 01:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:40:08.741 01:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3168719 00:40:08.741 01:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:40:08.741 01:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:40:08.741 01:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3168719 /var/tmp/bdevperf.sock 00:40:08.741 01:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 3168719 ']' 00:40:08.741 01:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:40:08.741 01:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:40:08.741 01:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:40:08.741 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
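From here the dirty variant repeats the same pipeline against a fresh backing file: a new lvstore (fd81bded-...), a new 150M lvol (1cc38398-...), the same nqn.2016-06.io.spdk:cnode0 subsystem and 10.0.0.2:4420 listener, and a second bdevperf instance (pid 3168719). Judging from the teardown traced further below, the difference is what happens after the grow: instead of deleting the lvol and lvstore, the target that owns them is killed outright and restarted, so the grown lvstore has to be recovered after a dirty shutdown. A sketch of that contrast, with $nvmfpid, $lvs, $lvol and $SPDK as shorthand:

  # clean variant (earlier in this log): explicit teardown
  #   rpc.py bdev_lvol_delete $lvol
  #   rpc.py bdev_lvol_delete_lvstore -u $lvs
  #   rpc.py bdev_aio_delete aio_bdev
  # dirty variant: no teardown of the lvstore at all
  kill -9 $nvmfpid            # 3165866 in this run; bypasses any orderly shutdown
  wait $nvmfpid || true       # the shell reports "line 75: 3165866 Killed", as in the trace
  ip netns exec cvl_0_0_ns_spdk $SPDK/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 &
                              # restarted target (nvmfpid 3170331 below)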
00:40:08.741 01:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:40:08.741 01:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:40:08.741 [2024-12-06 01:17:41.222540] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 24.03.0 initialization... 00:40:08.741 [2024-12-06 01:17:41.222678] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3168719 ] 00:40:08.741 [2024-12-06 01:17:41.362940] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:09.001 [2024-12-06 01:17:41.500671] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:40:09.568 01:17:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:40:09.568 01:17:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:40:09.568 01:17:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:40:10.136 Nvme0n1 00:40:10.136 01:17:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:40:10.396 [ 00:40:10.396 { 00:40:10.396 "name": "Nvme0n1", 00:40:10.396 "aliases": [ 00:40:10.396 "1cc38398-2077-44e0-ae2a-886bd3ae9819" 00:40:10.396 ], 00:40:10.396 "product_name": "NVMe disk", 00:40:10.396 "block_size": 4096, 00:40:10.396 "num_blocks": 38912, 00:40:10.396 "uuid": "1cc38398-2077-44e0-ae2a-886bd3ae9819", 00:40:10.396 "numa_id": 0, 00:40:10.396 "assigned_rate_limits": { 00:40:10.396 "rw_ios_per_sec": 0, 00:40:10.396 "rw_mbytes_per_sec": 0, 00:40:10.396 "r_mbytes_per_sec": 0, 00:40:10.396 "w_mbytes_per_sec": 0 00:40:10.396 }, 00:40:10.396 "claimed": false, 00:40:10.396 "zoned": false, 00:40:10.396 "supported_io_types": { 00:40:10.396 "read": true, 00:40:10.396 "write": true, 00:40:10.396 "unmap": true, 00:40:10.396 "flush": true, 00:40:10.396 "reset": true, 00:40:10.396 "nvme_admin": true, 00:40:10.396 "nvme_io": true, 00:40:10.396 "nvme_io_md": false, 00:40:10.396 "write_zeroes": true, 00:40:10.396 "zcopy": false, 00:40:10.396 "get_zone_info": false, 00:40:10.396 "zone_management": false, 00:40:10.396 "zone_append": false, 00:40:10.396 "compare": true, 00:40:10.396 "compare_and_write": true, 00:40:10.396 "abort": true, 00:40:10.396 "seek_hole": false, 00:40:10.396 "seek_data": false, 00:40:10.396 "copy": true, 00:40:10.396 "nvme_iov_md": false 00:40:10.396 }, 00:40:10.396 "memory_domains": [ 00:40:10.396 { 00:40:10.396 "dma_device_id": "system", 00:40:10.396 "dma_device_type": 1 00:40:10.396 } 00:40:10.396 ], 00:40:10.396 "driver_specific": { 00:40:10.396 "nvme": [ 00:40:10.396 { 00:40:10.396 "trid": { 00:40:10.396 "trtype": "TCP", 00:40:10.396 "adrfam": "IPv4", 00:40:10.396 "traddr": "10.0.0.2", 00:40:10.396 "trsvcid": "4420", 00:40:10.396 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:40:10.396 }, 00:40:10.396 "ctrlr_data": 
{ 00:40:10.396 "cntlid": 1, 00:40:10.396 "vendor_id": "0x8086", 00:40:10.396 "model_number": "SPDK bdev Controller", 00:40:10.396 "serial_number": "SPDK0", 00:40:10.396 "firmware_revision": "25.01", 00:40:10.396 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:40:10.396 "oacs": { 00:40:10.396 "security": 0, 00:40:10.396 "format": 0, 00:40:10.396 "firmware": 0, 00:40:10.396 "ns_manage": 0 00:40:10.396 }, 00:40:10.396 "multi_ctrlr": true, 00:40:10.396 "ana_reporting": false 00:40:10.396 }, 00:40:10.396 "vs": { 00:40:10.396 "nvme_version": "1.3" 00:40:10.396 }, 00:40:10.396 "ns_data": { 00:40:10.397 "id": 1, 00:40:10.397 "can_share": true 00:40:10.397 } 00:40:10.397 } 00:40:10.397 ], 00:40:10.397 "mp_policy": "active_passive" 00:40:10.397 } 00:40:10.397 } 00:40:10.397 ] 00:40:10.397 01:17:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=3168920 00:40:10.397 01:17:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:40:10.397 01:17:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:40:10.397 Running I/O for 10 seconds... 00:40:11.336 Latency(us) 00:40:11.336 [2024-12-06T00:17:44.045Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:11.336 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:40:11.336 Nvme0n1 : 1.00 9886.00 38.62 0.00 0.00 0.00 0.00 0.00 00:40:11.336 [2024-12-06T00:17:44.045Z] =================================================================================================================== 00:40:11.336 [2024-12-06T00:17:44.045Z] Total : 9886.00 38.62 0.00 0.00 0.00 0.00 0.00 00:40:11.336 00:40:12.273 01:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u fd81bded-2817-468d-8879-9b6223ce1734 00:40:12.531 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:40:12.531 Nvme0n1 : 2.00 10047.00 39.25 0.00 0.00 0.00 0.00 0.00 00:40:12.531 [2024-12-06T00:17:45.240Z] =================================================================================================================== 00:40:12.531 [2024-12-06T00:17:45.241Z] Total : 10047.00 39.25 0.00 0.00 0.00 0.00 0.00 00:40:12.532 00:40:12.532 true 00:40:12.532 01:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fd81bded-2817-468d-8879-9b6223ce1734 00:40:12.532 01:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:40:12.790 01:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:40:12.790 01:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:40:12.790 01:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 3168920 00:40:13.354 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:40:13.354 Nvme0n1 : 
3.00 10095.33 39.43 0.00 0.00 0.00 0.00 0.00 00:40:13.354 [2024-12-06T00:17:46.063Z] =================================================================================================================== 00:40:13.354 [2024-12-06T00:17:46.063Z] Total : 10095.33 39.43 0.00 0.00 0.00 0.00 0.00 00:40:13.354 00:40:14.289 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:40:14.289 Nvme0n1 : 4.00 10155.50 39.67 0.00 0.00 0.00 0.00 0.00 00:40:14.289 [2024-12-06T00:17:46.998Z] =================================================================================================================== 00:40:14.289 [2024-12-06T00:17:46.998Z] Total : 10155.50 39.67 0.00 0.00 0.00 0.00 0.00 00:40:14.289 00:40:15.666 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:40:15.666 Nvme0n1 : 5.00 10198.00 39.84 0.00 0.00 0.00 0.00 0.00 00:40:15.666 [2024-12-06T00:17:48.375Z] =================================================================================================================== 00:40:15.666 [2024-12-06T00:17:48.375Z] Total : 10198.00 39.84 0.00 0.00 0.00 0.00 0.00 00:40:15.666 00:40:16.606 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:40:16.606 Nvme0n1 : 6.00 10234.33 39.98 0.00 0.00 0.00 0.00 0.00 00:40:16.606 [2024-12-06T00:17:49.315Z] =================================================================================================================== 00:40:16.606 [2024-12-06T00:17:49.315Z] Total : 10234.33 39.98 0.00 0.00 0.00 0.00 0.00 00:40:16.606 00:40:17.543 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:40:17.543 Nvme0n1 : 7.00 10290.00 40.20 0.00 0.00 0.00 0.00 0.00 00:40:17.543 [2024-12-06T00:17:50.252Z] =================================================================================================================== 00:40:17.543 [2024-12-06T00:17:50.252Z] Total : 10290.00 40.20 0.00 0.00 0.00 0.00 0.00 00:40:17.543 00:40:18.479 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:40:18.479 Nvme0n1 : 8.00 10349.75 40.43 0.00 0.00 0.00 0.00 0.00 00:40:18.479 [2024-12-06T00:17:51.188Z] =================================================================================================================== 00:40:18.479 [2024-12-06T00:17:51.188Z] Total : 10349.75 40.43 0.00 0.00 0.00 0.00 0.00 00:40:18.479 00:40:19.418 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:40:19.418 Nvme0n1 : 9.00 10366.00 40.49 0.00 0.00 0.00 0.00 0.00 00:40:19.418 [2024-12-06T00:17:52.127Z] =================================================================================================================== 00:40:19.418 [2024-12-06T00:17:52.127Z] Total : 10366.00 40.49 0.00 0.00 0.00 0.00 0.00 00:40:19.418 00:40:20.352 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:40:20.352 Nvme0n1 : 10.00 10390.20 40.59 0.00 0.00 0.00 0.00 0.00 00:40:20.352 [2024-12-06T00:17:53.061Z] =================================================================================================================== 00:40:20.352 [2024-12-06T00:17:53.061Z] Total : 10390.20 40.59 0.00 0.00 0.00 0.00 0.00 00:40:20.352 00:40:20.352 00:40:20.353 Latency(us) 00:40:20.353 [2024-12-06T00:17:53.062Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:20.353 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:40:20.353 Nvme0n1 : 10.01 10390.83 40.59 0.00 0.00 12306.78 3446.71 17185.00 00:40:20.353 
[2024-12-06T00:17:53.062Z] =================================================================================================================== 00:40:20.353 [2024-12-06T00:17:53.062Z] Total : 10390.83 40.59 0.00 0.00 12306.78 3446.71 17185.00 00:40:20.353 { 00:40:20.353 "results": [ 00:40:20.353 { 00:40:20.353 "job": "Nvme0n1", 00:40:20.353 "core_mask": "0x2", 00:40:20.353 "workload": "randwrite", 00:40:20.353 "status": "finished", 00:40:20.353 "queue_depth": 128, 00:40:20.353 "io_size": 4096, 00:40:20.353 "runtime": 10.011713, 00:40:20.353 "iops": 10390.829221732585, 00:40:20.353 "mibps": 40.58917664739291, 00:40:20.353 "io_failed": 0, 00:40:20.353 "io_timeout": 0, 00:40:20.353 "avg_latency_us": 12306.782385536935, 00:40:20.353 "min_latency_us": 3446.708148148148, 00:40:20.353 "max_latency_us": 17184.995555555557 00:40:20.353 } 00:40:20.353 ], 00:40:20.353 "core_count": 1 00:40:20.353 } 00:40:20.353 01:17:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3168719 00:40:20.353 01:17:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 3168719 ']' 00:40:20.353 01:17:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 3168719 00:40:20.353 01:17:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:40:20.353 01:17:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:40:20.353 01:17:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3168719 00:40:20.612 01:17:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:40:20.612 01:17:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:40:20.612 01:17:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3168719' 00:40:20.612 killing process with pid 3168719 00:40:20.612 01:17:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 3168719 00:40:20.612 Received shutdown signal, test time was about 10.000000 seconds 00:40:20.612 00:40:20.612 Latency(us) 00:40:20.612 [2024-12-06T00:17:53.321Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:20.612 [2024-12-06T00:17:53.321Z] =================================================================================================================== 00:40:20.612 [2024-12-06T00:17:53.321Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:40:20.612 01:17:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 3168719 00:40:21.546 01:17:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:40:21.547 01:17:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem 
nqn.2016-06.io.spdk:cnode0 00:40:22.111 01:17:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fd81bded-2817-468d-8879-9b6223ce1734 00:40:22.111 01:17:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:40:22.369 01:17:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:40:22.369 01:17:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:40:22.369 01:17:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 3165866 00:40:22.369 01:17:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 3165866 00:40:22.369 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 3165866 Killed "${NVMF_APP[@]}" "$@" 00:40:22.369 01:17:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:40:22.369 01:17:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:40:22.369 01:17:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:40:22.369 01:17:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:40:22.369 01:17:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:40:22.369 01:17:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=3170331 00:40:22.369 01:17:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:40:22.369 01:17:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 3170331 00:40:22.369 01:17:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 3170331 ']' 00:40:22.369 01:17:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:22.369 01:17:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:40:22.369 01:17:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:22.369 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:40:22.369 01:17:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:40:22.369 01:17:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:40:22.369 [2024-12-06 01:17:55.042350] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:40:22.369 [2024-12-06 01:17:55.044783] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 24.03.0 initialization... 00:40:22.369 [2024-12-06 01:17:55.044921] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:40:22.626 [2024-12-06 01:17:55.200793] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:22.883 [2024-12-06 01:17:55.336179] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:40:22.883 [2024-12-06 01:17:55.336251] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:40:22.883 [2024-12-06 01:17:55.336280] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:40:22.883 [2024-12-06 01:17:55.336301] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:40:22.883 [2024-12-06 01:17:55.336323] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:40:22.883 [2024-12-06 01:17:55.337930] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:40:23.139 [2024-12-06 01:17:55.710925] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:40:23.139 [2024-12-06 01:17:55.711312] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:40:23.396 01:17:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:40:23.396 01:17:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:40:23.396 01:17:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:40:23.396 01:17:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:40:23.396 01:17:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:40:23.396 01:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:40:23.396 01:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:40:23.654 [2024-12-06 01:17:56.281941] blobstore.c:4899:bs_recover: *NOTICE*: Performing recovery on blobstore 00:40:23.654 [2024-12-06 01:17:56.282194] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:40:23.654 [2024-12-06 01:17:56.282268] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:40:23.654 01:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:40:23.654 01:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 1cc38398-2077-44e0-ae2a-886bd3ae9819 00:40:23.654 01:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=1cc38398-2077-44e0-ae2a-886bd3ae9819 00:40:23.654 01:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:40:23.654 01:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:40:23.654 01:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:40:23.654 01:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:40:23.654 01:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:40:24.220 01:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 1cc38398-2077-44e0-ae2a-886bd3ae9819 -t 2000 00:40:24.220 [ 00:40:24.220 { 00:40:24.220 "name": "1cc38398-2077-44e0-ae2a-886bd3ae9819", 00:40:24.220 "aliases": [ 00:40:24.220 "lvs/lvol" 00:40:24.220 ], 00:40:24.220 "product_name": "Logical Volume", 00:40:24.220 "block_size": 4096, 00:40:24.220 "num_blocks": 38912, 00:40:24.220 "uuid": "1cc38398-2077-44e0-ae2a-886bd3ae9819", 00:40:24.220 "assigned_rate_limits": { 00:40:24.220 "rw_ios_per_sec": 0, 00:40:24.220 "rw_mbytes_per_sec": 0, 00:40:24.220 
"r_mbytes_per_sec": 0, 00:40:24.220 "w_mbytes_per_sec": 0 00:40:24.220 }, 00:40:24.220 "claimed": false, 00:40:24.220 "zoned": false, 00:40:24.220 "supported_io_types": { 00:40:24.220 "read": true, 00:40:24.220 "write": true, 00:40:24.220 "unmap": true, 00:40:24.220 "flush": false, 00:40:24.220 "reset": true, 00:40:24.220 "nvme_admin": false, 00:40:24.220 "nvme_io": false, 00:40:24.220 "nvme_io_md": false, 00:40:24.220 "write_zeroes": true, 00:40:24.220 "zcopy": false, 00:40:24.220 "get_zone_info": false, 00:40:24.220 "zone_management": false, 00:40:24.220 "zone_append": false, 00:40:24.220 "compare": false, 00:40:24.220 "compare_and_write": false, 00:40:24.220 "abort": false, 00:40:24.220 "seek_hole": true, 00:40:24.220 "seek_data": true, 00:40:24.220 "copy": false, 00:40:24.220 "nvme_iov_md": false 00:40:24.220 }, 00:40:24.220 "driver_specific": { 00:40:24.220 "lvol": { 00:40:24.220 "lvol_store_uuid": "fd81bded-2817-468d-8879-9b6223ce1734", 00:40:24.220 "base_bdev": "aio_bdev", 00:40:24.220 "thin_provision": false, 00:40:24.220 "num_allocated_clusters": 38, 00:40:24.220 "snapshot": false, 00:40:24.220 "clone": false, 00:40:24.220 "esnap_clone": false 00:40:24.220 } 00:40:24.220 } 00:40:24.220 } 00:40:24.220 ] 00:40:24.220 01:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:40:24.221 01:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fd81bded-2817-468d-8879-9b6223ce1734 00:40:24.221 01:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:40:24.480 01:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:40:24.480 01:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fd81bded-2817-468d-8879-9b6223ce1734 00:40:24.480 01:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:40:24.768 01:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:40:24.768 01:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:40:25.364 [2024-12-06 01:17:57.762995] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:40:25.364 01:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fd81bded-2817-468d-8879-9b6223ce1734 00:40:25.364 01:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:40:25.364 01:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fd81bded-2817-468d-8879-9b6223ce1734 00:40:25.364 01:17:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:40:25.364 01:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:40:25.364 01:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:40:25.364 01:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:40:25.364 01:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:40:25.364 01:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:40:25.364 01:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:40:25.364 01:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:40:25.364 01:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fd81bded-2817-468d-8879-9b6223ce1734 00:40:25.623 request: 00:40:25.623 { 00:40:25.623 "uuid": "fd81bded-2817-468d-8879-9b6223ce1734", 00:40:25.623 "method": "bdev_lvol_get_lvstores", 00:40:25.623 "req_id": 1 00:40:25.623 } 00:40:25.623 Got JSON-RPC error response 00:40:25.623 response: 00:40:25.623 { 00:40:25.623 "code": -19, 00:40:25.623 "message": "No such device" 00:40:25.623 } 00:40:25.623 01:17:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:40:25.623 01:17:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:40:25.623 01:17:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:40:25.623 01:17:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:40:25.623 01:17:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:40:25.882 aio_bdev 00:40:25.882 01:17:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 1cc38398-2077-44e0-ae2a-886bd3ae9819 00:40:25.882 01:17:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=1cc38398-2077-44e0-ae2a-886bd3ae9819 00:40:25.882 01:17:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:40:25.882 01:17:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:40:25.882 01:17:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:40:25.882 01:17:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:40:25.882 01:17:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:40:26.141 01:17:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 1cc38398-2077-44e0-ae2a-886bd3ae9819 -t 2000 00:40:26.399 [ 00:40:26.399 { 00:40:26.400 "name": "1cc38398-2077-44e0-ae2a-886bd3ae9819", 00:40:26.400 "aliases": [ 00:40:26.400 "lvs/lvol" 00:40:26.400 ], 00:40:26.400 "product_name": "Logical Volume", 00:40:26.400 "block_size": 4096, 00:40:26.400 "num_blocks": 38912, 00:40:26.400 "uuid": "1cc38398-2077-44e0-ae2a-886bd3ae9819", 00:40:26.400 "assigned_rate_limits": { 00:40:26.400 "rw_ios_per_sec": 0, 00:40:26.400 "rw_mbytes_per_sec": 0, 00:40:26.400 "r_mbytes_per_sec": 0, 00:40:26.400 "w_mbytes_per_sec": 0 00:40:26.400 }, 00:40:26.400 "claimed": false, 00:40:26.400 "zoned": false, 00:40:26.400 "supported_io_types": { 00:40:26.400 "read": true, 00:40:26.400 "write": true, 00:40:26.400 "unmap": true, 00:40:26.400 "flush": false, 00:40:26.400 "reset": true, 00:40:26.400 "nvme_admin": false, 00:40:26.400 "nvme_io": false, 00:40:26.400 "nvme_io_md": false, 00:40:26.400 "write_zeroes": true, 00:40:26.400 "zcopy": false, 00:40:26.400 "get_zone_info": false, 00:40:26.400 "zone_management": false, 00:40:26.400 "zone_append": false, 00:40:26.400 "compare": false, 00:40:26.400 "compare_and_write": false, 00:40:26.400 "abort": false, 00:40:26.400 "seek_hole": true, 00:40:26.400 "seek_data": true, 00:40:26.400 "copy": false, 00:40:26.400 "nvme_iov_md": false 00:40:26.400 }, 00:40:26.400 "driver_specific": { 00:40:26.400 "lvol": { 00:40:26.400 "lvol_store_uuid": "fd81bded-2817-468d-8879-9b6223ce1734", 00:40:26.400 "base_bdev": "aio_bdev", 00:40:26.400 "thin_provision": false, 00:40:26.400 "num_allocated_clusters": 38, 00:40:26.400 "snapshot": false, 00:40:26.400 "clone": false, 00:40:26.400 "esnap_clone": false 00:40:26.400 } 00:40:26.400 } 00:40:26.400 } 00:40:26.400 ] 00:40:26.400 01:17:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:40:26.400 01:17:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fd81bded-2817-468d-8879-9b6223ce1734 00:40:26.400 01:17:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:40:26.657 01:17:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:40:26.657 01:17:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fd81bded-2817-468d-8879-9b6223ce1734 00:40:26.657 01:17:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:40:26.915 01:17:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:40:26.915 01:17:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 1cc38398-2077-44e0-ae2a-886bd3ae9819 00:40:27.172 01:17:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u fd81bded-2817-468d-8879-9b6223ce1734 00:40:27.737 01:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:40:27.737 01:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:40:27.995 00:40:27.995 real 0m22.180s 00:40:27.995 user 0m39.251s 00:40:27.995 sys 0m4.928s 00:40:27.995 01:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:40:27.995 01:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:40:27.995 ************************************ 00:40:27.995 END TEST lvs_grow_dirty 00:40:27.995 ************************************ 00:40:27.995 01:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:40:27.995 01:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:40:27.995 01:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:40:27.995 01:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:40:27.995 01:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:40:27.995 01:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:40:27.995 01:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:40:27.995 01:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files 00:40:27.995 01:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:40:27.995 nvmf_trace.0 00:40:27.995 01:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:40:27.995 01:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:40:27.995 01:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:40:27.995 01:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 
00:40:27.995 01:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:40:27.995 01:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:40:27.995 01:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:40:27.995 01:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:40:27.995 rmmod nvme_tcp 00:40:27.995 rmmod nvme_fabrics 00:40:27.995 rmmod nvme_keyring 00:40:27.995 01:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:40:27.995 01:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:40:27.995 01:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:40:27.995 01:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 3170331 ']' 00:40:27.995 01:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 3170331 00:40:27.995 01:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 3170331 ']' 00:40:27.995 01:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 3170331 00:40:27.995 01:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:40:27.995 01:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:40:27.995 01:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3170331 00:40:27.995 01:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:40:27.995 01:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:40:27.995 01:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3170331' 00:40:27.995 killing process with pid 3170331 00:40:27.995 01:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 3170331 00:40:27.995 01:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 3170331 00:40:29.368 01:18:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:40:29.368 01:18:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:40:29.368 01:18:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:40:29.368 01:18:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:40:29.368 01:18:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:40:29.368 01:18:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:40:29.368 01:18:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:40:29.368 01:18:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == 
\n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:40:29.368 01:18:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:40:29.368 01:18:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:29.368 01:18:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:29.368 01:18:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:31.267 01:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:40:31.268 00:40:31.268 real 0m48.915s 00:40:31.268 user 1m1.929s 00:40:31.268 sys 0m8.952s 00:40:31.268 01:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:40:31.268 01:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:40:31.268 ************************************ 00:40:31.268 END TEST nvmf_lvs_grow 00:40:31.268 ************************************ 00:40:31.268 01:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:40:31.268 01:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:40:31.268 01:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:40:31.268 01:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:40:31.268 ************************************ 00:40:31.268 START TEST nvmf_bdev_io_wait 00:40:31.268 ************************************ 00:40:31.268 01:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:40:31.268 * Looking for test storage... 
00:40:31.268 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:40:31.268 01:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:40:31.268 01:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lcov --version 00:40:31.268 01:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:40:31.527 01:18:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:40:31.527 01:18:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:40:31.527 01:18:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:40:31.527 01:18:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:40:31.527 01:18:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:40:31.527 01:18:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:40:31.527 01:18:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:40:31.527 01:18:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:40:31.527 01:18:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:40:31.527 01:18:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:40:31.527 01:18:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:40:31.527 01:18:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:40:31.527 01:18:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:40:31.527 01:18:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:40:31.527 01:18:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:40:31.527 01:18:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:40:31.527 01:18:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:40:31.527 01:18:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:40:31.527 01:18:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:40:31.527 01:18:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:40:31.527 01:18:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:40:31.527 01:18:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:40:31.527 01:18:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:40:31.527 01:18:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:40:31.527 01:18:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:40:31.527 01:18:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:40:31.527 01:18:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:40:31.527 01:18:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:40:31.527 01:18:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:40:31.527 01:18:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:40:31.528 01:18:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:40:31.528 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:31.528 --rc genhtml_branch_coverage=1 00:40:31.528 --rc genhtml_function_coverage=1 00:40:31.528 --rc genhtml_legend=1 00:40:31.528 --rc geninfo_all_blocks=1 00:40:31.528 --rc geninfo_unexecuted_blocks=1 00:40:31.528 00:40:31.528 ' 00:40:31.528 01:18:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:40:31.528 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:31.528 --rc genhtml_branch_coverage=1 00:40:31.528 --rc genhtml_function_coverage=1 00:40:31.528 --rc genhtml_legend=1 00:40:31.528 --rc geninfo_all_blocks=1 00:40:31.528 --rc geninfo_unexecuted_blocks=1 00:40:31.528 00:40:31.528 ' 00:40:31.528 01:18:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:40:31.528 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:31.528 --rc genhtml_branch_coverage=1 00:40:31.528 --rc genhtml_function_coverage=1 00:40:31.528 --rc genhtml_legend=1 00:40:31.528 --rc geninfo_all_blocks=1 00:40:31.528 --rc geninfo_unexecuted_blocks=1 00:40:31.528 00:40:31.528 ' 00:40:31.528 01:18:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:40:31.528 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:31.528 --rc genhtml_branch_coverage=1 00:40:31.528 --rc genhtml_function_coverage=1 00:40:31.528 --rc genhtml_legend=1 00:40:31.528 --rc geninfo_all_blocks=1 00:40:31.528 --rc 
geninfo_unexecuted_blocks=1 00:40:31.528 00:40:31.528 ' 00:40:31.528 01:18:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:40:31.528 01:18:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:40:31.528 01:18:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:31.528 01:18:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:31.528 01:18:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:31.528 01:18:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:40:31.528 01:18:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:40:31.528 01:18:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:40:31.528 01:18:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:31.528 01:18:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:40:31.528 01:18:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:31.528 01:18:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:40:31.528 01:18:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:40:31.528 01:18:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:40:31.528 01:18:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:31.528 01:18:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:40:31.528 01:18:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:40:31.528 01:18:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:40:31.528 01:18:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:40:31.528 01:18:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:40:31.528 01:18:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:31.528 01:18:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:31.528 01:18:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:31.528 01:18:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:31.528 01:18:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:31.528 01:18:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:31.528 01:18:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:40:31.528 01:18:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:31.528 01:18:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:40:31.528 01:18:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:40:31.528 01:18:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:40:31.528 01:18:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:40:31.528 01:18:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:31.528 01:18:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- 
# NVMF_APP+=("${NO_HUGE[@]}") 00:40:31.528 01:18:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:40:31.528 01:18:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:40:31.528 01:18:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:40:31.528 01:18:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:40:31.528 01:18:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:40:31.528 01:18:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:40:31.528 01:18:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:40:31.528 01:18:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:40:31.528 01:18:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:40:31.528 01:18:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:40:31.528 01:18:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:40:31.528 01:18:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:40:31.528 01:18:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:40:31.528 01:18:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:31.528 01:18:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:31.528 01:18:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:31.528 01:18:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:40:31.528 01:18:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:40:31.528 01:18:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:40:31.528 01:18:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:40:33.429 01:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:40:33.429 01:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:40:33.429 01:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:40:33.429 01:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:40:33.429 01:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:40:33.429 01:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:40:33.429 01:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 
00:40:33.429 01:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:40:33.429 01:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:40:33.429 01:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:40:33.429 01:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:40:33.429 01:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:40:33.429 01:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:40:33.429 01:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:40:33.429 01:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:40:33.429 01:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:40:33.429 01:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:40:33.429 01:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:40:33.429 01:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:40:33.429 01:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:40:33.429 01:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:40:33.429 01:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:40:33.429 01:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:40:33.429 01:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:40:33.429 01:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:40:33.429 01:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:40:33.429 01:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:40:33.429 01:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:40:33.429 01:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:40:33.429 01:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:40:33.429 01:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:40:33.429 01:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:40:33.429 01:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 
00:40:33.429 01:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:33.429 01:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:40:33.429 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:40:33.429 01:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:33.429 01:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:33.429 01:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:33.429 01:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:33.429 01:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:40:33.429 01:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:33.429 01:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:40:33.429 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:40:33.430 01:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:33.430 01:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:33.430 01:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:33.430 01:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:33.430 01:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:40:33.430 01:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:40:33.430 01:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:40:33.430 01:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:40:33.430 01:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:40:33.430 01:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:33.430 01:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:40:33.430 01:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:33.430 01:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:40:33.430 01:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:40:33.430 01:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:33.430 01:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:40:33.430 Found net devices under 0000:0a:00.0: cvl_0_0 00:40:33.430 
01:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:40:33.430 01:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:40:33.430 01:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:33.430 01:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:40:33.430 01:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:33.430 01:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:40:33.430 01:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:40:33.430 01:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:33.430 01:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:40:33.430 Found net devices under 0000:0a:00.1: cvl_0_1 00:40:33.430 01:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:40:33.430 01:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:40:33.430 01:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:40:33.430 01:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:40:33.430 01:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:40:33.430 01:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:40:33.430 01:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:40:33.430 01:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:40:33.430 01:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:40:33.430 01:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:40:33.430 01:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:40:33.430 01:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:40:33.430 01:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:40:33.430 01:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:40:33.430 01:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:40:33.430 01:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:40:33.430 01:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:40:33.430 01:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:40:33.430 01:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:40:33.430 01:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:40:33.430 01:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:40:33.689 01:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:40:33.689 01:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:40:33.689 01:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:40:33.689 01:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:40:33.689 01:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:40:33.689 01:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:40:33.689 01:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:40:33.689 01:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:40:33.689 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:40:33.689 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.243 ms 00:40:33.689 00:40:33.689 --- 10.0.0.2 ping statistics --- 00:40:33.689 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:33.689 rtt min/avg/max/mdev = 0.243/0.243/0.243/0.000 ms 00:40:33.689 01:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:40:33.689 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:40:33.689 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.073 ms 00:40:33.689 00:40:33.689 --- 10.0.0.1 ping statistics --- 00:40:33.689 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:33.689 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:40:33.689 01:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:40:33.689 01:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:40:33.689 01:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:40:33.689 01:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:40:33.689 01:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:40:33.689 01:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:40:33.689 01:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:40:33.689 01:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:40:33.689 01:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:40:33.689 01:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:40:33.689 01:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:40:33.689 01:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:40:33.689 01:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:40:33.689 01:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=3173141 00:40:33.689 01:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 3173141 00:40:33.689 01:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc 00:40:33.689 01:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 3173141 ']' 00:40:33.689 01:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:33.689 01:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:40:33.689 01:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:33.689 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
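Stepping back to the nvmf_tcp_init trace above: it builds a two-port topology in which one E810 port stays in the root namespace as the initiator side and the other is moved into a fresh namespace to act as the target side. Condensed into plain commands (device names and addresses exactly as in the log; helper internals may differ slightly), the setup amounts to:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk             # target port moves into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator side, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                    # root namespace -> target port
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1      # target namespace -> initiator port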
00:40:33.689 01:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:40:33.689 01:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:40:33.689 [2024-12-06 01:18:06.348370] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:40:33.689 [2024-12-06 01:18:06.351025] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 24.03.0 initialization... 00:40:33.689 [2024-12-06 01:18:06.351139] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:40:33.948 [2024-12-06 01:18:06.508078] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:40:33.948 [2024-12-06 01:18:06.652797] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:40:33.948 [2024-12-06 01:18:06.652901] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:40:33.948 [2024-12-06 01:18:06.652940] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:40:33.948 [2024-12-06 01:18:06.652962] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:40:33.948 [2024-12-06 01:18:06.652984] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:40:33.948 [2024-12-06 01:18:06.656028] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:40:33.948 [2024-12-06 01:18:06.656095] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:40:33.948 [2024-12-06 01:18:06.656139] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:40:33.948 [2024-12-06 01:18:06.656150] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:40:34.208 [2024-12-06 01:18:06.656953] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
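The target launch that just completed above boils down to starting nvmf_tgt inside that namespace in interrupt mode and parking it at --wait-for-rpc until configuration arrives. A sketch of the launch plus a simple readiness wait (the polling loop is an illustrative stand-in for the harness's waitforlisten helper, not its actual code):

  ip netns exec cvl_0_0_ns_spdk \
      ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc &
  nvmfpid=$!
  # poll the default RPC socket until the app answers
  until ./scripts/rpc.py -t 1 rpc_get_methods >/dev/null 2>&1; do sleep 0.2; done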
00:40:34.774 01:18:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:40:34.775 01:18:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:40:34.775 01:18:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:40:34.775 01:18:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:40:34.775 01:18:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:40:34.775 01:18:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:40:34.775 01:18:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:40:34.775 01:18:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:34.775 01:18:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:40:34.775 01:18:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:34.775 01:18:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:40:34.775 01:18:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:34.775 01:18:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:40:35.034 [2024-12-06 01:18:07.560806] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:40:35.034 [2024-12-06 01:18:07.561932] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:40:35.034 [2024-12-06 01:18:07.563079] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:40:35.034 [2024-12-06 01:18:07.564120] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
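Because the app was started with --wait-for-rpc, bdev options have to be set before framework initialization; the transport, malloc bdev and subsystem are then created by the rpc_cmd calls traced just below. Collapsed into direct rpc.py invocations (arguments exactly as in the trace, paths shortened), the provisioning sequence is:

  rpc.py bdev_set_options -p 5 -c 1
  rpc.py framework_start_init
  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py bdev_malloc_create 64 512 -b Malloc0
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420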
00:40:35.034 01:18:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:35.034 01:18:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:40:35.034 01:18:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:35.034 01:18:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:40:35.034 [2024-12-06 01:18:07.569280] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:40:35.034 01:18:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:35.034 01:18:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:40:35.034 01:18:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:35.034 01:18:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:40:35.034 Malloc0 00:40:35.034 01:18:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:35.034 01:18:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:40:35.034 01:18:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:35.034 01:18:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:40:35.034 01:18:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:35.034 01:18:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:40:35.034 01:18:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:35.034 01:18:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:40:35.034 01:18:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:35.034 01:18:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:40:35.034 01:18:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:35.034 01:18:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:40:35.034 [2024-12-06 01:18:07.689531] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:40:35.034 01:18:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:35.034 01:18:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=3173644 00:40:35.034 01:18:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:40:35.034 01:18:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:40:35.034 01:18:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=3173646 00:40:35.034 01:18:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:40:35.034 01:18:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:40:35.034 01:18:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:40:35.034 01:18:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:40:35.034 { 00:40:35.034 "params": { 00:40:35.034 "name": "Nvme$subsystem", 00:40:35.034 "trtype": "$TEST_TRANSPORT", 00:40:35.034 "traddr": "$NVMF_FIRST_TARGET_IP", 00:40:35.034 "adrfam": "ipv4", 00:40:35.034 "trsvcid": "$NVMF_PORT", 00:40:35.034 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:40:35.034 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:40:35.034 "hdgst": ${hdgst:-false}, 00:40:35.034 "ddgst": ${ddgst:-false} 00:40:35.034 }, 00:40:35.034 "method": "bdev_nvme_attach_controller" 00:40:35.034 } 00:40:35.034 EOF 00:40:35.034 )") 00:40:35.034 01:18:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:40:35.034 01:18:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:40:35.034 01:18:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=3173648 00:40:35.034 01:18:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:40:35.034 01:18:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:40:35.034 01:18:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:40:35.034 01:18:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:40:35.034 { 00:40:35.034 "params": { 00:40:35.034 "name": "Nvme$subsystem", 00:40:35.034 "trtype": "$TEST_TRANSPORT", 00:40:35.034 "traddr": "$NVMF_FIRST_TARGET_IP", 00:40:35.034 "adrfam": "ipv4", 00:40:35.034 "trsvcid": "$NVMF_PORT", 00:40:35.034 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:40:35.034 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:40:35.034 "hdgst": ${hdgst:-false}, 00:40:35.034 "ddgst": ${ddgst:-false} 00:40:35.034 }, 00:40:35.034 "method": "bdev_nvme_attach_controller" 00:40:35.034 } 00:40:35.034 EOF 00:40:35.034 )") 00:40:35.034 01:18:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:40:35.034 01:18:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:40:35.034 01:18:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:40:35.034 
01:18:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:40:35.034 01:18:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=3173651 00:40:35.034 01:18:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:40:35.034 01:18:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:40:35.034 01:18:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:40:35.034 01:18:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:40:35.034 { 00:40:35.034 "params": { 00:40:35.034 "name": "Nvme$subsystem", 00:40:35.034 "trtype": "$TEST_TRANSPORT", 00:40:35.034 "traddr": "$NVMF_FIRST_TARGET_IP", 00:40:35.034 "adrfam": "ipv4", 00:40:35.034 "trsvcid": "$NVMF_PORT", 00:40:35.034 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:40:35.034 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:40:35.034 "hdgst": ${hdgst:-false}, 00:40:35.034 "ddgst": ${ddgst:-false} 00:40:35.034 }, 00:40:35.034 "method": "bdev_nvme_attach_controller" 00:40:35.034 } 00:40:35.034 EOF 00:40:35.034 )") 00:40:35.034 01:18:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:40:35.034 01:18:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:40:35.034 01:18:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:40:35.034 01:18:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:40:35.034 01:18:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:40:35.034 01:18:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:40:35.034 01:18:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:40:35.034 { 00:40:35.034 "params": { 00:40:35.034 "name": "Nvme$subsystem", 00:40:35.034 "trtype": "$TEST_TRANSPORT", 00:40:35.034 "traddr": "$NVMF_FIRST_TARGET_IP", 00:40:35.034 "adrfam": "ipv4", 00:40:35.034 "trsvcid": "$NVMF_PORT", 00:40:35.034 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:40:35.034 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:40:35.034 "hdgst": ${hdgst:-false}, 00:40:35.034 "ddgst": ${ddgst:-false} 00:40:35.034 }, 00:40:35.034 "method": "bdev_nvme_attach_controller" 00:40:35.034 } 00:40:35.034 EOF 00:40:35.034 )") 00:40:35.034 01:18:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:40:35.034 01:18:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 3173644 00:40:35.034 01:18:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:40:35.034 01:18:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:40:35.034 01:18:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
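The --json /dev/fd/63 argument in each bdevperf command line above is what bash process substitution looks like in an xtrace: gen_nvmf_target_json emits the attach-controller config (printed below) and bdevperf reads it without a temporary file. The write job, written out long-hand as a sketch (the harness drives this through its own wrappers):

  ./build/examples/bdevperf -m 0x10 -i 1 \
      --json <(gen_nvmf_target_json) \
      -q 128 -o 4096 -w write -t 1 -s 256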
00:40:35.034 01:18:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:40:35.035 01:18:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:40:35.035 01:18:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:40:35.035 "params": { 00:40:35.035 "name": "Nvme1", 00:40:35.035 "trtype": "tcp", 00:40:35.035 "traddr": "10.0.0.2", 00:40:35.035 "adrfam": "ipv4", 00:40:35.035 "trsvcid": "4420", 00:40:35.035 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:40:35.035 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:40:35.035 "hdgst": false, 00:40:35.035 "ddgst": false 00:40:35.035 }, 00:40:35.035 "method": "bdev_nvme_attach_controller" 00:40:35.035 }' 00:40:35.035 01:18:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:40:35.035 01:18:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:40:35.035 01:18:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:40:35.035 "params": { 00:40:35.035 "name": "Nvme1", 00:40:35.035 "trtype": "tcp", 00:40:35.035 "traddr": "10.0.0.2", 00:40:35.035 "adrfam": "ipv4", 00:40:35.035 "trsvcid": "4420", 00:40:35.035 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:40:35.035 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:40:35.035 "hdgst": false, 00:40:35.035 "ddgst": false 00:40:35.035 }, 00:40:35.035 "method": "bdev_nvme_attach_controller" 00:40:35.035 }' 00:40:35.035 01:18:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:40:35.035 01:18:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:40:35.035 "params": { 00:40:35.035 "name": "Nvme1", 00:40:35.035 "trtype": "tcp", 00:40:35.035 "traddr": "10.0.0.2", 00:40:35.035 "adrfam": "ipv4", 00:40:35.035 "trsvcid": "4420", 00:40:35.035 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:40:35.035 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:40:35.035 "hdgst": false, 00:40:35.035 "ddgst": false 00:40:35.035 }, 00:40:35.035 "method": "bdev_nvme_attach_controller" 00:40:35.035 }' 00:40:35.035 01:18:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:40:35.035 01:18:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:40:35.035 "params": { 00:40:35.035 "name": "Nvme1", 00:40:35.035 "trtype": "tcp", 00:40:35.035 "traddr": "10.0.0.2", 00:40:35.035 "adrfam": "ipv4", 00:40:35.035 "trsvcid": "4420", 00:40:35.035 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:40:35.035 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:40:35.035 "hdgst": false, 00:40:35.035 "ddgst": false 00:40:35.035 }, 00:40:35.035 "method": "bdev_nvme_attach_controller" 00:40:35.035 }' 00:40:35.294 [2024-12-06 01:18:07.777984] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 24.03.0 initialization... 00:40:35.294 [2024-12-06 01:18:07.777987] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 24.03.0 initialization... 00:40:35.294 [2024-12-06 01:18:07.777985] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 24.03.0 initialization... 
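Each generated config above simply asks one bdevperf instance to attach an NVMe/TCP controller at 10.0.0.2:4420 for cnode1. For comparison only (this test never uses the kernel initiator), the nvme-cli equivalent of that attach would be roughly:

  nvme connect -t tcp -a 10.0.0.2 -s 4420 \
      -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1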
00:40:35.294 [2024-12-06 01:18:07.778142] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ]
00:40:35.294 [2024-12-06 01:18:07.778142] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ]
00:40:35.294 [2024-12-06 01:18:07.778145] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ]
00:40:35.294 [2024-12-06 01:18:07.778558] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 24.03.0 initialization...
00:40:35.294 [2024-12-06 01:18:07.778693] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ]
00:40:35.552 [2024-12-06 01:18:08.036334] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:35.552 [2024-12-06 01:18:08.107275] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:35.552 [2024-12-06 01:18:08.159758] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:40:35.552 [2024-12-06 01:18:08.211661] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:35.552 [2024-12-06 01:18:08.225582] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:40:35.811 [2024-12-06 01:18:08.321717] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:35.811 [2024-12-06 01:18:08.333488] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:40:35.811 [2024-12-06 01:18:08.443171] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:40:36.070 Running I/O for 1 seconds... 00:40:36.070 Running I/O for 1 seconds... 00:40:36.070 Running I/O for 1 seconds... 00:40:36.328 Running I/O for 1 seconds...
00:40:36.894 147520.00 IOPS, 576.25 MiB/s
00:40:36.894                                                                           Latency(us)
00:40:36.894 [2024-12-06T00:18:09.603Z] Device Information : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:40:36.894 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096)
00:40:36.894     Nvme1n1            :       1.00  147213.26     575.05       0.00       0.00     864.95     380.78    2038.90
00:40:36.894 [2024-12-06T00:18:09.603Z] ===================================================================================================================
00:40:36.894 [2024-12-06T00:18:09.603Z] Total              :             147213.26     575.05       0.00       0.00     864.95     380.78    2038.90
00:40:37.153 9011.00 IOPS, 35.20 MiB/s
00:40:37.153                                                                           Latency(us)
00:40:37.153 [2024-12-06T00:18:09.862Z] Device Information : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:40:37.153 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096)
00:40:37.153     Nvme1n1            :       1.01    9079.52      35.47       0.00       0.00   14038.15    6602.15   18932.62
00:40:37.153 [2024-12-06T00:18:09.862Z] ===================================================================================================================
00:40:37.153 [2024-12-06T00:18:09.862Z] Total              :               9079.52      35.47       0.00       0.00   14038.15    6602.15   18932.62
00:40:37.153 7281.00 IOPS, 28.44 MiB/s
00:40:37.153                                                                           Latency(us)
00:40:37.153 [2024-12-06T00:18:09.862Z] Device Information : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:40:37.153 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096)
00:40:37.153     Nvme1n1            :       1.05    7036.76      27.49       0.00       0.00   17375.69    6043.88   52040.44
00:40:37.153 [2024-12-06T00:18:09.862Z] ===================================================================================================================
00:40:37.153 [2024-12-06T00:18:09.862Z] Total              :               7036.76      27.49       0.00       0.00   17375.69    6043.88   52040.44
00:40:37.411 6760.00 IOPS, 26.41 MiB/s
00:40:37.412                                                                           Latency(us)
00:40:37.412 [2024-12-06T00:18:10.121Z] Device Information : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:40:37.412 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096)
00:40:37.412     Nvme1n1            :       1.01    6833.77      26.69       0.00       0.00   18635.29    2524.35   28350.39
00:40:37.412 [2024-12-06T00:18:10.121Z] ===================================================================================================================
00:40:37.412 [2024-12-06T00:18:10.121Z] Total              :               6833.77      26.69       0.00       0.00   18635.29    2524.35   28350.39
00:40:37.979 01:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 3173646 00:40:37.979 01:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 3173648 00:40:37.979 01:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 3173651 00:40:37.979 01:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:40:37.979 01:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:37.979 01:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:40:37.979 01:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:37.979 01:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:40:37.979 01:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait --
target/bdev_io_wait.sh@46 -- # nvmftestfini 00:40:37.979 01:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:40:37.979 01:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:40:37.979 01:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:40:37.979 01:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:40:37.979 01:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:40:37.979 01:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:40:37.979 rmmod nvme_tcp 00:40:37.979 rmmod nvme_fabrics 00:40:37.979 rmmod nvme_keyring 00:40:37.979 01:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:40:37.979 01:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:40:37.979 01:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:40:37.979 01:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 3173141 ']' 00:40:37.979 01:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 3173141 00:40:37.979 01:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 3173141 ']' 00:40:37.979 01:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 3173141 00:40:37.979 01:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:40:37.979 01:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:40:37.979 01:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3173141 00:40:37.979 01:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:40:37.979 01:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:40:37.979 01:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3173141' 00:40:37.979 killing process with pid 3173141 00:40:37.979 01:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 3173141 00:40:37.979 01:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 3173141 00:40:39.355 01:18:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:40:39.355 01:18:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:40:39.355 01:18:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:40:39.355 01:18:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:40:39.355 01:18:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 
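nvmftestfini, traced here and completed on the next lines, amounts to: kill the target, unload the NVMe host modules, strip the SPDK iptables rule, and drop the test namespace. As plain commands (the namespace deletion is assumed to be what _remove_spdk_ns performs; it is not traced explicitly):

  kill "$nvmfpid"                                          # nvmf_tgt, pid 3173141 in this run
  modprobe -v -r nvme-tcp
  modprobe -v -r nvme-fabrics
  iptables-save | grep -v SPDK_NVMF | iptables-restore     # remove the rule added during setup
  ip netns delete cvl_0_0_ns_spdk                          # assumed contents of _remove_spdk_ns
  ip -4 addr flush cvl_0_1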
00:40:39.355 01:18:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:40:39.355 01:18:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:40:39.355 01:18:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:40:39.355 01:18:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:40:39.355 01:18:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:39.355 01:18:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:39.355 01:18:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:41.260 01:18:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:40:41.260 00:40:41.260 real 0m9.884s 00:40:41.260 user 0m22.071s 00:40:41.260 sys 0m4.839s 00:40:41.260 01:18:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:40:41.260 01:18:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:40:41.260 ************************************ 00:40:41.260 END TEST nvmf_bdev_io_wait 00:40:41.260 ************************************ 00:40:41.260 01:18:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:40:41.260 01:18:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:40:41.260 01:18:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:40:41.260 01:18:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:40:41.260 ************************************ 00:40:41.260 START TEST nvmf_queue_depth 00:40:41.260 ************************************ 00:40:41.260 01:18:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:40:41.260 * Looking for test storage... 
00:40:41.260 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:40:41.260 01:18:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:40:41.260 01:18:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lcov --version 00:40:41.260 01:18:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:40:41.260 01:18:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:40:41.260 01:18:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:40:41.260 01:18:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:40:41.260 01:18:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:40:41.260 01:18:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:40:41.260 01:18:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:40:41.260 01:18:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:40:41.260 01:18:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:40:41.260 01:18:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:40:41.260 01:18:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:40:41.260 01:18:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:40:41.260 01:18:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:40:41.260 01:18:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:40:41.260 01:18:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:40:41.260 01:18:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:40:41.260 01:18:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:40:41.260 01:18:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:40:41.260 01:18:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:40:41.260 01:18:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:40:41.260 01:18:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:40:41.260 01:18:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:40:41.260 01:18:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:40:41.260 01:18:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:40:41.260 01:18:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:40:41.260 01:18:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:40:41.260 01:18:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:40:41.260 01:18:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:40:41.260 01:18:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:40:41.260 01:18:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:40:41.260 01:18:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:40:41.260 01:18:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:40:41.260 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:41.260 --rc genhtml_branch_coverage=1 00:40:41.260 --rc genhtml_function_coverage=1 00:40:41.260 --rc genhtml_legend=1 00:40:41.260 --rc geninfo_all_blocks=1 00:40:41.260 --rc geninfo_unexecuted_blocks=1 00:40:41.260 00:40:41.260 ' 00:40:41.260 01:18:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:40:41.260 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:41.260 --rc genhtml_branch_coverage=1 00:40:41.260 --rc genhtml_function_coverage=1 00:40:41.260 --rc genhtml_legend=1 00:40:41.260 --rc geninfo_all_blocks=1 00:40:41.260 --rc geninfo_unexecuted_blocks=1 00:40:41.260 00:40:41.260 ' 00:40:41.260 01:18:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:40:41.260 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:41.260 --rc genhtml_branch_coverage=1 00:40:41.260 --rc genhtml_function_coverage=1 00:40:41.260 --rc genhtml_legend=1 00:40:41.260 --rc geninfo_all_blocks=1 00:40:41.260 --rc geninfo_unexecuted_blocks=1 00:40:41.260 00:40:41.260 ' 00:40:41.260 01:18:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:40:41.260 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:41.260 --rc genhtml_branch_coverage=1 00:40:41.260 --rc genhtml_function_coverage=1 00:40:41.260 --rc genhtml_legend=1 00:40:41.260 --rc geninfo_all_blocks=1 00:40:41.260 --rc 
geninfo_unexecuted_blocks=1 00:40:41.260 00:40:41.260 ' 00:40:41.260 01:18:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:40:41.260 01:18:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:40:41.260 01:18:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:41.260 01:18:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:41.260 01:18:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:41.260 01:18:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:40:41.260 01:18:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:40:41.260 01:18:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:40:41.260 01:18:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:41.260 01:18:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:40:41.260 01:18:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:41.261 01:18:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:40:41.261 01:18:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:40:41.261 01:18:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:40:41.261 01:18:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:41.261 01:18:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:40:41.261 01:18:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:40:41.261 01:18:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:40:41.261 01:18:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:40:41.261 01:18:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:40:41.261 01:18:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:41.261 01:18:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:41.261 01:18:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:41.261 01:18:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:41.261 01:18:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:41.261 01:18:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:41.261 01:18:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:40:41.261 01:18:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:41.261 01:18:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:40:41.261 01:18:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:40:41.261 01:18:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:40:41.261 01:18:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:40:41.261 01:18:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:41.261 01:18:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:40:41.261 01:18:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:40:41.261 01:18:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:40:41.261 01:18:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:40:41.261 01:18:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:40:41.261 01:18:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:40:41.261 01:18:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:40:41.261 01:18:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:40:41.261 01:18:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:40:41.261 01:18:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:40:41.261 01:18:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:40:41.261 01:18:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:40:41.261 01:18:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:40:41.261 01:18:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:40:41.261 01:18:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:40:41.261 01:18:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:41.261 01:18:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:41.261 01:18:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:41.519 01:18:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:40:41.519 01:18:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:40:41.519 01:18:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:40:41.519 01:18:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:40:43.418 01:18:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:40:43.418 01:18:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:40:43.418 01:18:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:40:43.418 01:18:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:40:43.418 01:18:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:40:43.418 01:18:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 
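The scan that follows keys supported NICs off PCI vendor/device IDs (E810 0x1592/0x159b, X722 0x37d2, plus several Mellanox IDs). To eyeball the same candidates by hand, assuming lspci is installed (illustration only, not part of the harness):

  lspci -Dnd 8086:159b   # E810 - the two ports found on this host (0000:0a:00.0 / 0000:0a:00.1)
  lspci -Dnd 8086:1592   # E810, second supported device ID
  lspci -Dnd 8086:37d2   # X722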
00:40:43.418 01:18:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:40:43.418 01:18:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:40:43.418 01:18:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:40:43.418 01:18:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:40:43.418 01:18:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:40:43.418 01:18:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:40:43.418 01:18:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:40:43.418 01:18:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:40:43.418 01:18:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:40:43.418 01:18:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:40:43.418 01:18:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:40:43.418 01:18:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:40:43.418 01:18:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:40:43.418 01:18:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:40:43.419 01:18:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:40:43.419 01:18:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:40:43.419 01:18:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:40:43.419 01:18:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:40:43.419 01:18:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:40:43.419 01:18:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:40:43.419 01:18:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:40:43.419 01:18:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:40:43.419 01:18:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:40:43.419 01:18:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:40:43.419 01:18:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:40:43.419 01:18:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:40:43.419 01:18:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:40:43.419 01:18:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:43.419 01:18:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:40:43.419 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:40:43.419 01:18:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:43.419 01:18:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:43.419 01:18:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:43.419 01:18:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:43.419 01:18:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:40:43.419 01:18:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:43.419 01:18:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:40:43.419 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:40:43.419 01:18:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:43.419 01:18:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:43.419 01:18:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:43.419 01:18:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:43.419 01:18:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:40:43.419 01:18:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:40:43.419 01:18:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:40:43.419 01:18:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:40:43.419 01:18:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:40:43.419 01:18:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:43.419 01:18:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:40:43.419 01:18:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:43.419 01:18:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:40:43.419 01:18:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:40:43.419 01:18:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:43.419 01:18:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 
00:40:43.419 Found net devices under 0000:0a:00.0: cvl_0_0 00:40:43.419 01:18:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:40:43.419 01:18:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:40:43.419 01:18:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:43.419 01:18:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:40:43.419 01:18:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:43.419 01:18:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:40:43.419 01:18:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:40:43.419 01:18:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:43.419 01:18:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:40:43.419 Found net devices under 0000:0a:00.1: cvl_0_1 00:40:43.419 01:18:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:40:43.419 01:18:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:40:43.419 01:18:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:40:43.419 01:18:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:40:43.419 01:18:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:40:43.419 01:18:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:40:43.419 01:18:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:40:43.419 01:18:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:40:43.419 01:18:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:40:43.419 01:18:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:40:43.419 01:18:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:40:43.419 01:18:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:40:43.419 01:18:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:40:43.419 01:18:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:40:43.419 01:18:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:40:43.419 01:18:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:40:43.419 01:18:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:40:43.419 01:18:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:40:43.419 01:18:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:40:43.419 01:18:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:40:43.419 01:18:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:40:43.419 01:18:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:40:43.419 01:18:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:40:43.419 01:18:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:40:43.419 01:18:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:40:43.419 01:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:40:43.419 01:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:40:43.419 01:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:40:43.419 01:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:40:43.419 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:40:43.419 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.262 ms 00:40:43.419 00:40:43.419 --- 10.0.0.2 ping statistics --- 00:40:43.419 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:43.419 rtt min/avg/max/mdev = 0.262/0.262/0.262/0.000 ms 00:40:43.419 01:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:40:43.419 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:40:43.419 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.078 ms 00:40:43.419 00:40:43.419 --- 10.0.0.1 ping statistics --- 00:40:43.419 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:43.419 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:40:43.419 01:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:40:43.419 01:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:40:43.419 01:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:40:43.419 01:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:40:43.419 01:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:40:43.419 01:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:40:43.419 01:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:40:43.419 01:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:40:43.419 01:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:40:43.419 01:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:40:43.419 01:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:40:43.419 01:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:40:43.420 01:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:40:43.420 01:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=3176280 00:40:43.420 01:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:40:43.420 01:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 3176280 00:40:43.420 01:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 3176280 ']' 00:40:43.420 01:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:43.420 01:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:40:43.420 01:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:43.420 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:40:43.420 01:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:40:43.420 01:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:40:43.678 [2024-12-06 01:18:16.143050] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:40:43.678 [2024-12-06 01:18:16.145737] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 24.03.0 initialization... 00:40:43.678 [2024-12-06 01:18:16.145875] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:40:43.678 [2024-12-06 01:18:16.295201] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:43.937 [2024-12-06 01:18:16.428355] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:40:43.937 [2024-12-06 01:18:16.428444] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:40:43.937 [2024-12-06 01:18:16.428472] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:40:43.937 [2024-12-06 01:18:16.428494] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:40:43.937 [2024-12-06 01:18:16.428516] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:40:43.937 [2024-12-06 01:18:16.430214] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:40:44.196 [2024-12-06 01:18:16.802416] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:40:44.196 [2024-12-06 01:18:16.802808] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
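The thread.c and spdk_interrupt_mode_enable notices above are the direct result of the --interrupt-mode flag in the launch command traced earlier; reduced to its essentials, the target was started as:

  ip netns exec cvl_0_0_ns_spdk \
      ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2

-m 0x2 pins the target to core 1, -e 0xFFFF enables all tracepoint groups (hence the Tracepoint Group Mask notice), and --interrupt-mode switches the reactor and SPDK threads from busy polling to event-driven wakeups, which is what this interrupt_mode test variant exercises.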
00:40:44.454 01:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:40:44.454 01:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:40:44.454 01:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:40:44.454 01:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:40:44.454 01:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:40:44.454 01:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:40:44.454 01:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:40:44.454 01:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:44.454 01:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:40:44.454 [2024-12-06 01:18:17.123272] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:40:44.454 01:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:44.454 01:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:40:44.454 01:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:44.454 01:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:40:44.714 Malloc0 00:40:44.714 01:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:44.714 01:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:40:44.714 01:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:44.714 01:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:40:44.714 01:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:44.714 01:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:40:44.714 01:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:44.714 01:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:40:44.714 01:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:44.714 01:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:40:44.714 01:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 
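The rpc_cmd calls in this block (plus the listener call that follows just below) set up the target side of the queue-depth test; rpc_cmd is the autotest helper that effectively forwards to scripts/rpc.py on /var/tmp/spdk.sock, so the sequence amounts to:

  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0                                          # 64 MiB malloc bdev, 512 B blocks
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001     # -a: allow any host
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420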
00:40:44.714 01:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:40:44.714 [2024-12-06 01:18:17.243472] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:40:44.714 01:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:44.714 01:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=3176433 00:40:44.714 01:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:40:44.714 01:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:40:44.714 01:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 3176433 /var/tmp/bdevperf.sock 00:40:44.714 01:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 3176433 ']' 00:40:44.714 01:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:40:44.714 01:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:40:44.714 01:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:40:44.714 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:40:44.714 01:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:40:44.714 01:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:40:44.714 [2024-12-06 01:18:17.338503] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 24.03.0 initialization... 
00:40:44.714 [2024-12-06 01:18:17.338658] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3176433 ] 00:40:44.973 [2024-12-06 01:18:17.493949] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:44.973 [2024-12-06 01:18:17.626363] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:40:45.909 01:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:40:45.909 01:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:40:45.909 01:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:40:45.909 01:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:45.909 01:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:40:45.909 NVMe0n1 00:40:45.909 01:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:45.909 01:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:40:45.909 Running I/O for 10 seconds... 00:40:47.781 6144.00 IOPS, 24.00 MiB/s [2024-12-06T00:18:21.864Z] 6144.00 IOPS, 24.00 MiB/s [2024-12-06T00:18:22.798Z] 6141.00 IOPS, 23.99 MiB/s [2024-12-06T00:18:23.733Z] 6137.50 IOPS, 23.97 MiB/s [2024-12-06T00:18:24.668Z] 6134.80 IOPS, 23.96 MiB/s [2024-12-06T00:18:25.657Z] 6141.00 IOPS, 23.99 MiB/s [2024-12-06T00:18:26.592Z] 6144.71 IOPS, 24.00 MiB/s [2024-12-06T00:18:27.527Z] 6145.50 IOPS, 24.01 MiB/s [2024-12-06T00:18:28.904Z] 6144.78 IOPS, 24.00 MiB/s [2024-12-06T00:18:28.904Z] 6144.70 IOPS, 24.00 MiB/s 00:40:56.195 Latency(us) 00:40:56.195 [2024-12-06T00:18:28.904Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:56.195 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:40:56.195 Verification LBA range: start 0x0 length 0x4000 00:40:56.195 NVMe0n1 : 10.12 6175.74 24.12 0.00 0.00 164971.62 23787.14 100973.99 00:40:56.195 [2024-12-06T00:18:28.904Z] =================================================================================================================== 00:40:56.195 [2024-12-06T00:18:28.904Z] Total : 6175.74 24.12 0.00 0.00 164971.62 23787.14 100973.99 00:40:56.195 { 00:40:56.195 "results": [ 00:40:56.195 { 00:40:56.195 "job": "NVMe0n1", 00:40:56.195 "core_mask": "0x1", 00:40:56.195 "workload": "verify", 00:40:56.195 "status": "finished", 00:40:56.195 "verify_range": { 00:40:56.195 "start": 0, 00:40:56.195 "length": 16384 00:40:56.195 }, 00:40:56.195 "queue_depth": 1024, 00:40:56.195 "io_size": 4096, 00:40:56.195 "runtime": 10.115544, 00:40:56.195 "iops": 6175.7429951369895, 00:40:56.195 "mibps": 24.123996074753865, 00:40:56.195 "io_failed": 0, 00:40:56.195 "io_timeout": 0, 00:40:56.195 "avg_latency_us": 164971.62011932055, 00:40:56.195 "min_latency_us": 23787.140740740742, 00:40:56.195 "max_latency_us": 100973.98518518519 00:40:56.195 } 
00:40:56.195 ], 00:40:56.196 "core_count": 1 00:40:56.196 } 00:40:56.196 01:18:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 3176433 00:40:56.196 01:18:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 3176433 ']' 00:40:56.196 01:18:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 3176433 00:40:56.196 01:18:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:40:56.196 01:18:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:40:56.196 01:18:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3176433 00:40:56.196 01:18:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:40:56.196 01:18:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:40:56.196 01:18:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3176433' 00:40:56.196 killing process with pid 3176433 00:40:56.196 01:18:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 3176433 00:40:56.196 Received shutdown signal, test time was about 10.000000 seconds 00:40:56.196 00:40:56.196 Latency(us) 00:40:56.196 [2024-12-06T00:18:28.905Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:56.196 [2024-12-06T00:18:28.905Z] =================================================================================================================== 00:40:56.196 [2024-12-06T00:18:28.905Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:40:56.196 01:18:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 3176433 00:40:56.763 01:18:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:40:56.763 01:18:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:40:56.763 01:18:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:40:56.763 01:18:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:40:56.763 01:18:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:40:56.763 01:18:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:40:56.763 01:18:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:40:56.763 01:18:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:40:56.763 rmmod nvme_tcp 00:40:57.021 rmmod nvme_fabrics 00:40:57.021 rmmod nvme_keyring 00:40:57.021 01:18:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:40:57.021 01:18:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:40:57.021 01:18:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 
00:40:57.021 01:18:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 3176280 ']' 00:40:57.021 01:18:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 3176280 00:40:57.021 01:18:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 3176280 ']' 00:40:57.021 01:18:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 3176280 00:40:57.021 01:18:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:40:57.021 01:18:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:40:57.021 01:18:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3176280 00:40:57.021 01:18:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:40:57.021 01:18:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:40:57.021 01:18:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3176280' 00:40:57.021 killing process with pid 3176280 00:40:57.021 01:18:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 3176280 00:40:57.021 01:18:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 3176280 00:40:58.399 01:18:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:40:58.399 01:18:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:40:58.399 01:18:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:40:58.399 01:18:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:40:58.399 01:18:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:40:58.399 01:18:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:40:58.399 01:18:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:40:58.399 01:18:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:40:58.399 01:18:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:40:58.399 01:18:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:58.399 01:18:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:58.399 01:18:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:00.299 01:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:41:00.299 00:41:00.299 real 0m19.172s 00:41:00.299 user 0m26.524s 00:41:00.299 sys 0m3.664s 00:41:00.299 01:18:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:41:00.299 01:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:41:00.299 ************************************ 00:41:00.299 END TEST nvmf_queue_depth 00:41:00.299 ************************************ 00:41:00.299 01:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:41:00.299 01:18:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:41:00.299 01:18:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:41:00.299 01:18:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:41:00.557 ************************************ 00:41:00.557 START TEST nvmf_target_multipath 00:41:00.557 ************************************ 00:41:00.557 01:18:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:41:00.557 * Looking for test storage... 00:41:00.557 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:41:00.557 01:18:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:41:00.557 01:18:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lcov --version 00:41:00.557 01:18:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:41:00.557 01:18:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:41:00.557 01:18:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:41:00.557 01:18:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:41:00.557 01:18:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:41:00.557 01:18:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:41:00.557 01:18:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:41:00.557 01:18:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:41:00.557 01:18:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:41:00.557 01:18:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:41:00.557 01:18:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:41:00.557 01:18:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:41:00.557 01:18:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:41:00.557 01:18:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
scripts/common.sh@344 -- # case "$op" in 00:41:00.557 01:18:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:41:00.557 01:18:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:41:00.557 01:18:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:41:00.557 01:18:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:41:00.557 01:18:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:41:00.557 01:18:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:41:00.557 01:18:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:41:00.557 01:18:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:41:00.557 01:18:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:41:00.557 01:18:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:41:00.557 01:18:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:41:00.557 01:18:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:41:00.557 01:18:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:41:00.557 01:18:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:41:00.557 01:18:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:41:00.557 01:18:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:41:00.557 01:18:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:41:00.557 01:18:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:41:00.557 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:00.557 --rc genhtml_branch_coverage=1 00:41:00.557 --rc genhtml_function_coverage=1 00:41:00.557 --rc genhtml_legend=1 00:41:00.557 --rc geninfo_all_blocks=1 00:41:00.557 --rc geninfo_unexecuted_blocks=1 00:41:00.557 00:41:00.557 ' 00:41:00.558 01:18:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:41:00.558 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:00.558 --rc genhtml_branch_coverage=1 00:41:00.558 --rc genhtml_function_coverage=1 00:41:00.558 --rc genhtml_legend=1 00:41:00.558 --rc geninfo_all_blocks=1 00:41:00.558 --rc geninfo_unexecuted_blocks=1 00:41:00.558 00:41:00.558 ' 00:41:00.558 01:18:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:41:00.558 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:00.558 --rc genhtml_branch_coverage=1 00:41:00.558 --rc genhtml_function_coverage=1 00:41:00.558 --rc genhtml_legend=1 
00:41:00.558 --rc geninfo_all_blocks=1 00:41:00.558 --rc geninfo_unexecuted_blocks=1 00:41:00.558 00:41:00.558 ' 00:41:00.558 01:18:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:41:00.558 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:00.558 --rc genhtml_branch_coverage=1 00:41:00.558 --rc genhtml_function_coverage=1 00:41:00.558 --rc genhtml_legend=1 00:41:00.558 --rc geninfo_all_blocks=1 00:41:00.558 --rc geninfo_unexecuted_blocks=1 00:41:00.558 00:41:00.558 ' 00:41:00.558 01:18:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:41:00.558 01:18:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:41:00.558 01:18:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:41:00.558 01:18:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:41:00.558 01:18:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:41:00.558 01:18:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:41:00.558 01:18:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:41:00.558 01:18:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:41:00.558 01:18:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:41:00.558 01:18:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:41:00.558 01:18:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:41:00.558 01:18:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:41:00.558 01:18:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:41:00.558 01:18:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:41:00.558 01:18:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:41:00.558 01:18:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:41:00.558 01:18:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:41:00.558 01:18:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:41:00.558 01:18:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:41:00.558 01:18:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:41:00.558 01:18:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:41:00.558 01:18:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:41:00.558 01:18:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:41:00.558 01:18:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:00.558 01:18:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:00.558 01:18:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:00.558 01:18:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:41:00.558 01:18:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:00.558 01:18:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:41:00.558 01:18:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@52 -- # export 
NVMF_APP_SHM_ID 00:41:00.558 01:18:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:41:00.558 01:18:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:41:00.558 01:18:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:41:00.558 01:18:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:41:00.558 01:18:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:41:00.558 01:18:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:41:00.558 01:18:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:41:00.558 01:18:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:41:00.558 01:18:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:41:00.558 01:18:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:41:00.558 01:18:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:41:00.558 01:18:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:41:00.558 01:18:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:41:00.558 01:18:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:41:00.558 01:18:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:41:00.558 01:18:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:41:00.558 01:18:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:41:00.558 01:18:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:41:00.558 01:18:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:41:00.558 01:18:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:00.558 01:18:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:41:00.558 01:18:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:00.558 01:18:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:41:00.558 01:18:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:41:00.558 01:18:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:41:00.558 01:18:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
common/autotest_common.sh@10 -- # set +x 00:41:02.457 01:18:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:41:02.457 01:18:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:41:02.457 01:18:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:41:02.457 01:18:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:41:02.457 01:18:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:41:02.457 01:18:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:41:02.457 01:18:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:41:02.457 01:18:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:41:02.457 01:18:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:41:02.457 01:18:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:41:02.457 01:18:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:41:02.457 01:18:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:41:02.457 01:18:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:41:02.457 01:18:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:41:02.457 01:18:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:41:02.457 01:18:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:41:02.457 01:18:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:41:02.457 01:18:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:41:02.457 01:18:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:41:02.457 01:18:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:41:02.457 01:18:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:41:02.457 01:18:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:41:02.457 01:18:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:41:02.457 01:18:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:41:02.457 01:18:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:41:02.457 01:18:35 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:41:02.457 01:18:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:41:02.457 01:18:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:41:02.457 01:18:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:41:02.457 01:18:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:41:02.457 01:18:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:41:02.457 01:18:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:41:02.457 01:18:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:41:02.457 01:18:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:41:02.457 01:18:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:41:02.457 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:41:02.457 01:18:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:41:02.457 01:18:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:41:02.457 01:18:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:02.457 01:18:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:02.457 01:18:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:41:02.457 01:18:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:41:02.457 01:18:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:41:02.457 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:41:02.457 01:18:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:41:02.457 01:18:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:41:02.457 01:18:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:02.457 01:18:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:02.457 01:18:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:41:02.457 01:18:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:41:02.458 01:18:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:41:02.458 01:18:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:41:02.458 01:18:35 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:41:02.458 01:18:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:02.458 01:18:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:41:02.458 01:18:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:02.458 01:18:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:41:02.458 01:18:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:41:02.458 01:18:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:02.458 01:18:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:41:02.458 Found net devices under 0000:0a:00.0: cvl_0_0 00:41:02.458 01:18:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:41:02.458 01:18:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:41:02.458 01:18:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:02.458 01:18:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:41:02.458 01:18:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:02.458 01:18:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:41:02.458 01:18:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:41:02.458 01:18:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:02.458 01:18:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:41:02.458 Found net devices under 0000:0a:00.1: cvl_0_1 00:41:02.458 01:18:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:41:02.458 01:18:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:41:02.458 01:18:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:41:02.458 01:18:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:41:02.458 01:18:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:41:02.458 01:18:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:41:02.458 01:18:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:41:02.458 01:18:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:41:02.458 01:18:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:41:02.458 01:18:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:41:02.458 01:18:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:41:02.458 01:18:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:41:02.458 01:18:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:41:02.458 01:18:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:41:02.458 01:18:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:41:02.458 01:18:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:41:02.458 01:18:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:41:02.458 01:18:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:41:02.458 01:18:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:41:02.458 01:18:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:41:02.458 01:18:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:41:02.458 01:18:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:41:02.458 01:18:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:41:02.458 01:18:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:41:02.458 01:18:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:41:02.716 01:18:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:41:02.716 01:18:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:41:02.716 01:18:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:41:02.716 01:18:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:41:02.716 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:41:02.716 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.293 ms 00:41:02.716 00:41:02.716 --- 10.0.0.2 ping statistics --- 00:41:02.716 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:02.716 rtt min/avg/max/mdev = 0.293/0.293/0.293/0.000 ms 00:41:02.716 01:18:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:41:02.716 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:41:02.716 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.076 ms 00:41:02.716 00:41:02.716 --- 10.0.0.1 ping statistics --- 00:41:02.716 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:02.716 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:41:02.716 01:18:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:41:02.716 01:18:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:41:02.716 01:18:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:41:02.716 01:18:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:41:02.716 01:18:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:41:02.716 01:18:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:41:02.716 01:18:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:41:02.716 01:18:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:41:02.716 01:18:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:41:02.716 01:18:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:41:02.716 01:18:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:41:02.716 only one NIC for nvmf test 00:41:02.716 01:18:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:41:02.716 01:18:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:41:02.716 01:18:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:41:02.716 01:18:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:41:02.716 01:18:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:41:02.716 01:18:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:41:02.716 01:18:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:41:02.716 rmmod nvme_tcp 00:41:02.716 rmmod nvme_fabrics 00:41:02.716 rmmod nvme_keyring 00:41:02.716 01:18:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:41:02.716 01:18:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:41:02.716 01:18:35 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:41:02.716 01:18:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:41:02.716 01:18:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:41:02.716 01:18:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:41:02.716 01:18:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:41:02.716 01:18:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:41:02.716 01:18:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:41:02.716 01:18:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:41:02.716 01:18:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:41:02.716 01:18:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:41:02.716 01:18:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:41:02.716 01:18:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:02.716 01:18:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:41:02.716 01:18:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:05.251 01:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:41:05.251 01:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:41:05.251 01:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:41:05.251 01:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:41:05.251 01:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:41:05.251 01:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:41:05.251 01:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:41:05.251 01:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:41:05.251 01:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:41:05.252 01:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:41:05.252 01:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:41:05.252 01:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:41:05.252 01:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:41:05.252 01:18:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:41:05.252 01:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:41:05.252 01:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:41:05.252 01:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:41:05.252 01:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:41:05.252 01:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:41:05.252 01:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:41:05.252 01:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:41:05.252 01:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:41:05.252 01:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:05.252 01:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:41:05.252 01:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:05.252 01:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:41:05.252 00:41:05.252 real 0m4.360s 00:41:05.252 user 0m0.845s 00:41:05.252 sys 0m1.497s 00:41:05.252 01:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:41:05.252 01:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:41:05.252 ************************************ 00:41:05.252 END TEST nvmf_target_multipath 00:41:05.252 ************************************ 00:41:05.252 01:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:41:05.252 01:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:41:05.252 01:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:41:05.252 01:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:41:05.252 ************************************ 00:41:05.252 START TEST nvmf_zcopy 00:41:05.252 ************************************ 00:41:05.252 01:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:41:05.252 * Looking for test storage... 
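The nvmftestfini teardown traced above only strips the SPDK-tagged firewall rule, removes the test namespace and flushes the leftover initiator address. A minimal shell sketch of that cleanup, assuming the names used in this run (cvl_0_0_ns_spdk, cvl_0_1) and that _remove_spdk_ns amounts to deleting the namespace:

    # keep every iptables rule except the ones the test tagged with SPDK_NVMF
    iptables-save | grep -v SPDK_NVMF | iptables-restore
    # dropping the namespace hands cvl_0_0 back to the default namespace
    ip netns delete cvl_0_0_ns_spdk
    # flush the 10.0.0.1/24 address left on the initiator-side port
    ip -4 addr flush cvl_0_1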
00:41:05.252 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:41:05.252 01:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:41:05.252 01:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lcov --version 00:41:05.252 01:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:41:05.252 01:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:41:05.252 01:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:41:05.252 01:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:41:05.252 01:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:41:05.252 01:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:41:05.252 01:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:41:05.252 01:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:41:05.252 01:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:41:05.252 01:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:41:05.252 01:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:41:05.252 01:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:41:05.252 01:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:41:05.252 01:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:41:05.252 01:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:41:05.252 01:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:41:05.252 01:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:41:05.252 01:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:41:05.252 01:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:41:05.252 01:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:41:05.252 01:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:41:05.252 01:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:41:05.252 01:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:41:05.252 01:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:41:05.252 01:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:41:05.252 01:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:41:05.252 01:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:41:05.252 01:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:41:05.252 01:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:41:05.252 01:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:41:05.252 01:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:41:05.252 01:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:41:05.252 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:05.252 --rc genhtml_branch_coverage=1 00:41:05.252 --rc genhtml_function_coverage=1 00:41:05.252 --rc genhtml_legend=1 00:41:05.252 --rc geninfo_all_blocks=1 00:41:05.252 --rc geninfo_unexecuted_blocks=1 00:41:05.252 00:41:05.252 ' 00:41:05.252 01:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:41:05.252 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:05.252 --rc genhtml_branch_coverage=1 00:41:05.252 --rc genhtml_function_coverage=1 00:41:05.252 --rc genhtml_legend=1 00:41:05.252 --rc geninfo_all_blocks=1 00:41:05.252 --rc geninfo_unexecuted_blocks=1 00:41:05.252 00:41:05.252 ' 00:41:05.252 01:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:41:05.252 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:05.252 --rc genhtml_branch_coverage=1 00:41:05.252 --rc genhtml_function_coverage=1 00:41:05.252 --rc genhtml_legend=1 00:41:05.252 --rc geninfo_all_blocks=1 00:41:05.252 --rc geninfo_unexecuted_blocks=1 00:41:05.252 00:41:05.252 ' 00:41:05.252 01:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:41:05.252 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:05.252 --rc genhtml_branch_coverage=1 00:41:05.252 --rc genhtml_function_coverage=1 00:41:05.252 --rc genhtml_legend=1 00:41:05.252 --rc geninfo_all_blocks=1 00:41:05.252 --rc geninfo_unexecuted_blocks=1 00:41:05.252 00:41:05.252 ' 00:41:05.252 01:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:41:05.252 01:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:41:05.252 01:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:41:05.252 01:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:41:05.252 01:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:41:05.252 01:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:41:05.252 01:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:41:05.252 01:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:41:05.252 01:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:41:05.252 01:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:41:05.252 01:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:41:05.252 01:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:41:05.252 01:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:41:05.252 01:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:41:05.252 01:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:41:05.252 01:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:41:05.252 01:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:41:05.253 01:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:41:05.253 01:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:41:05.253 01:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:41:05.253 01:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:41:05.253 01:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:41:05.253 01:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:41:05.253 01:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:05.253 01:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:05.253 01:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:05.253 01:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:41:05.253 01:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:05.253 01:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:41:05.253 01:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:41:05.253 01:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:41:05.253 01:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:41:05.253 01:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:41:05.253 01:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:41:05.253 01:18:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:41:05.253 01:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:41:05.253 01:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:41:05.253 01:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:41:05.253 01:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:41:05.253 01:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:41:05.253 01:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:41:05.253 01:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:41:05.253 01:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:41:05.253 01:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:41:05.253 01:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:41:05.253 01:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:05.253 01:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:41:05.253 01:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:05.253 01:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:41:05.253 01:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:41:05.253 01:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:41:05.253 01:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:41:07.160 01:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:41:07.160 01:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:41:07.160 01:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:41:07.160 01:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:41:07.160 01:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:41:07.160 01:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:41:07.160 01:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:41:07.160 01:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:41:07.160 01:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:41:07.160 01:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:41:07.160 01:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:41:07.160 01:18:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:41:07.160 01:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:41:07.160 01:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:41:07.160 01:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:41:07.160 01:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:41:07.160 01:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:41:07.160 01:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:41:07.160 01:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:41:07.160 01:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:41:07.160 01:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:41:07.160 01:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:41:07.160 01:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:41:07.160 01:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:41:07.161 01:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:41:07.161 01:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:41:07.161 01:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:41:07.161 01:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:41:07.161 01:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:41:07.161 01:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:41:07.161 01:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:41:07.161 01:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:41:07.161 01:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:41:07.161 01:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:41:07.161 01:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:41:07.161 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:41:07.161 01:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:41:07.161 01:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:41:07.161 01:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # 
[[ 0x159b == \0\x\1\0\1\7 ]] 00:41:07.161 01:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:07.161 01:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:41:07.161 01:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:41:07.161 01:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:41:07.161 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:41:07.161 01:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:41:07.161 01:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:41:07.161 01:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:07.161 01:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:07.161 01:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:41:07.161 01:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:41:07.161 01:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:41:07.161 01:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:41:07.161 01:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:41:07.161 01:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:07.161 01:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:41:07.161 01:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:07.161 01:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:41:07.161 01:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:41:07.161 01:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:07.161 01:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:41:07.161 Found net devices under 0000:0a:00.0: cvl_0_0 00:41:07.161 01:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:41:07.161 01:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:41:07.161 01:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:07.161 01:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:41:07.161 01:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:07.161 01:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:41:07.161 01:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy 
-- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:41:07.161 01:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:07.161 01:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:41:07.161 Found net devices under 0000:0a:00.1: cvl_0_1 00:41:07.161 01:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:41:07.161 01:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:41:07.161 01:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:41:07.161 01:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:41:07.161 01:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:41:07.161 01:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:41:07.161 01:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:41:07.161 01:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:41:07.161 01:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:41:07.161 01:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:41:07.161 01:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:41:07.161 01:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:41:07.161 01:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:41:07.161 01:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:41:07.161 01:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:41:07.161 01:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:41:07.161 01:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:41:07.161 01:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:41:07.161 01:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:41:07.161 01:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:41:07.161 01:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:41:07.161 01:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:41:07.161 01:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:41:07.161 01:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:41:07.161 01:18:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:41:07.161 01:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:41:07.161 01:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:41:07.161 01:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:41:07.161 01:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:41:07.161 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:41:07.161 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.250 ms 00:41:07.161 00:41:07.161 --- 10.0.0.2 ping statistics --- 00:41:07.161 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:07.161 rtt min/avg/max/mdev = 0.250/0.250/0.250/0.000 ms 00:41:07.161 01:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:41:07.161 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:41:07.161 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.059 ms 00:41:07.161 00:41:07.161 --- 10.0.0.1 ping statistics --- 00:41:07.161 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:07.161 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:41:07.161 01:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:41:07.161 01:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:41:07.161 01:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:41:07.161 01:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:41:07.161 01:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:41:07.161 01:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:41:07.161 01:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:41:07.161 01:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:41:07.161 01:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:41:07.161 01:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:41:07.161 01:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:41:07.161 01:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:41:07.161 01:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:41:07.421 01:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=3181865 00:41:07.421 01:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 
0xFFFF --interrupt-mode -m 0x2 00:41:07.421 01:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 3181865 00:41:07.421 01:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 3181865 ']' 00:41:07.421 01:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:41:07.421 01:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:41:07.421 01:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:41:07.421 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:41:07.421 01:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:41:07.421 01:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:41:07.421 [2024-12-06 01:18:39.957645] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:41:07.421 [2024-12-06 01:18:39.960189] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 24.03.0 initialization... 00:41:07.421 [2024-12-06 01:18:39.960287] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:41:07.421 [2024-12-06 01:18:40.118177] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:07.681 [2024-12-06 01:18:40.255542] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:41:07.681 [2024-12-06 01:18:40.255644] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:41:07.681 [2024-12-06 01:18:40.255674] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:41:07.681 [2024-12-06 01:18:40.255698] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:41:07.681 [2024-12-06 01:18:40.255721] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:41:07.681 [2024-12-06 01:18:40.257412] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:41:07.940 [2024-12-06 01:18:40.638185] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:41:07.940 [2024-12-06 01:18:40.638619] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
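For reference, the target start-up that produced the notices above boils down to launching nvmf_tgt inside the test namespace and polling its RPC socket before any configuration is sent. A sketch under the assumption of the default /var/tmp/spdk.sock socket and in-tree script paths (waitforlisten in the test harness does the equivalent of the loop below):

    # nvmf_tgt pinned to core 1 (-m 0x2), all trace groups enabled, interrupt mode
    ip netns exec cvl_0_0_ns_spdk \
        ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 &
    nvmfpid=$!
    # wait until the app answers RPCs before configuring it
    until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done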
00:41:08.508 01:18:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:41:08.508 01:18:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:41:08.508 01:18:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:41:08.508 01:18:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:41:08.508 01:18:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:41:08.508 01:18:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:41:08.508 01:18:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:41:08.508 01:18:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:41:08.508 01:18:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:08.508 01:18:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:41:08.508 [2024-12-06 01:18:40.990529] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:41:08.508 01:18:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:08.508 01:18:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:41:08.508 01:18:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:08.508 01:18:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:41:08.508 01:18:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:08.508 01:18:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:41:08.508 01:18:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:08.508 01:18:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:41:08.508 [2024-12-06 01:18:41.006756] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:41:08.508 01:18:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:08.508 01:18:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:41:08.508 01:18:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:08.508 01:18:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:41:08.508 01:18:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:08.508 01:18:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:41:08.508 01:18:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:08.508 01:18:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:41:08.508 malloc0 00:41:08.508 01:18:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:08.508 01:18:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:41:08.508 01:18:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:08.508 01:18:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:41:08.508 01:18:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:08.508 01:18:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:41:08.508 01:18:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:41:08.508 01:18:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:41:08.508 01:18:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:41:08.508 01:18:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:41:08.508 01:18:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:41:08.508 { 00:41:08.508 "params": { 00:41:08.508 "name": "Nvme$subsystem", 00:41:08.508 "trtype": "$TEST_TRANSPORT", 00:41:08.508 "traddr": "$NVMF_FIRST_TARGET_IP", 00:41:08.508 "adrfam": "ipv4", 00:41:08.508 "trsvcid": "$NVMF_PORT", 00:41:08.508 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:41:08.508 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:41:08.508 "hdgst": ${hdgst:-false}, 00:41:08.508 "ddgst": ${ddgst:-false} 00:41:08.508 }, 00:41:08.508 "method": "bdev_nvme_attach_controller" 00:41:08.508 } 00:41:08.508 EOF 00:41:08.508 )") 00:41:08.508 01:18:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:41:08.508 01:18:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:41:08.508 01:18:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:41:08.508 01:18:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:41:08.508 "params": { 00:41:08.508 "name": "Nvme1", 00:41:08.508 "trtype": "tcp", 00:41:08.508 "traddr": "10.0.0.2", 00:41:08.508 "adrfam": "ipv4", 00:41:08.508 "trsvcid": "4420", 00:41:08.508 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:41:08.508 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:41:08.508 "hdgst": false, 00:41:08.508 "ddgst": false 00:41:08.508 }, 00:41:08.508 "method": "bdev_nvme_attach_controller" 00:41:08.508 }' 00:41:08.508 [2024-12-06 01:18:41.167470] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 24.03.0 initialization... 
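Spelled out as plain rpc.py calls, the target configuration driven through rpc_cmd above is roughly the following; the socket path is assumed to be the default, and rpc_cmd itself wraps the same script:

    rpc="./scripts/rpc.py -s /var/tmp/spdk.sock"
    # TCP transport, in-capsule data size 0, zero-copy enabled (-o comes from the
    # '-t tcp -o' transport defaults common.sh selected for this run)
    $rpc nvmf_create_transport -t tcp -o -c 0 --zcopy
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    $rpc bdev_malloc_create 32 4096 -b malloc0    # 32 MiB malloc bdev, 4 KiB blocks
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1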
00:41:08.508 [2024-12-06 01:18:41.167602] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3182018 ] 00:41:08.767 [2024-12-06 01:18:41.309763] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:08.767 [2024-12-06 01:18:41.444586] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:41:09.335 Running I/O for 10 seconds... 00:41:11.655 4030.00 IOPS, 31.48 MiB/s [2024-12-06T00:18:45.302Z] 4144.00 IOPS, 32.38 MiB/s [2024-12-06T00:18:46.238Z] 4187.67 IOPS, 32.72 MiB/s [2024-12-06T00:18:47.176Z] 4178.75 IOPS, 32.65 MiB/s [2024-12-06T00:18:48.115Z] 4209.40 IOPS, 32.89 MiB/s [2024-12-06T00:18:49.051Z] 4198.17 IOPS, 32.80 MiB/s [2024-12-06T00:18:49.986Z] 4184.00 IOPS, 32.69 MiB/s [2024-12-06T00:18:51.365Z] 4177.50 IOPS, 32.64 MiB/s [2024-12-06T00:18:52.295Z] 4166.44 IOPS, 32.55 MiB/s [2024-12-06T00:18:52.295Z] 4158.00 IOPS, 32.48 MiB/s 00:41:19.586 Latency(us) 00:41:19.586 [2024-12-06T00:18:52.295Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:41:19.586 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:41:19.586 Verification LBA range: start 0x0 length 0x1000 00:41:19.586 Nvme1n1 : 10.07 4142.81 32.37 0.00 0.00 30704.17 5509.88 47380.10 00:41:19.586 [2024-12-06T00:18:52.295Z] =================================================================================================================== 00:41:19.586 [2024-12-06T00:18:52.295Z] Total : 4142.81 32.37 0.00 0.00 30704.17 5509.88 47380.10 00:41:20.523 01:18:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=3183326 00:41:20.523 01:18:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:41:20.523 01:18:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:41:20.523 01:18:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:41:20.523 01:18:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:41:20.523 01:18:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:41:20.523 01:18:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:41:20.523 01:18:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:41:20.523 01:18:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:41:20.523 { 00:41:20.523 "params": { 00:41:20.523 "name": "Nvme$subsystem", 00:41:20.523 "trtype": "$TEST_TRANSPORT", 00:41:20.523 "traddr": "$NVMF_FIRST_TARGET_IP", 00:41:20.523 "adrfam": "ipv4", 00:41:20.523 "trsvcid": "$NVMF_PORT", 00:41:20.523 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:41:20.523 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:41:20.523 "hdgst": ${hdgst:-false}, 00:41:20.523 "ddgst": ${ddgst:-false} 00:41:20.523 }, 00:41:20.523 "method": "bdev_nvme_attach_controller" 00:41:20.523 } 00:41:20.523 EOF 00:41:20.523 )") 00:41:20.523 01:18:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:41:20.523 
[2024-12-06 01:18:52.966460] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.523 [2024-12-06 01:18:52.966522] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.523 01:18:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:41:20.523 01:18:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:41:20.523 01:18:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:41:20.523 "params": { 00:41:20.523 "name": "Nvme1", 00:41:20.523 "trtype": "tcp", 00:41:20.523 "traddr": "10.0.0.2", 00:41:20.523 "adrfam": "ipv4", 00:41:20.523 "trsvcid": "4420", 00:41:20.523 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:41:20.523 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:41:20.523 "hdgst": false, 00:41:20.523 "ddgst": false 00:41:20.523 }, 00:41:20.523 "method": "bdev_nvme_attach_controller" 00:41:20.523 }' 00:41:20.523 [2024-12-06 01:18:52.974354] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.523 [2024-12-06 01:18:52.974400] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.523 [2024-12-06 01:18:52.982322] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.523 [2024-12-06 01:18:52.982359] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.523 [2024-12-06 01:18:52.990358] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.523 [2024-12-06 01:18:52.990394] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.523 [2024-12-06 01:18:52.998382] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.523 [2024-12-06 01:18:52.998421] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.523 [2024-12-06 01:18:53.006352] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.523 [2024-12-06 01:18:53.006393] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.523 [2024-12-06 01:18:53.014362] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.523 [2024-12-06 01:18:53.014398] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.523 [2024-12-06 01:18:53.022337] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.523 [2024-12-06 01:18:53.022380] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.523 [2024-12-06 01:18:53.030331] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.523 [2024-12-06 01:18:53.030366] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.523 [2024-12-06 01:18:53.038319] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.523 [2024-12-06 01:18:53.038348] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.523 [2024-12-06 01:18:53.046317] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.523 [2024-12-06 01:18:53.046352] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.523 [2024-12-06 01:18:53.051465] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 24.03.0 initialization... 
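The config that gen_nvmf_target_json pipes into bdevperf over /dev/fd is the bdev_nvme_attach_controller entry printed above. A sketch of the assembled document and of the second (randrw) invocation follows; the outer subsystems/bdev wrapper is an assumption about gen_nvmf_target_json, not something shown verbatim in this log:

    config='{"subsystems":[{"subsystem":"bdev","config":[{
      "method":"bdev_nvme_attach_controller",
      "params":{"name":"Nvme1","trtype":"tcp","traddr":"10.0.0.2","adrfam":"ipv4",
                "trsvcid":"4420","subnqn":"nqn.2016-06.io.spdk:cnode1",
                "hostnqn":"nqn.2016-06.io.spdk:host1","hdgst":false,"ddgst":false}}]}]}'
    # 5 s run, queue depth 128, 50/50 random read/write, 8 KiB I/O, config fed on an fd
    ./build/examples/bdevperf --json <(printf '%s' "$config") -t 5 -q 128 -w randrw -M 50 -o 8192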
00:41:20.523 [2024-12-06 01:18:53.051590] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3183326 ] 00:41:20.523 [2024-12-06 01:18:53.054342] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.523 [2024-12-06 01:18:53.054377] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.523 [2024-12-06 01:18:53.062355] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.523 [2024-12-06 01:18:53.062390] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.523 [2024-12-06 01:18:53.070324] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.523 [2024-12-06 01:18:53.070357] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.523 [2024-12-06 01:18:53.078361] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.523 [2024-12-06 01:18:53.078396] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.523 [2024-12-06 01:18:53.086347] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.523 [2024-12-06 01:18:53.086385] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.523 [2024-12-06 01:18:53.094358] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.523 [2024-12-06 01:18:53.094394] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.523 [2024-12-06 01:18:53.102337] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.523 [2024-12-06 01:18:53.102368] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.523 [2024-12-06 01:18:53.110294] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.523 [2024-12-06 01:18:53.110334] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.523 [2024-12-06 01:18:53.118307] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.523 [2024-12-06 01:18:53.118335] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.523 [2024-12-06 01:18:53.126320] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.523 [2024-12-06 01:18:53.126350] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.523 [2024-12-06 01:18:53.134309] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.523 [2024-12-06 01:18:53.134336] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.523 [2024-12-06 01:18:53.142323] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.523 [2024-12-06 01:18:53.142352] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.523 [2024-12-06 01:18:53.150319] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.523 [2024-12-06 01:18:53.150348] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.523 [2024-12-06 01:18:53.158303] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:41:20.523 [2024-12-06 01:18:53.158331] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.523 [2024-12-06 01:18:53.166320] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.523 [2024-12-06 01:18:53.166348] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.523 [2024-12-06 01:18:53.174293] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.523 [2024-12-06 01:18:53.174321] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.523 [2024-12-06 01:18:53.182310] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.523 [2024-12-06 01:18:53.182338] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.523 [2024-12-06 01:18:53.190335] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.523 [2024-12-06 01:18:53.190363] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.523 [2024-12-06 01:18:53.198285] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.523 [2024-12-06 01:18:53.198312] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.523 [2024-12-06 01:18:53.203003] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:20.523 [2024-12-06 01:18:53.206324] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.523 [2024-12-06 01:18:53.206353] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.523 [2024-12-06 01:18:53.214328] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.523 [2024-12-06 01:18:53.214357] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.523 [2024-12-06 01:18:53.222392] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.523 [2024-12-06 01:18:53.222450] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.523 [2024-12-06 01:18:53.230373] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.523 [2024-12-06 01:18:53.230417] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.783 [2024-12-06 01:18:53.238283] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.783 [2024-12-06 01:18:53.238312] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.783 [2024-12-06 01:18:53.246316] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.783 [2024-12-06 01:18:53.246344] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.783 [2024-12-06 01:18:53.254321] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.783 [2024-12-06 01:18:53.254348] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.783 [2024-12-06 01:18:53.262284] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.783 [2024-12-06 01:18:53.262312] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.783 [2024-12-06 01:18:53.270331] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:41:20.783 [2024-12-06 01:18:53.270360] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.783 [2024-12-06 01:18:53.278322] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.783 [2024-12-06 01:18:53.278350] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.783 [2024-12-06 01:18:53.286320] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.783 [2024-12-06 01:18:53.286348] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.783 [2024-12-06 01:18:53.294319] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.783 [2024-12-06 01:18:53.294347] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.783 [2024-12-06 01:18:53.302297] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.783 [2024-12-06 01:18:53.302325] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.783 [2024-12-06 01:18:53.310309] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.783 [2024-12-06 01:18:53.310337] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.783 [2024-12-06 01:18:53.318304] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.783 [2024-12-06 01:18:53.318333] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.783 [2024-12-06 01:18:53.326299] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.783 [2024-12-06 01:18:53.326327] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.783 [2024-12-06 01:18:53.330458] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:41:20.783 [2024-12-06 01:18:53.334327] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.783 [2024-12-06 01:18:53.334356] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.783 [2024-12-06 01:18:53.342321] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.783 [2024-12-06 01:18:53.342351] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.783 [2024-12-06 01:18:53.350380] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.783 [2024-12-06 01:18:53.350430] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.783 [2024-12-06 01:18:53.358409] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.783 [2024-12-06 01:18:53.358456] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.783 [2024-12-06 01:18:53.366291] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.783 [2024-12-06 01:18:53.366320] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.783 [2024-12-06 01:18:53.374320] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.783 [2024-12-06 01:18:53.374349] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.783 [2024-12-06 01:18:53.382337] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.783 [2024-12-06 01:18:53.382365] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.783 [2024-12-06 01:18:53.390297] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.783 [2024-12-06 01:18:53.390324] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.783 [2024-12-06 01:18:53.398319] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.783 [2024-12-06 01:18:53.398348] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.783 [2024-12-06 01:18:53.406318] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.783 [2024-12-06 01:18:53.406355] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.783 [2024-12-06 01:18:53.414366] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.783 [2024-12-06 01:18:53.414417] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.783 [2024-12-06 01:18:53.422429] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.783 [2024-12-06 01:18:53.422482] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.783 [2024-12-06 01:18:53.430391] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.783 [2024-12-06 01:18:53.430444] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.783 [2024-12-06 01:18:53.438437] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.783 [2024-12-06 01:18:53.438488] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.783 [2024-12-06 01:18:53.446323] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.783 [2024-12-06 01:18:53.446355] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.783 [2024-12-06 01:18:53.454297] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.783 [2024-12-06 01:18:53.454326] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.784 [2024-12-06 01:18:53.462309] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.784 [2024-12-06 01:18:53.462348] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.784 [2024-12-06 01:18:53.470309] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.784 [2024-12-06 01:18:53.470379] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.784 [2024-12-06 01:18:53.478320] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.784 [2024-12-06 01:18:53.478348] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.784 [2024-12-06 01:18:53.486322] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.784 [2024-12-06 01:18:53.486351] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:21.065 [2024-12-06 01:18:53.494305] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:21.065 [2024-12-06 01:18:53.494334] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:21.065 [2024-12-06 01:18:53.502314] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:21.065 [2024-12-06 01:18:53.502344] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:21.065 [2024-12-06 01:18:53.510326] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:21.065 [2024-12-06 01:18:53.510354] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:21.065 [2024-12-06 01:18:53.518283] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:21.065 [2024-12-06 01:18:53.518310] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:21.065 [2024-12-06 01:18:53.526320] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:21.065 [2024-12-06 01:18:53.526348] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:21.065 [2024-12-06 01:18:53.534317] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:21.065 [2024-12-06 01:18:53.534345] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:21.065 [2024-12-06 01:18:53.542292] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:21.065 [2024-12-06 01:18:53.542319] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:21.065 [2024-12-06 01:18:53.550319] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:21.065 [2024-12-06 01:18:53.550348] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:21.065 [2024-12-06 01:18:53.558364] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:21.065 [2024-12-06 01:18:53.558424] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:21.065 [2024-12-06 01:18:53.566439] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:21.065 [2024-12-06 01:18:53.566492] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:21.065 [2024-12-06 01:18:53.574444] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:21.065 [2024-12-06 01:18:53.574500] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:21.065 [2024-12-06 01:18:53.582328] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:21.065 [2024-12-06 01:18:53.582370] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:21.065 [2024-12-06 01:18:53.590324] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:21.065 [2024-12-06 01:18:53.590353] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:21.065 [2024-12-06 01:18:53.598305] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:21.065 [2024-12-06 01:18:53.598333] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:21.065 [2024-12-06 01:18:53.606302] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:21.065 [2024-12-06 01:18:53.606329] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:21.065 [2024-12-06 01:18:53.614325] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:21.065 [2024-12-06 01:18:53.614353] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:21.065 [2024-12-06 01:18:53.622308] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:21.065 [2024-12-06 01:18:53.622335] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:21.065 [2024-12-06 01:18:53.630304] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:21.065 [2024-12-06 01:18:53.630331] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:21.065 [2024-12-06 01:18:53.638315] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:21.065 [2024-12-06 01:18:53.638343] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:21.065 [2024-12-06 01:18:53.646291] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:21.065 [2024-12-06 01:18:53.646318] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:21.065 [2024-12-06 01:18:53.654304] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:21.065 [2024-12-06 01:18:53.654332] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:21.065 [2024-12-06 01:18:53.662319] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:21.065 [2024-12-06 01:18:53.662347] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:21.065 [2024-12-06 01:18:53.670323] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:21.065 [2024-12-06 01:18:53.670353] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:21.065 [2024-12-06 01:18:53.678338] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:21.065 [2024-12-06 01:18:53.678371] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:21.065 [2024-12-06 01:18:53.686296] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:21.065 [2024-12-06 01:18:53.686326] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:21.065 [2024-12-06 01:18:53.694312] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:21.065 [2024-12-06 01:18:53.694343] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:21.065 [2024-12-06 01:18:53.702311] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:21.065 [2024-12-06 01:18:53.702342] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:21.065 [2024-12-06 01:18:53.710296] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:21.065 [2024-12-06 01:18:53.710334] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:21.065 [2024-12-06 01:18:53.718341] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:21.065 [2024-12-06 01:18:53.718371] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:21.065 [2024-12-06 01:18:53.726310] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:21.065 [2024-12-06 01:18:53.726338] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:21.065 [2024-12-06 01:18:53.734316] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:21.065 [2024-12-06 01:18:53.734344] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:21.065 [2024-12-06 01:18:53.742306] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:21.065 [2024-12-06 01:18:53.742334] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:21.065 [2024-12-06 01:18:53.750293] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:21.065 [2024-12-06 01:18:53.750322] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:21.065 [2024-12-06 01:18:53.758348] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:21.065 [2024-12-06 01:18:53.758379] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:21.331 [2024-12-06 01:18:53.766325] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:21.331 [2024-12-06 01:18:53.766356] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:21.331 [2024-12-06 01:18:53.774305] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:21.331 [2024-12-06 01:18:53.774334] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:21.331 [2024-12-06 01:18:53.782330] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:21.331 [2024-12-06 01:18:53.782360] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:21.331 [2024-12-06 01:18:53.790307] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:21.331 [2024-12-06 01:18:53.790335] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:21.331 [2024-12-06 01:18:53.798284] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:21.331 [2024-12-06 01:18:53.798312] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:21.331 [2024-12-06 01:18:53.806308] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:21.331 [2024-12-06 01:18:53.806336] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:21.331 [2024-12-06 01:18:53.814315] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:21.331 [2024-12-06 01:18:53.814345] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:21.331 [2024-12-06 01:18:53.822321] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:21.331 [2024-12-06 01:18:53.822350] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:21.331 [2024-12-06 01:18:53.830319] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:21.331 [2024-12-06 01:18:53.830348] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:21.331 [2024-12-06 01:18:53.838319] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:21.331 [2024-12-06 01:18:53.838346] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:21.331 [2024-12-06 01:18:53.846338] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:21.331 [2024-12-06 01:18:53.846366] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:21.331 [2024-12-06 01:18:53.854351] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:21.331 [2024-12-06 01:18:53.854399] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:21.331 [2024-12-06 01:18:53.862310] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:21.331 [2024-12-06 01:18:53.862346] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:21.331 [2024-12-06 01:18:53.870312] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:21.331 [2024-12-06 01:18:53.870342] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:21.331 [2024-12-06 01:18:53.878307] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:21.331 [2024-12-06 01:18:53.878349] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:21.331 [2024-12-06 01:18:53.886312] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:21.331 [2024-12-06 01:18:53.886355] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:21.331 [2024-12-06 01:18:53.894316] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:21.331 [2024-12-06 01:18:53.894345] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:21.331 [2024-12-06 01:18:53.902285] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:21.331 [2024-12-06 01:18:53.902313] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:21.331 [2024-12-06 01:18:53.910315] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:21.331 [2024-12-06 01:18:53.910347] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:21.331 [2024-12-06 01:18:53.918338] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:21.331 [2024-12-06 01:18:53.918369] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:21.331 Running I/O for 5 seconds... 
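The subsystem.c:2130 / nvmf_rpc.c:1520 pairs that keep appearing through this 5-second randrw run are the target rejecting repeated nvmf_subsystem_add_ns RPCs: each attempt pauses the subsystem, spdk_nvmf_subsystem_add_ns_ext refuses the request because NSID 1 already exists, and nvmf_rpc_ns_paused reports the failure. Their steady cadence while I/O stays in flight suggests they are driven deliberately by the test script rather than being a fault. As a rough sketch, a single such attempt against a running target could look like the line below; the Malloc0 bdev name is illustrative and not taken from this job.

# sketch only: re-adding NSID 1 to an existing subsystem yields the same
# "Requested NSID 1 already in use" / "Unable to add namespace" pair
scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 Malloc0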
00:41:21.331 [2024-12-06 01:18:53.933485] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:21.331 [2024-12-06 01:18:53.933541] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:21.331 [2024-12-06 01:18:53.946062] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:21.331 [2024-12-06 01:18:53.946114] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:21.331 [2024-12-06 01:18:53.962504] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:21.331 [2024-12-06 01:18:53.962540] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:21.331 [2024-12-06 01:18:53.977906] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:21.331 [2024-12-06 01:18:53.977943] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:21.331 [2024-12-06 01:18:53.991558] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:21.331 [2024-12-06 01:18:53.991593] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:21.331 [2024-12-06 01:18:54.004920] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:21.331 [2024-12-06 01:18:54.004958] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:21.331 [2024-12-06 01:18:54.019983] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:21.331 [2024-12-06 01:18:54.020020] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:21.331 [2024-12-06 01:18:54.033772] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:21.331 [2024-12-06 01:18:54.033807] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:21.590 [2024-12-06 01:18:54.047667] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:21.590 [2024-12-06 01:18:54.047702] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:21.590 [2024-12-06 01:18:54.061668] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:21.590 [2024-12-06 01:18:54.061701] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:21.590 [2024-12-06 01:18:54.076841] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:21.590 [2024-12-06 01:18:54.076892] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:21.590 [2024-12-06 01:18:54.091120] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:21.590 [2024-12-06 01:18:54.091154] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:21.590 [2024-12-06 01:18:54.110316] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:21.590 [2024-12-06 01:18:54.110352] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:21.590 [2024-12-06 01:18:54.122788] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:21.590 [2024-12-06 01:18:54.122851] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:21.590 [2024-12-06 01:18:54.142163] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:21.590 
[2024-12-06 01:18:54.142198] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:21.590 [2024-12-06 01:18:54.153758] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:21.590 [2024-12-06 01:18:54.153791] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:21.590 [2024-12-06 01:18:54.170059] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:21.590 [2024-12-06 01:18:54.170110] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:21.590 [2024-12-06 01:18:54.184934] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:21.590 [2024-12-06 01:18:54.184970] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:21.590 [2024-12-06 01:18:54.199131] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:21.590 [2024-12-06 01:18:54.199181] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:21.590 [2024-12-06 01:18:54.216709] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:21.590 [2024-12-06 01:18:54.216762] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:21.590 [2024-12-06 01:18:54.229518] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:21.590 [2024-12-06 01:18:54.229569] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:21.590 [2024-12-06 01:18:54.245201] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:21.590 [2024-12-06 01:18:54.245234] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:21.590 [2024-12-06 01:18:54.259832] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:21.590 [2024-12-06 01:18:54.259881] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:21.590 [2024-12-06 01:18:54.274603] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:21.590 [2024-12-06 01:18:54.274637] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:21.590 [2024-12-06 01:18:54.289412] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:21.590 [2024-12-06 01:18:54.289446] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:21.849 [2024-12-06 01:18:54.303717] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:21.849 [2024-12-06 01:18:54.303752] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:21.849 [2024-12-06 01:18:54.317746] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:21.849 [2024-12-06 01:18:54.317779] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:21.849 [2024-12-06 01:18:54.332343] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:21.849 [2024-12-06 01:18:54.332377] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:21.849 [2024-12-06 01:18:54.346287] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:21.849 [2024-12-06 01:18:54.346320] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:21.849 [2024-12-06 01:18:54.361570] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:21.849 [2024-12-06 01:18:54.361603] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:21.849 [2024-12-06 01:18:54.375919] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:21.849 [2024-12-06 01:18:54.375955] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:21.849 [2024-12-06 01:18:54.390017] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:21.849 [2024-12-06 01:18:54.390052] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:21.849 [2024-12-06 01:18:54.404207] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:21.849 [2024-12-06 01:18:54.404240] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:21.849 [2024-12-06 01:18:54.418247] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:21.849 [2024-12-06 01:18:54.418281] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:21.849 [2024-12-06 01:18:54.433300] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:21.849 [2024-12-06 01:18:54.433334] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:21.849 [2024-12-06 01:18:54.447410] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:21.849 [2024-12-06 01:18:54.447444] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:21.849 [2024-12-06 01:18:54.462196] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:21.849 [2024-12-06 01:18:54.462244] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:21.849 [2024-12-06 01:18:54.476428] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:21.849 [2024-12-06 01:18:54.476463] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:21.849 [2024-12-06 01:18:54.491006] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:21.849 [2024-12-06 01:18:54.491041] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:21.849 [2024-12-06 01:18:54.509376] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:21.849 [2024-12-06 01:18:54.509412] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:21.849 [2024-12-06 01:18:54.520977] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:21.849 [2024-12-06 01:18:54.521015] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:21.849 [2024-12-06 01:18:54.536496] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:21.849 [2024-12-06 01:18:54.536529] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:21.849 [2024-12-06 01:18:54.550697] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:21.849 [2024-12-06 01:18:54.550732] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:22.108 [2024-12-06 01:18:54.565378] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:22.108 [2024-12-06 01:18:54.565412] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:22.108 [2024-12-06 01:18:54.579856] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:22.108 [2024-12-06 01:18:54.579892] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:22.108 [2024-12-06 01:18:54.595360] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:22.108 [2024-12-06 01:18:54.595394] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:22.108 [2024-12-06 01:18:54.609970] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:22.108 [2024-12-06 01:18:54.610007] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:22.108 [2024-12-06 01:18:54.624357] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:22.108 [2024-12-06 01:18:54.624391] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:22.108 [2024-12-06 01:18:54.638294] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:22.108 [2024-12-06 01:18:54.638338] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:22.108 [2024-12-06 01:18:54.652331] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:22.108 [2024-12-06 01:18:54.652365] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:22.108 [2024-12-06 01:18:54.665868] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:22.108 [2024-12-06 01:18:54.665913] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:22.108 [2024-12-06 01:18:54.680852] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:22.108 [2024-12-06 01:18:54.680899] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:22.108 [2024-12-06 01:18:54.695790] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:22.108 [2024-12-06 01:18:54.695860] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:22.108 [2024-12-06 01:18:54.711040] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:22.108 [2024-12-06 01:18:54.711073] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:22.108 [2024-12-06 01:18:54.726184] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:22.108 [2024-12-06 01:18:54.726224] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:22.108 [2024-12-06 01:18:54.741398] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:22.108 [2024-12-06 01:18:54.741438] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:22.108 [2024-12-06 01:18:54.755599] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:22.108 [2024-12-06 01:18:54.755638] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:22.108 [2024-12-06 01:18:54.768989] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:22.108 [2024-12-06 01:18:54.769023] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:22.108 [2024-12-06 01:18:54.785974] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:22.108 [2024-12-06 01:18:54.786008] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:22.108 [2024-12-06 01:18:54.801780] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:22.108 [2024-12-06 01:18:54.801830] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:22.367 [2024-12-06 01:18:54.816956] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:22.367 [2024-12-06 01:18:54.816992] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:22.367 [2024-12-06 01:18:54.832435] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:22.367 [2024-12-06 01:18:54.832474] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:22.367 [2024-12-06 01:18:54.849296] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:22.367 [2024-12-06 01:18:54.849336] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:22.367 [2024-12-06 01:18:54.864945] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:22.367 [2024-12-06 01:18:54.865002] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:22.367 [2024-12-06 01:18:54.880309] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:22.367 [2024-12-06 01:18:54.880347] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:22.367 [2024-12-06 01:18:54.895629] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:22.367 [2024-12-06 01:18:54.895668] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:22.367 [2024-12-06 01:18:54.909625] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:22.367 [2024-12-06 01:18:54.909665] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:22.367 [2024-12-06 01:18:54.926143] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:22.367 [2024-12-06 01:18:54.926199] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:22.367 8667.00 IOPS, 67.71 MiB/s [2024-12-06T00:18:55.076Z] [2024-12-06 01:18:54.941320] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:22.367 [2024-12-06 01:18:54.941360] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:22.367 [2024-12-06 01:18:54.957506] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:22.367 [2024-12-06 01:18:54.957546] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:22.367 [2024-12-06 01:18:54.972832] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:22.367 [2024-12-06 01:18:54.972883] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:22.367 [2024-12-06 01:18:54.987063] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:22.367 [2024-12-06 01:18:54.987110] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:22.367 [2024-12-06 01:18:55.005758] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:41:22.367 [2024-12-06 01:18:55.005798] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:22.367 [2024-12-06 01:18:55.019300] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:22.367 [2024-12-06 01:18:55.019339] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:22.367 [2024-12-06 01:18:55.040057] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:22.367 [2024-12-06 01:18:55.040107] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:22.367 [2024-12-06 01:18:55.054293] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:22.367 [2024-12-06 01:18:55.054333] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:22.367 [2024-12-06 01:18:55.070672] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:22.367 [2024-12-06 01:18:55.070713] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:22.625 [2024-12-06 01:18:55.085327] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:22.625 [2024-12-06 01:18:55.085367] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:22.625 [2024-12-06 01:18:55.100247] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:22.625 [2024-12-06 01:18:55.100287] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:22.625 [2024-12-06 01:18:55.115587] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:22.625 [2024-12-06 01:18:55.115627] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:22.625 [2024-12-06 01:18:55.130479] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:22.625 [2024-12-06 01:18:55.130518] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:22.625 [2024-12-06 01:18:55.146189] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:22.625 [2024-12-06 01:18:55.146229] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:22.625 [2024-12-06 01:18:55.160583] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:22.625 [2024-12-06 01:18:55.160623] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:22.625 [2024-12-06 01:18:55.176391] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:22.625 [2024-12-06 01:18:55.176431] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:22.625 [2024-12-06 01:18:55.192487] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:22.625 [2024-12-06 01:18:55.192527] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:22.625 [2024-12-06 01:18:55.208230] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:22.625 [2024-12-06 01:18:55.208269] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:22.625 [2024-12-06 01:18:55.224472] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:22.625 [2024-12-06 01:18:55.224525] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:22.625 [2024-12-06 01:18:55.239448] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:22.625 [2024-12-06 01:18:55.239488] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:22.625 [2024-12-06 01:18:55.254480] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:22.625 [2024-12-06 01:18:55.254520] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:22.625 [2024-12-06 01:18:55.269634] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:22.625 [2024-12-06 01:18:55.269674] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:22.625 [2024-12-06 01:18:55.284769] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:22.625 [2024-12-06 01:18:55.284810] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:22.625 [2024-12-06 01:18:55.299726] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:22.625 [2024-12-06 01:18:55.299765] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:22.625 [2024-12-06 01:18:55.314649] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:22.625 [2024-12-06 01:18:55.314688] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:22.626 [2024-12-06 01:18:55.330287] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:22.626 [2024-12-06 01:18:55.330328] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:22.884 [2024-12-06 01:18:55.346347] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:22.884 [2024-12-06 01:18:55.346387] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:22.884 [2024-12-06 01:18:55.361750] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:22.884 [2024-12-06 01:18:55.361790] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:22.884 [2024-12-06 01:18:55.376588] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:22.884 [2024-12-06 01:18:55.376627] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:22.884 [2024-12-06 01:18:55.391828] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:22.884 [2024-12-06 01:18:55.391881] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:22.884 [2024-12-06 01:18:55.406964] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:22.884 [2024-12-06 01:18:55.406997] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:22.884 [2024-12-06 01:18:55.423214] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:22.884 [2024-12-06 01:18:55.423254] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:22.884 [2024-12-06 01:18:55.442402] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:22.884 [2024-12-06 01:18:55.442444] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:22.884 [2024-12-06 01:18:55.455756] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:22.884 [2024-12-06 01:18:55.455796] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:22.884 [2024-12-06 01:18:55.473581] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:22.884 [2024-12-06 01:18:55.473621] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:22.884 [2024-12-06 01:18:55.489039] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:22.884 [2024-12-06 01:18:55.489111] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:22.884 [2024-12-06 01:18:55.504741] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:22.884 [2024-12-06 01:18:55.504780] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:22.884 [2024-12-06 01:18:55.519634] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:22.884 [2024-12-06 01:18:55.519674] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:22.884 [2024-12-06 01:18:55.533624] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:22.884 [2024-12-06 01:18:55.533663] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:22.884 [2024-12-06 01:18:55.550699] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:22.884 [2024-12-06 01:18:55.550739] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:22.884 [2024-12-06 01:18:55.566134] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:22.884 [2024-12-06 01:18:55.566178] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:22.884 [2024-12-06 01:18:55.581773] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:22.884 [2024-12-06 01:18:55.581812] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:23.143 [2024-12-06 01:18:55.597416] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:23.143 [2024-12-06 01:18:55.597456] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:23.143 [2024-12-06 01:18:55.612868] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:23.143 [2024-12-06 01:18:55.612902] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:23.143 [2024-12-06 01:18:55.626525] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:23.143 [2024-12-06 01:18:55.626565] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:23.143 [2024-12-06 01:18:55.642632] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:23.143 [2024-12-06 01:18:55.642671] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:23.143 [2024-12-06 01:18:55.657872] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:23.143 [2024-12-06 01:18:55.657905] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:23.143 [2024-12-06 01:18:55.673399] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:23.143 [2024-12-06 01:18:55.673440] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:23.143 [2024-12-06 01:18:55.689318] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:23.143 [2024-12-06 01:18:55.689358] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:23.143 [2024-12-06 01:18:55.704883] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:23.143 [2024-12-06 01:18:55.704917] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:23.143 [2024-12-06 01:18:55.720440] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:23.143 [2024-12-06 01:18:55.720479] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:23.143 [2024-12-06 01:18:55.735582] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:23.143 [2024-12-06 01:18:55.735621] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:23.143 [2024-12-06 01:18:55.750861] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:23.143 [2024-12-06 01:18:55.750896] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:23.143 [2024-12-06 01:18:55.766338] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:23.143 [2024-12-06 01:18:55.766377] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:23.143 [2024-12-06 01:18:55.781778] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:23.143 [2024-12-06 01:18:55.781831] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:23.143 [2024-12-06 01:18:55.797428] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:23.143 [2024-12-06 01:18:55.797468] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:23.143 [2024-12-06 01:18:55.812797] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:23.143 [2024-12-06 01:18:55.812850] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:23.143 [2024-12-06 01:18:55.828068] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:23.143 [2024-12-06 01:18:55.828120] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:23.143 [2024-12-06 01:18:55.842979] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:23.143 [2024-12-06 01:18:55.843013] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:23.401 [2024-12-06 01:18:55.858522] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:23.401 [2024-12-06 01:18:55.858562] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:23.401 [2024-12-06 01:18:55.873993] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:23.401 [2024-12-06 01:18:55.874028] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:23.401 [2024-12-06 01:18:55.889503] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:23.401 [2024-12-06 01:18:55.889542] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:23.401 [2024-12-06 01:18:55.904759] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:23.401 [2024-12-06 01:18:55.904799] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:23.401 [2024-12-06 01:18:55.920322] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:23.401 [2024-12-06 01:18:55.920361] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:23.401 8478.50 IOPS, 66.24 MiB/s [2024-12-06T00:18:56.110Z] [2024-12-06 01:18:55.935597] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:23.401 [2024-12-06 01:18:55.935637] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:23.401 [2024-12-06 01:18:55.951146] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:23.401 [2024-12-06 01:18:55.951186] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:23.401 [2024-12-06 01:18:55.966470] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:23.401 [2024-12-06 01:18:55.966510] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:23.401 [2024-12-06 01:18:55.981484] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:23.401 [2024-12-06 01:18:55.981524] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:23.401 [2024-12-06 01:18:55.997227] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:23.401 [2024-12-06 01:18:55.997267] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:23.401 [2024-12-06 01:18:56.013205] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:23.401 [2024-12-06 01:18:56.013245] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:23.401 [2024-12-06 01:18:56.028506] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:23.401 [2024-12-06 01:18:56.028545] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:23.401 [2024-12-06 01:18:56.043608] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:23.401 [2024-12-06 01:18:56.043648] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:23.401 [2024-12-06 01:18:56.058676] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:23.401 [2024-12-06 01:18:56.058715] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:23.401 [2024-12-06 01:18:56.074455] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:23.401 [2024-12-06 01:18:56.074496] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:23.401 [2024-12-06 01:18:56.089313] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:23.401 [2024-12-06 01:18:56.089353] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:23.401 [2024-12-06 01:18:56.105173] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:23.401 [2024-12-06 01:18:56.105213] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:23.660 [2024-12-06 01:18:56.121704] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:23.660 [2024-12-06 01:18:56.121744] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:23.660 [2024-12-06 
01:18:56.137227] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:23.660 [2024-12-06 01:18:56.137266] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:23.660 [2024-12-06 01:18:56.152881] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:23.660 [2024-12-06 01:18:56.152914] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:23.660 [2024-12-06 01:18:56.169026] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:23.660 [2024-12-06 01:18:56.169059] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:23.660 [2024-12-06 01:18:56.183935] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:23.660 [2024-12-06 01:18:56.183968] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:23.660 [2024-12-06 01:18:56.198864] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:23.660 [2024-12-06 01:18:56.198897] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:23.660 [2024-12-06 01:18:56.213940] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:23.660 [2024-12-06 01:18:56.213974] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:23.660 [2024-12-06 01:18:56.229770] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:23.660 [2024-12-06 01:18:56.229809] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:23.660 [2024-12-06 01:18:56.244753] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:23.660 [2024-12-06 01:18:56.244792] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:23.660 [2024-12-06 01:18:56.260663] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:23.660 [2024-12-06 01:18:56.260703] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:23.660 [2024-12-06 01:18:56.275578] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:23.660 [2024-12-06 01:18:56.275616] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:23.660 [2024-12-06 01:18:56.290318] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:23.660 [2024-12-06 01:18:56.290357] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:23.660 [2024-12-06 01:18:56.305293] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:23.660 [2024-12-06 01:18:56.305333] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:23.660 [2024-12-06 01:18:56.321234] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:23.660 [2024-12-06 01:18:56.321291] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:23.660 [2024-12-06 01:18:56.336119] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:23.660 [2024-12-06 01:18:56.336159] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:23.660 [2024-12-06 01:18:56.350665] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:23.660 [2024-12-06 01:18:56.350704] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:23.660 [2024-12-06 01:18:56.366665] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:23.660 [2024-12-06 01:18:56.366705] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:23.918 [2024-12-06 01:18:56.381754] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:23.919 [2024-12-06 01:18:56.381804] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:23.919 [2024-12-06 01:18:56.397367] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:23.919 [2024-12-06 01:18:56.397406] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:23.919 [2024-12-06 01:18:56.412893] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:23.919 [2024-12-06 01:18:56.412927] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:23.919 [2024-12-06 01:18:56.428808] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:23.919 [2024-12-06 01:18:56.428875] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:23.919 [2024-12-06 01:18:56.443802] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:23.919 [2024-12-06 01:18:56.443867] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:23.919 [2024-12-06 01:18:56.459386] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:23.919 [2024-12-06 01:18:56.459427] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:23.919 [2024-12-06 01:18:56.474217] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:23.919 [2024-12-06 01:18:56.474256] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:23.919 [2024-12-06 01:18:56.489749] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:23.919 [2024-12-06 01:18:56.489788] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:23.919 [2024-12-06 01:18:56.503920] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:23.919 [2024-12-06 01:18:56.503954] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:23.919 [2024-12-06 01:18:56.518535] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:23.919 [2024-12-06 01:18:56.518575] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:23.919 [2024-12-06 01:18:56.533569] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:23.919 [2024-12-06 01:18:56.533609] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:23.919 [2024-12-06 01:18:56.548611] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:23.919 [2024-12-06 01:18:56.548651] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:23.919 [2024-12-06 01:18:56.564520] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:23.919 [2024-12-06 01:18:56.564559] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:23.919 [2024-12-06 01:18:56.580033] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:23.919 [2024-12-06 01:18:56.580067] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:23.919 [2024-12-06 01:18:56.594922] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:23.919 [2024-12-06 01:18:56.594955] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:23.919 [2024-12-06 01:18:56.610172] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:23.919 [2024-12-06 01:18:56.610211] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:23.919 [2024-12-06 01:18:56.625358] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:23.919 [2024-12-06 01:18:56.625397] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.177 [2024-12-06 01:18:56.640377] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.177 [2024-12-06 01:18:56.640418] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.177 [2024-12-06 01:18:56.655458] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.177 [2024-12-06 01:18:56.655498] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.177 [2024-12-06 01:18:56.670264] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.177 [2024-12-06 01:18:56.670391] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.177 [2024-12-06 01:18:56.685204] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.177 [2024-12-06 01:18:56.685243] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.177 [2024-12-06 01:18:56.699550] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.177 [2024-12-06 01:18:56.699589] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.177 [2024-12-06 01:18:56.714033] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.177 [2024-12-06 01:18:56.714066] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.177 [2024-12-06 01:18:56.729296] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.177 [2024-12-06 01:18:56.729335] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.177 [2024-12-06 01:18:56.745102] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.177 [2024-12-06 01:18:56.745157] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.177 [2024-12-06 01:18:56.759918] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.177 [2024-12-06 01:18:56.759951] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.177 [2024-12-06 01:18:56.774756] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.177 [2024-12-06 01:18:56.774794] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.177 [2024-12-06 01:18:56.789202] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.177 [2024-12-06 01:18:56.789241] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.177 [2024-12-06 01:18:56.805264] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.177 [2024-12-06 01:18:56.805303] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.177 [2024-12-06 01:18:56.820168] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.177 [2024-12-06 01:18:56.820209] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.177 [2024-12-06 01:18:56.834359] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.177 [2024-12-06 01:18:56.834399] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.177 [2024-12-06 01:18:56.850907] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.177 [2024-12-06 01:18:56.850943] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.177 [2024-12-06 01:18:56.866979] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.177 [2024-12-06 01:18:56.867013] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.435 [2024-12-06 01:18:56.887115] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.435 [2024-12-06 01:18:56.887156] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.435 [2024-12-06 01:18:56.900403] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.435 [2024-12-06 01:18:56.900443] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.435 [2024-12-06 01:18:56.917500] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.435 [2024-12-06 01:18:56.917539] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.435 [2024-12-06 01:18:56.932591] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.435 [2024-12-06 01:18:56.932630] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.435 8426.00 IOPS, 65.83 MiB/s [2024-12-06T00:18:57.144Z] [2024-12-06 01:18:56.948334] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.435 [2024-12-06 01:18:56.948374] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.435 [2024-12-06 01:18:56.964340] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.435 [2024-12-06 01:18:56.964390] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.435 [2024-12-06 01:18:56.979916] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.435 [2024-12-06 01:18:56.979950] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.435 [2024-12-06 01:18:56.995227] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.435 [2024-12-06 01:18:56.995266] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.435 [2024-12-06 01:18:57.010230] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.435 [2024-12-06 01:18:57.010270] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.435 [2024-12-06 
01:18:57.025795] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.435 [2024-12-06 01:18:57.025859] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.435 [2024-12-06 01:18:57.040934] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.435 [2024-12-06 01:18:57.040968] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.435 [2024-12-06 01:18:57.056591] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.435 [2024-12-06 01:18:57.056632] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.435 [2024-12-06 01:18:57.071873] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.435 [2024-12-06 01:18:57.071907] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.435 [2024-12-06 01:18:57.087050] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.435 [2024-12-06 01:18:57.087083] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.435 [2024-12-06 01:18:57.102690] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.435 [2024-12-06 01:18:57.102729] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.435 [2024-12-06 01:18:57.118224] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.435 [2024-12-06 01:18:57.118263] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.435 [2024-12-06 01:18:57.133384] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.435 [2024-12-06 01:18:57.133423] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.695 [2024-12-06 01:18:57.148719] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.695 [2024-12-06 01:18:57.148758] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.695 [2024-12-06 01:18:57.163910] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.695 [2024-12-06 01:18:57.163944] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.695 [2024-12-06 01:18:57.178517] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.695 [2024-12-06 01:18:57.178559] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.695 [2024-12-06 01:18:57.193055] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.695 [2024-12-06 01:18:57.193089] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.695 [2024-12-06 01:18:57.209173] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.695 [2024-12-06 01:18:57.209213] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.695 [2024-12-06 01:18:57.223087] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.695 [2024-12-06 01:18:57.223142] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.695 [2024-12-06 01:18:57.238565] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.695 [2024-12-06 01:18:57.238604] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.695 [2024-12-06 01:18:57.254185] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.695 [2024-12-06 01:18:57.254226] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.695 [2024-12-06 01:18:57.268998] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.695 [2024-12-06 01:18:57.269032] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.695 [2024-12-06 01:18:57.284660] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.695 [2024-12-06 01:18:57.284699] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.695 [2024-12-06 01:18:57.299766] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.695 [2024-12-06 01:18:57.299805] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.695 [2024-12-06 01:18:57.314618] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.695 [2024-12-06 01:18:57.314658] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.695 [2024-12-06 01:18:57.330964] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.695 [2024-12-06 01:18:57.330998] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.695 [2024-12-06 01:18:57.345839] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.695 [2024-12-06 01:18:57.345891] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.695 [2024-12-06 01:18:57.360834] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.695 [2024-12-06 01:18:57.360905] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.695 [2024-12-06 01:18:57.376646] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.695 [2024-12-06 01:18:57.376685] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.695 [2024-12-06 01:18:57.390997] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.695 [2024-12-06 01:18:57.391031] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.955 [2024-12-06 01:18:57.406994] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.955 [2024-12-06 01:18:57.407043] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.955 [2024-12-06 01:18:57.422487] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.955 [2024-12-06 01:18:57.422527] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.955 [2024-12-06 01:18:57.437826] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.955 [2024-12-06 01:18:57.437878] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.955 [2024-12-06 01:18:57.452649] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.955 [2024-12-06 01:18:57.452688] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.955 [2024-12-06 01:18:57.467824] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.955 [2024-12-06 01:18:57.467881] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.955 [2024-12-06 01:18:57.483204] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.955 [2024-12-06 01:18:57.483243] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.955 [2024-12-06 01:18:57.498439] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.955 [2024-12-06 01:18:57.498479] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.955 [2024-12-06 01:18:57.513933] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.955 [2024-12-06 01:18:57.513967] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.955 [2024-12-06 01:18:57.528876] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.955 [2024-12-06 01:18:57.528911] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.955 [2024-12-06 01:18:57.544331] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.955 [2024-12-06 01:18:57.544371] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.955 [2024-12-06 01:18:57.559366] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.955 [2024-12-06 01:18:57.559406] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.955 [2024-12-06 01:18:57.574341] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.955 [2024-12-06 01:18:57.574382] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.955 [2024-12-06 01:18:57.589024] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.955 [2024-12-06 01:18:57.589059] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.955 [2024-12-06 01:18:57.605003] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.955 [2024-12-06 01:18:57.605038] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.955 [2024-12-06 01:18:57.620298] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.955 [2024-12-06 01:18:57.620338] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.955 [2024-12-06 01:18:57.637046] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.955 [2024-12-06 01:18:57.637081] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.955 [2024-12-06 01:18:57.652828] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.955 [2024-12-06 01:18:57.652888] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:25.216 [2024-12-06 01:18:57.668528] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:25.216 [2024-12-06 01:18:57.668568] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:25.216 [2024-12-06 01:18:57.683842] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:25.216 [2024-12-06 01:18:57.683891] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:25.216 [2024-12-06 01:18:57.699666] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:25.216 [2024-12-06 01:18:57.699706] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:25.216 [2024-12-06 01:18:57.714916] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:25.216 [2024-12-06 01:18:57.714951] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:25.216 [2024-12-06 01:18:57.730272] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:25.216 [2024-12-06 01:18:57.730312] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:25.216 [2024-12-06 01:18:57.745756] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:25.216 [2024-12-06 01:18:57.745796] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:25.216 [2024-12-06 01:18:57.760513] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:25.216 [2024-12-06 01:18:57.760552] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:25.216 [2024-12-06 01:18:57.776201] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:25.216 [2024-12-06 01:18:57.776241] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:25.216 [2024-12-06 01:18:57.791890] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:25.216 [2024-12-06 01:18:57.791927] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:25.216 [2024-12-06 01:18:57.807364] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:25.216 [2024-12-06 01:18:57.807404] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:25.216 [2024-12-06 01:18:57.822958] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:25.216 [2024-12-06 01:18:57.822992] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:25.216 [2024-12-06 01:18:57.837993] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:25.216 [2024-12-06 01:18:57.838027] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:25.216 [2024-12-06 01:18:57.853663] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:25.216 [2024-12-06 01:18:57.853703] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:25.216 [2024-12-06 01:18:57.868680] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:25.216 [2024-12-06 01:18:57.868720] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:25.216 [2024-12-06 01:18:57.883954] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:25.216 [2024-12-06 01:18:57.883988] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:25.216 [2024-12-06 01:18:57.898672] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:25.216 [2024-12-06 01:18:57.898713] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:25.216 [2024-12-06 01:18:57.914114] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:25.216 [2024-12-06 01:18:57.914157] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:25.476 [2024-12-06 01:18:57.929997] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:25.476 [2024-12-06 01:18:57.930033] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:25.476 8383.50 IOPS, 65.50 MiB/s [2024-12-06T00:18:58.185Z] [2024-12-06 01:18:57.945617] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:25.476 [2024-12-06 01:18:57.945658] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:25.476 [2024-12-06 01:18:57.961060] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:25.476 [2024-12-06 01:18:57.961123] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:25.476 [2024-12-06 01:18:57.977120] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:25.476 [2024-12-06 01:18:57.977153] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:25.476 [2024-12-06 01:18:57.992292] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:25.476 [2024-12-06 01:18:57.992334] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:25.476 [2024-12-06 01:18:58.007677] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:25.476 [2024-12-06 01:18:58.007717] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:25.476 [2024-12-06 01:18:58.022995] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:25.476 [2024-12-06 01:18:58.023029] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:25.476 [2024-12-06 01:18:58.038328] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:25.477 [2024-12-06 01:18:58.038368] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:25.477 [2024-12-06 01:18:58.053470] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:25.477 [2024-12-06 01:18:58.053511] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:25.477 [2024-12-06 01:18:58.069513] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:25.477 [2024-12-06 01:18:58.069553] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:25.477 [2024-12-06 01:18:58.085077] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:25.477 [2024-12-06 01:18:58.085129] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:25.477 [2024-12-06 01:18:58.100458] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:25.477 [2024-12-06 01:18:58.100498] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:25.477 [2024-12-06 01:18:58.115594] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:25.477 [2024-12-06 01:18:58.115644] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:25.477 [2024-12-06 01:18:58.130375] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:41:25.477 [2024-12-06 01:18:58.130415] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:25.477 [2024-12-06 01:18:58.145289] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:25.477 [2024-12-06 01:18:58.145330] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:25.477 [2024-12-06 01:18:58.160969] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:25.477 [2024-12-06 01:18:58.161003] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:25.477 [2024-12-06 01:18:58.176085] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:25.477 [2024-12-06 01:18:58.176139] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:25.735 [2024-12-06 01:18:58.191283] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:25.735 [2024-12-06 01:18:58.191324] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:25.735 [2024-12-06 01:18:58.207535] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:25.735 [2024-12-06 01:18:58.207575] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:25.735 [2024-12-06 01:18:58.222933] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:25.735 [2024-12-06 01:18:58.222967] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:25.735 [2024-12-06 01:18:58.238150] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:25.735 [2024-12-06 01:18:58.238205] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:25.735 [2024-12-06 01:18:58.254213] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:25.735 [2024-12-06 01:18:58.254253] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:25.735 [2024-12-06 01:18:58.269224] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:25.735 [2024-12-06 01:18:58.269264] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:25.735 [2024-12-06 01:18:58.284236] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:25.735 [2024-12-06 01:18:58.284275] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:25.735 [2024-12-06 01:18:58.299316] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:25.735 [2024-12-06 01:18:58.299356] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:25.735 [2024-12-06 01:18:58.314289] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:25.735 [2024-12-06 01:18:58.314329] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:25.735 [2024-12-06 01:18:58.329071] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:25.735 [2024-12-06 01:18:58.329124] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:25.735 [2024-12-06 01:18:58.343972] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:25.735 [2024-12-06 01:18:58.344006] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:25.735 [2024-12-06 01:18:58.360030] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:25.735 [2024-12-06 01:18:58.360064] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:25.735 [2024-12-06 01:18:58.374958] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:25.735 [2024-12-06 01:18:58.374993] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:25.735 [2024-12-06 01:18:58.390249] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:25.735 [2024-12-06 01:18:58.390289] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:25.735 [2024-12-06 01:18:58.405052] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:25.735 [2024-12-06 01:18:58.405095] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:25.735 [2024-12-06 01:18:58.419319] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:25.735 [2024-12-06 01:18:58.419358] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:25.736 [2024-12-06 01:18:58.434720] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:25.736 [2024-12-06 01:18:58.434759] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:25.994 [2024-12-06 01:18:58.450516] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:25.994 [2024-12-06 01:18:58.450557] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:25.994 [2024-12-06 01:18:58.465569] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:25.994 [2024-12-06 01:18:58.465608] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:25.994 [2024-12-06 01:18:58.480915] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:25.994 [2024-12-06 01:18:58.480951] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:25.994 [2024-12-06 01:18:58.496607] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:25.994 [2024-12-06 01:18:58.496647] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:25.994 [2024-12-06 01:18:58.511177] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:25.994 [2024-12-06 01:18:58.511216] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:25.994 [2024-12-06 01:18:58.530237] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:25.994 [2024-12-06 01:18:58.530276] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:25.994 [2024-12-06 01:18:58.544533] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:25.994 [2024-12-06 01:18:58.544572] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:25.994 [2024-12-06 01:18:58.561188] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:25.994 [2024-12-06 01:18:58.561227] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:25.994 [2024-12-06 01:18:58.575886] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:25.994 [2024-12-06 01:18:58.575920] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:25.994 [2024-12-06 01:18:58.591649] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:25.994 [2024-12-06 01:18:58.591689] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:25.994 [2024-12-06 01:18:58.607069] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:25.994 [2024-12-06 01:18:58.607121] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:25.994 [2024-12-06 01:18:58.622618] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:25.994 [2024-12-06 01:18:58.622657] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:25.994 [2024-12-06 01:18:58.638360] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:25.994 [2024-12-06 01:18:58.638400] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:25.994 [2024-12-06 01:18:58.653857] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:25.994 [2024-12-06 01:18:58.653891] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:25.994 [2024-12-06 01:18:58.669176] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:25.994 [2024-12-06 01:18:58.669216] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:25.994 [2024-12-06 01:18:58.684274] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:25.994 [2024-12-06 01:18:58.684314] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:25.994 [2024-12-06 01:18:58.698711] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:25.994 [2024-12-06 01:18:58.698761] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:26.253 [2024-12-06 01:18:58.714395] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:26.253 [2024-12-06 01:18:58.714436] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:26.253 [2024-12-06 01:18:58.729774] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:26.253 [2024-12-06 01:18:58.729827] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:26.253 [2024-12-06 01:18:58.744865] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:26.253 [2024-12-06 01:18:58.744899] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:26.253 [2024-12-06 01:18:58.759410] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:26.253 [2024-12-06 01:18:58.759450] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:26.253 [2024-12-06 01:18:58.774704] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:26.253 [2024-12-06 01:18:58.774743] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:26.253 [2024-12-06 01:18:58.790552] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:26.253 [2024-12-06 01:18:58.790591] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:26.253 [2024-12-06 01:18:58.806513] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:41:26.253 [2024-12-06 01:18:58.806553] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:41:26.253 [2024-12-06 01:18:58.823394] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:41:26.253 [2024-12-06 01:18:58.823433] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:41:26.253 [2024-12-06 01:18:58.838569] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:41:26.253 [2024-12-06 01:18:58.838610] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:41:26.253 [2024-12-06 01:18:58.854276] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:41:26.253 [2024-12-06 01:18:58.854317] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:41:26.253 [2024-12-06 01:18:58.869891] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:41:26.253 [2024-12-06 01:18:58.869925] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:41:26.253 [2024-12-06 01:18:58.885709] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:41:26.253 [2024-12-06 01:18:58.885748] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:41:26.253 [2024-12-06 01:18:58.901335] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:41:26.253 [2024-12-06 01:18:58.901374] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:41:26.253 [2024-12-06 01:18:58.916668] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:41:26.253 [2024-12-06 01:18:58.916707] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:41:26.253 [2024-12-06 01:18:58.932372] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:41:26.253 [2024-12-06 01:18:58.932411] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:41:26.253 8363.40 IOPS, 65.34 MiB/s [2024-12-06T00:18:58.962Z]
[2024-12-06 01:18:58.945605] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:41:26.253 [2024-12-06 01:18:58.945645] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:41:26.253
00:41:26.253                                                                                                Latency(us)
00:41:26.253 [2024-12-06T00:18:58.962Z] Device Information : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:41:26.253 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:41:26.253                             Nvme1n1            :       5.01    8373.45      65.42       0.00       0.00   15262.64    3883.61   24660.95
00:41:26.253 [2024-12-06T00:18:58.962Z] ===================================================================================================================
00:41:26.253 [2024-12-06T00:18:58.962Z] Total              :               8373.45      65.42       0.00       0.00   15262.64    3883.61   24660.95
00:41:26.253 [2024-12-06 01:18:58.950357] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:41:26.253 [2024-12-06 01:18:58.950394] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:41:26.253 [2024-12-06 01:18:58.958330] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:41:26.253 [2024-12-06 01:18:58.958367] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:41:26.512 [2024-12-06 01:18:58.966347]
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:26.512 [2024-12-06 01:18:58.966383] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:26.512 [2024-12-06 01:18:58.974338] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:26.512 [2024-12-06 01:18:58.974373] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:26.512 [2024-12-06 01:18:58.982318] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:26.512 [2024-12-06 01:18:58.982353] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:26.512 [2024-12-06 01:18:58.990341] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:26.512 [2024-12-06 01:18:58.990376] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:26.512 [2024-12-06 01:18:58.998476] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:26.512 [2024-12-06 01:18:58.998538] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:26.512 [2024-12-06 01:18:59.006451] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:26.512 [2024-12-06 01:18:59.006518] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:26.512 [2024-12-06 01:18:59.014445] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:26.512 [2024-12-06 01:18:59.014496] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:26.512 [2024-12-06 01:18:59.022312] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:26.512 [2024-12-06 01:18:59.022346] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:26.512 [2024-12-06 01:18:59.030367] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:26.512 [2024-12-06 01:18:59.030402] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:26.512 [2024-12-06 01:18:59.038342] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:26.512 [2024-12-06 01:18:59.038377] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:26.512 [2024-12-06 01:18:59.046316] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:26.512 [2024-12-06 01:18:59.046350] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:26.512 [2024-12-06 01:18:59.054337] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:26.512 [2024-12-06 01:18:59.054370] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:26.512 [2024-12-06 01:18:59.062349] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:26.512 [2024-12-06 01:18:59.062384] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:26.512 [2024-12-06 01:18:59.070322] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:26.512 [2024-12-06 01:18:59.070356] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:26.512 [2024-12-06 01:18:59.078339] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:26.512 [2024-12-06 01:18:59.078373] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:26.512 [2024-12-06 01:18:59.086319] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:26.512 [2024-12-06 01:18:59.086351] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:26.512 [2024-12-06 01:18:59.094405] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:26.512 [2024-12-06 01:18:59.094453] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:26.512 [2024-12-06 01:18:59.102480] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:26.512 [2024-12-06 01:18:59.102545] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:26.512 [2024-12-06 01:18:59.110468] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:26.512 [2024-12-06 01:18:59.110540] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:26.512 [2024-12-06 01:18:59.118343] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:26.512 [2024-12-06 01:18:59.118378] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:26.512 [2024-12-06 01:18:59.126305] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:26.512 [2024-12-06 01:18:59.126333] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:26.512 [2024-12-06 01:18:59.134318] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:26.512 [2024-12-06 01:18:59.134353] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:26.513 [2024-12-06 01:18:59.142351] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:26.513 [2024-12-06 01:18:59.142386] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:26.513 [2024-12-06 01:18:59.150310] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:26.513 [2024-12-06 01:18:59.150344] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:26.513 [2024-12-06 01:18:59.158336] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:26.513 [2024-12-06 01:18:59.158371] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:26.513 [2024-12-06 01:18:59.166353] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:26.513 [2024-12-06 01:18:59.166387] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:26.513 [2024-12-06 01:18:59.174315] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:26.513 [2024-12-06 01:18:59.174349] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:26.513 [2024-12-06 01:18:59.182390] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:26.513 [2024-12-06 01:18:59.182425] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:26.513 [2024-12-06 01:18:59.190346] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:26.513 [2024-12-06 01:18:59.190380] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:26.513 [2024-12-06 01:18:59.198341] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:26.513 [2024-12-06 01:18:59.198375] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:26.513 [2024-12-06 01:18:59.206366] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:26.513 [2024-12-06 01:18:59.206400] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:26.513 [2024-12-06 01:18:59.214326] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:26.513 [2024-12-06 01:18:59.214359] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:26.771 [2024-12-06 01:18:59.222363] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:26.771 [2024-12-06 01:18:59.222397] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:26.771 [2024-12-06 01:18:59.230336] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:26.771 [2024-12-06 01:18:59.230370] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:26.771 [2024-12-06 01:18:59.238310] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:26.771 [2024-12-06 01:18:59.238352] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:26.771 [2024-12-06 01:18:59.246335] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:26.771 [2024-12-06 01:18:59.246369] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:26.771 [2024-12-06 01:18:59.254348] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:26.771 [2024-12-06 01:18:59.254383] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:26.771 [2024-12-06 01:18:59.262311] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:26.771 [2024-12-06 01:18:59.262344] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:26.771 [2024-12-06 01:18:59.270351] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:26.771 [2024-12-06 01:18:59.270387] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:26.771 [2024-12-06 01:18:59.282471] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:26.771 [2024-12-06 01:18:59.282543] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:26.771 [2024-12-06 01:18:59.290384] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:26.771 [2024-12-06 01:18:59.290418] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:26.771 [2024-12-06 01:18:59.298336] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:26.771 [2024-12-06 01:18:59.298369] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:26.771 [2024-12-06 01:18:59.306335] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:26.771 [2024-12-06 01:18:59.306368] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:26.771 [2024-12-06 01:18:59.314339] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:26.771 [2024-12-06 01:18:59.314372] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:26.771 [2024-12-06 01:18:59.322340] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:26.771 [2024-12-06 01:18:59.322373] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:26.771 [2024-12-06 01:18:59.330317] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:26.771 [2024-12-06 01:18:59.330350] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:26.771 [2024-12-06 01:18:59.338479] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:26.771 [2024-12-06 01:18:59.338543] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:26.771 [2024-12-06 01:18:59.346447] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:26.771 [2024-12-06 01:18:59.346509] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:26.771 [2024-12-06 01:18:59.354509] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:26.771 [2024-12-06 01:18:59.354578] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:26.771 [2024-12-06 01:18:59.362353] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:26.771 [2024-12-06 01:18:59.362387] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:26.772 [2024-12-06 01:18:59.370310] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:26.772 [2024-12-06 01:18:59.370344] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:26.772 [2024-12-06 01:18:59.378348] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:26.772 [2024-12-06 01:18:59.378383] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:26.772 [2024-12-06 01:18:59.386346] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:26.772 [2024-12-06 01:18:59.386380] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:26.772 [2024-12-06 01:18:59.394331] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:26.772 [2024-12-06 01:18:59.394372] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:26.772 [2024-12-06 01:18:59.402340] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:26.772 [2024-12-06 01:18:59.402375] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:26.772 [2024-12-06 01:18:59.410307] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:26.772 [2024-12-06 01:18:59.410341] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:26.772 [2024-12-06 01:18:59.418350] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:26.772 [2024-12-06 01:18:59.418384] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:26.772 [2024-12-06 01:18:59.426337] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:26.772 [2024-12-06 01:18:59.426370] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:26.772 [2024-12-06 01:18:59.434316] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:26.772 [2024-12-06 01:18:59.434349] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:26.772 [2024-12-06 01:18:59.442354] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:26.772 [2024-12-06 01:18:59.442389] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:26.772 [2024-12-06 01:18:59.450342] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:26.772 [2024-12-06 01:18:59.450377] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:26.772 [2024-12-06 01:18:59.458315] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:26.772 [2024-12-06 01:18:59.458350] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:26.772 [2024-12-06 01:18:59.466349] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:26.772 [2024-12-06 01:18:59.466383] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:26.772 [2024-12-06 01:18:59.474311] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:26.772 [2024-12-06 01:18:59.474345] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:27.032 [2024-12-06 01:18:59.482345] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:27.032 [2024-12-06 01:18:59.482380] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:27.032 [2024-12-06 01:18:59.490305] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:27.032 [2024-12-06 01:18:59.490334] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:27.032 [2024-12-06 01:18:59.498364] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:27.032 [2024-12-06 01:18:59.498398] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:27.032 [2024-12-06 01:18:59.506331] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:27.032 [2024-12-06 01:18:59.506362] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:27.032 [2024-12-06 01:18:59.514468] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:27.032 [2024-12-06 01:18:59.514528] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:27.032 [2024-12-06 01:18:59.522346] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:27.032 [2024-12-06 01:18:59.522385] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:27.032 [2024-12-06 01:18:59.530345] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:27.032 [2024-12-06 01:18:59.530379] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:27.032 [2024-12-06 01:18:59.538310] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:27.032 [2024-12-06 01:18:59.538345] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:27.032 [2024-12-06 01:18:59.546346] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:27.032 [2024-12-06 01:18:59.546389] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:27.032 [2024-12-06 01:18:59.554352] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:27.032 [2024-12-06 01:18:59.554386] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:27.032 [2024-12-06 01:18:59.562315] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:27.032 [2024-12-06 01:18:59.562349] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:27.032 [2024-12-06 01:18:59.570335] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:27.032 [2024-12-06 01:18:59.570369] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:27.032 [2024-12-06 01:18:59.578342] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:27.032 [2024-12-06 01:18:59.578376] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:27.032 [2024-12-06 01:18:59.586310] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:27.032 [2024-12-06 01:18:59.586344] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:27.032 [2024-12-06 01:18:59.594358] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:27.032 [2024-12-06 01:18:59.594392] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:27.032 [2024-12-06 01:18:59.602321] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:27.032 [2024-12-06 01:18:59.602355] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:27.032 [2024-12-06 01:18:59.610338] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:27.032 [2024-12-06 01:18:59.610372] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:27.032 [2024-12-06 01:18:59.618347] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:27.032 [2024-12-06 01:18:59.618381] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:27.032 [2024-12-06 01:18:59.626320] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:27.032 [2024-12-06 01:18:59.626352] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:27.032 [2024-12-06 01:18:59.634479] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:27.032 [2024-12-06 01:18:59.634547] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:27.032 [2024-12-06 01:18:59.642340] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:27.032 [2024-12-06 01:18:59.642375] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:27.032 [2024-12-06 01:18:59.650312] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:27.032 [2024-12-06 01:18:59.650345] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:27.032 [2024-12-06 01:18:59.658344] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:27.032 [2024-12-06 01:18:59.658378] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:27.032 [2024-12-06 01:18:59.666313] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:27.032 [2024-12-06 01:18:59.666346] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:27.033 [2024-12-06 01:18:59.674338] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:27.033 [2024-12-06 01:18:59.674372] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:27.033 [2024-12-06 01:18:59.682348] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:27.033 [2024-12-06 01:18:59.682383] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:27.033 [2024-12-06 01:18:59.690342] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:27.033 [2024-12-06 01:18:59.690375] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:27.033 [2024-12-06 01:18:59.698342] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:27.033 [2024-12-06 01:18:59.698387] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:27.033 [2024-12-06 01:18:59.706337] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:27.033 [2024-12-06 01:18:59.706372] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:27.033 [2024-12-06 01:18:59.714314] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:27.033 [2024-12-06 01:18:59.714348] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:27.033 [2024-12-06 01:18:59.722363] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:27.033 [2024-12-06 01:18:59.722408] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:27.033 [2024-12-06 01:18:59.730425] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:27.033 [2024-12-06 01:18:59.730488] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:27.033 [2024-12-06 01:18:59.738361] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:27.033 [2024-12-06 01:18:59.738396] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:27.293 [2024-12-06 01:18:59.746341] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:27.293 [2024-12-06 01:18:59.746375] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:27.293 [2024-12-06 01:18:59.754323] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:27.293 [2024-12-06 01:18:59.754356] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:27.293 [2024-12-06 01:18:59.762344] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:27.293 [2024-12-06 01:18:59.762378] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:27.293 [2024-12-06 01:18:59.770341] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:27.293 [2024-12-06 01:18:59.770375] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:27.293 [2024-12-06 01:18:59.778296] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:27.293 [2024-12-06 01:18:59.778324] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:27.293 [2024-12-06 01:18:59.786357] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:27.293 [2024-12-06 01:18:59.786391] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:27.293 [2024-12-06 01:18:59.794311] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:27.293 [2024-12-06 01:18:59.794344] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:27.293 [2024-12-06 01:18:59.802342] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:27.293 [2024-12-06 01:18:59.802375] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:27.293 [2024-12-06 01:18:59.810347] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:27.293 [2024-12-06 01:18:59.810381] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:27.293 [2024-12-06 01:18:59.818325] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:27.293 [2024-12-06 01:18:59.818358] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:27.293 [2024-12-06 01:18:59.826362] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:27.293 [2024-12-06 01:18:59.826402] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:27.293 [2024-12-06 01:18:59.834363] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:27.293 [2024-12-06 01:18:59.834397] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:27.293 [2024-12-06 01:18:59.842315] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:27.293 [2024-12-06 01:18:59.842349] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:27.293 [2024-12-06 01:18:59.850338] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:27.293 [2024-12-06 01:18:59.850383] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:27.293 [2024-12-06 01:18:59.858336] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:27.293 [2024-12-06 01:18:59.858370] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:27.293 [2024-12-06 01:18:59.866351] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:27.293 [2024-12-06 01:18:59.866385] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:27.293 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (3183326) - No such process 00:41:27.293 01:18:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 3183326 00:41:27.293 01:18:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:41:27.293 01:18:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:27.293 01:18:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:41:27.293 01:18:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:27.293 01:18:59 
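The long run of identical errors above is the expected result of calling nvmf_subsystem_add_ns over and over for an NSID that is still attached to nqn.2016-06.io.spdk:cnode1: only the first attach of NSID 1 can succeed, and every later attempt is rejected with "Requested NSID 1 already in use" until nvmf_subsystem_remove_ns (issued at target/zcopy.sh@52 above) detaches it. A minimal sketch of the same rejection using scripts/rpc.py directly rather than the suite's rpc_cmd wrapper; the bdev name malloc0 comes from this run, the rest is illustrative:

    # NSID 1 can only be attached to a subsystem once.
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1   # first call succeeds
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1   # rejected: NSID 1 already in use
    ./scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1           # detach, freeing NSID 1 again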
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:41:27.293 01:18:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:27.293 01:18:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:41:27.293 delay0 00:41:27.293 01:18:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:27.293 01:18:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:41:27.293 01:18:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:27.293 01:18:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:41:27.293 01:18:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:27.293 01:18:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:41:27.551 [2024-12-06 01:19:00.003838] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:41:35.673 Initializing NVMe Controllers 00:41:35.673 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:41:35.673 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:41:35.673 Initialization complete. Launching workers. 
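The stage set up above pairs a delay bdev with the abort example: delay0 wraps malloc0 with average and p99 latencies of 1000000 microseconds for both reads and writes, is exported as NSID 1 of nqn.2016-06.io.spdk:cnode1, and the abort tool then queues random I/O over TCP and aborts it while it is still held in the delay layer; the statistics reported next count the aborted commands. A condensed sketch of the same stage with scripts/rpc.py in place of rpc_cmd; the values are copied from this run, and a running target plus the malloc0 bdev are assumed:

    # Hold I/O for ~1 s inside delay0 so the abort example has something to cancel.
    ./scripts/rpc.py bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
    ./build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'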
00:41:35.673 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 218, failed: 18715 00:41:35.673 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 18793, failed to submit 140 00:41:35.673 success 18727, unsuccessful 66, failed 0 00:41:35.673 01:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:41:35.673 01:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:41:35.673 01:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:41:35.673 01:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:41:35.673 01:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:41:35.673 01:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:41:35.673 01:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:41:35.673 01:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:41:35.673 rmmod nvme_tcp 00:41:35.673 rmmod nvme_fabrics 00:41:35.673 rmmod nvme_keyring 00:41:35.673 01:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:41:35.673 01:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:41:35.673 01:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:41:35.673 01:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 3181865 ']' 00:41:35.673 01:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 3181865 00:41:35.673 01:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 3181865 ']' 00:41:35.673 01:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 3181865 00:41:35.673 01:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname 00:41:35.673 01:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:41:35.673 01:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3181865 00:41:35.673 01:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:41:35.673 01:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:41:35.673 01:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3181865' 00:41:35.673 killing process with pid 3181865 00:41:35.673 01:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 3181865 00:41:35.673 01:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 3181865 00:41:35.932 01:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:41:35.932 01:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:41:35.932 01:19:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:41:35.932 01:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:41:35.932 01:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:41:35.932 01:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:41:35.932 01:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:41:35.932 01:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:41:35.932 01:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:41:35.932 01:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:35.932 01:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:41:35.932 01:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:38.470 01:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:41:38.470 00:41:38.470 real 0m33.204s 00:41:38.470 user 0m47.859s 00:41:38.470 sys 0m10.156s 00:41:38.470 01:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:41:38.470 01:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:41:38.470 ************************************ 00:41:38.470 END TEST nvmf_zcopy 00:41:38.470 ************************************ 00:41:38.470 01:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:41:38.470 01:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:41:38.470 01:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:41:38.470 01:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:41:38.470 ************************************ 00:41:38.470 START TEST nvmf_nmic 00:41:38.470 ************************************ 00:41:38.470 01:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:41:38.470 * Looking for test storage... 
00:41:38.470 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:41:38.470 01:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:41:38.470 01:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1711 -- # lcov --version 00:41:38.470 01:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:41:38.470 01:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:41:38.470 01:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:41:38.470 01:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:41:38.470 01:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:41:38.470 01:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:41:38.470 01:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:41:38.471 01:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:41:38.471 01:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:41:38.471 01:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:41:38.471 01:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:41:38.471 01:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:41:38.471 01:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:41:38.471 01:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:41:38.471 01:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:41:38.471 01:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:41:38.471 01:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:41:38.471 01:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:41:38.471 01:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:41:38.471 01:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:41:38.471 01:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:41:38.471 01:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:41:38.471 01:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:41:38.471 01:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:41:38.471 01:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:41:38.471 01:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:41:38.471 01:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:41:38.471 01:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:41:38.471 01:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:41:38.471 01:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:41:38.471 01:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:41:38.471 01:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:41:38.471 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:38.471 --rc genhtml_branch_coverage=1 00:41:38.471 --rc genhtml_function_coverage=1 00:41:38.471 --rc genhtml_legend=1 00:41:38.471 --rc geninfo_all_blocks=1 00:41:38.471 --rc geninfo_unexecuted_blocks=1 00:41:38.471 00:41:38.471 ' 00:41:38.471 01:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:41:38.471 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:38.471 --rc genhtml_branch_coverage=1 00:41:38.471 --rc genhtml_function_coverage=1 00:41:38.471 --rc genhtml_legend=1 00:41:38.471 --rc geninfo_all_blocks=1 00:41:38.471 --rc geninfo_unexecuted_blocks=1 00:41:38.471 00:41:38.471 ' 00:41:38.471 01:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:41:38.471 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:38.471 --rc genhtml_branch_coverage=1 00:41:38.471 --rc genhtml_function_coverage=1 00:41:38.471 --rc genhtml_legend=1 00:41:38.471 --rc geninfo_all_blocks=1 00:41:38.471 --rc geninfo_unexecuted_blocks=1 00:41:38.471 00:41:38.471 ' 00:41:38.471 01:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:41:38.471 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:38.471 --rc genhtml_branch_coverage=1 00:41:38.471 --rc genhtml_function_coverage=1 00:41:38.471 --rc genhtml_legend=1 00:41:38.471 --rc geninfo_all_blocks=1 00:41:38.471 --rc geninfo_unexecuted_blocks=1 00:41:38.471 00:41:38.471 ' 00:41:38.471 01:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@9 -- 
# source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:41:38.471 01:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:41:38.471 01:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:41:38.471 01:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:41:38.471 01:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:41:38.471 01:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:41:38.471 01:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:41:38.471 01:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:41:38.471 01:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:41:38.471 01:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:41:38.471 01:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:41:38.471 01:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:41:38.471 01:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:41:38.471 01:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:41:38.471 01:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:41:38.471 01:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:41:38.471 01:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:41:38.471 01:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:41:38.471 01:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:41:38.471 01:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:41:38.471 01:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:41:38.471 01:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:41:38.471 01:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:41:38.471 01:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:38.471 01:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:38.471 01:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:38.471 01:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:41:38.471 01:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:38.471 01:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:41:38.471 01:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:41:38.471 01:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:41:38.471 01:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:41:38.471 01:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:41:38.471 01:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:41:38.471 01:19:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:41:38.471 01:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:41:38.471 01:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:41:38.471 01:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:41:38.471 01:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:41:38.471 01:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:41:38.471 01:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:41:38.471 01:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:41:38.471 01:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:41:38.471 01:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:41:38.471 01:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:41:38.471 01:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:41:38.471 01:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:41:38.471 01:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:38.472 01:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:41:38.472 01:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:38.472 01:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:41:38.472 01:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:41:38.472 01:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:41:38.472 01:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:41:40.379 01:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:41:40.379 01:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:41:40.379 01:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:41:40.379 01:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:41:40.379 01:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:41:40.379 01:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:41:40.379 01:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:41:40.379 01:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:41:40.379 01:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:41:40.379 01:19:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:41:40.379 01:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:41:40.379 01:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:41:40.379 01:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:41:40.379 01:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:41:40.379 01:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:41:40.379 01:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:41:40.379 01:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:41:40.379 01:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:41:40.379 01:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:41:40.379 01:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:41:40.379 01:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:41:40.379 01:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:41:40.379 01:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:41:40.379 01:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:41:40.379 01:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:41:40.379 01:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:41:40.379 01:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:41:40.379 01:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:41:40.379 01:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:41:40.379 01:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:41:40.379 01:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:41:40.379 01:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:41:40.379 01:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:41:40.379 01:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:41:40.379 01:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:41:40.379 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:41:40.379 01:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:41:40.379 01:19:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:41:40.379 01:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:40.379 01:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:40.379 01:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:41:40.379 01:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:41:40.379 01:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:41:40.379 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:41:40.379 01:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:41:40.379 01:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:41:40.379 01:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:40.379 01:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:40.379 01:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:41:40.379 01:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:41:40.379 01:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:41:40.379 01:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:41:40.379 01:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:41:40.379 01:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:40.379 01:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:41:40.379 01:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:40.379 01:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:41:40.379 01:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:41:40.379 01:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:40.379 01:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:41:40.379 Found net devices under 0000:0a:00.0: cvl_0_0 00:41:40.379 01:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:41:40.379 01:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:41:40.379 01:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:40.379 01:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:41:40.379 01:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:40.379 
01:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:41:40.379 01:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:41:40.379 01:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:40.379 01:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:41:40.379 Found net devices under 0000:0a:00.1: cvl_0_1 00:41:40.379 01:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:41:40.379 01:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:41:40.379 01:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:41:40.379 01:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:41:40.379 01:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:41:40.379 01:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:41:40.379 01:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:41:40.379 01:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:41:40.379 01:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:41:40.379 01:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:41:40.379 01:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:41:40.379 01:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:41:40.379 01:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:41:40.379 01:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:41:40.379 01:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:41:40.379 01:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:41:40.379 01:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:41:40.379 01:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:41:40.379 01:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:41:40.379 01:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:41:40.379 01:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:41:40.379 01:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:41:40.379 01:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 
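nvmf_tcp_init splits the two E810 ports found above between network namespaces so one host can play both roles: cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace and becomes the target interface at 10.0.0.2, while cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1. The links are brought up and connectivity is ping-verified immediately below; a condensed sketch of the resulting layout, with interface names and addresses copied from this run:

    # Target NIC isolated in its own namespace; initiator NIC left in the root namespace.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    ping -c 1 10.0.0.2                                                  # root namespace reaches the target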
00:41:40.379 01:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:41:40.379 01:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:41:40.379 01:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:41:40.379 01:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:41:40.379 01:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:41:40.379 01:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:41:40.379 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:41:40.379 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.265 ms 00:41:40.379 00:41:40.379 --- 10.0.0.2 ping statistics --- 00:41:40.380 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:40.380 rtt min/avg/max/mdev = 0.265/0.265/0.265/0.000 ms 00:41:40.380 01:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:41:40.380 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:41:40.380 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.140 ms 00:41:40.380 00:41:40.380 --- 10.0.0.1 ping statistics --- 00:41:40.380 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:40.380 rtt min/avg/max/mdev = 0.140/0.140/0.140/0.000 ms 00:41:40.380 01:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:41:40.380 01:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:41:40.380 01:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:41:40.380 01:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:41:40.380 01:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:41:40.380 01:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:41:40.380 01:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:41:40.380 01:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:41:40.380 01:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:41:40.380 01:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:41:40.380 01:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:41:40.380 01:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:41:40.380 01:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:41:40.380 01:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=3186980 00:41:40.380 01:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
nvmf/common.sh@510 -- # waitforlisten 3186980 00:41:40.380 01:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:41:40.380 01:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 3186980 ']' 00:41:40.380 01:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:41:40.380 01:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:41:40.380 01:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:41:40.380 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:41:40.380 01:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:41:40.380 01:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:41:40.380 [2024-12-06 01:19:12.952451] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:41:40.380 [2024-12-06 01:19:12.954962] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 24.03.0 initialization... 00:41:40.380 [2024-12-06 01:19:12.955051] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:41:40.639 [2024-12-06 01:19:13.096074] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:41:40.639 [2024-12-06 01:19:13.236324] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:41:40.639 [2024-12-06 01:19:13.236417] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:41:40.639 [2024-12-06 01:19:13.236447] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:41:40.639 [2024-12-06 01:19:13.236469] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:41:40.639 [2024-12-06 01:19:13.236492] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:41:40.639 [2024-12-06 01:19:13.239331] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:41:40.639 [2024-12-06 01:19:13.239400] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:41:40.639 [2024-12-06 01:19:13.239479] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:41:40.639 [2024-12-06 01:19:13.239490] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:41:40.899 [2024-12-06 01:19:13.572912] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:41:40.899 [2024-12-06 01:19:13.573937] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:41:40.899 [2024-12-06 01:19:13.575074] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
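Above, the nvmf_tgt binary is launched inside the target namespace with --interrupt-mode and a 0xF core mask, and the suite's waitforlisten helper blocks until the RPC socket at /var/tmp/spdk.sock responds; the remaining thread interrupt-mode notices follow just below. A hedged sketch of an equivalent manual launch and readiness poll (rpc_get_methods is used only as a liveness probe here; the real helper also tracks the pid):

    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF &
    nvmfpid=$!
    # Poll the RPC socket until the target is ready to take configuration calls.
    until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done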
00:41:40.899 [2024-12-06 01:19:13.575837] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:41:40.899 [2024-12-06 01:19:13.576186] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:41:41.467 01:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:41:41.467 01:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:41:41.467 01:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:41:41.467 01:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:41:41.467 01:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:41:41.467 01:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:41:41.467 01:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:41:41.467 01:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:41.467 01:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:41:41.467 [2024-12-06 01:19:13.948582] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:41:41.467 01:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:41.467 01:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:41:41.467 01:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:41.467 01:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:41:41.467 Malloc0 00:41:41.467 01:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:41.467 01:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:41:41.467 01:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:41.467 01:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:41:41.467 01:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:41.467 01:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:41:41.467 01:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:41.467 01:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:41:41.467 01:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:41.467 01:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:41:41.468 
01:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:41.468 01:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:41:41.468 [2024-12-06 01:19:14.056774] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:41:41.468 01:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:41.468 01:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:41:41.468 test case1: single bdev can't be used in multiple subsystems 00:41:41.468 01:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:41:41.468 01:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:41.468 01:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:41:41.468 01:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:41.468 01:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:41:41.468 01:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:41.468 01:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:41:41.468 01:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:41.468 01:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:41:41.468 01:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:41:41.468 01:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:41.468 01:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:41:41.468 [2024-12-06 01:19:14.080459] bdev.c:8515:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:41:41.468 [2024-12-06 01:19:14.080521] subsystem.c:2160:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:41:41.468 [2024-12-06 01:19:14.080547] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:41.468 request: 00:41:41.468 { 00:41:41.468 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:41:41.468 "namespace": { 00:41:41.468 "bdev_name": "Malloc0", 00:41:41.468 "no_auto_visible": false, 00:41:41.468 "hide_metadata": false 00:41:41.468 }, 00:41:41.468 "method": "nvmf_subsystem_add_ns", 00:41:41.468 "req_id": 1 00:41:41.468 } 00:41:41.468 Got JSON-RPC error response 00:41:41.468 response: 00:41:41.468 { 00:41:41.468 "code": -32602, 00:41:41.468 "message": "Invalid parameters" 00:41:41.468 } 00:41:41.468 01:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:41:41.468 01:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:41:41.468 01:19:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:41:41.468 01:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:41:41.468 Adding namespace failed - expected result. 00:41:41.468 01:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:41:41.468 test case2: host connect to nvmf target in multiple paths 00:41:41.468 01:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:41:41.468 01:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:41.468 01:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:41:41.468 [2024-12-06 01:19:14.088534] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:41:41.468 01:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:41.468 01:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:41:41.726 01:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:41:41.986 01:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:41:41.986 01:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:41:41.986 01:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:41:41.986 01:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:41:41.986 01:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 00:41:43.891 01:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:41:43.891 01:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:41:43.891 01:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:41:43.891 01:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:41:43.891 01:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:41:43.891 01:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:41:43.891 01:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:41:43.891 [global] 00:41:43.891 thread=1 00:41:43.891 invalidate=1 
00:41:43.891 rw=write 00:41:43.891 time_based=1 00:41:43.891 runtime=1 00:41:43.891 ioengine=libaio 00:41:43.891 direct=1 00:41:43.891 bs=4096 00:41:43.891 iodepth=1 00:41:43.891 norandommap=0 00:41:43.891 numjobs=1 00:41:43.891 00:41:43.891 verify_dump=1 00:41:43.891 verify_backlog=512 00:41:43.891 verify_state_save=0 00:41:43.891 do_verify=1 00:41:43.891 verify=crc32c-intel 00:41:43.891 [job0] 00:41:43.891 filename=/dev/nvme0n1 00:41:44.149 Could not set queue depth (nvme0n1) 00:41:44.149 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:41:44.149 fio-3.35 00:41:44.149 Starting 1 thread 00:41:45.530 00:41:45.530 job0: (groupid=0, jobs=1): err= 0: pid=3187605: Fri Dec 6 01:19:17 2024 00:41:45.530 read: IOPS=21, BW=86.8KiB/s (88.9kB/s)(88.0KiB/1014msec) 00:41:45.530 slat (nsec): min=10191, max=35981, avg=26115.14, stdev=9536.19 00:41:45.530 clat (usec): min=337, max=41207, avg=39113.01, stdev=8661.30 00:41:45.530 lat (usec): min=356, max=41235, avg=39139.12, stdev=8662.79 00:41:45.530 clat percentiles (usec): 00:41:45.530 | 1.00th=[ 338], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:41:45.530 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:41:45.530 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:41:45.530 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:41:45.530 | 99.99th=[41157] 00:41:45.530 write: IOPS=504, BW=2020KiB/s (2068kB/s)(2048KiB/1014msec); 0 zone resets 00:41:45.530 slat (usec): min=7, max=31955, avg=71.46, stdev=1411.85 00:41:45.530 clat (usec): min=178, max=399, avg=223.71, stdev=31.36 00:41:45.530 lat (usec): min=186, max=32214, avg=295.17, stdev=1413.78 00:41:45.530 clat percentiles (usec): 00:41:45.530 | 1.00th=[ 182], 5.00th=[ 184], 10.00th=[ 188], 20.00th=[ 198], 00:41:45.530 | 30.00th=[ 219], 40.00th=[ 223], 50.00th=[ 225], 60.00th=[ 229], 00:41:45.530 | 70.00th=[ 231], 80.00th=[ 235], 90.00th=[ 243], 95.00th=[ 258], 00:41:45.530 | 99.00th=[ 388], 99.50th=[ 396], 99.90th=[ 400], 99.95th=[ 400], 00:41:45.530 | 99.99th=[ 400] 00:41:45.530 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:41:45.530 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:41:45.530 lat (usec) : 250=88.95%, 500=7.12% 00:41:45.530 lat (msec) : 50=3.93% 00:41:45.530 cpu : usr=0.49%, sys=0.49%, ctx=538, majf=0, minf=1 00:41:45.530 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:45.530 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:45.530 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:45.530 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:45.530 latency : target=0, window=0, percentile=100.00%, depth=1 00:41:45.530 00:41:45.530 Run status group 0 (all jobs): 00:41:45.530 READ: bw=86.8KiB/s (88.9kB/s), 86.8KiB/s-86.8KiB/s (88.9kB/s-88.9kB/s), io=88.0KiB (90.1kB), run=1014-1014msec 00:41:45.530 WRITE: bw=2020KiB/s (2068kB/s), 2020KiB/s-2020KiB/s (2068kB/s-2068kB/s), io=2048KiB (2097kB), run=1014-1014msec 00:41:45.530 00:41:45.530 Disk stats (read/write): 00:41:45.530 nvme0n1: ios=44/512, merge=0/0, ticks=1699/113, in_queue=1812, util=98.90% 00:41:45.530 01:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:41:45.788 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:41:45.788 01:19:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:41:45.788 01:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:41:45.788 01:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:41:45.788 01:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:41:45.788 01:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:41:45.788 01:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:41:45.788 01:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:41:45.788 01:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:41:45.788 01:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:41:45.788 01:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:41:45.788 01:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:41:45.788 01:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:41:45.788 01:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:41:45.788 01:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:41:45.788 01:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:41:45.788 rmmod nvme_tcp 00:41:45.788 rmmod nvme_fabrics 00:41:45.788 rmmod nvme_keyring 00:41:45.788 01:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:41:45.788 01:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:41:45.788 01:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:41:45.788 01:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 3186980 ']' 00:41:45.788 01:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 3186980 00:41:45.788 01:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 3186980 ']' 00:41:45.788 01:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 3186980 00:41:45.788 01:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:41:45.788 01:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:41:45.788 01:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3186980 00:41:45.788 01:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:41:45.788 01:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:41:45.788 01:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@972 -- # 
echo 'killing process with pid 3186980' 00:41:45.788 killing process with pid 3186980 00:41:45.788 01:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 3186980 00:41:45.788 01:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 3186980 00:41:47.163 01:19:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:41:47.163 01:19:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:41:47.163 01:19:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:41:47.163 01:19:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:41:47.163 01:19:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:41:47.163 01:19:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:41:47.163 01:19:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:41:47.163 01:19:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:41:47.163 01:19:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:41:47.163 01:19:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:47.163 01:19:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:41:47.163 01:19:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:49.702 01:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:41:49.702 00:41:49.702 real 0m11.203s 00:41:49.702 user 0m19.820s 00:41:49.702 sys 0m3.469s 00:41:49.702 01:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:41:49.702 01:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:41:49.702 ************************************ 00:41:49.702 END TEST nvmf_nmic 00:41:49.702 ************************************ 00:41:49.702 01:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:41:49.702 01:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:41:49.702 01:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:41:49.702 01:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:41:49.702 ************************************ 00:41:49.702 START TEST nvmf_fio_target 00:41:49.702 ************************************ 00:41:49.702 01:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:41:49.702 * Looking for test storage... 
00:41:49.702 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:41:49.702 01:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:41:49.702 01:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lcov --version 00:41:49.702 01:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:41:49.702 01:19:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:41:49.702 01:19:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:41:49.702 01:19:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:41:49.702 01:19:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:41:49.702 01:19:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:41:49.702 01:19:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:41:49.702 01:19:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:41:49.702 01:19:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:41:49.702 01:19:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:41:49.702 01:19:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:41:49.702 01:19:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:41:49.702 01:19:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:41:49.702 01:19:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:41:49.702 01:19:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:41:49.702 01:19:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:41:49.702 01:19:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:41:49.702 01:19:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:41:49.702 01:19:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:41:49.702 01:19:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:41:49.702 01:19:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:41:49.702 01:19:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:41:49.702 01:19:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:41:49.702 01:19:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:41:49.702 01:19:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:41:49.702 01:19:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:41:49.702 01:19:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:41:49.702 01:19:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:41:49.702 01:19:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:41:49.702 01:19:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:41:49.702 01:19:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:41:49.703 01:19:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:41:49.703 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:49.703 --rc genhtml_branch_coverage=1 00:41:49.703 --rc genhtml_function_coverage=1 00:41:49.703 --rc genhtml_legend=1 00:41:49.703 --rc geninfo_all_blocks=1 00:41:49.703 --rc geninfo_unexecuted_blocks=1 00:41:49.703 00:41:49.703 ' 00:41:49.703 01:19:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:41:49.703 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:49.703 --rc genhtml_branch_coverage=1 00:41:49.703 --rc genhtml_function_coverage=1 00:41:49.703 --rc genhtml_legend=1 00:41:49.703 --rc geninfo_all_blocks=1 00:41:49.703 --rc geninfo_unexecuted_blocks=1 00:41:49.703 00:41:49.703 ' 00:41:49.703 01:19:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:41:49.703 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:49.703 --rc genhtml_branch_coverage=1 00:41:49.703 --rc genhtml_function_coverage=1 00:41:49.703 --rc genhtml_legend=1 00:41:49.703 --rc geninfo_all_blocks=1 00:41:49.703 --rc geninfo_unexecuted_blocks=1 00:41:49.703 00:41:49.703 ' 00:41:49.703 01:19:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:41:49.703 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:49.703 --rc genhtml_branch_coverage=1 00:41:49.703 --rc genhtml_function_coverage=1 00:41:49.703 --rc genhtml_legend=1 00:41:49.703 --rc geninfo_all_blocks=1 00:41:49.703 --rc geninfo_unexecuted_blocks=1 00:41:49.703 
00:41:49.703 ' 00:41:49.703 01:19:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:41:49.703 01:19:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:41:49.703 01:19:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:41:49.703 01:19:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:41:49.703 01:19:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:41:49.703 01:19:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:41:49.703 01:19:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:41:49.703 01:19:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:41:49.703 01:19:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:41:49.703 01:19:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:41:49.703 01:19:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:41:49.703 01:19:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:41:49.703 01:19:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:41:49.703 01:19:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:41:49.703 01:19:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:41:49.703 01:19:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:41:49.703 01:19:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:41:49.703 01:19:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:41:49.703 01:19:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:41:49.703 01:19:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:41:49.703 01:19:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:41:49.703 01:19:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:41:49.703 01:19:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:41:49.703 01:19:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:49.703 01:19:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:49.703 01:19:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:49.703 01:19:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:41:49.703 01:19:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:49.703 01:19:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:41:49.703 01:19:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:41:49.703 01:19:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:41:49.703 01:19:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:41:49.703 01:19:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:41:49.703 01:19:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:41:49.703 01:19:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:41:49.703 01:19:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:41:49.703 01:19:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:41:49.703 01:19:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:41:49.703 01:19:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:41:49.703 01:19:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:41:49.703 01:19:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:41:49.703 01:19:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:41:49.703 01:19:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:41:49.703 01:19:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:41:49.703 01:19:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:41:49.703 01:19:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:41:49.703 01:19:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:41:49.703 01:19:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:41:49.703 01:19:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:49.703 01:19:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:41:49.703 01:19:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:49.703 01:19:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:41:49.703 01:19:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:41:49.703 01:19:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:41:49.703 01:19:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:41:51.606 01:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:41:51.606 01:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:41:51.606 01:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:41:51.606 01:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:41:51.606 01:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:41:51.606 01:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:41:51.606 01:19:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:41:51.606 01:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:41:51.606 01:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:41:51.606 01:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:41:51.606 01:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:41:51.606 01:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:41:51.606 01:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:41:51.606 01:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:41:51.606 01:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:41:51.606 01:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:41:51.606 01:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:41:51.606 01:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:41:51.606 01:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:41:51.606 01:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:41:51.606 01:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:41:51.606 01:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:41:51.606 01:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:41:51.606 01:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:41:51.606 01:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:41:51.606 01:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:41:51.606 01:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:41:51.606 01:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:41:51.606 01:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:41:51.606 01:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:41:51.606 01:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:41:51.606 01:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:41:51.606 01:19:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:41:51.606 01:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:41:51.606 01:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:41:51.606 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:41:51.606 01:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:41:51.606 01:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:41:51.606 01:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:51.606 01:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:51.606 01:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:41:51.606 01:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:41:51.606 01:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:41:51.606 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:41:51.606 01:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:41:51.606 01:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:41:51.606 01:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:51.606 01:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:51.606 01:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:41:51.606 01:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:41:51.606 01:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:41:51.606 01:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:41:51.606 01:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:41:51.606 01:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:51.606 01:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:41:51.606 01:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:51.606 01:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:41:51.606 01:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:41:51.606 01:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:51.606 01:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:41:51.606 Found net 
devices under 0000:0a:00.0: cvl_0_0 00:41:51.606 01:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:41:51.606 01:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:41:51.606 01:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:51.606 01:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:41:51.606 01:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:51.606 01:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:41:51.606 01:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:41:51.606 01:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:51.606 01:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:41:51.606 Found net devices under 0000:0a:00.1: cvl_0_1 00:41:51.606 01:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:41:51.606 01:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:41:51.606 01:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:41:51.606 01:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:41:51.606 01:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:41:51.606 01:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:41:51.606 01:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:41:51.606 01:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:41:51.606 01:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:41:51.606 01:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:41:51.606 01:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:41:51.606 01:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:41:51.606 01:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:41:51.606 01:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:41:51.606 01:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:41:51.606 01:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:41:51.606 01:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip 
netns exec "$NVMF_TARGET_NAMESPACE") 00:41:51.606 01:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:41:51.606 01:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:41:51.606 01:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:41:51.606 01:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:41:51.607 01:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:41:51.607 01:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:41:51.607 01:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:41:51.607 01:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:41:51.607 01:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:41:51.607 01:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:41:51.607 01:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:41:51.607 01:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:41:51.607 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:41:51.607 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.242 ms 00:41:51.607 00:41:51.607 --- 10.0.0.2 ping statistics --- 00:41:51.607 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:51.607 rtt min/avg/max/mdev = 0.242/0.242/0.242/0.000 ms 00:41:51.607 01:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:41:51.607 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:41:51.607 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.152 ms 00:41:51.607 00:41:51.607 --- 10.0.0.1 ping statistics --- 00:41:51.607 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:51.607 rtt min/avg/max/mdev = 0.152/0.152/0.152/0.000 ms 00:41:51.607 01:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:41:51.607 01:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:41:51.607 01:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:41:51.607 01:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:41:51.607 01:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:41:51.607 01:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:41:51.607 01:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:41:51.607 01:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:41:51.607 01:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:41:51.607 01:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:41:51.607 01:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:41:51.607 01:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:41:51.607 01:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:41:51.607 01:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=3189816 00:41:51.607 01:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:41:51.607 01:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 3189816 00:41:51.607 01:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 3189816 ']' 00:41:51.607 01:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:41:51.607 01:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:41:51.607 01:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:41:51.607 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:41:51.607 01:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:41:51.607 01:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:41:51.867 [2024-12-06 01:19:24.315521] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:41:51.867 [2024-12-06 01:19:24.318738] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 24.03.0 initialization... 00:41:51.867 [2024-12-06 01:19:24.318866] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:41:51.867 [2024-12-06 01:19:24.492416] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:41:52.127 [2024-12-06 01:19:24.639033] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:41:52.127 [2024-12-06 01:19:24.639126] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:41:52.127 [2024-12-06 01:19:24.639155] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:41:52.127 [2024-12-06 01:19:24.639177] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:41:52.127 [2024-12-06 01:19:24.639199] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:41:52.127 [2024-12-06 01:19:24.642099] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:41:52.127 [2024-12-06 01:19:24.642160] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:41:52.127 [2024-12-06 01:19:24.642207] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:41:52.127 [2024-12-06 01:19:24.642219] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:41:52.386 [2024-12-06 01:19:25.036443] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:41:52.386 [2024-12-06 01:19:25.037591] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:41:52.386 [2024-12-06 01:19:25.038879] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:41:52.386 [2024-12-06 01:19:25.039810] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:41:52.386 [2024-12-06 01:19:25.040162] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:41:52.948 01:19:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:41:52.948 01:19:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:41:52.949 01:19:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:41:52.949 01:19:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:41:52.949 01:19:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:41:52.949 01:19:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:41:52.949 01:19:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:41:52.949 [2024-12-06 01:19:25.655361] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:41:53.206 01:19:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:41:53.491 01:19:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:41:53.491 01:19:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:41:53.773 01:19:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:41:53.773 01:19:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:41:54.341 01:19:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:41:54.341 01:19:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:41:54.599 01:19:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:41:54.599 01:19:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:41:54.857 01:19:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:41:55.424 01:19:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:41:55.424 01:19:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:41:55.682 01:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:41:55.682 01:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:41:55.948 01:19:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:41:55.948 01:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:41:56.205 01:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:41:56.463 01:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:41:56.463 01:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:41:56.720 01:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:41:56.720 01:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:41:57.285 01:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:41:57.542 [2024-12-06 01:19:30.007527] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:41:57.542 01:19:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:41:57.800 01:19:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:41:58.059 01:19:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:41:58.319 01:19:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:41:58.319 01:19:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:41:58.319 01:19:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:41:58.319 01:19:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:41:58.319 01:19:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:41:58.319 01:19:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:42:00.221 01:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:42:00.221 01:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o 
NAME,SERIAL 00:42:00.221 01:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:42:00.221 01:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:42:00.221 01:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:42:00.221 01:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 00:42:00.221 01:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:42:00.221 [global] 00:42:00.221 thread=1 00:42:00.221 invalidate=1 00:42:00.221 rw=write 00:42:00.221 time_based=1 00:42:00.221 runtime=1 00:42:00.221 ioengine=libaio 00:42:00.221 direct=1 00:42:00.221 bs=4096 00:42:00.221 iodepth=1 00:42:00.221 norandommap=0 00:42:00.221 numjobs=1 00:42:00.221 00:42:00.221 verify_dump=1 00:42:00.221 verify_backlog=512 00:42:00.221 verify_state_save=0 00:42:00.221 do_verify=1 00:42:00.221 verify=crc32c-intel 00:42:00.221 [job0] 00:42:00.221 filename=/dev/nvme0n1 00:42:00.221 [job1] 00:42:00.221 filename=/dev/nvme0n2 00:42:00.221 [job2] 00:42:00.221 filename=/dev/nvme0n3 00:42:00.221 [job3] 00:42:00.221 filename=/dev/nvme0n4 00:42:00.221 Could not set queue depth (nvme0n1) 00:42:00.221 Could not set queue depth (nvme0n2) 00:42:00.222 Could not set queue depth (nvme0n3) 00:42:00.222 Could not set queue depth (nvme0n4) 00:42:00.479 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:42:00.480 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:42:00.480 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:42:00.480 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:42:00.480 fio-3.35 00:42:00.480 Starting 4 threads 00:42:01.857 00:42:01.857 job0: (groupid=0, jobs=1): err= 0: pid=3191015: Fri Dec 6 01:19:34 2024 00:42:01.857 read: IOPS=1343, BW=5375KiB/s (5504kB/s)(5380KiB/1001msec) 00:42:01.857 slat (nsec): min=4660, max=37432, avg=11151.92, stdev=5683.52 00:42:01.857 clat (usec): min=264, max=1483, avg=375.32, stdev=86.40 00:42:01.857 lat (usec): min=274, max=1493, avg=386.47, stdev=87.65 00:42:01.857 clat percentiles (usec): 00:42:01.857 | 1.00th=[ 277], 5.00th=[ 289], 10.00th=[ 293], 20.00th=[ 306], 00:42:01.857 | 30.00th=[ 318], 40.00th=[ 338], 50.00th=[ 367], 60.00th=[ 383], 00:42:01.857 | 70.00th=[ 396], 80.00th=[ 429], 90.00th=[ 474], 95.00th=[ 519], 00:42:01.857 | 99.00th=[ 635], 99.50th=[ 709], 99.90th=[ 1156], 99.95th=[ 1483], 00:42:01.857 | 99.99th=[ 1483] 00:42:01.857 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:42:01.857 slat (nsec): min=6788, max=55062, avg=13473.27, stdev=6759.76 00:42:01.857 clat (usec): min=181, max=503, avg=293.28, stdev=61.43 00:42:01.857 lat (usec): min=194, max=511, avg=306.75, stdev=59.12 00:42:01.857 clat percentiles (usec): 00:42:01.857 | 1.00th=[ 200], 5.00th=[ 215], 10.00th=[ 223], 20.00th=[ 235], 00:42:01.857 | 30.00th=[ 243], 40.00th=[ 255], 50.00th=[ 277], 60.00th=[ 322], 00:42:01.857 | 70.00th=[ 343], 80.00th=[ 359], 90.00th=[ 379], 95.00th=[ 383], 00:42:01.857 | 99.00th=[ 412], 
99.50th=[ 433], 99.90th=[ 494], 99.95th=[ 502], 00:42:01.857 | 99.99th=[ 502] 00:42:01.857 bw ( KiB/s): min= 6936, max= 6936, per=38.42%, avg=6936.00, stdev= 0.00, samples=1 00:42:01.857 iops : min= 1734, max= 1734, avg=1734.00, stdev= 0.00, samples=1 00:42:01.857 lat (usec) : 250=19.72%, 500=77.20%, 750=2.92%, 1000=0.10% 00:42:01.857 lat (msec) : 2=0.07% 00:42:01.857 cpu : usr=2.40%, sys=4.50%, ctx=2884, majf=0, minf=1 00:42:01.857 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:42:01.857 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:01.857 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:01.857 issued rwts: total=1345,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:01.857 latency : target=0, window=0, percentile=100.00%, depth=1 00:42:01.857 job1: (groupid=0, jobs=1): err= 0: pid=3191016: Fri Dec 6 01:19:34 2024 00:42:01.857 read: IOPS=1448, BW=5794KiB/s (5933kB/s)(5800KiB/1001msec) 00:42:01.857 slat (nsec): min=4776, max=54176, avg=15174.78, stdev=8676.27 00:42:01.857 clat (usec): min=252, max=3248, avg=390.02, stdev=108.59 00:42:01.857 lat (usec): min=265, max=3254, avg=405.19, stdev=108.78 00:42:01.857 clat percentiles (usec): 00:42:01.857 | 1.00th=[ 265], 5.00th=[ 277], 10.00th=[ 289], 20.00th=[ 322], 00:42:01.857 | 30.00th=[ 334], 40.00th=[ 363], 50.00th=[ 388], 60.00th=[ 396], 00:42:01.857 | 70.00th=[ 408], 80.00th=[ 449], 90.00th=[ 502], 95.00th=[ 537], 00:42:01.857 | 99.00th=[ 619], 99.50th=[ 644], 99.90th=[ 742], 99.95th=[ 3261], 00:42:01.857 | 99.99th=[ 3261] 00:42:01.857 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:42:01.857 slat (nsec): min=6307, max=42263, avg=10854.90, stdev=4859.32 00:42:01.857 clat (usec): min=184, max=550, avg=250.65, stdev=62.73 00:42:01.857 lat (usec): min=191, max=560, avg=261.50, stdev=63.17 00:42:01.858 clat percentiles (usec): 00:42:01.858 | 1.00th=[ 190], 5.00th=[ 194], 10.00th=[ 198], 20.00th=[ 204], 00:42:01.858 | 30.00th=[ 212], 40.00th=[ 223], 50.00th=[ 229], 60.00th=[ 235], 00:42:01.858 | 70.00th=[ 247], 80.00th=[ 281], 90.00th=[ 388], 95.00th=[ 392], 00:42:01.858 | 99.00th=[ 400], 99.50th=[ 408], 99.90th=[ 474], 99.95th=[ 553], 00:42:01.858 | 99.99th=[ 553] 00:42:01.858 bw ( KiB/s): min= 7736, max= 7736, per=42.85%, avg=7736.00, stdev= 0.00, samples=1 00:42:01.858 iops : min= 1934, max= 1934, avg=1934.00, stdev= 0.00, samples=1 00:42:01.858 lat (usec) : 250=37.47%, 500=57.60%, 750=4.89% 00:42:01.858 lat (msec) : 4=0.03% 00:42:01.858 cpu : usr=2.00%, sys=4.00%, ctx=2988, majf=0, minf=1 00:42:01.858 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:42:01.858 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:01.858 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:01.858 issued rwts: total=1450,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:01.858 latency : target=0, window=0, percentile=100.00%, depth=1 00:42:01.858 job2: (groupid=0, jobs=1): err= 0: pid=3191019: Fri Dec 6 01:19:34 2024 00:42:01.858 read: IOPS=70, BW=283KiB/s (290kB/s)(284KiB/1002msec) 00:42:01.858 slat (nsec): min=7283, max=36954, avg=15733.13, stdev=7881.91 00:42:01.858 clat (usec): min=318, max=41954, avg=12265.77, stdev=18304.70 00:42:01.858 lat (usec): min=330, max=41983, avg=12281.50, stdev=18304.60 00:42:01.858 clat percentiles (usec): 00:42:01.858 | 1.00th=[ 318], 5.00th=[ 351], 10.00th=[ 363], 20.00th=[ 367], 00:42:01.858 | 30.00th=[ 388], 40.00th=[ 408], 50.00th=[ 
420], 60.00th=[ 482], 00:42:01.858 | 70.00th=[10159], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:42:01.858 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:42:01.858 | 99.99th=[42206] 00:42:01.858 write: IOPS=510, BW=2044KiB/s (2093kB/s)(2048KiB/1002msec); 0 zone resets 00:42:01.858 slat (nsec): min=7702, max=31209, avg=9847.53, stdev=3446.72 00:42:01.858 clat (usec): min=208, max=899, avg=239.95, stdev=39.14 00:42:01.858 lat (usec): min=216, max=908, avg=249.79, stdev=39.42 00:42:01.858 clat percentiles (usec): 00:42:01.858 | 1.00th=[ 210], 5.00th=[ 215], 10.00th=[ 219], 20.00th=[ 223], 00:42:01.858 | 30.00th=[ 227], 40.00th=[ 231], 50.00th=[ 237], 60.00th=[ 241], 00:42:01.858 | 70.00th=[ 245], 80.00th=[ 251], 90.00th=[ 258], 95.00th=[ 265], 00:42:01.858 | 99.00th=[ 371], 99.50th=[ 457], 99.90th=[ 898], 99.95th=[ 898], 00:42:01.858 | 99.99th=[ 898] 00:42:01.858 bw ( KiB/s): min= 4096, max= 4096, per=22.69%, avg=4096.00, stdev= 0.00, samples=1 00:42:01.858 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:42:01.858 lat (usec) : 250=69.81%, 500=25.04%, 750=0.69%, 1000=0.17% 00:42:01.858 lat (msec) : 2=0.51%, 20=0.34%, 50=3.43% 00:42:01.858 cpu : usr=0.50%, sys=0.70%, ctx=583, majf=0, minf=1 00:42:01.858 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:42:01.858 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:01.858 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:01.858 issued rwts: total=71,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:01.858 latency : target=0, window=0, percentile=100.00%, depth=1 00:42:01.858 job3: (groupid=0, jobs=1): err= 0: pid=3191020: Fri Dec 6 01:19:34 2024 00:42:01.858 read: IOPS=742, BW=2970KiB/s (3041kB/s)(3032KiB/1021msec) 00:42:01.858 slat (nsec): min=6062, max=38210, avg=8636.29, stdev=3913.99 00:42:01.858 clat (usec): min=292, max=41469, avg=909.93, stdev=4653.33 00:42:01.858 lat (usec): min=301, max=41482, avg=918.57, stdev=4655.24 00:42:01.858 clat percentiles (usec): 00:42:01.858 | 1.00th=[ 297], 5.00th=[ 306], 10.00th=[ 314], 20.00th=[ 322], 00:42:01.858 | 30.00th=[ 326], 40.00th=[ 338], 50.00th=[ 347], 60.00th=[ 355], 00:42:01.858 | 70.00th=[ 375], 80.00th=[ 400], 90.00th=[ 461], 95.00th=[ 545], 00:42:01.858 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41681], 99.95th=[41681], 00:42:01.858 | 99.99th=[41681] 00:42:01.858 write: IOPS=1002, BW=4012KiB/s (4108kB/s)(4096KiB/1021msec); 0 zone resets 00:42:01.858 slat (nsec): min=7996, max=32114, avg=10664.81, stdev=3798.49 00:42:01.858 clat (usec): min=222, max=537, avg=301.52, stdev=52.47 00:42:01.858 lat (usec): min=231, max=558, avg=312.18, stdev=52.40 00:42:01.858 clat percentiles (usec): 00:42:01.858 | 1.00th=[ 233], 5.00th=[ 241], 10.00th=[ 245], 20.00th=[ 253], 00:42:01.858 | 30.00th=[ 260], 40.00th=[ 269], 50.00th=[ 281], 60.00th=[ 330], 00:42:01.858 | 70.00th=[ 343], 80.00th=[ 355], 90.00th=[ 367], 95.00th=[ 375], 00:42:01.858 | 99.00th=[ 457], 99.50th=[ 478], 99.90th=[ 537], 99.95th=[ 537], 00:42:01.858 | 99.99th=[ 537] 00:42:01.858 bw ( KiB/s): min= 576, max= 7616, per=22.69%, avg=4096.00, stdev=4978.03, samples=2 00:42:01.858 iops : min= 144, max= 1904, avg=1024.00, stdev=1244.51, samples=2 00:42:01.858 lat (usec) : 250=8.70%, 500=88.10%, 750=2.24%, 1000=0.22% 00:42:01.858 lat (msec) : 2=0.11%, 4=0.06%, 50=0.56% 00:42:01.858 cpu : usr=1.18%, sys=2.25%, ctx=1784, majf=0, minf=1 00:42:01.858 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
>=64=0.0% 00:42:01.858 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:01.858 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:01.858 issued rwts: total=758,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:01.858 latency : target=0, window=0, percentile=100.00%, depth=1 00:42:01.858 00:42:01.858 Run status group 0 (all jobs): 00:42:01.858 READ: bw=13.9MiB/s (14.5MB/s), 283KiB/s-5794KiB/s (290kB/s-5933kB/s), io=14.2MiB (14.8MB), run=1001-1021msec 00:42:01.858 WRITE: bw=17.6MiB/s (18.5MB/s), 2044KiB/s-6138KiB/s (2093kB/s-6285kB/s), io=18.0MiB (18.9MB), run=1001-1021msec 00:42:01.858 00:42:01.858 Disk stats (read/write): 00:42:01.858 nvme0n1: ios=1073/1480, merge=0/0, ticks=614/410, in_queue=1024, util=85.67% 00:42:01.858 nvme0n2: ios=1121/1536, merge=0/0, ticks=827/369, in_queue=1196, util=89.93% 00:42:01.858 nvme0n3: ios=106/512, merge=0/0, ticks=780/120, in_queue=900, util=95.20% 00:42:01.858 nvme0n4: ios=809/1024, merge=0/0, ticks=861/295, in_queue=1156, util=94.43% 00:42:01.858 01:19:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:42:01.858 [global] 00:42:01.858 thread=1 00:42:01.858 invalidate=1 00:42:01.858 rw=randwrite 00:42:01.858 time_based=1 00:42:01.858 runtime=1 00:42:01.858 ioengine=libaio 00:42:01.858 direct=1 00:42:01.858 bs=4096 00:42:01.858 iodepth=1 00:42:01.858 norandommap=0 00:42:01.858 numjobs=1 00:42:01.858 00:42:01.858 verify_dump=1 00:42:01.858 verify_backlog=512 00:42:01.858 verify_state_save=0 00:42:01.858 do_verify=1 00:42:01.858 verify=crc32c-intel 00:42:01.858 [job0] 00:42:01.858 filename=/dev/nvme0n1 00:42:01.858 [job1] 00:42:01.858 filename=/dev/nvme0n2 00:42:01.858 [job2] 00:42:01.858 filename=/dev/nvme0n3 00:42:01.858 [job3] 00:42:01.858 filename=/dev/nvme0n4 00:42:01.858 Could not set queue depth (nvme0n1) 00:42:01.858 Could not set queue depth (nvme0n2) 00:42:01.858 Could not set queue depth (nvme0n3) 00:42:01.858 Could not set queue depth (nvme0n4) 00:42:01.858 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:42:01.858 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:42:01.858 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:42:01.858 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:42:01.858 fio-3.35 00:42:01.858 Starting 4 threads 00:42:03.234 00:42:03.234 job0: (groupid=0, jobs=1): err= 0: pid=3191243: Fri Dec 6 01:19:35 2024 00:42:03.234 read: IOPS=1065, BW=4264KiB/s (4366kB/s)(4268KiB/1001msec) 00:42:03.234 slat (nsec): min=4587, max=63874, avg=14979.42, stdev=8077.82 00:42:03.234 clat (usec): min=238, max=41467, avg=455.04, stdev=1528.85 00:42:03.234 lat (usec): min=243, max=41485, avg=470.02, stdev=1529.40 00:42:03.234 clat percentiles (usec): 00:42:03.234 | 1.00th=[ 265], 5.00th=[ 297], 10.00th=[ 318], 20.00th=[ 343], 00:42:03.234 | 30.00th=[ 355], 40.00th=[ 375], 50.00th=[ 383], 60.00th=[ 396], 00:42:03.234 | 70.00th=[ 416], 80.00th=[ 437], 90.00th=[ 478], 95.00th=[ 502], 00:42:03.234 | 99.00th=[ 578], 99.50th=[ 586], 99.90th=[28705], 99.95th=[41681], 00:42:03.234 | 99.99th=[41681] 00:42:03.234 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:42:03.234 slat (nsec): 
min=5579, max=71001, avg=17166.38, stdev=10602.14 00:42:03.234 clat (usec): min=168, max=579, avg=300.35, stdev=66.64 00:42:03.234 lat (usec): min=176, max=593, avg=317.52, stdev=69.20 00:42:03.234 clat percentiles (usec): 00:42:03.234 | 1.00th=[ 176], 5.00th=[ 206], 10.00th=[ 217], 20.00th=[ 245], 00:42:03.234 | 30.00th=[ 269], 40.00th=[ 281], 50.00th=[ 293], 60.00th=[ 306], 00:42:03.234 | 70.00th=[ 326], 80.00th=[ 355], 90.00th=[ 392], 95.00th=[ 424], 00:42:03.234 | 99.00th=[ 486], 99.50th=[ 506], 99.90th=[ 570], 99.95th=[ 578], 00:42:03.234 | 99.99th=[ 578] 00:42:03.234 bw ( KiB/s): min= 4768, max= 4768, per=29.89%, avg=4768.00, stdev= 0.00, samples=1 00:42:03.234 iops : min= 1192, max= 1192, avg=1192.00, stdev= 0.00, samples=1 00:42:03.234 lat (usec) : 250=12.72%, 500=84.75%, 750=2.46% 00:42:03.234 lat (msec) : 50=0.08% 00:42:03.234 cpu : usr=2.40%, sys=4.10%, ctx=2605, majf=0, minf=1 00:42:03.234 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:42:03.234 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:03.234 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:03.234 issued rwts: total=1067,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:03.234 latency : target=0, window=0, percentile=100.00%, depth=1 00:42:03.234 job1: (groupid=0, jobs=1): err= 0: pid=3191247: Fri Dec 6 01:19:35 2024 00:42:03.234 read: IOPS=376, BW=1507KiB/s (1543kB/s)(1548KiB/1027msec) 00:42:03.234 slat (nsec): min=7549, max=37892, avg=17237.60, stdev=3246.09 00:42:03.234 clat (usec): min=303, max=41021, avg=2243.73, stdev=8557.48 00:42:03.234 lat (usec): min=310, max=41033, avg=2260.96, stdev=8556.87 00:42:03.234 clat percentiles (usec): 00:42:03.234 | 1.00th=[ 314], 5.00th=[ 334], 10.00th=[ 338], 20.00th=[ 343], 00:42:03.234 | 30.00th=[ 347], 40.00th=[ 351], 50.00th=[ 355], 60.00th=[ 359], 00:42:03.234 | 70.00th=[ 363], 80.00th=[ 371], 90.00th=[ 396], 95.00th=[ 445], 00:42:03.234 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:42:03.234 | 99.99th=[41157] 00:42:03.234 write: IOPS=498, BW=1994KiB/s (2042kB/s)(2048KiB/1027msec); 0 zone resets 00:42:03.234 slat (nsec): min=8198, max=62608, avg=19985.37, stdev=7932.26 00:42:03.234 clat (usec): min=207, max=610, avg=265.35, stdev=30.03 00:42:03.234 lat (usec): min=216, max=632, avg=285.34, stdev=32.39 00:42:03.234 clat percentiles (usec): 00:42:03.234 | 1.00th=[ 215], 5.00th=[ 229], 10.00th=[ 239], 20.00th=[ 247], 00:42:03.234 | 30.00th=[ 253], 40.00th=[ 258], 50.00th=[ 265], 60.00th=[ 269], 00:42:03.234 | 70.00th=[ 273], 80.00th=[ 281], 90.00th=[ 289], 95.00th=[ 302], 00:42:03.234 | 99.00th=[ 375], 99.50th=[ 416], 99.90th=[ 611], 99.95th=[ 611], 00:42:03.234 | 99.99th=[ 611] 00:42:03.234 bw ( KiB/s): min= 4096, max= 4096, per=25.68%, avg=4096.00, stdev= 0.00, samples=1 00:42:03.234 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:42:03.234 lat (usec) : 250=14.46%, 500=83.43%, 750=0.11% 00:42:03.234 lat (msec) : 50=2.00% 00:42:03.234 cpu : usr=1.56%, sys=1.85%, ctx=901, majf=0, minf=1 00:42:03.234 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:42:03.234 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:03.234 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:03.234 issued rwts: total=387,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:03.234 latency : target=0, window=0, percentile=100.00%, depth=1 00:42:03.234 job2: (groupid=0, jobs=1): err= 0: pid=3191263: Fri 
Dec 6 01:19:35 2024 00:42:03.235 read: IOPS=20, BW=82.2KiB/s (84.2kB/s)(84.0KiB/1022msec) 00:42:03.235 slat (nsec): min=10158, max=36532, avg=18768.52, stdev=7438.99 00:42:03.235 clat (usec): min=40935, max=41021, avg=40974.79, stdev=20.36 00:42:03.235 lat (usec): min=40965, max=41031, avg=40993.56, stdev=14.72 00:42:03.235 clat percentiles (usec): 00:42:03.235 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:42:03.235 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:42:03.235 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:42:03.235 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:42:03.235 | 99.99th=[41157] 00:42:03.235 write: IOPS=500, BW=2004KiB/s (2052kB/s)(2048KiB/1022msec); 0 zone resets 00:42:03.235 slat (nsec): min=7838, max=79358, avg=19716.99, stdev=9629.78 00:42:03.235 clat (usec): min=212, max=1219, avg=288.56, stdev=99.29 00:42:03.235 lat (usec): min=221, max=1241, avg=308.28, stdev=100.58 00:42:03.235 clat percentiles (usec): 00:42:03.235 | 1.00th=[ 223], 5.00th=[ 229], 10.00th=[ 233], 20.00th=[ 241], 00:42:03.235 | 30.00th=[ 247], 40.00th=[ 253], 50.00th=[ 262], 60.00th=[ 265], 00:42:03.235 | 70.00th=[ 277], 80.00th=[ 302], 90.00th=[ 379], 95.00th=[ 457], 00:42:03.235 | 99.00th=[ 668], 99.50th=[ 963], 99.90th=[ 1221], 99.95th=[ 1221], 00:42:03.235 | 99.99th=[ 1221] 00:42:03.235 bw ( KiB/s): min= 4096, max= 4096, per=25.68%, avg=4096.00, stdev= 0.00, samples=1 00:42:03.235 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:42:03.235 lat (usec) : 250=33.40%, 500=59.66%, 750=2.06%, 1000=0.56% 00:42:03.235 lat (msec) : 2=0.38%, 50=3.94% 00:42:03.235 cpu : usr=0.29%, sys=1.08%, ctx=535, majf=0, minf=1 00:42:03.235 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:42:03.235 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:03.235 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:03.235 issued rwts: total=21,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:03.235 latency : target=0, window=0, percentile=100.00%, depth=1 00:42:03.235 job3: (groupid=0, jobs=1): err= 0: pid=3191269: Fri Dec 6 01:19:35 2024 00:42:03.235 read: IOPS=1213, BW=4855KiB/s (4972kB/s)(4860KiB/1001msec) 00:42:03.235 slat (nsec): min=4725, max=69442, avg=17884.87, stdev=9634.18 00:42:03.235 clat (usec): min=255, max=595, avg=398.30, stdev=55.17 00:42:03.235 lat (usec): min=267, max=612, avg=416.19, stdev=57.92 00:42:03.235 clat percentiles (usec): 00:42:03.235 | 1.00th=[ 277], 5.00th=[ 310], 10.00th=[ 343], 20.00th=[ 355], 00:42:03.235 | 30.00th=[ 375], 40.00th=[ 379], 50.00th=[ 388], 60.00th=[ 400], 00:42:03.235 | 70.00th=[ 424], 80.00th=[ 441], 90.00th=[ 474], 95.00th=[ 502], 00:42:03.235 | 99.00th=[ 562], 99.50th=[ 562], 99.90th=[ 570], 99.95th=[ 594], 00:42:03.235 | 99.99th=[ 594] 00:42:03.235 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:42:03.235 slat (nsec): min=6004, max=80409, avg=18585.90, stdev=10965.41 00:42:03.235 clat (usec): min=181, max=569, avg=294.61, stdev=63.83 00:42:03.235 lat (usec): min=188, max=589, avg=313.19, stdev=65.20 00:42:03.235 clat percentiles (usec): 00:42:03.235 | 1.00th=[ 188], 5.00th=[ 196], 10.00th=[ 212], 20.00th=[ 241], 00:42:03.235 | 30.00th=[ 258], 40.00th=[ 273], 50.00th=[ 289], 60.00th=[ 302], 00:42:03.235 | 70.00th=[ 330], 80.00th=[ 355], 90.00th=[ 383], 95.00th=[ 400], 00:42:03.235 | 99.00th=[ 449], 99.50th=[ 469], 99.90th=[ 562], 99.95th=[ 570], 
00:42:03.235 | 99.99th=[ 570] 00:42:03.235 bw ( KiB/s): min= 6528, max= 6528, per=40.92%, avg=6528.00, stdev= 0.00, samples=1 00:42:03.235 iops : min= 1632, max= 1632, avg=1632.00, stdev= 0.00, samples=1 00:42:03.235 lat (usec) : 250=15.09%, 500=82.48%, 750=2.44% 00:42:03.235 cpu : usr=2.90%, sys=4.80%, ctx=2752, majf=0, minf=1 00:42:03.235 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:42:03.235 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:03.235 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:03.235 issued rwts: total=1215,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:03.235 latency : target=0, window=0, percentile=100.00%, depth=1 00:42:03.235 00:42:03.235 Run status group 0 (all jobs): 00:42:03.235 READ: bw=10.2MiB/s (10.7MB/s), 82.2KiB/s-4855KiB/s (84.2kB/s-4972kB/s), io=10.5MiB (11.0MB), run=1001-1027msec 00:42:03.235 WRITE: bw=15.6MiB/s (16.3MB/s), 1994KiB/s-6138KiB/s (2042kB/s-6285kB/s), io=16.0MiB (16.8MB), run=1001-1027msec 00:42:03.235 00:42:03.235 Disk stats (read/write): 00:42:03.235 nvme0n1: ios=1065/1077, merge=0/0, ticks=798/312, in_queue=1110, util=98.10% 00:42:03.235 nvme0n2: ios=421/512, merge=0/0, ticks=1268/135, in_queue=1403, util=98.17% 00:42:03.235 nvme0n3: ios=60/512, merge=0/0, ticks=1130/137, in_queue=1267, util=98.02% 00:42:03.235 nvme0n4: ios=1072/1292, merge=0/0, ticks=1321/378, in_queue=1699, util=99.16% 00:42:03.235 01:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:42:03.235 [global] 00:42:03.235 thread=1 00:42:03.235 invalidate=1 00:42:03.235 rw=write 00:42:03.235 time_based=1 00:42:03.235 runtime=1 00:42:03.235 ioengine=libaio 00:42:03.235 direct=1 00:42:03.235 bs=4096 00:42:03.235 iodepth=128 00:42:03.235 norandommap=0 00:42:03.235 numjobs=1 00:42:03.235 00:42:03.235 verify_dump=1 00:42:03.235 verify_backlog=512 00:42:03.235 verify_state_save=0 00:42:03.235 do_verify=1 00:42:03.235 verify=crc32c-intel 00:42:03.235 [job0] 00:42:03.235 filename=/dev/nvme0n1 00:42:03.235 [job1] 00:42:03.235 filename=/dev/nvme0n2 00:42:03.235 [job2] 00:42:03.235 filename=/dev/nvme0n3 00:42:03.235 [job3] 00:42:03.235 filename=/dev/nvme0n4 00:42:03.235 Could not set queue depth (nvme0n1) 00:42:03.235 Could not set queue depth (nvme0n2) 00:42:03.235 Could not set queue depth (nvme0n3) 00:42:03.235 Could not set queue depth (nvme0n4) 00:42:03.492 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:42:03.492 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:42:03.492 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:42:03.492 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:42:03.492 fio-3.35 00:42:03.492 Starting 4 threads 00:42:04.872 00:42:04.872 job0: (groupid=0, jobs=1): err= 0: pid=3191589: Fri Dec 6 01:19:37 2024 00:42:04.872 read: IOPS=3573, BW=14.0MiB/s (14.6MB/s)(14.0MiB/1003msec) 00:42:04.872 slat (usec): min=2, max=8300, avg=131.92, stdev=730.60 00:42:04.872 clat (usec): min=11490, max=27237, avg=16478.92, stdev=2700.64 00:42:04.872 lat (usec): min=11500, max=27245, avg=16610.84, stdev=2740.49 00:42:04.872 clat percentiles (usec): 00:42:04.872 | 1.00th=[12387], 5.00th=[13173], 10.00th=[14091], 
20.00th=[15008], 00:42:04.872 | 30.00th=[15139], 40.00th=[15401], 50.00th=[15664], 60.00th=[16057], 00:42:04.872 | 70.00th=[16712], 80.00th=[17433], 90.00th=[20579], 95.00th=[21890], 00:42:04.872 | 99.00th=[25822], 99.50th=[27132], 99.90th=[27132], 99.95th=[27132], 00:42:04.872 | 99.99th=[27132] 00:42:04.872 write: IOPS=4056, BW=15.8MiB/s (16.6MB/s)(15.9MiB/1003msec); 0 zone resets 00:42:04.872 slat (usec): min=3, max=5910, avg=120.48, stdev=568.25 00:42:04.872 clat (usec): min=2873, max=28224, avg=16629.06, stdev=3069.45 00:42:04.872 lat (usec): min=2919, max=28968, avg=16749.54, stdev=3073.02 00:42:04.872 clat percentiles (usec): 00:42:04.872 | 1.00th=[ 7963], 5.00th=[12387], 10.00th=[14353], 20.00th=[14877], 00:42:04.872 | 30.00th=[15139], 40.00th=[15401], 50.00th=[15795], 60.00th=[16319], 00:42:04.872 | 70.00th=[17433], 80.00th=[19268], 90.00th=[20841], 95.00th=[22414], 00:42:04.872 | 99.00th=[27132], 99.50th=[27395], 99.90th=[28181], 99.95th=[28181], 00:42:04.872 | 99.99th=[28181] 00:42:04.872 bw ( KiB/s): min=15152, max=16384, per=24.89%, avg=15768.00, stdev=871.16, samples=2 00:42:04.872 iops : min= 3788, max= 4096, avg=3942.00, stdev=217.79, samples=2 00:42:04.872 lat (msec) : 4=0.21%, 10=0.42%, 20=88.95%, 50=10.43% 00:42:04.872 cpu : usr=5.79%, sys=6.09%, ctx=399, majf=0, minf=1 00:42:04.872 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:42:04.872 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:04.872 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:42:04.872 issued rwts: total=3584,4069,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:04.872 latency : target=0, window=0, percentile=100.00%, depth=128 00:42:04.872 job1: (groupid=0, jobs=1): err= 0: pid=3191596: Fri Dec 6 01:19:37 2024 00:42:04.872 read: IOPS=3566, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1005msec) 00:42:04.872 slat (usec): min=3, max=12595, avg=122.49, stdev=920.49 00:42:04.872 clat (usec): min=5511, max=37298, avg=16477.55, stdev=6379.38 00:42:04.872 lat (usec): min=5541, max=38491, avg=16600.04, stdev=6432.87 00:42:04.872 clat percentiles (usec): 00:42:04.872 | 1.00th=[ 6390], 5.00th=[ 9634], 10.00th=[10814], 20.00th=[11994], 00:42:04.872 | 30.00th=[12518], 40.00th=[13304], 50.00th=[13829], 60.00th=[16188], 00:42:04.872 | 70.00th=[18220], 80.00th=[20841], 90.00th=[26084], 95.00th=[31327], 00:42:04.872 | 99.00th=[35390], 99.50th=[36439], 99.90th=[36439], 99.95th=[37487], 00:42:04.872 | 99.99th=[37487] 00:42:04.872 write: IOPS=4009, BW=15.7MiB/s (16.4MB/s)(15.7MiB/1005msec); 0 zone resets 00:42:04.872 slat (usec): min=4, max=28612, avg=127.83, stdev=915.81 00:42:04.872 clat (usec): min=2841, max=63243, avg=16977.74, stdev=10452.86 00:42:04.872 lat (usec): min=2848, max=63254, avg=17105.57, stdev=10513.04 00:42:04.872 clat percentiles (usec): 00:42:04.872 | 1.00th=[ 4948], 5.00th=[ 8586], 10.00th=[ 8586], 20.00th=[11731], 00:42:04.872 | 30.00th=[13173], 40.00th=[13566], 50.00th=[13829], 60.00th=[14615], 00:42:04.872 | 70.00th=[15008], 80.00th=[18220], 90.00th=[35390], 95.00th=[38536], 00:42:04.872 | 99.00th=[60556], 99.50th=[63177], 99.90th=[63177], 99.95th=[63177], 00:42:04.872 | 99.99th=[63177] 00:42:04.872 bw ( KiB/s): min=10752, max=20472, per=24.65%, avg=15612.00, stdev=6873.08, samples=2 00:42:04.872 iops : min= 2688, max= 5118, avg=3903.00, stdev=1718.27, samples=2 00:42:04.872 lat (msec) : 4=0.26%, 10=11.75%, 20=69.52%, 50=17.22%, 100=1.25% 00:42:04.872 cpu : usr=4.58%, sys=6.87%, ctx=310, majf=0, minf=1 00:42:04.872 IO depths : 1=0.1%, 
2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:42:04.872 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:04.872 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:42:04.872 issued rwts: total=3584,4030,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:04.872 latency : target=0, window=0, percentile=100.00%, depth=128 00:42:04.872 job2: (groupid=0, jobs=1): err= 0: pid=3191597: Fri Dec 6 01:19:37 2024 00:42:04.872 read: IOPS=3534, BW=13.8MiB/s (14.5MB/s)(14.0MiB/1014msec) 00:42:04.872 slat (usec): min=3, max=14204, avg=130.97, stdev=1097.31 00:42:04.872 clat (usec): min=6198, max=31983, avg=16682.56, stdev=4228.74 00:42:04.872 lat (usec): min=6215, max=32011, avg=16813.53, stdev=4322.96 00:42:04.872 clat percentiles (usec): 00:42:04.872 | 1.00th=[11469], 5.00th=[12911], 10.00th=[13435], 20.00th=[14091], 00:42:04.872 | 30.00th=[14484], 40.00th=[14877], 50.00th=[15401], 60.00th=[15926], 00:42:04.872 | 70.00th=[16319], 80.00th=[17433], 90.00th=[25035], 95.00th=[26346], 00:42:04.872 | 99.00th=[30278], 99.50th=[31065], 99.90th=[31851], 99.95th=[31851], 00:42:04.872 | 99.99th=[32113] 00:42:04.872 write: IOPS=3808, BW=14.9MiB/s (15.6MB/s)(15.1MiB/1014msec); 0 zone resets 00:42:04.872 slat (usec): min=4, max=14303, avg=127.99, stdev=926.09 00:42:04.872 clat (usec): min=1152, max=63704, avg=17810.38, stdev=8387.21 00:42:04.872 lat (usec): min=1163, max=63715, avg=17938.37, stdev=8461.70 00:42:04.872 clat percentiles (usec): 00:42:04.872 | 1.00th=[ 6652], 5.00th=[10552], 10.00th=[10945], 20.00th=[14746], 00:42:04.872 | 30.00th=[15139], 40.00th=[15533], 50.00th=[16319], 60.00th=[16581], 00:42:04.872 | 70.00th=[17171], 80.00th=[17695], 90.00th=[23200], 95.00th=[32113], 00:42:04.872 | 99.00th=[57934], 99.50th=[61604], 99.90th=[63701], 99.95th=[63701], 00:42:04.872 | 99.99th=[63701] 00:42:04.872 bw ( KiB/s): min=13496, max=16384, per=23.59%, avg=14940.00, stdev=2042.12, samples=2 00:42:04.872 iops : min= 3374, max= 4096, avg=3735.00, stdev=510.53, samples=2 00:42:04.872 lat (msec) : 2=0.03%, 10=2.28%, 20=81.21%, 50=15.03%, 100=1.45% 00:42:04.872 cpu : usr=4.44%, sys=7.70%, ctx=249, majf=0, minf=1 00:42:04.872 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:42:04.872 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:04.872 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:42:04.872 issued rwts: total=3584,3862,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:04.872 latency : target=0, window=0, percentile=100.00%, depth=128 00:42:04.872 job3: (groupid=0, jobs=1): err= 0: pid=3191598: Fri Dec 6 01:19:37 2024 00:42:04.872 read: IOPS=3770, BW=14.7MiB/s (15.4MB/s)(14.8MiB/1002msec) 00:42:04.872 slat (usec): min=3, max=12461, avg=123.98, stdev=701.39 00:42:04.872 clat (usec): min=1361, max=27634, avg=16243.84, stdev=2579.19 00:42:04.872 lat (usec): min=1376, max=27639, avg=16367.83, stdev=2591.52 00:42:04.872 clat percentiles (usec): 00:42:04.872 | 1.00th=[ 7373], 5.00th=[12780], 10.00th=[13304], 20.00th=[15139], 00:42:04.872 | 30.00th=[15795], 40.00th=[16188], 50.00th=[16450], 60.00th=[16712], 00:42:04.872 | 70.00th=[16909], 80.00th=[17433], 90.00th=[18744], 95.00th=[20055], 00:42:04.872 | 99.00th=[22676], 99.50th=[26346], 99.90th=[27657], 99.95th=[27657], 00:42:04.872 | 99.99th=[27657] 00:42:04.872 write: IOPS=4087, BW=16.0MiB/s (16.7MB/s)(16.0MiB/1002msec); 0 zone resets 00:42:04.872 slat (usec): min=4, max=11876, avg=109.43, stdev=629.55 00:42:04.872 clat (usec): min=818, 
max=43651, avg=15894.10, stdev=2837.18 00:42:04.872 lat (usec): min=825, max=43660, avg=16003.52, stdev=2866.18 00:42:04.872 clat percentiles (usec): 00:42:04.872 | 1.00th=[ 7767], 5.00th=[ 9372], 10.00th=[13042], 20.00th=[14091], 00:42:04.872 | 30.00th=[14877], 40.00th=[15795], 50.00th=[16188], 60.00th=[16909], 00:42:04.872 | 70.00th=[17433], 80.00th=[17957], 90.00th=[18482], 95.00th=[19268], 00:42:04.872 | 99.00th=[21890], 99.50th=[21890], 99.90th=[33817], 99.95th=[33817], 00:42:04.872 | 99.99th=[43779] 00:42:04.872 bw ( KiB/s): min=16384, max=16384, per=25.87%, avg=16384.00, stdev= 0.00, samples=2 00:42:04.872 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=2 00:42:04.872 lat (usec) : 1000=0.04% 00:42:04.872 lat (msec) : 2=0.08%, 10=4.15%, 20=91.54%, 50=4.19% 00:42:04.872 cpu : usr=4.50%, sys=7.39%, ctx=326, majf=0, minf=2 00:42:04.872 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:42:04.872 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:04.873 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:42:04.873 issued rwts: total=3778,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:04.873 latency : target=0, window=0, percentile=100.00%, depth=128 00:42:04.873 00:42:04.873 Run status group 0 (all jobs): 00:42:04.873 READ: bw=56.0MiB/s (58.7MB/s), 13.8MiB/s-14.7MiB/s (14.5MB/s-15.4MB/s), io=56.8MiB (59.5MB), run=1002-1014msec 00:42:04.873 WRITE: bw=61.9MiB/s (64.9MB/s), 14.9MiB/s-16.0MiB/s (15.6MB/s-16.7MB/s), io=62.7MiB (65.8MB), run=1002-1014msec 00:42:04.873 00:42:04.873 Disk stats (read/write): 00:42:04.873 nvme0n1: ios=3154/3584, merge=0/0, ticks=16585/17249, in_queue=33834, util=86.67% 00:42:04.873 nvme0n2: ios=3460/3584, merge=0/0, ticks=45604/43506, in_queue=89110, util=98.07% 00:42:04.873 nvme0n3: ios=2954/3072, merge=0/0, ticks=48325/55127, in_queue=103452, util=98.85% 00:42:04.873 nvme0n4: ios=3115/3535, merge=0/0, ticks=18495/23685, in_queue=42180, util=90.72% 00:42:04.873 01:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:42:04.873 [global] 00:42:04.873 thread=1 00:42:04.873 invalidate=1 00:42:04.873 rw=randwrite 00:42:04.873 time_based=1 00:42:04.873 runtime=1 00:42:04.873 ioengine=libaio 00:42:04.873 direct=1 00:42:04.873 bs=4096 00:42:04.873 iodepth=128 00:42:04.873 norandommap=0 00:42:04.873 numjobs=1 00:42:04.873 00:42:04.873 verify_dump=1 00:42:04.873 verify_backlog=512 00:42:04.873 verify_state_save=0 00:42:04.873 do_verify=1 00:42:04.873 verify=crc32c-intel 00:42:04.873 [job0] 00:42:04.873 filename=/dev/nvme0n1 00:42:04.873 [job1] 00:42:04.873 filename=/dev/nvme0n2 00:42:04.873 [job2] 00:42:04.873 filename=/dev/nvme0n3 00:42:04.873 [job3] 00:42:04.873 filename=/dev/nvme0n4 00:42:04.873 Could not set queue depth (nvme0n1) 00:42:04.873 Could not set queue depth (nvme0n2) 00:42:04.873 Could not set queue depth (nvme0n3) 00:42:04.873 Could not set queue depth (nvme0n4) 00:42:04.873 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:42:04.873 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:42:04.873 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:42:04.873 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, 
ioengine=libaio, iodepth=128 00:42:04.873 fio-3.35 00:42:04.873 Starting 4 threads 00:42:06.278 00:42:06.278 job0: (groupid=0, jobs=1): err= 0: pid=3191818: Fri Dec 6 01:19:38 2024 00:42:06.278 read: IOPS=2839, BW=11.1MiB/s (11.6MB/s)(11.2MiB/1008msec) 00:42:06.278 slat (usec): min=2, max=13077, avg=185.62, stdev=1152.86 00:42:06.278 clat (usec): min=1545, max=42622, avg=22420.84, stdev=7086.91 00:42:06.278 lat (usec): min=6418, max=42626, avg=22606.46, stdev=7146.92 00:42:06.278 clat percentiles (usec): 00:42:06.278 | 1.00th=[10683], 5.00th=[12649], 10.00th=[13435], 20.00th=[15926], 00:42:06.278 | 30.00th=[16909], 40.00th=[19268], 50.00th=[22676], 60.00th=[24773], 00:42:06.278 | 70.00th=[26608], 80.00th=[28967], 90.00th=[32637], 95.00th=[33817], 00:42:06.278 | 99.00th=[37487], 99.50th=[38011], 99.90th=[42730], 99.95th=[42730], 00:42:06.278 | 99.99th=[42730] 00:42:06.278 write: IOPS=3047, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1008msec); 0 zone resets 00:42:06.278 slat (usec): min=3, max=16726, avg=146.21, stdev=1036.16 00:42:06.278 clat (usec): min=515, max=51838, avg=20690.59, stdev=7107.43 00:42:06.278 lat (usec): min=536, max=51858, avg=20836.81, stdev=7209.51 00:42:06.278 clat percentiles (usec): 00:42:06.278 | 1.00th=[ 1926], 5.00th=[10028], 10.00th=[15139], 20.00th=[15401], 00:42:06.278 | 30.00th=[15795], 40.00th=[17695], 50.00th=[19530], 60.00th=[21890], 00:42:06.279 | 70.00th=[25560], 80.00th=[27395], 90.00th=[30278], 95.00th=[32113], 00:42:06.279 | 99.00th=[36439], 99.50th=[40109], 99.90th=[41157], 99.95th=[41681], 00:42:06.279 | 99.99th=[51643] 00:42:06.279 bw ( KiB/s): min=12288, max=12288, per=23.19%, avg=12288.00, stdev= 0.00, samples=2 00:42:06.279 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=2 00:42:06.279 lat (usec) : 750=0.05% 00:42:06.279 lat (msec) : 2=0.56%, 4=0.76%, 10=1.40%, 20=45.92%, 50=51.30% 00:42:06.279 lat (msec) : 100=0.02% 00:42:06.279 cpu : usr=1.29%, sys=2.58%, ctx=208, majf=0, minf=1 00:42:06.279 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:42:06.279 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:06.279 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:42:06.279 issued rwts: total=2862,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:06.279 latency : target=0, window=0, percentile=100.00%, depth=128 00:42:06.279 job1: (groupid=0, jobs=1): err= 0: pid=3191819: Fri Dec 6 01:19:38 2024 00:42:06.279 read: IOPS=3059, BW=12.0MiB/s (12.5MB/s)(12.0MiB/1004msec) 00:42:06.279 slat (usec): min=2, max=15013, avg=134.64, stdev=792.46 00:42:06.279 clat (usec): min=9902, max=40773, avg=17093.47, stdev=5518.56 00:42:06.279 lat (usec): min=9909, max=40777, avg=17228.10, stdev=5558.75 00:42:06.279 clat percentiles (usec): 00:42:06.279 | 1.00th=[11600], 5.00th=[12387], 10.00th=[13173], 20.00th=[13829], 00:42:06.279 | 30.00th=[14091], 40.00th=[14353], 50.00th=[14877], 60.00th=[15401], 00:42:06.279 | 70.00th=[16581], 80.00th=[20317], 90.00th=[25297], 95.00th=[30802], 00:42:06.279 | 99.00th=[36439], 99.50th=[37487], 99.90th=[37487], 99.95th=[40633], 00:42:06.279 | 99.99th=[40633] 00:42:06.279 write: IOPS=3166, BW=12.4MiB/s (13.0MB/s)(12.4MiB/1004msec); 0 zone resets 00:42:06.279 slat (usec): min=3, max=17540, avg=179.62, stdev=1060.86 00:42:06.279 clat (usec): min=469, max=132135, avg=22436.45, stdev=19619.47 00:42:06.279 lat (msec): min=4, max=132, avg=22.62, stdev=19.74 00:42:06.279 clat percentiles (msec): 00:42:06.279 | 1.00th=[ 5], 5.00th=[ 12], 10.00th=[ 14], 20.00th=[ 15], 
00:42:06.279 | 30.00th=[ 15], 40.00th=[ 15], 50.00th=[ 15], 60.00th=[ 16], 00:42:06.279 | 70.00th=[ 20], 80.00th=[ 30], 90.00th=[ 33], 95.00th=[ 53], 00:42:06.279 | 99.00th=[ 123], 99.50th=[ 127], 99.90th=[ 133], 99.95th=[ 133], 00:42:06.279 | 99.99th=[ 133] 00:42:06.279 bw ( KiB/s): min= 9792, max=14824, per=23.23%, avg=12308.00, stdev=3558.16, samples=2 00:42:06.279 iops : min= 2448, max= 3706, avg=3077.00, stdev=889.54, samples=2 00:42:06.279 lat (usec) : 500=0.02% 00:42:06.279 lat (msec) : 10=1.47%, 20=73.20%, 50=22.67%, 100=1.38%, 250=1.26% 00:42:06.279 cpu : usr=2.29%, sys=3.59%, ctx=338, majf=0, minf=1 00:42:06.279 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:42:06.279 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:06.279 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:42:06.279 issued rwts: total=3072,3179,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:06.279 latency : target=0, window=0, percentile=100.00%, depth=128 00:42:06.279 job2: (groupid=0, jobs=1): err= 0: pid=3191824: Fri Dec 6 01:19:38 2024 00:42:06.279 read: IOPS=3743, BW=14.6MiB/s (15.3MB/s)(14.8MiB/1013msec) 00:42:06.279 slat (usec): min=3, max=15352, avg=135.77, stdev=1105.23 00:42:06.279 clat (usec): min=4824, max=32211, avg=17393.15, stdev=4145.60 00:42:06.279 lat (usec): min=5089, max=39989, avg=17528.91, stdev=4258.18 00:42:06.279 clat percentiles (usec): 00:42:06.279 | 1.00th=[10945], 5.00th=[12518], 10.00th=[13566], 20.00th=[14353], 00:42:06.279 | 30.00th=[15008], 40.00th=[15795], 50.00th=[16188], 60.00th=[16712], 00:42:06.279 | 70.00th=[17957], 80.00th=[20579], 90.00th=[24511], 95.00th=[26346], 00:42:06.279 | 99.00th=[29492], 99.50th=[30278], 99.90th=[32113], 99.95th=[32113], 00:42:06.279 | 99.99th=[32113] 00:42:06.279 write: IOPS=4043, BW=15.8MiB/s (16.6MB/s)(16.0MiB/1013msec); 0 zone resets 00:42:06.279 slat (usec): min=4, max=14436, avg=114.53, stdev=865.75 00:42:06.279 clat (usec): min=1185, max=32220, avg=15268.58, stdev=3416.56 00:42:06.279 lat (usec): min=1214, max=32229, avg=15383.10, stdev=3493.98 00:42:06.279 clat percentiles (usec): 00:42:06.279 | 1.00th=[ 4293], 5.00th=[ 9110], 10.00th=[11207], 20.00th=[13173], 00:42:06.279 | 30.00th=[14615], 40.00th=[15401], 50.00th=[15795], 60.00th=[16188], 00:42:06.279 | 70.00th=[16319], 80.00th=[16909], 90.00th=[17695], 95.00th=[22152], 00:42:06.279 | 99.00th=[23725], 99.50th=[27395], 99.90th=[30016], 99.95th=[32113], 00:42:06.279 | 99.99th=[32113] 00:42:06.279 bw ( KiB/s): min=16384, max=16384, per=30.92%, avg=16384.00, stdev= 0.00, samples=2 00:42:06.279 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=2 00:42:06.279 lat (msec) : 2=0.11%, 4=0.28%, 10=3.40%, 20=82.81%, 50=13.40% 00:42:06.279 cpu : usr=3.46%, sys=4.55%, ctx=306, majf=0, minf=2 00:42:06.279 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:42:06.279 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:06.279 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:42:06.279 issued rwts: total=3792,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:06.279 latency : target=0, window=0, percentile=100.00%, depth=128 00:42:06.279 job3: (groupid=0, jobs=1): err= 0: pid=3191825: Fri Dec 6 01:19:38 2024 00:42:06.279 read: IOPS=2584, BW=10.1MiB/s (10.6MB/s)(10.2MiB/1007msec) 00:42:06.279 slat (usec): min=2, max=12549, avg=201.08, stdev=1170.76 00:42:06.279 clat (usec): min=1280, max=53718, avg=23899.67, stdev=6943.49 00:42:06.279 lat (usec): 
min=6860, max=53732, avg=24100.75, stdev=7002.85 00:42:06.279 clat percentiles (usec): 00:42:06.279 | 1.00th=[11207], 5.00th=[13566], 10.00th=[15401], 20.00th=[16581], 00:42:06.279 | 30.00th=[19268], 40.00th=[21890], 50.00th=[24511], 60.00th=[26608], 00:42:06.279 | 70.00th=[27657], 80.00th=[29492], 90.00th=[32375], 95.00th=[33424], 00:42:06.279 | 99.00th=[42206], 99.50th=[44827], 99.90th=[52167], 99.95th=[52167], 00:42:06.279 | 99.99th=[53740] 00:42:06.279 write: IOPS=3050, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1007msec); 0 zone resets 00:42:06.279 slat (usec): min=3, max=10959, avg=152.64, stdev=862.72 00:42:06.279 clat (usec): min=10763, max=49782, avg=21116.85, stdev=6907.01 00:42:06.279 lat (usec): min=10769, max=49787, avg=21269.49, stdev=6973.51 00:42:06.279 clat percentiles (usec): 00:42:06.279 | 1.00th=[12518], 5.00th=[14615], 10.00th=[15401], 20.00th=[15533], 00:42:06.279 | 30.00th=[15795], 40.00th=[16712], 50.00th=[17433], 60.00th=[22152], 00:42:06.279 | 70.00th=[25035], 80.00th=[27395], 90.00th=[29492], 95.00th=[33162], 00:42:06.279 | 99.00th=[43779], 99.50th=[45351], 99.90th=[49546], 99.95th=[49546], 00:42:06.279 | 99.99th=[49546] 00:42:06.279 bw ( KiB/s): min=11608, max=12288, per=22.55%, avg=11948.00, stdev=480.83, samples=2 00:42:06.279 iops : min= 2902, max= 3072, avg=2987.00, stdev=120.21, samples=2 00:42:06.279 lat (msec) : 2=0.02%, 10=0.41%, 20=45.06%, 50=54.36%, 100=0.16% 00:42:06.279 cpu : usr=1.59%, sys=2.19%, ctx=294, majf=0, minf=1 00:42:06.279 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:42:06.279 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:06.279 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:42:06.279 issued rwts: total=2603,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:06.279 latency : target=0, window=0, percentile=100.00%, depth=128 00:42:06.279 00:42:06.279 Run status group 0 (all jobs): 00:42:06.279 READ: bw=47.5MiB/s (49.9MB/s), 10.1MiB/s-14.6MiB/s (10.6MB/s-15.3MB/s), io=48.2MiB (50.5MB), run=1004-1013msec 00:42:06.279 WRITE: bw=51.7MiB/s (54.3MB/s), 11.9MiB/s-15.8MiB/s (12.5MB/s-16.6MB/s), io=52.4MiB (55.0MB), run=1004-1013msec 00:42:06.279 00:42:06.279 Disk stats (read/write): 00:42:06.279 nvme0n1: ios=2610/2674, merge=0/0, ticks=26504/30630, in_queue=57134, util=94.09% 00:42:06.279 nvme0n2: ios=2410/2560, merge=0/0, ticks=13966/21435, in_queue=35401, util=93.20% 00:42:06.279 nvme0n3: ios=3118/3504, merge=0/0, ticks=53487/52019, in_queue=105506, util=97.08% 00:42:06.279 nvme0n4: ios=2418/2560, merge=0/0, ticks=21610/15695, in_queue=37305, util=96.22% 00:42:06.279 01:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:42:06.279 01:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=3191956 00:42:06.279 01:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:42:06.279 01:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:42:06.279 [global] 00:42:06.279 thread=1 00:42:06.279 invalidate=1 00:42:06.279 rw=read 00:42:06.279 time_based=1 00:42:06.279 runtime=10 00:42:06.279 ioengine=libaio 00:42:06.279 direct=1 00:42:06.279 bs=4096 00:42:06.279 iodepth=1 00:42:06.279 norandommap=1 00:42:06.279 numjobs=1 00:42:06.279 00:42:06.279 [job0] 00:42:06.279 filename=/dev/nvme0n1 00:42:06.279 [job1] 
00:42:06.279 filename=/dev/nvme0n2 00:42:06.279 [job2] 00:42:06.279 filename=/dev/nvme0n3 00:42:06.279 [job3] 00:42:06.279 filename=/dev/nvme0n4 00:42:06.279 Could not set queue depth (nvme0n1) 00:42:06.279 Could not set queue depth (nvme0n2) 00:42:06.279 Could not set queue depth (nvme0n3) 00:42:06.279 Could not set queue depth (nvme0n4) 00:42:06.279 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:42:06.279 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:42:06.279 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:42:06.279 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:42:06.279 fio-3.35 00:42:06.279 Starting 4 threads 00:42:09.561 01:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:42:09.561 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=25223168, buflen=4096 00:42:09.561 fio: pid=3192054, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:42:09.561 01:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:42:09.561 01:19:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:42:09.561 01:19:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:42:09.819 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=26820608, buflen=4096 00:42:09.819 fio: pid=3192051, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:42:10.077 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=17498112, buflen=4096 00:42:10.077 fio: pid=3192049, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:42:10.077 01:19:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:42:10.077 01:19:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:42:10.336 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=12693504, buflen=4096 00:42:10.336 fio: pid=3192050, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:42:10.336 01:19:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:42:10.336 01:19:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:42:10.336 00:42:10.336 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3192049: Fri Dec 6 01:19:42 2024 00:42:10.336 read: IOPS=1220, BW=4881KiB/s (4998kB/s)(16.7MiB/3501msec) 00:42:10.336 slat (usec): min=4, max=14920, avg=11.91, stdev=228.17 00:42:10.336 clat (usec): 
min=229, max=41288, avg=799.61, stdev=4416.56 00:42:10.336 lat (usec): min=236, max=41301, avg=811.52, stdev=4422.86 00:42:10.336 clat percentiles (usec): 00:42:10.336 | 1.00th=[ 262], 5.00th=[ 273], 10.00th=[ 277], 20.00th=[ 281], 00:42:10.336 | 30.00th=[ 289], 40.00th=[ 293], 50.00th=[ 302], 60.00th=[ 306], 00:42:10.336 | 70.00th=[ 318], 80.00th=[ 334], 90.00th=[ 388], 95.00th=[ 441], 00:42:10.336 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:42:10.336 | 99.99th=[41157] 00:42:10.336 bw ( KiB/s): min= 472, max=13072, per=23.48%, avg=4869.33, stdev=5282.90, samples=6 00:42:10.336 iops : min= 118, max= 3268, avg=1217.33, stdev=1320.73, samples=6 00:42:10.336 lat (usec) : 250=0.23%, 500=97.64%, 750=0.84%, 1000=0.05% 00:42:10.336 lat (msec) : 2=0.02%, 50=1.19% 00:42:10.336 cpu : usr=0.57%, sys=1.57%, ctx=4275, majf=0, minf=2 00:42:10.336 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:42:10.336 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:10.336 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:10.336 issued rwts: total=4273,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:10.336 latency : target=0, window=0, percentile=100.00%, depth=1 00:42:10.336 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3192050: Fri Dec 6 01:19:42 2024 00:42:10.336 read: IOPS=800, BW=3201KiB/s (3277kB/s)(12.1MiB/3873msec) 00:42:10.336 slat (usec): min=4, max=14883, avg=24.98, stdev=352.85 00:42:10.336 clat (usec): min=223, max=42531, avg=1213.59, stdev=5256.04 00:42:10.336 lat (usec): min=234, max=55979, avg=1238.57, stdev=5335.11 00:42:10.336 clat percentiles (usec): 00:42:10.336 | 1.00th=[ 310], 5.00th=[ 383], 10.00th=[ 404], 20.00th=[ 437], 00:42:10.336 | 30.00th=[ 461], 40.00th=[ 486], 50.00th=[ 502], 60.00th=[ 523], 00:42:10.336 | 70.00th=[ 553], 80.00th=[ 603], 90.00th=[ 693], 95.00th=[ 791], 00:42:10.336 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41681], 99.95th=[42206], 00:42:10.336 | 99.99th=[42730] 00:42:10.336 bw ( KiB/s): min= 106, max= 8056, per=17.03%, avg=3531.71, stdev=3703.09, samples=7 00:42:10.336 iops : min= 26, max= 2014, avg=882.86, stdev=925.85, samples=7 00:42:10.336 lat (usec) : 250=0.29%, 500=48.55%, 750=44.29%, 1000=5.13% 00:42:10.336 lat (msec) : 50=1.71% 00:42:10.336 cpu : usr=0.36%, sys=1.63%, ctx=3105, majf=0, minf=2 00:42:10.336 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:42:10.336 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:10.336 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:10.336 issued rwts: total=3100,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:10.336 latency : target=0, window=0, percentile=100.00%, depth=1 00:42:10.336 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3192051: Fri Dec 6 01:19:42 2024 00:42:10.336 read: IOPS=2026, BW=8106KiB/s (8301kB/s)(25.6MiB/3231msec) 00:42:10.336 slat (usec): min=5, max=15635, avg=16.64, stdev=241.34 00:42:10.336 clat (usec): min=224, max=41481, avg=470.37, stdev=1015.58 00:42:10.336 lat (usec): min=234, max=41495, avg=487.00, stdev=1043.53 00:42:10.336 clat percentiles (usec): 00:42:10.336 | 1.00th=[ 265], 5.00th=[ 293], 10.00th=[ 326], 20.00th=[ 359], 00:42:10.336 | 30.00th=[ 375], 40.00th=[ 396], 50.00th=[ 424], 60.00th=[ 461], 00:42:10.336 | 70.00th=[ 490], 80.00th=[ 519], 90.00th=[ 586], 95.00th=[ 644], 
00:42:10.336 | 99.00th=[ 824], 99.50th=[ 881], 99.90th=[ 1418], 99.95th=[41157], 00:42:10.336 | 99.99th=[41681] 00:42:10.336 bw ( KiB/s): min= 7352, max= 9712, per=41.79%, avg=8666.67, stdev=1057.38, samples=6 00:42:10.336 iops : min= 1838, max= 2428, avg=2166.67, stdev=264.34, samples=6 00:42:10.336 lat (usec) : 250=0.34%, 500=73.14%, 750=24.52%, 1000=1.68% 00:42:10.336 lat (msec) : 2=0.23%, 10=0.02%, 50=0.06% 00:42:10.336 cpu : usr=1.27%, sys=4.15%, ctx=6554, majf=0, minf=1 00:42:10.336 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:42:10.336 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:10.336 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:10.336 issued rwts: total=6549,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:10.336 latency : target=0, window=0, percentile=100.00%, depth=1 00:42:10.337 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3192054: Fri Dec 6 01:19:42 2024 00:42:10.337 read: IOPS=2117, BW=8468KiB/s (8671kB/s)(24.1MiB/2909msec) 00:42:10.337 slat (nsec): min=5643, max=78310, avg=11954.78, stdev=7260.82 00:42:10.337 clat (usec): min=220, max=41545, avg=455.80, stdev=1276.19 00:42:10.337 lat (usec): min=226, max=41557, avg=467.76, stdev=1276.39 00:42:10.337 clat percentiles (usec): 00:42:10.337 | 1.00th=[ 245], 5.00th=[ 302], 10.00th=[ 318], 20.00th=[ 338], 00:42:10.337 | 30.00th=[ 355], 40.00th=[ 371], 50.00th=[ 383], 60.00th=[ 408], 00:42:10.337 | 70.00th=[ 457], 80.00th=[ 498], 90.00th=[ 537], 95.00th=[ 619], 00:42:10.337 | 99.00th=[ 799], 99.50th=[ 848], 99.90th=[ 1139], 99.95th=[41157], 00:42:10.337 | 99.99th=[41681] 00:42:10.337 bw ( KiB/s): min= 5704, max=10552, per=40.62%, avg=8422.40, stdev=1882.55, samples=5 00:42:10.337 iops : min= 1426, max= 2638, avg=2105.60, stdev=470.64, samples=5 00:42:10.337 lat (usec) : 250=1.49%, 500=79.07%, 750=17.89%, 1000=1.40% 00:42:10.337 lat (msec) : 2=0.03%, 50=0.10% 00:42:10.337 cpu : usr=1.58%, sys=3.99%, ctx=6161, majf=0, minf=1 00:42:10.337 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:42:10.337 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:10.337 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:10.337 issued rwts: total=6159,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:10.337 latency : target=0, window=0, percentile=100.00%, depth=1 00:42:10.337 00:42:10.337 Run status group 0 (all jobs): 00:42:10.337 READ: bw=20.2MiB/s (21.2MB/s), 3201KiB/s-8468KiB/s (3277kB/s-8671kB/s), io=78.4MiB (82.2MB), run=2909-3873msec 00:42:10.337 00:42:10.337 Disk stats (read/write): 00:42:10.337 nvme0n1: ios=4269/0, merge=0/0, ticks=3242/0, in_queue=3242, util=95.57% 00:42:10.337 nvme0n2: ios=3134/0, merge=0/0, ticks=4176/0, in_queue=4176, util=99.25% 00:42:10.337 nvme0n3: ios=6581/0, merge=0/0, ticks=3396/0, in_queue=3396, util=99.19% 00:42:10.337 nvme0n4: ios=6053/0, merge=0/0, ticks=2691/0, in_queue=2691, util=96.71% 00:42:10.595 01:19:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:42:10.595 01:19:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:42:11.162 01:19:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in 
$malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:42:11.162 01:19:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:42:11.420 01:19:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:42:11.420 01:19:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:42:11.678 01:19:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:42:11.678 01:19:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:42:11.936 01:19:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:42:11.936 01:19:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # wait 3191956 00:42:11.936 01:19:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:42:11.936 01:19:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:42:12.870 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:42:12.870 01:19:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:42:12.870 01:19:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:42:12.870 01:19:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:42:12.870 01:19:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:42:12.870 01:19:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:42:12.870 01:19:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:42:12.870 01:19:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:42:12.870 01:19:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:42:12.870 01:19:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:42:12.870 nvmf hotplug test: fio failed as expected 00:42:12.870 01:19:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:42:13.127 01:19:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:42:13.127 01:19:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:42:13.127 01:19:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:42:13.127 01:19:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:42:13.127 01:19:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:42:13.127 01:19:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:42:13.127 01:19:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:42:13.127 01:19:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:42:13.127 01:19:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:42:13.127 01:19:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:42:13.127 01:19:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:42:13.127 rmmod nvme_tcp 00:42:13.127 rmmod nvme_fabrics 00:42:13.127 rmmod nvme_keyring 00:42:13.127 01:19:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:42:13.127 01:19:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:42:13.127 01:19:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:42:13.127 01:19:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 3189816 ']' 00:42:13.127 01:19:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 3189816 00:42:13.127 01:19:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 3189816 ']' 00:42:13.127 01:19:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 3189816 00:42:13.127 01:19:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:42:13.127 01:19:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:42:13.127 01:19:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3189816 00:42:13.385 01:19:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:42:13.385 01:19:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:42:13.385 01:19:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3189816' 00:42:13.385 killing process with pid 3189816 00:42:13.385 01:19:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 3189816 00:42:13.385 01:19:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 3189816 00:42:14.761 01:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:42:14.761 01:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:42:14.761 01:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:42:14.761 01:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@297 -- # 
iptr 00:42:14.761 01:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:42:14.761 01:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:42:14.761 01:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:42:14.761 01:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:42:14.761 01:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:42:14.762 01:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:42:14.762 01:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:42:14.762 01:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:42:16.667 01:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:42:16.667 00:42:16.667 real 0m27.157s 00:42:16.667 user 1m13.380s 00:42:16.667 sys 0m10.838s 00:42:16.667 01:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:42:16.667 01:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:42:16.667 ************************************ 00:42:16.667 END TEST nvmf_fio_target 00:42:16.667 ************************************ 00:42:16.667 01:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:42:16.667 01:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:42:16.667 01:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:42:16.667 01:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:42:16.667 ************************************ 00:42:16.667 START TEST nvmf_bdevio 00:42:16.667 ************************************ 00:42:16.667 01:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:42:16.667 * Looking for test storage... 
00:42:16.667 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:42:16.667 01:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:42:16.667 01:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lcov --version 00:42:16.667 01:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:42:16.667 01:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:42:16.667 01:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:42:16.667 01:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:42:16.667 01:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:42:16.667 01:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:42:16.667 01:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:42:16.667 01:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:42:16.667 01:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:42:16.667 01:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:42:16.667 01:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:42:16.667 01:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:42:16.667 01:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:42:16.667 01:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:42:16.667 01:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:42:16.667 01:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:42:16.667 01:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:42:16.667 01:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:42:16.667 01:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:42:16.667 01:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:42:16.667 01:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:42:16.667 01:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:42:16.667 01:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:42:16.667 01:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:42:16.667 01:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:42:16.667 01:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:42:16.667 01:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:42:16.667 01:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:42:16.667 01:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:42:16.667 01:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:42:16.667 01:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:42:16.667 01:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:42:16.667 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:16.667 --rc genhtml_branch_coverage=1 00:42:16.667 --rc genhtml_function_coverage=1 00:42:16.667 --rc genhtml_legend=1 00:42:16.667 --rc geninfo_all_blocks=1 00:42:16.667 --rc geninfo_unexecuted_blocks=1 00:42:16.667 00:42:16.667 ' 00:42:16.667 01:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:42:16.667 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:16.667 --rc genhtml_branch_coverage=1 00:42:16.667 --rc genhtml_function_coverage=1 00:42:16.668 --rc genhtml_legend=1 00:42:16.668 --rc geninfo_all_blocks=1 00:42:16.668 --rc geninfo_unexecuted_blocks=1 00:42:16.668 00:42:16.668 ' 00:42:16.668 01:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:42:16.668 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:16.668 --rc genhtml_branch_coverage=1 00:42:16.668 --rc genhtml_function_coverage=1 00:42:16.668 --rc genhtml_legend=1 00:42:16.668 --rc geninfo_all_blocks=1 00:42:16.668 --rc geninfo_unexecuted_blocks=1 00:42:16.668 00:42:16.668 ' 00:42:16.668 01:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:42:16.668 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:16.668 --rc genhtml_branch_coverage=1 00:42:16.668 --rc genhtml_function_coverage=1 00:42:16.668 --rc genhtml_legend=1 00:42:16.668 --rc geninfo_all_blocks=1 00:42:16.668 --rc geninfo_unexecuted_blocks=1 00:42:16.668 00:42:16.668 ' 00:42:16.668 01:19:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:42:16.668 01:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:42:16.668 01:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:42:16.668 01:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:42:16.668 01:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:42:16.668 01:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:42:16.668 01:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:42:16.668 01:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:42:16.668 01:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:42:16.668 01:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:42:16.668 01:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:42:16.668 01:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:42:16.668 01:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:42:16.668 01:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:42:16.668 01:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:42:16.668 01:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:42:16.668 01:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:42:16.668 01:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:42:16.668 01:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:42:16.668 01:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:42:16.668 01:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:42:16.668 01:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:42:16.668 01:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:42:16.668 01:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:16.668 01:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:16.668 01:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:16.668 01:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:42:16.668 01:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:16.668 01:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:42:16.668 01:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:42:16.668 01:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:42:16.668 01:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:42:16.668 01:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:42:16.668 01:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:42:16.668 01:19:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:42:16.668 01:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:42:16.668 01:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:42:16.668 01:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:42:16.668 01:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:42:16.668 01:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:42:16.668 01:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:42:16.668 01:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:42:16.668 01:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:42:16.668 01:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:42:16.668 01:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:42:16.668 01:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:42:16.668 01:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:42:16.668 01:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:42:16.668 01:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:42:16.668 01:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:42:16.668 01:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:42:16.668 01:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:42:16.668 01:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:42:16.668 01:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:42:19.200 01:19:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:42:19.200 01:19:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:42:19.200 01:19:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:42:19.200 01:19:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:42:19.200 01:19:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:42:19.200 01:19:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:42:19.200 01:19:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:42:19.200 01:19:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:42:19.200 01:19:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga 
net_devs 00:42:19.200 01:19:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:42:19.200 01:19:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:42:19.200 01:19:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:42:19.200 01:19:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:42:19.200 01:19:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:42:19.200 01:19:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:42:19.200 01:19:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:42:19.200 01:19:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:42:19.200 01:19:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:42:19.200 01:19:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:42:19.200 01:19:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:42:19.200 01:19:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:42:19.200 01:19:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:42:19.200 01:19:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:42:19.200 01:19:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:42:19.200 01:19:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:42:19.200 01:19:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:42:19.200 01:19:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:42:19.200 01:19:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:42:19.200 01:19:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:42:19.200 01:19:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:42:19.200 01:19:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:42:19.200 01:19:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:42:19.200 01:19:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:42:19.200 01:19:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:42:19.200 01:19:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:42:19.200 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:42:19.200 01:19:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:42:19.200 01:19:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:42:19.200 01:19:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:42:19.200 01:19:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:42:19.200 01:19:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:42:19.200 01:19:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:42:19.200 01:19:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:42:19.200 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:42:19.200 01:19:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:42:19.200 01:19:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:42:19.200 01:19:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:42:19.200 01:19:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:42:19.200 01:19:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:42:19.200 01:19:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:42:19.200 01:19:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:42:19.200 01:19:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:42:19.200 01:19:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:42:19.200 01:19:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:42:19.200 01:19:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:42:19.200 01:19:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:42:19.200 01:19:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:42:19.200 01:19:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:42:19.200 01:19:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:42:19.200 01:19:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:42:19.200 Found net devices under 0000:0a:00.0: cvl_0_0 00:42:19.200 01:19:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:42:19.200 01:19:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:42:19.200 01:19:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:42:19.200 01:19:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 
-- # [[ tcp == tcp ]] 00:42:19.200 01:19:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:42:19.200 01:19:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:42:19.200 01:19:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:42:19.200 01:19:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:42:19.200 01:19:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:42:19.200 Found net devices under 0000:0a:00.1: cvl_0_1 00:42:19.200 01:19:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:42:19.201 01:19:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:42:19.201 01:19:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:42:19.201 01:19:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:42:19.201 01:19:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:42:19.201 01:19:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:42:19.201 01:19:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:42:19.201 01:19:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:42:19.201 01:19:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:42:19.201 01:19:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:42:19.201 01:19:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:42:19.201 01:19:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:42:19.201 01:19:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:42:19.201 01:19:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:42:19.201 01:19:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:42:19.201 01:19:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:42:19.201 01:19:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:42:19.201 01:19:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:42:19.201 01:19:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:42:19.201 01:19:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:42:19.201 01:19:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:42:19.201 01:19:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:42:19.201 01:19:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:42:19.201 01:19:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:42:19.201 01:19:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:42:19.201 01:19:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:42:19.201 01:19:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:42:19.201 01:19:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:42:19.201 01:19:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:42:19.201 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:42:19.201 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.361 ms 00:42:19.201 00:42:19.201 --- 10.0.0.2 ping statistics --- 00:42:19.201 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:19.201 rtt min/avg/max/mdev = 0.361/0.361/0.361/0.000 ms 00:42:19.201 01:19:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:42:19.201 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:42:19.201 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.121 ms 00:42:19.201 00:42:19.201 --- 10.0.0.1 ping statistics --- 00:42:19.201 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:19.201 rtt min/avg/max/mdev = 0.121/0.121/0.121/0.000 ms 00:42:19.201 01:19:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:42:19.201 01:19:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:42:19.201 01:19:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:42:19.201 01:19:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:42:19.201 01:19:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:42:19.201 01:19:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:42:19.201 01:19:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:42:19.201 01:19:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:42:19.201 01:19:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:42:19.201 01:19:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:42:19.201 01:19:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:42:19.201 01:19:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:42:19.201 01:19:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:42:19.201 01:19:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=3194927 00:42:19.201 01:19:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x78 00:42:19.201 01:19:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 3194927 00:42:19.201 01:19:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 3194927 ']' 00:42:19.201 01:19:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:42:19.201 01:19:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:42:19.201 01:19:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:42:19.201 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:42:19.201 01:19:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:42:19.201 01:19:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:42:19.201 [2024-12-06 01:19:51.540107] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:42:19.201 [2024-12-06 01:19:51.542833] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 24.03.0 initialization... 00:42:19.201 [2024-12-06 01:19:51.542964] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:42:19.201 [2024-12-06 01:19:51.691551] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:42:19.201 [2024-12-06 01:19:51.832010] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:42:19.201 [2024-12-06 01:19:51.832094] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:42:19.201 [2024-12-06 01:19:51.832129] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:42:19.201 [2024-12-06 01:19:51.832151] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:42:19.201 [2024-12-06 01:19:51.832176] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:42:19.201 [2024-12-06 01:19:51.835072] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:42:19.201 [2024-12-06 01:19:51.835125] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:42:19.201 [2024-12-06 01:19:51.835169] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:42:19.201 [2024-12-06 01:19:51.835181] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:42:19.768 [2024-12-06 01:19:52.229278] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:42:19.768 [2024-12-06 01:19:52.230431] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:42:19.768 [2024-12-06 01:19:52.231639] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:42:19.768 [2024-12-06 01:19:52.232521] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:42:19.768 [2024-12-06 01:19:52.232850] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:42:20.027 01:19:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:42:20.027 01:19:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:42:20.027 01:19:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:42:20.027 01:19:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:42:20.027 01:19:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:42:20.027 01:19:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:42:20.027 01:19:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:42:20.027 01:19:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:20.027 01:19:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:42:20.027 [2024-12-06 01:19:52.572271] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:42:20.027 01:19:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:20.027 01:19:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:42:20.027 01:19:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:20.027 01:19:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:42:20.027 Malloc0 00:42:20.027 01:19:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:20.027 01:19:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:42:20.027 01:19:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:20.027 01:19:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:42:20.027 01:19:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:20.027 01:19:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:42:20.027 01:19:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:20.027 01:19:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:42:20.027 01:19:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:20.027 01:19:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:42:20.027 01:19:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:20.027 01:19:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:42:20.027 [2024-12-06 01:19:52.696553] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:42:20.027 01:19:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:20.028 01:19:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:42:20.028 01:19:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:42:20.028 01:19:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:42:20.028 01:19:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:42:20.028 01:19:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:42:20.028 01:19:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:42:20.028 { 00:42:20.028 "params": { 00:42:20.028 "name": "Nvme$subsystem", 00:42:20.028 "trtype": "$TEST_TRANSPORT", 00:42:20.028 "traddr": "$NVMF_FIRST_TARGET_IP", 00:42:20.028 "adrfam": "ipv4", 00:42:20.028 "trsvcid": "$NVMF_PORT", 00:42:20.028 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:42:20.028 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:42:20.028 "hdgst": ${hdgst:-false}, 00:42:20.028 "ddgst": ${ddgst:-false} 00:42:20.028 }, 00:42:20.028 "method": "bdev_nvme_attach_controller" 00:42:20.028 } 00:42:20.028 EOF 00:42:20.028 )") 00:42:20.028 01:19:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:42:20.028 01:19:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 00:42:20.028 01:19:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:42:20.028 01:19:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:42:20.028 "params": { 00:42:20.028 "name": "Nvme1", 00:42:20.028 "trtype": "tcp", 00:42:20.028 "traddr": "10.0.0.2", 00:42:20.028 "adrfam": "ipv4", 00:42:20.028 "trsvcid": "4420", 00:42:20.028 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:42:20.028 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:42:20.028 "hdgst": false, 00:42:20.028 "ddgst": false 00:42:20.028 }, 00:42:20.028 "method": "bdev_nvme_attach_controller" 00:42:20.028 }' 00:42:20.286 [2024-12-06 01:19:52.781831] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 24.03.0 initialization... 
00:42:20.286 [2024-12-06 01:19:52.781966] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3195087 ] 00:42:20.286 [2024-12-06 01:19:52.919014] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:42:20.545 [2024-12-06 01:19:53.051406] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:42:20.545 [2024-12-06 01:19:53.051453] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:42:20.545 [2024-12-06 01:19:53.051457] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:42:21.111 I/O targets: 00:42:21.111 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:42:21.111 00:42:21.111 00:42:21.111 CUnit - A unit testing framework for C - Version 2.1-3 00:42:21.111 http://cunit.sourceforge.net/ 00:42:21.111 00:42:21.111 00:42:21.111 Suite: bdevio tests on: Nvme1n1 00:42:21.111 Test: blockdev write read block ...passed 00:42:21.111 Test: blockdev write zeroes read block ...passed 00:42:21.111 Test: blockdev write zeroes read no split ...passed 00:42:21.111 Test: blockdev write zeroes read split ...passed 00:42:21.111 Test: blockdev write zeroes read split partial ...passed 00:42:21.111 Test: blockdev reset ...[2024-12-06 01:19:53.713529] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:42:21.111 [2024-12-06 01:19:53.713733] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2f00 (9): Bad file descriptor 00:42:21.112 [2024-12-06 01:19:53.767306] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:42:21.112 passed 00:42:21.112 Test: blockdev write read 8 blocks ...passed 00:42:21.112 Test: blockdev write read size > 128k ...passed 00:42:21.112 Test: blockdev write read invalid size ...passed 00:42:21.370 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:42:21.370 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:42:21.370 Test: blockdev write read max offset ...passed 00:42:21.370 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:42:21.370 Test: blockdev writev readv 8 blocks ...passed 00:42:21.370 Test: blockdev writev readv 30 x 1block ...passed 00:42:21.370 Test: blockdev writev readv block ...passed 00:42:21.370 Test: blockdev writev readv size > 128k ...passed 00:42:21.370 Test: blockdev writev readv size > 128k in two iovs ...passed 00:42:21.370 Test: blockdev comparev and writev ...[2024-12-06 01:19:54.067993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:42:21.370 [2024-12-06 01:19:54.068049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:42:21.370 [2024-12-06 01:19:54.068088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:42:21.370 [2024-12-06 01:19:54.068115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:42:21.370 [2024-12-06 01:19:54.068675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:42:21.370 [2024-12-06 01:19:54.068709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:42:21.370 [2024-12-06 01:19:54.068743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:42:21.370 [2024-12-06 01:19:54.068768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:42:21.370 [2024-12-06 01:19:54.069334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:42:21.370 [2024-12-06 01:19:54.069367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:42:21.370 [2024-12-06 01:19:54.069400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:42:21.370 [2024-12-06 01:19:54.069425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:42:21.370 [2024-12-06 01:19:54.069949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:42:21.370 [2024-12-06 01:19:54.069982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:42:21.370 [2024-12-06 01:19:54.070014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:42:21.370 [2024-12-06 01:19:54.070039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:42:21.628 passed 00:42:21.628 Test: blockdev nvme passthru rw ...passed 00:42:21.628 Test: blockdev nvme passthru vendor specific ...[2024-12-06 01:19:54.153159] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:42:21.628 [2024-12-06 01:19:54.153198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:42:21.628 [2024-12-06 01:19:54.153418] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:42:21.628 [2024-12-06 01:19:54.153459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:42:21.628 [2024-12-06 01:19:54.153678] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:42:21.628 [2024-12-06 01:19:54.153710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:42:21.628 [2024-12-06 01:19:54.153924] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:42:21.628 [2024-12-06 01:19:54.153956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:42:21.628 passed 00:42:21.628 Test: blockdev nvme admin passthru ...passed 00:42:21.628 Test: blockdev copy ...passed 00:42:21.628 00:42:21.628 Run Summary: Type Total Ran Passed Failed Inactive 00:42:21.628 suites 1 1 n/a 0 0 00:42:21.628 tests 23 23 23 0 0 00:42:21.628 asserts 152 152 152 0 n/a 00:42:21.628 00:42:21.628 Elapsed time = 1.433 seconds 00:42:22.621 01:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:42:22.621 01:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:22.621 01:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:42:22.621 01:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:22.621 01:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:42:22.621 01:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:42:22.621 01:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:42:22.621 01:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:42:22.621 01:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:42:22.621 01:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:42:22.621 01:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:42:22.621 01:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:42:22.621 rmmod nvme_tcp 00:42:22.621 rmmod nvme_fabrics 00:42:22.621 rmmod nvme_keyring 00:42:22.621 01:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 
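The bdevio pass summarized above ran against a subsystem that the harness stood up over JSON-RPC earlier in this log and deletes here. Condensed into a plain shell sketch — it assumes an nvmf_tgt already listening on the default /var/tmp/spdk.sock and simply reuses the bdev name, NQN, serial, address and port that appear in the log — the target-side sequence is:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    # stand up the TCP transport and a 64 MiB Malloc bdev with 512-byte blocks
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc0
    # export it through subsystem cnode1, listening on 10.0.0.2:4420
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # ... bdevio connects as host nqn.2016-06.io.spdk:host1 and runs its suite ...
    # teardown, mirroring the nvmf_delete_subsystem call from target/bdevio.sh above
    $rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1

The COMPARE FAILURE / ABORTED - FAILED FUSED notices in the comparev test output are expected: that test deliberately issues fused compare-and-write pairs whose compare step fails, which is why the suite still reports 23/23 tests passed.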
00:42:22.621 01:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:42:22.621 01:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:42:22.621 01:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 3194927 ']' 00:42:22.621 01:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 3194927 00:42:22.621 01:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 3194927 ']' 00:42:22.621 01:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 3194927 00:42:22.621 01:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:42:22.621 01:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:42:22.621 01:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3194927 00:42:22.621 01:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:42:22.621 01:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:42:22.621 01:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3194927' 00:42:22.621 killing process with pid 3194927 00:42:22.621 01:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 3194927 00:42:22.621 01:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 3194927 00:42:23.995 01:19:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:42:23.995 01:19:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:42:23.995 01:19:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:42:23.995 01:19:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:42:23.995 01:19:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:42:23.995 01:19:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:42:23.995 01:19:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:42:23.995 01:19:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:42:23.995 01:19:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:42:23.995 01:19:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:42:23.995 01:19:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:42:23.995 01:19:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:42:25.899 01:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:42:25.899 00:42:25.899 real 0m9.424s 00:42:25.899 user 
0m17.033s 00:42:25.899 sys 0m3.147s 00:42:25.899 01:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:42:25.899 01:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:42:25.899 ************************************ 00:42:25.899 END TEST nvmf_bdevio 00:42:25.899 ************************************ 00:42:25.899 01:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:42:25.899 00:42:25.899 real 4m29.850s 00:42:25.899 user 9m54.387s 00:42:25.899 sys 1m28.375s 00:42:25.899 01:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1130 -- # xtrace_disable 00:42:25.899 01:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:42:25.899 ************************************ 00:42:25.899 END TEST nvmf_target_core_interrupt_mode 00:42:25.899 ************************************ 00:42:26.158 01:19:58 nvmf_tcp -- nvmf/nvmf.sh@21 -- # run_test nvmf_interrupt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:42:26.158 01:19:58 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:42:26.158 01:19:58 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:42:26.158 01:19:58 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:42:26.158 ************************************ 00:42:26.158 START TEST nvmf_interrupt 00:42:26.158 ************************************ 00:42:26.158 01:19:58 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:42:26.158 * Looking for test storage... 
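The nvmf_interrupt test starting here exercises the same TCP target path as the suites above, but launches nvmf_tgt with --interrupt-mode on a two-core mask (0x3) so the reactors sleep when no work is pending; the test then expects both reactors to be idle before load, busy while spdk_nvme_perf runs, and idle again afterwards. Condensed from the rpc_cmd calls traced further down, the target bring-up amounts to the sketch below (assumption: rpc.py is driven against the default /var/tmp/spdk.sock socket, and the AIO backing file has already been written with dd as the trace shows):

    # Condensed sketch of the bring-up performed by interrupt.sh; paths are illustrative.
    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -q 256
    ./scripts/rpc.py bdev_aio_create ./aiofile AIO0 2048
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 AIO0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420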
00:42:26.158 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:42:26.158 01:19:58 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:42:26.158 01:19:58 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1711 -- # lcov --version 00:42:26.158 01:19:58 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:42:26.158 01:19:58 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:42:26.158 01:19:58 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:42:26.158 01:19:58 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:42:26.158 01:19:58 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:42:26.158 01:19:58 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # IFS=.-: 00:42:26.158 01:19:58 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # read -ra ver1 00:42:26.158 01:19:58 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # IFS=.-: 00:42:26.158 01:19:58 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # read -ra ver2 00:42:26.158 01:19:58 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@338 -- # local 'op=<' 00:42:26.158 01:19:58 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@340 -- # ver1_l=2 00:42:26.158 01:19:58 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@341 -- # ver2_l=1 00:42:26.158 01:19:58 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:42:26.158 01:19:58 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@344 -- # case "$op" in 00:42:26.158 01:19:58 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@345 -- # : 1 00:42:26.158 01:19:58 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v = 0 )) 00:42:26.158 01:19:58 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:42:26.158 01:19:58 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # decimal 1 00:42:26.158 01:19:58 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=1 00:42:26.158 01:19:58 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:42:26.158 01:19:58 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 1 00:42:26.158 01:19:58 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # ver1[v]=1 00:42:26.158 01:19:58 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # decimal 2 00:42:26.158 01:19:58 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=2 00:42:26.158 01:19:58 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:42:26.158 01:19:58 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 2 00:42:26.158 01:19:58 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # ver2[v]=2 00:42:26.158 01:19:58 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:42:26.158 01:19:58 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:42:26.158 01:19:58 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # return 0 00:42:26.158 01:19:58 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:42:26.158 01:19:58 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:42:26.158 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:26.158 --rc genhtml_branch_coverage=1 00:42:26.158 --rc genhtml_function_coverage=1 00:42:26.158 --rc genhtml_legend=1 00:42:26.158 --rc geninfo_all_blocks=1 00:42:26.158 --rc geninfo_unexecuted_blocks=1 00:42:26.158 00:42:26.158 ' 00:42:26.158 01:19:58 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:42:26.158 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:26.158 --rc genhtml_branch_coverage=1 00:42:26.158 --rc genhtml_function_coverage=1 00:42:26.158 --rc genhtml_legend=1 00:42:26.158 --rc geninfo_all_blocks=1 00:42:26.158 --rc geninfo_unexecuted_blocks=1 00:42:26.158 00:42:26.158 ' 00:42:26.158 01:19:58 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:42:26.158 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:26.158 --rc genhtml_branch_coverage=1 00:42:26.158 --rc genhtml_function_coverage=1 00:42:26.158 --rc genhtml_legend=1 00:42:26.158 --rc geninfo_all_blocks=1 00:42:26.158 --rc geninfo_unexecuted_blocks=1 00:42:26.158 00:42:26.158 ' 00:42:26.158 01:19:58 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:42:26.158 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:26.159 --rc genhtml_branch_coverage=1 00:42:26.159 --rc genhtml_function_coverage=1 00:42:26.159 --rc genhtml_legend=1 00:42:26.159 --rc geninfo_all_blocks=1 00:42:26.159 --rc geninfo_unexecuted_blocks=1 00:42:26.159 00:42:26.159 ' 00:42:26.159 01:19:58 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:42:26.159 01:19:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # uname -s 00:42:26.159 01:19:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:42:26.159 01:19:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:42:26.159 01:19:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:42:26.159 01:19:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:42:26.159 01:19:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:42:26.159 01:19:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:42:26.159 01:19:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:42:26.159 01:19:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:42:26.159 01:19:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:42:26.159 01:19:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:42:26.159 01:19:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:42:26.159 01:19:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:42:26.159 01:19:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:42:26.159 01:19:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:42:26.159 01:19:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:42:26.159 01:19:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:42:26.159 01:19:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:42:26.159 01:19:58 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@15 -- # shopt -s extglob 00:42:26.159 01:19:58 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:42:26.159 01:19:58 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:42:26.159 01:19:58 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:42:26.159 01:19:58 nvmf_tcp.nvmf_interrupt -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:26.159 01:19:58 nvmf_tcp.nvmf_interrupt -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:26.159 01:19:58 nvmf_tcp.nvmf_interrupt -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:26.159 01:19:58 nvmf_tcp.nvmf_interrupt -- paths/export.sh@5 -- # 
export PATH 00:42:26.159 01:19:58 nvmf_tcp.nvmf_interrupt -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:26.159 01:19:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@51 -- # : 0 00:42:26.159 01:19:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:42:26.159 01:19:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:42:26.159 01:19:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:42:26.159 01:19:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:42:26.159 01:19:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:42:26.159 01:19:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:42:26.159 01:19:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:42:26.159 01:19:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:42:26.159 01:19:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:42:26.159 01:19:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@55 -- # have_pci_nics=0 00:42:26.159 01:19:58 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/interrupt/common.sh 00:42:26.159 01:19:58 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@12 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:42:26.159 01:19:58 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@14 -- # nvmftestinit 00:42:26.159 01:19:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:42:26.159 01:19:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:42:26.159 01:19:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@476 -- # prepare_net_devs 00:42:26.159 01:19:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@438 -- # local -g is_hw=no 00:42:26.159 01:19:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@440 -- # remove_spdk_ns 00:42:26.159 01:19:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:42:26.159 01:19:58 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:42:26.159 01:19:58 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:42:26.159 01:19:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:42:26.159 01:19:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:42:26.159 01:19:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@309 -- # xtrace_disable 00:42:26.159 01:19:58 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:42:28.685 01:20:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:42:28.685 01:20:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # pci_devs=() 00:42:28.685 01:20:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # local -a pci_devs 00:42:28.685 01:20:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # pci_net_devs=() 00:42:28.685 01:20:00 nvmf_tcp.nvmf_interrupt -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:42:28.685 01:20:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # pci_drivers=() 00:42:28.685 01:20:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # local -A pci_drivers 00:42:28.685 01:20:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # net_devs=() 00:42:28.685 01:20:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # local -ga net_devs 00:42:28.685 01:20:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # e810=() 00:42:28.685 01:20:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # local -ga e810 00:42:28.685 01:20:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # x722=() 00:42:28.685 01:20:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # local -ga x722 00:42:28.685 01:20:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # mlx=() 00:42:28.685 01:20:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # local -ga mlx 00:42:28.685 01:20:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:42:28.685 01:20:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:42:28.685 01:20:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:42:28.685 01:20:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:42:28.685 01:20:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:42:28.685 01:20:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:42:28.685 01:20:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:42:28.685 01:20:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:42:28.685 01:20:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:42:28.685 01:20:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:42:28.685 01:20:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:42:28.685 01:20:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:42:28.685 01:20:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:42:28.685 01:20:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:42:28.685 01:20:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:42:28.685 01:20:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:42:28.685 01:20:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:42:28.685 01:20:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:42:28.685 01:20:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:42:28.685 01:20:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:42:28.685 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:42:28.685 01:20:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:42:28.685 01:20:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:42:28.685 01:20:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:42:28.685 01:20:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:42:28.685 01:20:00 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:42:28.685 01:20:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:42:28.685 01:20:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:42:28.685 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:42:28.685 01:20:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:42:28.685 01:20:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:42:28.685 01:20:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:42:28.685 01:20:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:42:28.685 01:20:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:42:28.685 01:20:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:42:28.685 01:20:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:42:28.685 01:20:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:42:28.685 01:20:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:42:28.685 01:20:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:42:28.685 01:20:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:42:28.685 01:20:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:42:28.685 01:20:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:42:28.685 01:20:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:42:28.685 01:20:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:42:28.685 01:20:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:42:28.685 Found net devices under 0000:0a:00.0: cvl_0_0 00:42:28.685 01:20:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:42:28.685 01:20:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:42:28.685 01:20:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:42:28.685 01:20:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:42:28.685 01:20:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:42:28.685 01:20:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:42:28.685 01:20:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:42:28.685 01:20:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:42:28.685 01:20:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:42:28.685 Found net devices under 0000:0a:00.1: cvl_0_1 00:42:28.685 01:20:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:42:28.685 01:20:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:42:28.685 01:20:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # is_hw=yes 00:42:28.685 01:20:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:42:28.685 01:20:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:42:28.685 01:20:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:42:28.685 01:20:00 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:42:28.685 01:20:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:42:28.685 01:20:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:42:28.685 01:20:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:42:28.685 01:20:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:42:28.685 01:20:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:42:28.685 01:20:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:42:28.685 01:20:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:42:28.685 01:20:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:42:28.685 01:20:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:42:28.685 01:20:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:42:28.685 01:20:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:42:28.685 01:20:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:42:28.685 01:20:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:42:28.685 01:20:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:42:28.685 01:20:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:42:28.685 01:20:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:42:28.685 01:20:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:42:28.685 01:20:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:42:28.685 01:20:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:42:28.685 01:20:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:42:28.685 01:20:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:42:28.685 01:20:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:42:28.685 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:42:28.685 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.287 ms 00:42:28.685 00:42:28.685 --- 10.0.0.2 ping statistics --- 00:42:28.685 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:28.685 rtt min/avg/max/mdev = 0.287/0.287/0.287/0.000 ms 00:42:28.685 01:20:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:42:28.685 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:42:28.685 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.134 ms 00:42:28.685 00:42:28.685 --- 10.0.0.1 ping statistics --- 00:42:28.685 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:28.685 rtt min/avg/max/mdev = 0.134/0.134/0.134/0.000 ms 00:42:28.685 01:20:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:42:28.685 01:20:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@450 -- # return 0 00:42:28.685 01:20:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:42:28.685 01:20:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:42:28.685 01:20:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:42:28.685 01:20:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:42:28.685 01:20:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:42:28.685 01:20:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:42:28.685 01:20:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:42:28.685 01:20:01 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@15 -- # nvmfappstart -m 0x3 00:42:28.685 01:20:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:42:28.685 01:20:01 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@726 -- # xtrace_disable 00:42:28.685 01:20:01 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:42:28.685 01:20:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@509 -- # nvmfpid=3197439 00:42:28.685 01:20:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:42:28.685 01:20:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@510 -- # waitforlisten 3197439 00:42:28.685 01:20:01 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@835 -- # '[' -z 3197439 ']' 00:42:28.685 01:20:01 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:42:28.685 01:20:01 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@840 -- # local max_retries=100 00:42:28.685 01:20:01 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:42:28.685 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:42:28.685 01:20:01 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@844 -- # xtrace_disable 00:42:28.685 01:20:01 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:42:28.686 [2024-12-06 01:20:01.149956] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:42:28.686 [2024-12-06 01:20:01.152936] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 24.03.0 initialization... 00:42:28.686 [2024-12-06 01:20:01.153047] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:42:28.686 [2024-12-06 01:20:01.318548] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:42:28.942 [2024-12-06 01:20:01.458676] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:42:28.942 [2024-12-06 01:20:01.458789] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:42:28.942 [2024-12-06 01:20:01.458821] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:42:28.942 [2024-12-06 01:20:01.458841] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:42:28.942 [2024-12-06 01:20:01.458883] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:42:28.942 [2024-12-06 01:20:01.461432] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:42:28.942 [2024-12-06 01:20:01.461440] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:42:29.200 [2024-12-06 01:20:01.839980] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:42:29.200 [2024-12-06 01:20:01.840676] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:42:29.200 [2024-12-06 01:20:01.840984] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:42:29.458 01:20:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:42:29.458 01:20:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@868 -- # return 0 00:42:29.458 01:20:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:42:29.458 01:20:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@732 -- # xtrace_disable 00:42:29.458 01:20:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:42:29.458 01:20:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:42:29.458 01:20:02 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@16 -- # setup_bdev_aio 00:42:29.458 01:20:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # uname -s 00:42:29.458 01:20:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:42:29.458 01:20:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@78 -- # dd if=/dev/zero of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile bs=2048 count=5000 00:42:29.458 5000+0 records in 00:42:29.458 5000+0 records out 00:42:29.458 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0110904 s, 923 MB/s 00:42:29.458 01:20:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@79 -- # rpc_cmd bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile AIO0 2048 00:42:29.458 01:20:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:29.458 01:20:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:42:29.458 AIO0 00:42:29.458 01:20:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:29.458 01:20:02 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -q 256 00:42:29.458 01:20:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:29.458 01:20:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:42:29.458 [2024-12-06 01:20:02.158587] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:42:29.716 01:20:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:29.716 01:20:02 
nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:42:29.716 01:20:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:29.716 01:20:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:42:29.716 01:20:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:29.716 01:20:02 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 AIO0 00:42:29.716 01:20:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:29.716 01:20:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:42:29.716 01:20:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:29.716 01:20:02 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:42:29.716 01:20:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:29.716 01:20:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:42:29.716 [2024-12-06 01:20:02.186919] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:42:29.716 01:20:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:29.716 01:20:02 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:42:29.716 01:20:02 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 3197439 0 00:42:29.716 01:20:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3197439 0 idle 00:42:29.716 01:20:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3197439 00:42:29.716 01:20:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:42:29.716 01:20:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:42:29.716 01:20:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:42:29.716 01:20:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:42:29.716 01:20:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:42:29.716 01:20:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:42:29.716 01:20:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:42:29.716 01:20:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:42:29.716 01:20:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:42:29.716 01:20:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3197439 -w 256 00:42:29.716 01:20:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:42:29.717 01:20:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3197439 root 20 0 20.1t 197760 102144 S 0.0 0.3 0:00.80 reactor_0' 00:42:29.717 01:20:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3197439 root 20 0 20.1t 197760 102144 S 0.0 0.3 0:00.80 reactor_0 00:42:29.717 01:20:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:42:29.717 01:20:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:42:29.717 01:20:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:42:29.717 01:20:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- 
# cpu_rate=0 00:42:29.717 01:20:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:42:29.717 01:20:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:42:29.717 01:20:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:42:29.717 01:20:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:42:29.717 01:20:02 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:42:29.717 01:20:02 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 3197439 1 00:42:29.717 01:20:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3197439 1 idle 00:42:29.717 01:20:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3197439 00:42:29.717 01:20:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:42:29.717 01:20:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:42:29.717 01:20:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:42:29.717 01:20:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:42:29.717 01:20:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:42:29.717 01:20:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:42:29.717 01:20:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:42:29.717 01:20:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:42:29.717 01:20:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:42:29.717 01:20:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3197439 -w 256 00:42:29.717 01:20:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:42:29.976 01:20:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3197443 root 20 0 20.1t 197760 102144 S 0.0 0.3 0:00.00 reactor_1' 00:42:29.976 01:20:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3197443 root 20 0 20.1t 197760 102144 S 0.0 0.3 0:00.00 reactor_1 00:42:29.976 01:20:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:42:29.976 01:20:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:42:29.976 01:20:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:42:29.976 01:20:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:42:29.976 01:20:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:42:29.976 01:20:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:42:29.976 01:20:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:42:29.976 01:20:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:42:29.976 01:20:02 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@28 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:42:29.976 01:20:02 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@35 -- # perf_pid=3197614 00:42:29.976 01:20:02 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 256 -o 4096 -w randrw -M 30 -t 10 -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:42:29.976 01:20:02 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in 
{0..1} 00:42:29.976 01:20:02 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:42:29.976 01:20:02 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 3197439 0 00:42:29.976 01:20:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 3197439 0 busy 00:42:29.976 01:20:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3197439 00:42:29.976 01:20:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:42:29.976 01:20:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:42:29.976 01:20:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:42:29.976 01:20:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:42:29.976 01:20:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:42:29.976 01:20:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:42:29.976 01:20:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:42:29.976 01:20:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:42:29.976 01:20:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3197439 -w 256 00:42:29.976 01:20:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:42:30.234 01:20:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3197439 root 20 0 20.1t 198912 102912 S 0.0 0.3 0:00.80 reactor_0' 00:42:30.234 01:20:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3197439 root 20 0 20.1t 198912 102912 S 0.0 0.3 0:00.80 reactor_0 00:42:30.234 01:20:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:42:30.234 01:20:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:42:30.234 01:20:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:42:30.234 01:20:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:42:30.234 01:20:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:42:30.234 01:20:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:42:30.234 01:20:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@31 -- # sleep 1 00:42:31.169 01:20:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j-- )) 00:42:31.169 01:20:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:42:31.169 01:20:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3197439 -w 256 00:42:31.169 01:20:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:42:31.169 01:20:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3197439 root 20 0 20.1t 211200 102912 R 99.9 0.3 0:02.87 reactor_0' 00:42:31.169 01:20:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3197439 root 20 0 20.1t 211200 102912 R 99.9 0.3 0:02.87 reactor_0 00:42:31.169 01:20:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:42:31.170 01:20:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:42:31.170 01:20:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9 00:42:31.170 01:20:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=99 00:42:31.170 01:20:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:42:31.170 01:20:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 
-- # (( cpu_rate < busy_threshold )) 00:42:31.170 01:20:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:42:31.170 01:20:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:42:31.170 01:20:03 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:42:31.170 01:20:03 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:42:31.170 01:20:03 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 3197439 1 00:42:31.170 01:20:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 3197439 1 busy 00:42:31.170 01:20:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3197439 00:42:31.170 01:20:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:42:31.170 01:20:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:42:31.170 01:20:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:42:31.170 01:20:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:42:31.170 01:20:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:42:31.170 01:20:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:42:31.170 01:20:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:42:31.170 01:20:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:42:31.170 01:20:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3197439 -w 256 00:42:31.170 01:20:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:42:31.428 01:20:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3197443 root 20 0 20.1t 211200 102912 R 99.9 0.3 0:01.21 reactor_1' 00:42:31.429 01:20:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3197443 root 20 0 20.1t 211200 102912 R 99.9 0.3 0:01.21 reactor_1 00:42:31.429 01:20:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:42:31.429 01:20:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:42:31.429 01:20:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9 00:42:31.429 01:20:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=99 00:42:31.429 01:20:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:42:31.429 01:20:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:42:31.429 01:20:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:42:31.429 01:20:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:42:31.429 01:20:04 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@42 -- # wait 3197614 00:42:41.402 Initializing NVMe Controllers 00:42:41.402 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:42:41.402 Controller IO queue size 256, less than required. 00:42:41.402 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:42:41.402 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:42:41.402 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:42:41.402 Initialization complete. Launching workers. 
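While the perf job runs, both reactors are expected to turn busy, which is what the BUSY_THRESHOLD checks in the surrounding trace verify; the per-queue IOPS and latency summary follows immediately below. The load comes from the spdk_nvme_perf invocation launched a few lines earlier, restated here with its options spelled out (command and arguments exactly as traced; the comments are an interpretation of the usual perf flags):

    # 4 KiB random I/O, 30% reads / 70% writes (-M is the read percentage),
    # queue depth 256, 10 seconds, on cores 2 and 3 (mask 0xC), against the
    # TCP subsystem advertised at 10.0.0.2:4420.
    spdk_nvme_perf -q 256 -o 4096 -w randrw -M 30 -t 10 -c 0xC \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'

The "Controller IO queue size 256, less than required" notice above records that the requested queue depth does not fit in the controller's I/O queue, so surplus requests sit in the host driver, exactly as the message itself advises.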
00:42:41.402 ======================================================== 00:42:41.402 Latency(us) 00:42:41.402 Device Information : IOPS MiB/s Average min max 00:42:41.402 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 10923.40 42.67 23453.73 6422.42 28385.96 00:42:41.402 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 10654.10 41.62 24048.38 6864.75 29081.06 00:42:41.402 ======================================================== 00:42:41.402 Total : 21577.50 84.29 23747.35 6422.42 29081.06 00:42:41.402 00:42:41.402 01:20:12 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:42:41.402 01:20:12 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 3197439 0 00:42:41.402 01:20:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3197439 0 idle 00:42:41.402 01:20:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3197439 00:42:41.402 01:20:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:42:41.402 01:20:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:42:41.402 01:20:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:42:41.402 01:20:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:42:41.402 01:20:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:42:41.402 01:20:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:42:41.402 01:20:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:42:41.402 01:20:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:42:41.402 01:20:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:42:41.402 01:20:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3197439 -w 256 00:42:41.402 01:20:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:42:41.402 01:20:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3197439 root 20 0 20.1t 211200 102912 S 0.0 0.3 0:20.75 reactor_0' 00:42:41.402 01:20:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3197439 root 20 0 20.1t 211200 102912 S 0.0 0.3 0:20.75 reactor_0 00:42:41.402 01:20:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:42:41.402 01:20:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:42:41.402 01:20:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:42:41.402 01:20:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:42:41.402 01:20:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:42:41.402 01:20:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:42:41.402 01:20:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:42:41.403 01:20:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:42:41.403 01:20:13 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:42:41.403 01:20:13 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 3197439 1 00:42:41.403 01:20:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3197439 1 idle 00:42:41.403 01:20:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3197439 00:42:41.403 01:20:13 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@11 -- # local idx=1 00:42:41.403 01:20:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:42:41.403 01:20:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:42:41.403 01:20:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:42:41.403 01:20:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:42:41.403 01:20:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:42:41.403 01:20:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:42:41.403 01:20:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:42:41.403 01:20:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:42:41.403 01:20:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3197439 -w 256 00:42:41.403 01:20:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:42:41.403 01:20:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3197443 root 20 0 20.1t 211200 102912 S 0.0 0.3 0:09.98 reactor_1' 00:42:41.403 01:20:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3197443 root 20 0 20.1t 211200 102912 S 0.0 0.3 0:09.98 reactor_1 00:42:41.403 01:20:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:42:41.403 01:20:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:42:41.403 01:20:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:42:41.403 01:20:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:42:41.403 01:20:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:42:41.403 01:20:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:42:41.403 01:20:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:42:41.403 01:20:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:42:41.403 01:20:13 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@50 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:42:41.403 01:20:13 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@51 -- # waitforserial SPDKISFASTANDAWESOME 00:42:41.403 01:20:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1202 -- # local i=0 00:42:41.403 01:20:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:42:41.403 01:20:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:42:41.403 01:20:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1209 -- # sleep 2 00:42:43.310 01:20:15 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:42:43.310 01:20:15 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:42:43.310 01:20:15 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:42:43.310 01:20:15 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:42:43.310 01:20:15 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:42:43.310 01:20:15 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # return 0 00:42:43.310 01:20:15 nvmf_tcp.nvmf_interrupt 
-- target/interrupt.sh@52 -- # for i in {0..1} 00:42:43.310 01:20:15 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 3197439 0 00:42:43.310 01:20:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3197439 0 idle 00:42:43.310 01:20:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3197439 00:42:43.310 01:20:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:42:43.310 01:20:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:42:43.310 01:20:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:42:43.310 01:20:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:42:43.310 01:20:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:42:43.310 01:20:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:42:43.310 01:20:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:42:43.310 01:20:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:42:43.310 01:20:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:42:43.310 01:20:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3197439 -w 256 00:42:43.310 01:20:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:42:43.310 01:20:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3197439 root 20 0 20.1t 239232 112896 S 0.0 0.4 0:20.94 reactor_0' 00:42:43.310 01:20:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3197439 root 20 0 20.1t 239232 112896 S 0.0 0.4 0:20.94 reactor_0 00:42:43.310 01:20:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:42:43.310 01:20:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:42:43.310 01:20:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:42:43.310 01:20:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:42:43.310 01:20:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:42:43.310 01:20:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:42:43.310 01:20:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:42:43.310 01:20:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:42:43.310 01:20:15 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:42:43.310 01:20:15 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 3197439 1 00:42:43.310 01:20:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3197439 1 idle 00:42:43.310 01:20:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3197439 00:42:43.310 01:20:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:42:43.310 01:20:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:42:43.310 01:20:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:42:43.310 01:20:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:42:43.310 01:20:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:42:43.310 01:20:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:42:43.310 01:20:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 
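Each reactor_is_idle / reactor_is_busy probe in this trace takes a single top snapshot of the nvmf_tgt process, picks out the reactor thread of interest, and compares its %CPU column against a threshold (30% for the idle checks; the busy checks in this run also lower the busy threshold to 30%). A condensed sketch of that check, simplified from the interrupt/common.sh commands traced above (the real helper's retry loop and busy/idle state handling are omitted):

    # Minimal sketch: succeed if reactor <idx> of <pid> is using at most 30% CPU.
    reactor_is_idle() {
        local pid=$1 idx=$2 idle_threshold=30
        local row cpu
        row=$(top -bHn 1 -p "$pid" -w 256 | grep "reactor_${idx}")
        cpu=$(echo "$row" | sed -e 's/^\s*//g' | awk '{print $9}')   # %CPU column
        [[ -n $cpu ]] && (( ${cpu%.*} <= idle_threshold ))
    }
    reactor_is_idle 3197439 0 && echo "reactor_0 is idle"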
00:42:43.310 01:20:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:42:43.310 01:20:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:42:43.310 01:20:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3197439 -w 256 00:42:43.310 01:20:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:42:43.310 01:20:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3197443 root 20 0 20.1t 239232 112896 S 0.0 0.4 0:10.06 reactor_1' 00:42:43.310 01:20:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3197443 root 20 0 20.1t 239232 112896 S 0.0 0.4 0:10.06 reactor_1 00:42:43.310 01:20:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:42:43.310 01:20:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:42:43.310 01:20:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:42:43.310 01:20:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:42:43.310 01:20:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:42:43.310 01:20:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:42:43.310 01:20:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:42:43.310 01:20:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:42:43.310 01:20:15 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@55 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:42:43.569 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:42:43.569 01:20:16 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@56 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:42:43.569 01:20:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1223 -- # local i=0 00:42:43.569 01:20:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:42:43.569 01:20:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:42:43.569 01:20:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:42:43.569 01:20:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:42:43.569 01:20:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1235 -- # return 0 00:42:43.569 01:20:16 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:42:43.569 01:20:16 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@59 -- # nvmftestfini 00:42:43.569 01:20:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@516 -- # nvmfcleanup 00:42:43.569 01:20:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@121 -- # sync 00:42:43.569 01:20:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:42:43.569 01:20:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@124 -- # set +e 00:42:43.569 01:20:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@125 -- # for i in {1..20} 00:42:43.569 01:20:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:42:43.569 rmmod nvme_tcp 00:42:43.827 rmmod nvme_fabrics 00:42:43.827 rmmod nvme_keyring 00:42:43.827 01:20:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:42:43.827 01:20:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@128 -- # set -e 00:42:43.827 01:20:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@129 -- # return 0 00:42:43.827 01:20:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@517 -- # '[' -n 
3197439 ']' 00:42:43.827 01:20:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@518 -- # killprocess 3197439 00:42:43.827 01:20:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@954 -- # '[' -z 3197439 ']' 00:42:43.827 01:20:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@958 -- # kill -0 3197439 00:42:43.827 01:20:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # uname 00:42:43.827 01:20:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:42:43.827 01:20:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3197439 00:42:43.827 01:20:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:42:43.827 01:20:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:42:43.827 01:20:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3197439' 00:42:43.827 killing process with pid 3197439 00:42:43.827 01:20:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@973 -- # kill 3197439 00:42:43.827 01:20:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@978 -- # wait 3197439 00:42:45.204 01:20:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:42:45.204 01:20:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:42:45.204 01:20:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:42:45.204 01:20:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@297 -- # iptr 00:42:45.204 01:20:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-save 00:42:45.204 01:20:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:42:45.204 01:20:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-restore 00:42:45.204 01:20:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:42:45.204 01:20:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@302 -- # remove_spdk_ns 00:42:45.204 01:20:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:42:45.204 01:20:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:42:45.204 01:20:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:42:47.117 01:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:42:47.117 00:42:47.117 real 0m20.976s 00:42:47.117 user 0m39.165s 00:42:47.117 sys 0m6.713s 00:42:47.117 01:20:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1130 -- # xtrace_disable 00:42:47.117 01:20:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:42:47.117 ************************************ 00:42:47.117 END TEST nvmf_interrupt 00:42:47.117 ************************************ 00:42:47.117 00:42:47.117 real 35m41.035s 00:42:47.117 user 93m34.813s 00:42:47.117 sys 7m55.880s 00:42:47.117 01:20:19 nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:42:47.117 01:20:19 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:42:47.117 ************************************ 00:42:47.117 END TEST nvmf_tcp 00:42:47.117 ************************************ 00:42:47.117 01:20:19 -- spdk/autotest.sh@285 -- # [[ 0 -eq 0 ]] 00:42:47.117 01:20:19 -- spdk/autotest.sh@286 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:42:47.117 01:20:19 -- 
common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:42:47.117 01:20:19 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:42:47.117 01:20:19 -- common/autotest_common.sh@10 -- # set +x 00:42:47.117 ************************************ 00:42:47.117 START TEST spdkcli_nvmf_tcp 00:42:47.117 ************************************ 00:42:47.117 01:20:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:42:47.117 * Looking for test storage... 00:42:47.117 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:42:47.117 01:20:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:42:47.117 01:20:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:42:47.117 01:20:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:42:47.117 01:20:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:42:47.117 01:20:19 spdkcli_nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:42:47.117 01:20:19 spdkcli_nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:42:47.117 01:20:19 spdkcli_nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:42:47.117 01:20:19 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:42:47.117 01:20:19 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:42:47.117 01:20:19 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:42:47.117 01:20:19 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:42:47.117 01:20:19 spdkcli_nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:42:47.117 01:20:19 spdkcli_nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:42:47.117 01:20:19 spdkcli_nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:42:47.117 01:20:19 spdkcli_nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:42:47.117 01:20:19 spdkcli_nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:42:47.117 01:20:19 spdkcli_nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:42:47.117 01:20:19 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:42:47.117 01:20:19 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:42:47.117 01:20:19 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:42:47.377 01:20:19 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:42:47.378 01:20:19 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:42:47.378 01:20:19 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:42:47.378 01:20:19 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:42:47.378 01:20:19 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:42:47.378 01:20:19 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:42:47.378 01:20:19 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:42:47.378 01:20:19 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:42:47.378 01:20:19 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:42:47.378 01:20:19 spdkcli_nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:42:47.378 01:20:19 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:42:47.378 01:20:19 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:42:47.378 01:20:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:42:47.378 01:20:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:42:47.378 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:47.378 --rc genhtml_branch_coverage=1 00:42:47.378 --rc genhtml_function_coverage=1 00:42:47.378 --rc genhtml_legend=1 00:42:47.378 --rc geninfo_all_blocks=1 00:42:47.378 --rc geninfo_unexecuted_blocks=1 00:42:47.378 00:42:47.378 ' 00:42:47.378 01:20:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:42:47.378 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:47.378 --rc genhtml_branch_coverage=1 00:42:47.378 --rc genhtml_function_coverage=1 00:42:47.378 --rc genhtml_legend=1 00:42:47.378 --rc geninfo_all_blocks=1 00:42:47.378 --rc geninfo_unexecuted_blocks=1 00:42:47.378 00:42:47.378 ' 00:42:47.378 01:20:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:42:47.378 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:47.378 --rc genhtml_branch_coverage=1 00:42:47.378 --rc genhtml_function_coverage=1 00:42:47.378 --rc genhtml_legend=1 00:42:47.378 --rc geninfo_all_blocks=1 00:42:47.378 --rc geninfo_unexecuted_blocks=1 00:42:47.378 00:42:47.378 ' 00:42:47.378 01:20:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:42:47.378 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:47.378 --rc genhtml_branch_coverage=1 00:42:47.378 --rc genhtml_function_coverage=1 00:42:47.378 --rc genhtml_legend=1 00:42:47.378 --rc geninfo_all_blocks=1 00:42:47.378 --rc geninfo_unexecuted_blocks=1 00:42:47.378 00:42:47.378 ' 00:42:47.378 01:20:19 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:42:47.378 01:20:19 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:42:47.378 01:20:19 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:42:47.378 01:20:19 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:42:47.378 01:20:19 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:42:47.378 
01:20:19 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:42:47.378 01:20:19 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:42:47.378 01:20:19 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:42:47.378 01:20:19 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:42:47.378 01:20:19 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:42:47.378 01:20:19 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:42:47.378 01:20:19 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:42:47.378 01:20:19 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:42:47.378 01:20:19 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:42:47.378 01:20:19 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:42:47.378 01:20:19 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:42:47.378 01:20:19 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:42:47.378 01:20:19 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:42:47.378 01:20:19 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:42:47.378 01:20:19 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:42:47.378 01:20:19 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:42:47.378 01:20:19 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:42:47.378 01:20:19 spdkcli_nvmf_tcp -- scripts/common.sh@15 -- # shopt -s extglob 00:42:47.378 01:20:19 spdkcli_nvmf_tcp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:42:47.378 01:20:19 spdkcli_nvmf_tcp -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:42:47.378 01:20:19 spdkcli_nvmf_tcp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:42:47.378 01:20:19 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:47.378 01:20:19 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:47.378 01:20:19 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:47.378 01:20:19 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:42:47.378 01:20:19 
spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:47.378 01:20:19 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # : 0 00:42:47.378 01:20:19 spdkcli_nvmf_tcp -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:42:47.378 01:20:19 spdkcli_nvmf_tcp -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:42:47.378 01:20:19 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:42:47.378 01:20:19 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:42:47.378 01:20:19 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:42:47.378 01:20:19 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:42:47.378 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:42:47.378 01:20:19 spdkcli_nvmf_tcp -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:42:47.378 01:20:19 spdkcli_nvmf_tcp -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:42:47.378 01:20:19 spdkcli_nvmf_tcp -- nvmf/common.sh@55 -- # have_pci_nics=0 00:42:47.378 01:20:19 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:42:47.378 01:20:19 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:42:47.379 01:20:19 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:42:47.379 01:20:19 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:42:47.379 01:20:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:42:47.379 01:20:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:42:47.379 01:20:19 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:42:47.379 01:20:19 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=3199849 00:42:47.379 01:20:19 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:42:47.379 01:20:19 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 3199849 00:42:47.379 01:20:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@835 -- # '[' -z 3199849 ']' 00:42:47.379 01:20:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:42:47.379 01:20:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:42:47.379 01:20:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:42:47.379 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:42:47.379 01:20:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:42:47.379 01:20:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:42:47.379 [2024-12-06 01:20:19.937910] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 24.03.0 initialization... 
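(Note: the run_nvmf_tgt / waitforlisten sequence traced above amounts to starting nvmf_tgt with a core mask and polling its JSON-RPC socket until it answers. A rough, hedged sketch of that flow follows; the binary path, socket path, and retry count are illustrative placeholders, not values taken from the scripts.)

# Sketch only: simplified stand-in for the run_nvmf_tgt + waitforlisten pattern in the trace.
SPDK_BIN=./build/bin/nvmf_tgt
RPC_SOCK=/var/tmp/spdk.sock
"$SPDK_BIN" -m 0x3 -p 0 &
nvmf_tgt_pid=$!
# Poll until the target's JSON-RPC socket accepts a basic call, or give up.
for _ in $(seq 1 100); do
  if ./scripts/rpc.py -s "$RPC_SOCK" rpc_get_methods >/dev/null 2>&1; then
    break
  fi
  sleep 0.1
done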
00:42:47.379 [2024-12-06 01:20:19.938080] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3199849 ] 00:42:47.639 [2024-12-06 01:20:20.087538] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:42:47.639 [2024-12-06 01:20:20.227928] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:42:47.639 [2024-12-06 01:20:20.227930] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:42:48.211 01:20:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:42:48.211 01:20:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@868 -- # return 0 00:42:48.211 01:20:20 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:42:48.211 01:20:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:42:48.211 01:20:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:42:48.211 01:20:20 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:42:48.211 01:20:20 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:42:48.211 01:20:20 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:42:48.211 01:20:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:42:48.211 01:20:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:42:48.211 01:20:20 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:42:48.211 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:42:48.211 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:42:48.211 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:42:48.211 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:42:48.211 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:42:48.211 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:42:48.211 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:42:48.211 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:42:48.211 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:42:48.211 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:42:48.211 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:42:48.211 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:42:48.211 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:42:48.211 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:42:48.211 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:42:48.211 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' 
'\''127.0.0.1:4260'\'' True 00:42:48.211 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:42:48.211 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:42:48.211 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:42:48.211 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:42:48.211 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:42:48.211 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:42:48.211 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:42:48.211 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:42:48.211 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:42:48.211 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:42:48.211 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:42:48.211 ' 00:42:51.511 [2024-12-06 01:20:23.751614] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:42:52.448 [2024-12-06 01:20:25.037543] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:42:54.978 [2024-12-06 01:20:27.389291] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:42:56.883 [2024-12-06 01:20:29.428045] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:42:58.788 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:42:58.788 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:42:58.788 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:42:58.788 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:42:58.788 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:42:58.788 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:42:58.788 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:42:58.788 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:42:58.788 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:42:58.788 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:42:58.788 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:42:58.788 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:42:58.788 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:42:58.788 Executing command: 
['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:42:58.788 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:42:58.788 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:42:58.788 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:42:58.788 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:42:58.788 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:42:58.788 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:42:58.788 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:42:58.788 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:42:58.788 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:42:58.788 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:42:58.788 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:42:58.788 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:42:58.788 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:42:58.788 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:42:58.788 01:20:31 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:42:58.788 01:20:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:42:58.788 01:20:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:42:58.788 01:20:31 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:42:58.788 01:20:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:42:58.788 01:20:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:42:58.788 01:20:31 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:42:58.788 01:20:31 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:42:59.047 01:20:31 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:42:59.047 01:20:31 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:42:59.047 01:20:31 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:42:59.047 01:20:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:42:59.047 01:20:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:42:59.047 
01:20:31 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:42:59.047 01:20:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:42:59.047 01:20:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:42:59.047 01:20:31 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:42:59.047 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:42:59.047 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:42:59.047 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:42:59.047 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:42:59.047 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:42:59.047 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:42:59.047 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:42:59.047 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:42:59.047 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:42:59.047 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:42:59.047 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:42:59.047 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:42:59.047 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:42:59.047 ' 00:43:05.719 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:43:05.719 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:43:05.719 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:43:05.719 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:43:05.719 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:43:05.719 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:43:05.719 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:43:05.719 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:43:05.719 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:43:05.719 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:43:05.719 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:43:05.719 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:43:05.719 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:43:05.719 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:43:05.719 01:20:37 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:43:05.719 01:20:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:43:05.719 01:20:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:43:05.719 
01:20:37 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 3199849 00:43:05.719 01:20:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 3199849 ']' 00:43:05.719 01:20:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 3199849 00:43:05.719 01:20:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # uname 00:43:05.719 01:20:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:43:05.719 01:20:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3199849 00:43:05.719 01:20:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:43:05.719 01:20:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:43:05.719 01:20:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3199849' 00:43:05.719 killing process with pid 3199849 00:43:05.719 01:20:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@973 -- # kill 3199849 00:43:05.719 01:20:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@978 -- # wait 3199849 00:43:05.977 01:20:38 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:43:05.977 01:20:38 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:43:05.977 01:20:38 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 3199849 ']' 00:43:05.977 01:20:38 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 3199849 00:43:05.977 01:20:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 3199849 ']' 00:43:05.977 01:20:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 3199849 00:43:05.977 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (3199849) - No such process 00:43:05.977 01:20:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@981 -- # echo 'Process with pid 3199849 is not found' 00:43:05.977 Process with pid 3199849 is not found 00:43:05.977 01:20:38 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:43:05.977 01:20:38 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:43:05.977 01:20:38 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:43:05.977 00:43:05.977 real 0m18.990s 00:43:05.977 user 0m39.737s 00:43:05.977 sys 0m1.047s 00:43:05.977 01:20:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:43:05.977 01:20:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:43:05.977 ************************************ 00:43:05.977 END TEST spdkcli_nvmf_tcp 00:43:05.977 ************************************ 00:43:06.236 01:20:38 -- spdk/autotest.sh@287 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:43:06.236 01:20:38 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:43:06.236 01:20:38 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:43:06.236 01:20:38 -- common/autotest_common.sh@10 -- # set +x 00:43:06.236 ************************************ 00:43:06.236 START TEST nvmf_identify_passthru 00:43:06.236 ************************************ 00:43:06.236 01:20:38 nvmf_identify_passthru -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:43:06.236 * Looking for test 
storage... 00:43:06.236 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:43:06.236 01:20:38 nvmf_identify_passthru -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:43:06.236 01:20:38 nvmf_identify_passthru -- common/autotest_common.sh@1711 -- # lcov --version 00:43:06.236 01:20:38 nvmf_identify_passthru -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:43:06.236 01:20:38 nvmf_identify_passthru -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:43:06.236 01:20:38 nvmf_identify_passthru -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:43:06.236 01:20:38 nvmf_identify_passthru -- scripts/common.sh@333 -- # local ver1 ver1_l 00:43:06.236 01:20:38 nvmf_identify_passthru -- scripts/common.sh@334 -- # local ver2 ver2_l 00:43:06.236 01:20:38 nvmf_identify_passthru -- scripts/common.sh@336 -- # IFS=.-: 00:43:06.236 01:20:38 nvmf_identify_passthru -- scripts/common.sh@336 -- # read -ra ver1 00:43:06.236 01:20:38 nvmf_identify_passthru -- scripts/common.sh@337 -- # IFS=.-: 00:43:06.236 01:20:38 nvmf_identify_passthru -- scripts/common.sh@337 -- # read -ra ver2 00:43:06.236 01:20:38 nvmf_identify_passthru -- scripts/common.sh@338 -- # local 'op=<' 00:43:06.236 01:20:38 nvmf_identify_passthru -- scripts/common.sh@340 -- # ver1_l=2 00:43:06.236 01:20:38 nvmf_identify_passthru -- scripts/common.sh@341 -- # ver2_l=1 00:43:06.236 01:20:38 nvmf_identify_passthru -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:43:06.236 01:20:38 nvmf_identify_passthru -- scripts/common.sh@344 -- # case "$op" in 00:43:06.236 01:20:38 nvmf_identify_passthru -- scripts/common.sh@345 -- # : 1 00:43:06.236 01:20:38 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v = 0 )) 00:43:06.237 01:20:38 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:43:06.237 01:20:38 nvmf_identify_passthru -- scripts/common.sh@365 -- # decimal 1 00:43:06.237 01:20:38 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=1 00:43:06.237 01:20:38 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:43:06.237 01:20:38 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 1 00:43:06.237 01:20:38 nvmf_identify_passthru -- scripts/common.sh@365 -- # ver1[v]=1 00:43:06.237 01:20:38 nvmf_identify_passthru -- scripts/common.sh@366 -- # decimal 2 00:43:06.237 01:20:38 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=2 00:43:06.237 01:20:38 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:43:06.237 01:20:38 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 2 00:43:06.237 01:20:38 nvmf_identify_passthru -- scripts/common.sh@366 -- # ver2[v]=2 00:43:06.237 01:20:38 nvmf_identify_passthru -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:43:06.237 01:20:38 nvmf_identify_passthru -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:43:06.237 01:20:38 nvmf_identify_passthru -- scripts/common.sh@368 -- # return 0 00:43:06.237 01:20:38 nvmf_identify_passthru -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:43:06.237 01:20:38 nvmf_identify_passthru -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:43:06.237 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:06.237 --rc genhtml_branch_coverage=1 00:43:06.237 --rc genhtml_function_coverage=1 00:43:06.237 --rc genhtml_legend=1 00:43:06.237 --rc geninfo_all_blocks=1 00:43:06.237 --rc geninfo_unexecuted_blocks=1 00:43:06.237 00:43:06.237 ' 00:43:06.237 01:20:38 nvmf_identify_passthru -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:43:06.237 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:06.237 --rc genhtml_branch_coverage=1 00:43:06.237 --rc genhtml_function_coverage=1 00:43:06.237 --rc genhtml_legend=1 00:43:06.237 --rc geninfo_all_blocks=1 00:43:06.237 --rc geninfo_unexecuted_blocks=1 00:43:06.237 00:43:06.237 ' 00:43:06.237 01:20:38 nvmf_identify_passthru -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:43:06.237 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:06.237 --rc genhtml_branch_coverage=1 00:43:06.237 --rc genhtml_function_coverage=1 00:43:06.237 --rc genhtml_legend=1 00:43:06.237 --rc geninfo_all_blocks=1 00:43:06.237 --rc geninfo_unexecuted_blocks=1 00:43:06.237 00:43:06.237 ' 00:43:06.237 01:20:38 nvmf_identify_passthru -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:43:06.237 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:06.237 --rc genhtml_branch_coverage=1 00:43:06.237 --rc genhtml_function_coverage=1 00:43:06.237 --rc genhtml_legend=1 00:43:06.237 --rc geninfo_all_blocks=1 00:43:06.237 --rc geninfo_unexecuted_blocks=1 00:43:06.237 00:43:06.237 ' 00:43:06.237 01:20:38 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:43:06.237 01:20:38 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:43:06.237 01:20:38 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:43:06.237 01:20:38 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:43:06.237 01:20:38 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:43:06.237 01:20:38 nvmf_identify_passthru -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:43:06.237 01:20:38 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:43:06.237 01:20:38 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:43:06.237 01:20:38 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:43:06.237 01:20:38 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:43:06.237 01:20:38 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:43:06.237 01:20:38 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:43:06.237 01:20:38 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:43:06.237 01:20:38 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:43:06.237 01:20:38 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:43:06.237 01:20:38 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:43:06.237 01:20:38 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:43:06.237 01:20:38 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:43:06.237 01:20:38 nvmf_identify_passthru -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:43:06.237 01:20:38 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:43:06.237 01:20:38 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:43:06.237 01:20:38 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:43:06.237 01:20:38 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:43:06.237 01:20:38 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:06.237 01:20:38 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:06.237 01:20:38 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:06.237 01:20:38 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:43:06.237 01:20:38 nvmf_identify_passthru -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:06.237 01:20:38 nvmf_identify_passthru -- nvmf/common.sh@51 -- # : 0 00:43:06.237 01:20:38 nvmf_identify_passthru -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:43:06.237 01:20:38 nvmf_identify_passthru -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:43:06.237 01:20:38 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:43:06.237 01:20:38 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:43:06.237 01:20:38 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:43:06.237 01:20:38 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:43:06.237 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:43:06.237 01:20:38 nvmf_identify_passthru -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:43:06.237 01:20:38 nvmf_identify_passthru -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:43:06.237 01:20:38 nvmf_identify_passthru -- nvmf/common.sh@55 -- # have_pci_nics=0 00:43:06.237 01:20:38 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:43:06.237 01:20:38 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:43:06.237 01:20:38 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:43:06.237 01:20:38 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:43:06.237 01:20:38 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:43:06.237 01:20:38 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:06.237 01:20:38 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:06.237 01:20:38 nvmf_identify_passthru -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:06.237 01:20:38 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:43:06.237 01:20:38 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:06.237 01:20:38 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:43:06.237 01:20:38 nvmf_identify_passthru -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:43:06.237 01:20:38 nvmf_identify_passthru -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:43:06.237 01:20:38 nvmf_identify_passthru -- nvmf/common.sh@476 -- # prepare_net_devs 00:43:06.237 01:20:38 nvmf_identify_passthru -- nvmf/common.sh@438 -- # local -g is_hw=no 00:43:06.237 01:20:38 nvmf_identify_passthru -- nvmf/common.sh@440 -- # remove_spdk_ns 00:43:06.237 01:20:38 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:43:06.237 01:20:38 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:43:06.237 01:20:38 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:43:06.237 01:20:38 nvmf_identify_passthru -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:43:06.237 01:20:38 nvmf_identify_passthru -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:43:06.237 01:20:38 nvmf_identify_passthru -- nvmf/common.sh@309 -- # xtrace_disable 00:43:06.237 01:20:38 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:43:08.778 01:20:40 nvmf_identify_passthru -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:43:08.778 01:20:40 nvmf_identify_passthru -- nvmf/common.sh@315 -- # pci_devs=() 00:43:08.778 01:20:40 nvmf_identify_passthru -- nvmf/common.sh@315 -- # local -a pci_devs 00:43:08.778 01:20:40 nvmf_identify_passthru -- nvmf/common.sh@316 -- # pci_net_devs=() 00:43:08.778 01:20:40 nvmf_identify_passthru -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:43:08.778 01:20:40 nvmf_identify_passthru -- nvmf/common.sh@317 -- # pci_drivers=() 00:43:08.778 01:20:40 nvmf_identify_passthru -- nvmf/common.sh@317 -- # local -A pci_drivers 00:43:08.778 01:20:40 nvmf_identify_passthru -- nvmf/common.sh@319 -- # net_devs=() 00:43:08.778 01:20:40 nvmf_identify_passthru -- nvmf/common.sh@319 -- # local -ga net_devs 00:43:08.778 01:20:40 nvmf_identify_passthru -- nvmf/common.sh@320 -- # e810=() 00:43:08.778 01:20:40 nvmf_identify_passthru -- nvmf/common.sh@320 -- # local -ga e810 00:43:08.778 01:20:40 nvmf_identify_passthru -- nvmf/common.sh@321 -- # x722=() 00:43:08.778 01:20:40 nvmf_identify_passthru -- nvmf/common.sh@321 -- # local -ga x722 00:43:08.778 01:20:40 
nvmf_identify_passthru -- nvmf/common.sh@322 -- # mlx=() 00:43:08.778 01:20:40 nvmf_identify_passthru -- nvmf/common.sh@322 -- # local -ga mlx 00:43:08.778 01:20:40 nvmf_identify_passthru -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:43:08.778 01:20:40 nvmf_identify_passthru -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:43:08.778 01:20:40 nvmf_identify_passthru -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:43:08.778 01:20:40 nvmf_identify_passthru -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:43:08.778 01:20:40 nvmf_identify_passthru -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:43:08.778 01:20:40 nvmf_identify_passthru -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:43:08.778 01:20:40 nvmf_identify_passthru -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:43:08.778 01:20:40 nvmf_identify_passthru -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:43:08.778 01:20:40 nvmf_identify_passthru -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:43:08.778 01:20:40 nvmf_identify_passthru -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:43:08.778 01:20:40 nvmf_identify_passthru -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:43:08.778 01:20:40 nvmf_identify_passthru -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:43:08.778 01:20:40 nvmf_identify_passthru -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:43:08.778 01:20:40 nvmf_identify_passthru -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:43:08.778 01:20:40 nvmf_identify_passthru -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:43:08.778 01:20:40 nvmf_identify_passthru -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:43:08.778 01:20:40 nvmf_identify_passthru -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:43:08.778 01:20:40 nvmf_identify_passthru -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:43:08.778 01:20:40 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:43:08.778 01:20:40 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:43:08.778 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:43:08.778 01:20:40 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:43:08.778 01:20:40 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:43:08.779 01:20:40 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:43:08.779 01:20:40 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:43:08.779 01:20:40 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:43:08.779 01:20:40 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:43:08.779 01:20:40 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:43:08.779 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:43:08.779 01:20:40 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:43:08.779 01:20:40 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:43:08.779 01:20:40 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:43:08.779 01:20:40 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:43:08.779 01:20:40 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ 
tcp == rdma ]] 00:43:08.779 01:20:40 nvmf_identify_passthru -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:43:08.779 01:20:40 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:43:08.779 01:20:40 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:43:08.779 01:20:40 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:43:08.779 01:20:40 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:43:08.779 01:20:40 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:43:08.779 01:20:40 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:43:08.779 01:20:40 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:43:08.779 01:20:40 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:43:08.779 01:20:40 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:43:08.779 01:20:40 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:43:08.779 Found net devices under 0000:0a:00.0: cvl_0_0 00:43:08.779 01:20:40 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:43:08.779 01:20:40 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:43:08.779 01:20:40 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:43:08.779 01:20:40 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:43:08.779 01:20:40 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:43:08.779 01:20:40 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:43:08.779 01:20:40 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:43:08.779 01:20:40 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:43:08.779 01:20:40 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:43:08.779 Found net devices under 0000:0a:00.1: cvl_0_1 00:43:08.779 01:20:40 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:43:08.779 01:20:40 nvmf_identify_passthru -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:43:08.779 01:20:40 nvmf_identify_passthru -- nvmf/common.sh@442 -- # is_hw=yes 00:43:08.779 01:20:40 nvmf_identify_passthru -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:43:08.779 01:20:40 nvmf_identify_passthru -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:43:08.779 01:20:40 nvmf_identify_passthru -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:43:08.779 01:20:40 nvmf_identify_passthru -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:43:08.779 01:20:40 nvmf_identify_passthru -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:43:08.779 01:20:40 nvmf_identify_passthru -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:43:08.779 01:20:40 nvmf_identify_passthru -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:43:08.779 01:20:40 nvmf_identify_passthru -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:43:08.779 01:20:40 nvmf_identify_passthru -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:43:08.779 01:20:40 nvmf_identify_passthru -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:43:08.779 01:20:40 nvmf_identify_passthru -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:43:08.779 01:20:40 nvmf_identify_passthru -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:43:08.779 01:20:40 nvmf_identify_passthru -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:43:08.779 01:20:40 nvmf_identify_passthru -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:43:08.779 01:20:40 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:43:08.779 01:20:40 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:43:08.779 01:20:40 nvmf_identify_passthru -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:43:08.779 01:20:40 nvmf_identify_passthru -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:43:08.779 01:20:40 nvmf_identify_passthru -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:43:08.779 01:20:40 nvmf_identify_passthru -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:43:08.779 01:20:40 nvmf_identify_passthru -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:43:08.779 01:20:40 nvmf_identify_passthru -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:43:08.779 01:20:40 nvmf_identify_passthru -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:43:08.779 01:20:40 nvmf_identify_passthru -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:43:08.779 01:20:40 nvmf_identify_passthru -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:43:08.779 01:20:40 nvmf_identify_passthru -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:43:08.779 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:43:08.779 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.278 ms 00:43:08.779 00:43:08.779 --- 10.0.0.2 ping statistics --- 00:43:08.779 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:43:08.779 rtt min/avg/max/mdev = 0.278/0.278/0.278/0.000 ms 00:43:08.779 01:20:40 nvmf_identify_passthru -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:43:08.779 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
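The nvmf_tcp_init trace above splits the two E810 ports into a small point-to-point topology: the target-side port is moved into a private network namespace while the initiator port stays in the root namespace. A minimal sketch of the equivalent manual steps (the cvl_0_0/cvl_0_1 interface names and the 10.0.0.0/24 addresses are specific to this host):

  ip netns add cvl_0_0_ns_spdk                                        # target namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # move target port into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator IP (root namespace)
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target IP
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # open the NVMe/TCP port
  ping -c 1 10.0.0.2                                                  # reachability check, as in the trace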
00:43:08.779 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.074 ms 00:43:08.779 00:43:08.779 --- 10.0.0.1 ping statistics --- 00:43:08.779 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:43:08.779 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:43:08.779 01:20:40 nvmf_identify_passthru -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:43:08.779 01:20:40 nvmf_identify_passthru -- nvmf/common.sh@450 -- # return 0 00:43:08.779 01:20:40 nvmf_identify_passthru -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:43:08.779 01:20:40 nvmf_identify_passthru -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:43:08.779 01:20:40 nvmf_identify_passthru -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:43:08.779 01:20:40 nvmf_identify_passthru -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:43:08.779 01:20:40 nvmf_identify_passthru -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:43:08.779 01:20:40 nvmf_identify_passthru -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:43:08.779 01:20:40 nvmf_identify_passthru -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:43:08.779 01:20:41 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:43:08.779 01:20:41 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:43:08.779 01:20:41 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:43:08.779 01:20:41 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:43:08.779 01:20:41 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # bdfs=() 00:43:08.779 01:20:41 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # local bdfs 00:43:08.779 01:20:41 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:43:08.779 01:20:41 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:43:08.779 01:20:41 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # bdfs=() 00:43:08.779 01:20:41 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # local bdfs 00:43:08.779 01:20:41 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:43:08.779 01:20:41 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:43:08.779 01:20:41 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:43:08.779 01:20:41 nvmf_identify_passthru -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:43:08.779 01:20:41 nvmf_identify_passthru -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:88:00.0 00:43:08.779 01:20:41 nvmf_identify_passthru -- common/autotest_common.sh@1512 -- # echo 0000:88:00.0 00:43:08.779 01:20:41 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:88:00.0 00:43:08.779 01:20:41 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:88:00.0 ']' 00:43:08.779 01:20:41 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:88:00.0' -i 0 00:43:08.779 01:20:41 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:43:08.779 01:20:41 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:43:12.989 01:20:45 nvmf_identify_passthru -- target/identify_passthru.sh@23 
-- # nvme_serial_number=PHLJ916004901P0FGN 00:43:12.989 01:20:45 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:88:00.0' -i 0 00:43:12.989 01:20:45 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:43:12.989 01:20:45 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:43:18.265 01:20:49 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:43:18.265 01:20:49 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:43:18.265 01:20:49 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:43:18.265 01:20:49 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:43:18.265 01:20:49 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:43:18.265 01:20:49 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:43:18.265 01:20:49 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:43:18.265 01:20:49 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=3204776 00:43:18.265 01:20:49 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:43:18.265 01:20:49 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:43:18.265 01:20:49 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 3204776 00:43:18.265 01:20:49 nvmf_identify_passthru -- common/autotest_common.sh@835 -- # '[' -z 3204776 ']' 00:43:18.265 01:20:49 nvmf_identify_passthru -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:43:18.265 01:20:49 nvmf_identify_passthru -- common/autotest_common.sh@840 -- # local max_retries=100 00:43:18.265 01:20:49 nvmf_identify_passthru -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:43:18.265 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:43:18.265 01:20:49 nvmf_identify_passthru -- common/autotest_common.sh@844 -- # xtrace_disable 00:43:18.265 01:20:49 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:43:18.265 [2024-12-06 01:20:50.023496] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 24.03.0 initialization... 00:43:18.265 [2024-12-06 01:20:50.023671] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:43:18.265 [2024-12-06 01:20:50.173148] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:43:18.265 [2024-12-06 01:20:50.312926] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:43:18.265 [2024-12-06 01:20:50.313059] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
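Before any NVMe-oF pieces are configured, the test records the SSD's serial and model directly over PCIe; these become the reference values the fabric path must later reproduce. A short sketch of that step plus the target launch (0000:88:00.0 is the bdf reported by gen_nvme.sh on this host, and nvmf_tgt starts paused because of --wait-for-rpc):

  bdf=0000:88:00.0
  serial=$(spdk_nvme_identify -r "trtype:PCIe traddr:$bdf" -i 0 | grep 'Serial Number:' | awk '{print $3}')
  model=$(spdk_nvme_identify -r "trtype:PCIe traddr:$bdf" -i 0 | grep 'Model Number:' | awk '{print $3}')
  ip netns exec cvl_0_0_ns_spdk nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc &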
00:43:18.265 [2024-12-06 01:20:50.313086] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:43:18.265 [2024-12-06 01:20:50.313110] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:43:18.265 [2024-12-06 01:20:50.313129] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:43:18.265 [2024-12-06 01:20:50.316057] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:43:18.265 [2024-12-06 01:20:50.316130] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:43:18.265 [2024-12-06 01:20:50.316228] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:43:18.265 [2024-12-06 01:20:50.316235] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:43:18.524 01:20:50 nvmf_identify_passthru -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:43:18.524 01:20:50 nvmf_identify_passthru -- common/autotest_common.sh@868 -- # return 0 00:43:18.524 01:20:50 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:43:18.524 01:20:50 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:18.524 01:20:50 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:43:18.524 INFO: Log level set to 20 00:43:18.524 INFO: Requests: 00:43:18.524 { 00:43:18.524 "jsonrpc": "2.0", 00:43:18.524 "method": "nvmf_set_config", 00:43:18.524 "id": 1, 00:43:18.524 "params": { 00:43:18.524 "admin_cmd_passthru": { 00:43:18.524 "identify_ctrlr": true 00:43:18.524 } 00:43:18.524 } 00:43:18.524 } 00:43:18.524 00:43:18.524 INFO: response: 00:43:18.524 { 00:43:18.524 "jsonrpc": "2.0", 00:43:18.524 "id": 1, 00:43:18.524 "result": true 00:43:18.524 } 00:43:18.524 00:43:18.524 01:20:51 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:18.524 01:20:51 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:43:18.524 01:20:51 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:18.524 01:20:51 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:43:18.524 INFO: Setting log level to 20 00:43:18.524 INFO: Setting log level to 20 00:43:18.524 INFO: Log level set to 20 00:43:18.524 INFO: Log level set to 20 00:43:18.524 INFO: Requests: 00:43:18.524 { 00:43:18.524 "jsonrpc": "2.0", 00:43:18.524 "method": "framework_start_init", 00:43:18.524 "id": 1 00:43:18.524 } 00:43:18.524 00:43:18.524 INFO: Requests: 00:43:18.524 { 00:43:18.524 "jsonrpc": "2.0", 00:43:18.524 "method": "framework_start_init", 00:43:18.524 "id": 1 00:43:18.524 } 00:43:18.524 00:43:18.782 [2024-12-06 01:20:51.342681] nvmf_tgt.c: 462:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:43:18.782 INFO: response: 00:43:18.782 { 00:43:18.782 "jsonrpc": "2.0", 00:43:18.782 "id": 1, 00:43:18.782 "result": true 00:43:18.782 } 00:43:18.782 00:43:18.782 INFO: response: 00:43:18.782 { 00:43:18.782 "jsonrpc": "2.0", 00:43:18.782 "id": 1, 00:43:18.782 "result": true 00:43:18.782 } 00:43:18.782 00:43:18.782 01:20:51 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:18.782 01:20:51 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:43:18.782 01:20:51 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:18.782 01:20:51 nvmf_identify_passthru -- 
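Because the target is still waiting for RPC at this point, identify passthrough has to be enabled before framework_start_init runs; that ordering is why the nvmf_set_config request appears first in the log. rpc_cmd is the autotest wrapper around SPDK's JSON-RPC client, and the request it sends is the one recorded above:

  { "jsonrpc": "2.0", "method": "nvmf_set_config", "id": 1,
    "params": { "admin_cmd_passthru": { "identify_ctrlr": true } } }

  # equivalent wrapper calls, in the order the trace issues them
  rpc_cmd nvmf_set_config --passthru-identify-ctrlr
  rpc_cmd framework_start_init    # the target then logs "Custom identify ctrlr handler enabled"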
common/autotest_common.sh@10 -- # set +x 00:43:18.782 INFO: Setting log level to 40 00:43:18.782 INFO: Setting log level to 40 00:43:18.782 INFO: Setting log level to 40 00:43:18.782 [2024-12-06 01:20:51.355686] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:43:18.782 01:20:51 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:18.782 01:20:51 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:43:18.782 01:20:51 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:43:18.782 01:20:51 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:43:18.782 01:20:51 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:88:00.0 00:43:18.782 01:20:51 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:18.782 01:20:51 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:43:22.072 Nvme0n1 00:43:22.072 01:20:54 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:22.072 01:20:54 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:43:22.072 01:20:54 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:22.072 01:20:54 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:43:22.072 01:20:54 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:22.072 01:20:54 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:43:22.072 01:20:54 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:22.072 01:20:54 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:43:22.072 01:20:54 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:22.072 01:20:54 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:43:22.072 01:20:54 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:22.072 01:20:54 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:43:22.072 [2024-12-06 01:20:54.310548] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:43:22.072 01:20:54 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:22.072 01:20:54 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:43:22.072 01:20:54 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:22.072 01:20:54 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:43:22.072 [ 00:43:22.072 { 00:43:22.072 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:43:22.072 "subtype": "Discovery", 00:43:22.072 "listen_addresses": [], 00:43:22.072 "allow_any_host": true, 00:43:22.072 "hosts": [] 00:43:22.072 }, 00:43:22.072 { 00:43:22.072 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:43:22.072 "subtype": "NVMe", 00:43:22.072 "listen_addresses": [ 00:43:22.072 { 00:43:22.072 "trtype": "TCP", 00:43:22.072 "adrfam": "IPv4", 00:43:22.072 "traddr": "10.0.0.2", 00:43:22.072 "trsvcid": "4420" 00:43:22.072 } 00:43:22.072 ], 00:43:22.072 "allow_any_host": true, 00:43:22.072 "hosts": [], 00:43:22.072 "serial_number": 
"SPDK00000000000001", 00:43:22.072 "model_number": "SPDK bdev Controller", 00:43:22.072 "max_namespaces": 1, 00:43:22.072 "min_cntlid": 1, 00:43:22.072 "max_cntlid": 65519, 00:43:22.072 "namespaces": [ 00:43:22.072 { 00:43:22.072 "nsid": 1, 00:43:22.072 "bdev_name": "Nvme0n1", 00:43:22.072 "name": "Nvme0n1", 00:43:22.072 "nguid": "4C4A6368F91240558B4E502AFD4F06F2", 00:43:22.072 "uuid": "4c4a6368-f912-4055-8b4e-502afd4f06f2" 00:43:22.072 } 00:43:22.072 ] 00:43:22.072 } 00:43:22.072 ] 00:43:22.072 01:20:54 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:22.072 01:20:54 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:43:22.072 01:20:54 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:43:22.072 01:20:54 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:43:22.072 01:20:54 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=PHLJ916004901P0FGN 00:43:22.072 01:20:54 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:43:22.072 01:20:54 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:43:22.072 01:20:54 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:43:22.642 01:20:55 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:43:22.642 01:20:55 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' PHLJ916004901P0FGN '!=' PHLJ916004901P0FGN ']' 00:43:22.642 01:20:55 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:43:22.642 01:20:55 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:43:22.642 01:20:55 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:22.642 01:20:55 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:43:22.642 01:20:55 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:22.642 01:20:55 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:43:22.642 01:20:55 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:43:22.642 01:20:55 nvmf_identify_passthru -- nvmf/common.sh@516 -- # nvmfcleanup 00:43:22.642 01:20:55 nvmf_identify_passthru -- nvmf/common.sh@121 -- # sync 00:43:22.642 01:20:55 nvmf_identify_passthru -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:43:22.642 01:20:55 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set +e 00:43:22.642 01:20:55 nvmf_identify_passthru -- nvmf/common.sh@125 -- # for i in {1..20} 00:43:22.642 01:20:55 nvmf_identify_passthru -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:43:22.642 rmmod nvme_tcp 00:43:22.642 rmmod nvme_fabrics 00:43:22.642 rmmod nvme_keyring 00:43:22.642 01:20:55 nvmf_identify_passthru -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:43:22.642 01:20:55 nvmf_identify_passthru -- nvmf/common.sh@128 -- # set -e 00:43:22.642 01:20:55 nvmf_identify_passthru -- nvmf/common.sh@129 -- # return 0 00:43:22.642 01:20:55 nvmf_identify_passthru -- nvmf/common.sh@517 -- # 
'[' -n 3204776 ']' 00:43:22.642 01:20:55 nvmf_identify_passthru -- nvmf/common.sh@518 -- # killprocess 3204776 00:43:22.642 01:20:55 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # '[' -z 3204776 ']' 00:43:22.642 01:20:55 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # kill -0 3204776 00:43:22.642 01:20:55 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # uname 00:43:22.642 01:20:55 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:43:22.642 01:20:55 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3204776 00:43:22.642 01:20:55 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:43:22.642 01:20:55 nvmf_identify_passthru -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:43:22.642 01:20:55 nvmf_identify_passthru -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3204776' 00:43:22.642 killing process with pid 3204776 00:43:22.642 01:20:55 nvmf_identify_passthru -- common/autotest_common.sh@973 -- # kill 3204776 00:43:22.642 01:20:55 nvmf_identify_passthru -- common/autotest_common.sh@978 -- # wait 3204776 00:43:25.174 01:20:57 nvmf_identify_passthru -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:43:25.174 01:20:57 nvmf_identify_passthru -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:43:25.174 01:20:57 nvmf_identify_passthru -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:43:25.174 01:20:57 nvmf_identify_passthru -- nvmf/common.sh@297 -- # iptr 00:43:25.174 01:20:57 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-save 00:43:25.174 01:20:57 nvmf_identify_passthru -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:43:25.174 01:20:57 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-restore 00:43:25.174 01:20:57 nvmf_identify_passthru -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:43:25.174 01:20:57 nvmf_identify_passthru -- nvmf/common.sh@302 -- # remove_spdk_ns 00:43:25.174 01:20:57 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:43:25.174 01:20:57 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:43:25.174 01:20:57 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:43:27.163 01:20:59 nvmf_identify_passthru -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:43:27.163 00:43:27.163 real 0m20.986s 00:43:27.163 user 0m34.424s 00:43:27.163 sys 0m3.618s 00:43:27.163 01:20:59 nvmf_identify_passthru -- common/autotest_common.sh@1130 -- # xtrace_disable 00:43:27.163 01:20:59 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:43:27.163 ************************************ 00:43:27.163 END TEST nvmf_identify_passthru 00:43:27.163 ************************************ 00:43:27.163 01:20:59 -- spdk/autotest.sh@289 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:43:27.163 01:20:59 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:43:27.163 01:20:59 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:43:27.163 01:20:59 -- common/autotest_common.sh@10 -- # set +x 00:43:27.163 ************************************ 00:43:27.163 START TEST nvmf_dif 00:43:27.163 ************************************ 00:43:27.163 01:20:59 nvmf_dif -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:43:27.163 * Looking for test 
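nvmftestfini then unwinds what the init path created: the target process is killed, the SPDK-tagged iptables rule is dropped by filtering the SPDK_NVMF comment out of a save/restore cycle, and the namespace plus leftover address are cleaned up. Roughly (the namespace removal is what _remove_spdk_ns is assumed to do; its body is not shown in this log):

  kill $nvmfpid                                         # killprocess 3204776
  iptables-save | grep -v SPDK_NVMF | iptables-restore  # iptr
  ip netns delete cvl_0_0_ns_spdk                       # assumed _remove_spdk_ns behaviour
  ip -4 addr flush cvl_0_1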
storage... 00:43:27.163 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:43:27.163 01:20:59 nvmf_dif -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:43:27.163 01:20:59 nvmf_dif -- common/autotest_common.sh@1711 -- # lcov --version 00:43:27.163 01:20:59 nvmf_dif -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:43:27.434 01:20:59 nvmf_dif -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:43:27.434 01:20:59 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:43:27.434 01:20:59 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 00:43:27.434 01:20:59 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 00:43:27.434 01:20:59 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 00:43:27.434 01:20:59 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 00:43:27.434 01:20:59 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 00:43:27.434 01:20:59 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 00:43:27.434 01:20:59 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 00:43:27.434 01:20:59 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 00:43:27.434 01:20:59 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 00:43:27.434 01:20:59 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:43:27.434 01:20:59 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 00:43:27.434 01:20:59 nvmf_dif -- scripts/common.sh@345 -- # : 1 00:43:27.434 01:20:59 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 00:43:27.434 01:20:59 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:43:27.434 01:20:59 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 00:43:27.434 01:20:59 nvmf_dif -- scripts/common.sh@353 -- # local d=1 00:43:27.434 01:20:59 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:43:27.434 01:20:59 nvmf_dif -- scripts/common.sh@355 -- # echo 1 00:43:27.434 01:20:59 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 00:43:27.434 01:20:59 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 00:43:27.435 01:20:59 nvmf_dif -- scripts/common.sh@353 -- # local d=2 00:43:27.435 01:20:59 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:43:27.435 01:20:59 nvmf_dif -- scripts/common.sh@355 -- # echo 2 00:43:27.435 01:20:59 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 00:43:27.435 01:20:59 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:43:27.435 01:20:59 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:43:27.435 01:20:59 nvmf_dif -- scripts/common.sh@368 -- # return 0 00:43:27.435 01:20:59 nvmf_dif -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:43:27.435 01:20:59 nvmf_dif -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:43:27.435 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:27.435 --rc genhtml_branch_coverage=1 00:43:27.435 --rc genhtml_function_coverage=1 00:43:27.435 --rc genhtml_legend=1 00:43:27.435 --rc geninfo_all_blocks=1 00:43:27.435 --rc geninfo_unexecuted_blocks=1 00:43:27.435 00:43:27.435 ' 00:43:27.435 01:20:59 nvmf_dif -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:43:27.435 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:27.435 --rc genhtml_branch_coverage=1 00:43:27.435 --rc genhtml_function_coverage=1 00:43:27.435 --rc genhtml_legend=1 00:43:27.435 --rc geninfo_all_blocks=1 00:43:27.435 --rc geninfo_unexecuted_blocks=1 00:43:27.435 00:43:27.435 ' 00:43:27.435 01:20:59 nvmf_dif -- 
common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:43:27.435 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:27.435 --rc genhtml_branch_coverage=1 00:43:27.435 --rc genhtml_function_coverage=1 00:43:27.435 --rc genhtml_legend=1 00:43:27.435 --rc geninfo_all_blocks=1 00:43:27.435 --rc geninfo_unexecuted_blocks=1 00:43:27.435 00:43:27.435 ' 00:43:27.435 01:20:59 nvmf_dif -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:43:27.435 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:27.435 --rc genhtml_branch_coverage=1 00:43:27.435 --rc genhtml_function_coverage=1 00:43:27.435 --rc genhtml_legend=1 00:43:27.435 --rc geninfo_all_blocks=1 00:43:27.435 --rc geninfo_unexecuted_blocks=1 00:43:27.435 00:43:27.435 ' 00:43:27.435 01:20:59 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:43:27.435 01:20:59 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:43:27.435 01:20:59 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:43:27.435 01:20:59 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:43:27.435 01:20:59 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:43:27.435 01:20:59 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:43:27.435 01:20:59 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:43:27.435 01:20:59 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:43:27.435 01:20:59 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:43:27.435 01:20:59 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:43:27.435 01:20:59 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:43:27.435 01:20:59 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:43:27.435 01:20:59 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:43:27.436 01:20:59 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:43:27.436 01:20:59 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:43:27.436 01:20:59 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:43:27.436 01:20:59 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:43:27.436 01:20:59 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:43:27.436 01:20:59 nvmf_dif -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:43:27.436 01:20:59 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 00:43:27.436 01:20:59 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:43:27.436 01:20:59 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:43:27.436 01:20:59 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:43:27.436 01:20:59 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:27.436 01:20:59 nvmf_dif -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:27.436 01:20:59 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:27.436 01:20:59 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:43:27.436 01:20:59 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:27.436 01:20:59 nvmf_dif -- nvmf/common.sh@51 -- # : 0 00:43:27.436 01:20:59 nvmf_dif -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:43:27.436 01:20:59 nvmf_dif -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:43:27.436 01:20:59 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:43:27.437 01:20:59 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:43:27.437 01:20:59 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:43:27.437 01:20:59 nvmf_dif -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:43:27.437 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:43:27.437 01:20:59 nvmf_dif -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:43:27.437 01:20:59 nvmf_dif -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:43:27.437 01:20:59 nvmf_dif -- nvmf/common.sh@55 -- # have_pci_nics=0 00:43:27.437 01:20:59 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:43:27.437 01:20:59 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:43:27.437 01:20:59 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:43:27.437 01:20:59 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:43:27.437 01:20:59 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:43:27.437 01:20:59 nvmf_dif -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:43:27.437 01:20:59 nvmf_dif -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:43:27.437 01:20:59 nvmf_dif -- nvmf/common.sh@476 -- # prepare_net_devs 00:43:27.437 01:20:59 nvmf_dif -- nvmf/common.sh@438 -- # local -g is_hw=no 00:43:27.437 01:20:59 nvmf_dif -- nvmf/common.sh@440 -- # remove_spdk_ns 00:43:27.437 01:20:59 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:43:27.437 01:20:59 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:43:27.437 01:20:59 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:43:27.437 01:20:59 nvmf_dif -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:43:27.437 01:20:59 nvmf_dif -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:43:27.437 01:20:59 nvmf_dif -- nvmf/common.sh@309 -- # 
xtrace_disable 00:43:27.437 01:20:59 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:43:29.339 01:21:01 nvmf_dif -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:43:29.339 01:21:01 nvmf_dif -- nvmf/common.sh@315 -- # pci_devs=() 00:43:29.339 01:21:01 nvmf_dif -- nvmf/common.sh@315 -- # local -a pci_devs 00:43:29.339 01:21:01 nvmf_dif -- nvmf/common.sh@316 -- # pci_net_devs=() 00:43:29.339 01:21:01 nvmf_dif -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:43:29.339 01:21:01 nvmf_dif -- nvmf/common.sh@317 -- # pci_drivers=() 00:43:29.339 01:21:01 nvmf_dif -- nvmf/common.sh@317 -- # local -A pci_drivers 00:43:29.339 01:21:01 nvmf_dif -- nvmf/common.sh@319 -- # net_devs=() 00:43:29.339 01:21:01 nvmf_dif -- nvmf/common.sh@319 -- # local -ga net_devs 00:43:29.339 01:21:01 nvmf_dif -- nvmf/common.sh@320 -- # e810=() 00:43:29.339 01:21:01 nvmf_dif -- nvmf/common.sh@320 -- # local -ga e810 00:43:29.339 01:21:01 nvmf_dif -- nvmf/common.sh@321 -- # x722=() 00:43:29.339 01:21:01 nvmf_dif -- nvmf/common.sh@321 -- # local -ga x722 00:43:29.339 01:21:01 nvmf_dif -- nvmf/common.sh@322 -- # mlx=() 00:43:29.339 01:21:01 nvmf_dif -- nvmf/common.sh@322 -- # local -ga mlx 00:43:29.339 01:21:01 nvmf_dif -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:43:29.339 01:21:01 nvmf_dif -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:43:29.339 01:21:01 nvmf_dif -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:43:29.339 01:21:01 nvmf_dif -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:43:29.340 01:21:01 nvmf_dif -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:43:29.340 01:21:01 nvmf_dif -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:43:29.340 01:21:01 nvmf_dif -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:43:29.340 01:21:01 nvmf_dif -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:43:29.340 01:21:01 nvmf_dif -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:43:29.340 01:21:01 nvmf_dif -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:43:29.340 01:21:01 nvmf_dif -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:43:29.340 01:21:01 nvmf_dif -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:43:29.340 01:21:01 nvmf_dif -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:43:29.340 01:21:01 nvmf_dif -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:43:29.340 01:21:01 nvmf_dif -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:43:29.340 01:21:01 nvmf_dif -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:43:29.340 01:21:01 nvmf_dif -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:43:29.340 01:21:01 nvmf_dif -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:43:29.340 01:21:01 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:43:29.340 01:21:01 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:43:29.340 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:43:29.340 01:21:01 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:43:29.340 01:21:01 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:43:29.340 01:21:01 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:43:29.340 01:21:01 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:43:29.340 01:21:01 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:43:29.340 
01:21:01 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:43:29.340 01:21:01 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:43:29.340 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:43:29.340 01:21:01 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:43:29.340 01:21:01 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:43:29.340 01:21:01 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:43:29.340 01:21:01 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:43:29.340 01:21:01 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:43:29.340 01:21:01 nvmf_dif -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:43:29.340 01:21:01 nvmf_dif -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:43:29.340 01:21:01 nvmf_dif -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:43:29.340 01:21:01 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:43:29.340 01:21:01 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:43:29.340 01:21:01 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:43:29.340 01:21:01 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:43:29.340 01:21:01 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:43:29.340 01:21:01 nvmf_dif -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:43:29.340 01:21:01 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:43:29.340 01:21:01 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:43:29.340 Found net devices under 0000:0a:00.0: cvl_0_0 00:43:29.340 01:21:01 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:43:29.340 01:21:01 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:43:29.340 01:21:01 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:43:29.340 01:21:01 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:43:29.340 01:21:01 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:43:29.340 01:21:01 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:43:29.340 01:21:01 nvmf_dif -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:43:29.340 01:21:01 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:43:29.340 01:21:01 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:43:29.340 Found net devices under 0000:0a:00.1: cvl_0_1 00:43:29.340 01:21:01 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:43:29.340 01:21:01 nvmf_dif -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:43:29.340 01:21:01 nvmf_dif -- nvmf/common.sh@442 -- # is_hw=yes 00:43:29.340 01:21:01 nvmf_dif -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:43:29.340 01:21:01 nvmf_dif -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:43:29.340 01:21:01 nvmf_dif -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:43:29.340 01:21:01 nvmf_dif -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:43:29.340 01:21:01 nvmf_dif -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:43:29.340 01:21:01 nvmf_dif -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:43:29.340 01:21:01 nvmf_dif -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:43:29.340 01:21:01 nvmf_dif -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:43:29.340 01:21:01 nvmf_dif -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:43:29.340 01:21:01 nvmf_dif -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:43:29.340 01:21:01 nvmf_dif -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:43:29.340 01:21:01 nvmf_dif -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:43:29.340 01:21:01 nvmf_dif -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:43:29.340 01:21:01 nvmf_dif -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:43:29.340 01:21:01 nvmf_dif -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:43:29.340 01:21:01 nvmf_dif -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:43:29.340 01:21:01 nvmf_dif -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:43:29.340 01:21:01 nvmf_dif -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:43:29.340 01:21:01 nvmf_dif -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:43:29.340 01:21:01 nvmf_dif -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:43:29.340 01:21:01 nvmf_dif -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:43:29.340 01:21:01 nvmf_dif -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:43:29.340 01:21:01 nvmf_dif -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:43:29.340 01:21:01 nvmf_dif -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:43:29.340 01:21:01 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:43:29.340 01:21:01 nvmf_dif -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:43:29.340 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:43:29.340 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.243 ms 00:43:29.340 00:43:29.340 --- 10.0.0.2 ping statistics --- 00:43:29.340 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:43:29.340 rtt min/avg/max/mdev = 0.243/0.243/0.243/0.000 ms 00:43:29.340 01:21:01 nvmf_dif -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:43:29.340 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:43:29.340 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.074 ms 00:43:29.340 00:43:29.340 --- 10.0.0.1 ping statistics --- 00:43:29.340 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:43:29.340 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:43:29.340 01:21:01 nvmf_dif -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:43:29.340 01:21:01 nvmf_dif -- nvmf/common.sh@450 -- # return 0 00:43:29.340 01:21:01 nvmf_dif -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:43:29.340 01:21:01 nvmf_dif -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:43:30.715 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:43:30.715 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:43:30.715 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:43:30.715 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:43:30.715 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:43:30.715 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:43:30.715 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:43:30.715 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:43:30.715 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:43:30.715 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:43:30.715 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:43:30.715 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:43:30.715 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:43:30.715 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:43:30.715 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:43:30.715 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:43:30.715 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:43:30.715 01:21:03 nvmf_dif -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:43:30.715 01:21:03 nvmf_dif -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:43:30.715 01:21:03 nvmf_dif -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:43:30.715 01:21:03 nvmf_dif -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:43:30.715 01:21:03 nvmf_dif -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:43:30.715 01:21:03 nvmf_dif -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:43:30.715 01:21:03 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:43:30.715 01:21:03 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:43:30.715 01:21:03 nvmf_dif -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:43:30.715 01:21:03 nvmf_dif -- common/autotest_common.sh@726 -- # xtrace_disable 00:43:30.715 01:21:03 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:43:30.715 01:21:03 nvmf_dif -- nvmf/common.sh@509 -- # nvmfpid=3208519 00:43:30.715 01:21:03 nvmf_dif -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:43:30.715 01:21:03 nvmf_dif -- nvmf/common.sh@510 -- # waitforlisten 3208519 00:43:30.715 01:21:03 nvmf_dif -- common/autotest_common.sh@835 -- # '[' -z 3208519 ']' 00:43:30.715 01:21:03 nvmf_dif -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:43:30.715 01:21:03 nvmf_dif -- common/autotest_common.sh@840 -- # local max_retries=100 00:43:30.715 01:21:03 nvmf_dif -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:43:30.715 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:43:30.715 01:21:03 nvmf_dif -- common/autotest_common.sh@844 -- # xtrace_disable 00:43:30.715 01:21:03 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:43:30.715 [2024-12-06 01:21:03.414822] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 24.03.0 initialization... 00:43:30.715 [2024-12-06 01:21:03.414991] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:43:30.972 [2024-12-06 01:21:03.569472] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:43:31.228 [2024-12-06 01:21:03.709210] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:43:31.228 [2024-12-06 01:21:03.709299] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:43:31.228 [2024-12-06 01:21:03.709325] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:43:31.229 [2024-12-06 01:21:03.709349] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:43:31.229 [2024-12-06 01:21:03.709369] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:43:31.229 [2024-12-06 01:21:03.711010] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:43:31.868 01:21:04 nvmf_dif -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:43:31.868 01:21:04 nvmf_dif -- common/autotest_common.sh@868 -- # return 0 00:43:31.868 01:21:04 nvmf_dif -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:43:31.868 01:21:04 nvmf_dif -- common/autotest_common.sh@732 -- # xtrace_disable 00:43:31.868 01:21:04 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:43:31.868 01:21:04 nvmf_dif -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:43:31.868 01:21:04 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:43:31.868 01:21:04 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:43:31.868 01:21:04 nvmf_dif -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:31.868 01:21:04 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:43:31.868 [2024-12-06 01:21:04.475796] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:43:31.868 01:21:04 nvmf_dif -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:31.868 01:21:04 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:43:31.868 01:21:04 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:43:31.868 01:21:04 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:43:31.868 01:21:04 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:43:31.868 ************************************ 00:43:31.868 START TEST fio_dif_1_default 00:43:31.868 ************************************ 00:43:31.868 01:21:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1129 -- # fio_dif_1 00:43:31.868 01:21:04 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:43:31.868 01:21:04 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:43:31.868 01:21:04 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:43:31.868 01:21:04 nvmf_dif.fio_dif_1_default -- 
target/dif.sh@31 -- # create_subsystem 0 00:43:31.868 01:21:04 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:43:31.868 01:21:04 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:43:31.868 01:21:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:31.868 01:21:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:43:31.868 bdev_null0 00:43:31.868 01:21:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:31.868 01:21:04 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:43:31.868 01:21:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:31.868 01:21:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:43:31.868 01:21:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:31.868 01:21:04 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:43:31.868 01:21:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:31.868 01:21:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:43:31.868 01:21:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:31.868 01:21:04 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:43:31.868 01:21:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:31.868 01:21:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:43:31.868 [2024-12-06 01:21:04.536131] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:43:31.868 01:21:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:31.868 01:21:04 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:43:31.868 01:21:04 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:43:31.868 01:21:04 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:43:31.868 01:21:04 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # config=() 00:43:31.868 01:21:04 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # local subsystem config 00:43:31.868 01:21:04 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:43:31.868 01:21:04 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:43:31.868 { 00:43:31.868 "params": { 00:43:31.868 "name": "Nvme$subsystem", 00:43:31.868 "trtype": "$TEST_TRANSPORT", 00:43:31.868 "traddr": "$NVMF_FIRST_TARGET_IP", 00:43:31.868 "adrfam": "ipv4", 00:43:31.868 "trsvcid": "$NVMF_PORT", 00:43:31.868 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:43:31.868 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:43:31.868 "hdgst": ${hdgst:-false}, 00:43:31.868 "ddgst": ${ddgst:-false} 00:43:31.868 }, 00:43:31.868 "method": "bdev_nvme_attach_controller" 00:43:31.868 } 00:43:31.868 EOF 00:43:31.868 )") 00:43:31.868 01:21:04 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:43:31.868 01:21:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1360 -- # 
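The DIF fixture is deliberately synthetic: a 64 MB null bdev with 512-byte blocks, 16 bytes of metadata and DIF type 1 is exported as cnode0, and the transport was created with --dif-insert-or-strip so the target inserts and strips protection information on the wire. The setup, condensed from the trace:

  rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip
  rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
  rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420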
fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:43:31.868 01:21:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:43:31.868 01:21:04 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:43:31.868 01:21:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:43:31.868 01:21:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local sanitizers 00:43:31.868 01:21:04 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:43:31.868 01:21:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:43:31.868 01:21:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # shift 00:43:31.868 01:21:04 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:43:31.868 01:21:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # local asan_lib= 00:43:31.869 01:21:04 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # cat 00:43:31.869 01:21:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:43:31.869 01:21:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:43:31.869 01:21:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libasan 00:43:31.869 01:21:04 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:43:31.869 01:21:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:43:31.869 01:21:04 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:43:31.869 01:21:04 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@584 -- # jq . 
00:43:31.869 01:21:04 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@585 -- # IFS=, 00:43:31.869 01:21:04 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:43:31.869 "params": { 00:43:31.869 "name": "Nvme0", 00:43:31.869 "trtype": "tcp", 00:43:31.869 "traddr": "10.0.0.2", 00:43:31.869 "adrfam": "ipv4", 00:43:31.869 "trsvcid": "4420", 00:43:31.869 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:43:31.869 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:43:31.869 "hdgst": false, 00:43:31.869 "ddgst": false 00:43:31.869 }, 00:43:31.869 "method": "bdev_nvme_attach_controller" 00:43:31.869 }' 00:43:31.869 01:21:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:43:31.869 01:21:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:43:31.869 01:21:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1351 -- # break 00:43:31.869 01:21:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:43:31.869 01:21:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:43:32.439 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:43:32.439 fio-3.35 00:43:32.439 Starting 1 thread 00:43:44.630 00:43:44.630 filename0: (groupid=0, jobs=1): err= 0: pid=3208913: Fri Dec 6 01:21:15 2024 00:43:44.630 read: IOPS=192, BW=771KiB/s (789kB/s)(7712KiB/10009msec) 00:43:44.630 slat (nsec): min=7683, max=77589, avg=13927.25, stdev=5682.62 00:43:44.630 clat (usec): min=694, max=43017, avg=20723.13, stdev=20196.74 00:43:44.630 lat (usec): min=709, max=43041, avg=20737.06, stdev=20196.53 00:43:44.630 clat percentiles (usec): 00:43:44.630 | 1.00th=[ 717], 5.00th=[ 734], 10.00th=[ 750], 20.00th=[ 766], 00:43:44.630 | 30.00th=[ 783], 40.00th=[ 807], 50.00th=[ 930], 60.00th=[41157], 00:43:44.630 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:43:44.630 | 99.00th=[41681], 99.50th=[41681], 99.90th=[43254], 99.95th=[43254], 00:43:44.630 | 99.99th=[43254] 00:43:44.630 bw ( KiB/s): min= 704, max= 896, per=99.80%, avg=769.60, stdev=33.60, samples=20 00:43:44.630 iops : min= 176, max= 224, avg=192.40, stdev= 8.40, samples=20 00:43:44.630 lat (usec) : 750=12.14%, 1000=38.28% 00:43:44.630 lat (msec) : 2=0.21%, 50=49.38% 00:43:44.630 cpu : usr=92.96%, sys=6.54%, ctx=36, majf=0, minf=1636 00:43:44.630 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:44.630 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:44.630 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:44.630 issued rwts: total=1928,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:44.630 latency : target=0, window=0, percentile=100.00%, depth=4 00:43:44.630 00:43:44.630 Run status group 0 (all jobs): 00:43:44.630 READ: bw=771KiB/s (789kB/s), 771KiB/s-771KiB/s (789kB/s-789kB/s), io=7712KiB (7897kB), run=10009-10009msec 00:43:44.630 ----------------------------------------------------- 00:43:44.630 Suppressions used: 00:43:44.630 count bytes template 00:43:44.630 1 8 /usr/src/fio/parse.c 00:43:44.630 1 8 libtcmalloc_minimal.so 00:43:44.630 1 904 libcrypto.so 00:43:44.630 ----------------------------------------------------- 00:43:44.630 00:43:44.630 01:21:16 
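The workload ran fio with SPDK's external bdev plugin rather than the kernel initiator: the JSON above (a bdev_nvme_attach_controller entry generated by gen_nvmf_target_json) was handed to fio on /dev/fd/62 and the generated job file on /dev/fd/61, with ASan preloaded ahead of the plugin. A sketch of the resulting invocation, using the job parameters visible in the fio header (randread, 4 KiB blocks, iodepth 4):

  LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' \
      /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61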
nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:43:44.630 01:21:16 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:43:44.630 01:21:16 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:43:44.630 01:21:16 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:43:44.630 01:21:16 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:43:44.630 01:21:16 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:43:44.630 01:21:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:44.630 01:21:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:43:44.630 01:21:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:44.630 01:21:16 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:43:44.630 01:21:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:44.630 01:21:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:43:44.630 01:21:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:44.630 00:43:44.630 real 0m12.463s 00:43:44.630 user 0m11.683s 00:43:44.630 sys 0m1.148s 00:43:44.630 01:21:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1130 -- # xtrace_disable 00:43:44.630 01:21:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:43:44.630 ************************************ 00:43:44.630 END TEST fio_dif_1_default 00:43:44.630 ************************************ 00:43:44.630 01:21:16 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:43:44.630 01:21:16 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:43:44.630 01:21:16 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:43:44.630 01:21:16 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:43:44.630 ************************************ 00:43:44.630 START TEST fio_dif_1_multi_subsystems 00:43:44.630 ************************************ 00:43:44.630 01:21:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1129 -- # fio_dif_1_multi_subsystems 00:43:44.630 01:21:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:43:44.630 01:21:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:43:44.630 01:21:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:43:44.630 01:21:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:43:44.630 01:21:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:43:44.630 01:21:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:43:44.630 01:21:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:43:44.630 01:21:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:44.630 01:21:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:43:44.630 bdev_null0 00:43:44.630 01:21:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:44.630 01:21:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem 
nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:43:44.630 01:21:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:44.630 01:21:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:43:44.630 01:21:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:44.630 01:21:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:43:44.630 01:21:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:44.630 01:21:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:43:44.630 01:21:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:44.630 01:21:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:43:44.630 01:21:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:44.630 01:21:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:43:44.630 [2024-12-06 01:21:17.049628] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:43:44.630 01:21:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:44.630 01:21:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:43:44.630 01:21:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:43:44.630 01:21:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:43:44.630 01:21:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:43:44.630 01:21:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:44.630 01:21:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:43:44.630 bdev_null1 00:43:44.630 01:21:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:44.630 01:21:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:43:44.630 01:21:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:44.630 01:21:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:43:44.630 01:21:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:44.630 01:21:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:43:44.630 01:21:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:44.630 01:21:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:43:44.630 01:21:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:44.631 01:21:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:43:44.631 01:21:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:43:44.631 01:21:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:43:44.631 01:21:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:44.631 01:21:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:43:44.631 01:21:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:43:44.631 01:21:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:43:44.631 01:21:17 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # config=() 00:43:44.631 01:21:17 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # local subsystem config 00:43:44.631 01:21:17 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:43:44.631 01:21:17 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:43:44.631 { 00:43:44.631 "params": { 00:43:44.631 "name": "Nvme$subsystem", 00:43:44.631 "trtype": "$TEST_TRANSPORT", 00:43:44.631 "traddr": "$NVMF_FIRST_TARGET_IP", 00:43:44.631 "adrfam": "ipv4", 00:43:44.631 "trsvcid": "$NVMF_PORT", 00:43:44.631 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:43:44.631 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:43:44.631 "hdgst": ${hdgst:-false}, 00:43:44.631 "ddgst": ${ddgst:-false} 00:43:44.631 }, 00:43:44.631 "method": "bdev_nvme_attach_controller" 00:43:44.631 } 00:43:44.631 EOF 00:43:44.631 )") 00:43:44.631 01:21:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:43:44.631 01:21:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:43:44.631 01:21:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:43:44.631 01:21:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:43:44.631 01:21:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:43:44.631 01:21:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:43:44.631 01:21:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:43:44.631 01:21:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local sanitizers 00:43:44.631 01:21:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:43:44.631 01:21:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # shift 00:43:44.631 01:21:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # local asan_lib= 00:43:44.631 01:21:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:43:44.631 01:21:17 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:43:44.631 01:21:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:43:44.631 01:21:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:43:44.631 01:21:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep 
libasan 00:43:44.631 01:21:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:43:44.631 01:21:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:43:44.631 01:21:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:43:44.631 01:21:17 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:43:44.631 01:21:17 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:43:44.631 { 00:43:44.631 "params": { 00:43:44.631 "name": "Nvme$subsystem", 00:43:44.631 "trtype": "$TEST_TRANSPORT", 00:43:44.631 "traddr": "$NVMF_FIRST_TARGET_IP", 00:43:44.631 "adrfam": "ipv4", 00:43:44.631 "trsvcid": "$NVMF_PORT", 00:43:44.631 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:43:44.631 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:43:44.631 "hdgst": ${hdgst:-false}, 00:43:44.631 "ddgst": ${ddgst:-false} 00:43:44.631 }, 00:43:44.631 "method": "bdev_nvme_attach_controller" 00:43:44.631 } 00:43:44.631 EOF 00:43:44.631 )") 00:43:44.631 01:21:17 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:43:44.631 01:21:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:43:44.631 01:21:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:43:44.631 01:21:17 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@584 -- # jq . 00:43:44.631 01:21:17 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@585 -- # IFS=, 00:43:44.631 01:21:17 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:43:44.631 "params": { 00:43:44.631 "name": "Nvme0", 00:43:44.631 "trtype": "tcp", 00:43:44.631 "traddr": "10.0.0.2", 00:43:44.631 "adrfam": "ipv4", 00:43:44.631 "trsvcid": "4420", 00:43:44.631 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:43:44.631 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:43:44.631 "hdgst": false, 00:43:44.631 "ddgst": false 00:43:44.631 }, 00:43:44.631 "method": "bdev_nvme_attach_controller" 00:43:44.631 },{ 00:43:44.631 "params": { 00:43:44.631 "name": "Nvme1", 00:43:44.631 "trtype": "tcp", 00:43:44.631 "traddr": "10.0.0.2", 00:43:44.631 "adrfam": "ipv4", 00:43:44.631 "trsvcid": "4420", 00:43:44.631 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:43:44.631 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:43:44.631 "hdgst": false, 00:43:44.631 "ddgst": false 00:43:44.631 }, 00:43:44.631 "method": "bdev_nvme_attach_controller" 00:43:44.631 }' 00:43:44.631 01:21:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:43:44.631 01:21:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:43:44.631 01:21:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1351 -- # break 00:43:44.631 01:21:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:43:44.631 01:21:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:43:44.889 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:43:44.889 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:43:44.889 fio-3.35 00:43:44.889 
Starting 2 threads 00:43:57.103 00:43:57.103 filename0: (groupid=0, jobs=1): err= 0: pid=3210945: Fri Dec 6 01:21:28 2024 00:43:57.103 read: IOPS=97, BW=390KiB/s (399kB/s)(3904KiB/10014msec) 00:43:57.103 slat (nsec): min=9976, max=88718, avg=13935.63, stdev=5639.98 00:43:57.103 clat (usec): min=40857, max=42807, avg=40997.83, stdev=204.67 00:43:57.103 lat (usec): min=40869, max=42856, avg=41011.77, stdev=205.87 00:43:57.103 clat percentiles (usec): 00:43:57.103 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:43:57.103 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:43:57.103 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:43:57.103 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42730], 99.95th=[42730], 00:43:57.103 | 99.99th=[42730] 00:43:57.103 bw ( KiB/s): min= 383, max= 416, per=49.77%, avg=388.75, stdev=11.75, samples=20 00:43:57.103 iops : min= 95, max= 104, avg=97.15, stdev= 2.96, samples=20 00:43:57.103 lat (msec) : 50=100.00% 00:43:57.103 cpu : usr=94.40%, sys=5.11%, ctx=19, majf=0, minf=1636 00:43:57.103 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:57.103 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:57.103 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:57.103 issued rwts: total=976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:57.103 latency : target=0, window=0, percentile=100.00%, depth=4 00:43:57.103 filename1: (groupid=0, jobs=1): err= 0: pid=3210946: Fri Dec 6 01:21:28 2024 00:43:57.103 read: IOPS=97, BW=390KiB/s (399kB/s)(3904KiB/10015msec) 00:43:57.103 slat (nsec): min=9908, max=43301, avg=13337.63, stdev=4086.51 00:43:57.103 clat (usec): min=40884, max=42129, avg=41003.61, stdev=196.55 00:43:57.103 lat (usec): min=40896, max=42147, avg=41016.94, stdev=197.88 00:43:57.103 clat percentiles (usec): 00:43:57.103 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:43:57.103 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:43:57.104 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:43:57.104 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:43:57.104 | 99.99th=[42206] 00:43:57.104 bw ( KiB/s): min= 384, max= 416, per=49.77%, avg=388.80, stdev=11.72, samples=20 00:43:57.104 iops : min= 96, max= 104, avg=97.20, stdev= 2.93, samples=20 00:43:57.104 lat (msec) : 50=100.00% 00:43:57.104 cpu : usr=94.41%, sys=5.09%, ctx=14, majf=0, minf=1636 00:43:57.104 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:57.104 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:57.104 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:57.104 issued rwts: total=976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:57.104 latency : target=0, window=0, percentile=100.00%, depth=4 00:43:57.104 00:43:57.104 Run status group 0 (all jobs): 00:43:57.104 READ: bw=780KiB/s (798kB/s), 390KiB/s-390KiB/s (399kB/s-399kB/s), io=7808KiB (7995kB), run=10014-10015msec 00:43:57.104 ----------------------------------------------------- 00:43:57.104 Suppressions used: 00:43:57.104 count bytes template 00:43:57.104 2 16 /usr/src/fio/parse.c 00:43:57.104 1 8 libtcmalloc_minimal.so 00:43:57.104 1 904 libcrypto.so 00:43:57.104 ----------------------------------------------------- 00:43:57.104 00:43:57.104 01:21:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 
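The destroy_subsystems 0 1 call that begins here is traced in full below; as a hedged summary, it amounts to the following loop (rpc_cmd being the RPC wrapper used throughout the trace):

    # Tear down each subsystem created for the multi-subsystem case:
    # remove the NVMe-oF subsystem first, then delete its backing null bdev.
    for sub in 0 1; do
        rpc_cmd nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode${sub}"
        rpc_cmd bdev_null_delete "bdev_null${sub}"
    done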
00:43:57.104 01:21:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:43:57.104 01:21:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:43:57.104 01:21:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:43:57.104 01:21:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:43:57.104 01:21:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:43:57.104 01:21:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:57.104 01:21:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:43:57.104 01:21:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:57.104 01:21:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:43:57.104 01:21:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:57.104 01:21:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:43:57.104 01:21:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:57.104 01:21:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:43:57.104 01:21:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:43:57.104 01:21:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:43:57.104 01:21:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:43:57.104 01:21:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:57.104 01:21:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:43:57.104 01:21:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:57.104 01:21:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:43:57.104 01:21:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:57.104 01:21:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:43:57.104 01:21:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:57.104 00:43:57.104 real 0m12.769s 00:43:57.104 user 0m21.421s 00:43:57.104 sys 0m1.496s 00:43:57.104 01:21:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1130 -- # xtrace_disable 00:43:57.104 01:21:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:43:57.104 ************************************ 00:43:57.104 END TEST fio_dif_1_multi_subsystems 00:43:57.104 ************************************ 00:43:57.362 01:21:29 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:43:57.362 01:21:29 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:43:57.362 01:21:29 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:43:57.362 01:21:29 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:43:57.362 ************************************ 00:43:57.362 START TEST fio_dif_rand_params 00:43:57.362 ************************************ 00:43:57.362 01:21:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1129 -- # 
fio_dif_rand_params 00:43:57.362 01:21:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:43:57.362 01:21:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:43:57.362 01:21:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:43:57.362 01:21:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:43:57.362 01:21:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:43:57.362 01:21:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:43:57.362 01:21:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:43:57.362 01:21:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:43:57.362 01:21:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:43:57.362 01:21:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:43:57.362 01:21:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:43:57.362 01:21:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:43:57.362 01:21:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:43:57.362 01:21:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:57.362 01:21:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:57.362 bdev_null0 00:43:57.362 01:21:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:57.362 01:21:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:43:57.362 01:21:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:57.362 01:21:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:57.362 01:21:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:57.362 01:21:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:43:57.362 01:21:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:57.362 01:21:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:57.362 01:21:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:57.362 01:21:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:43:57.362 01:21:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:57.363 01:21:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:57.363 [2024-12-06 01:21:29.876489] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:43:57.363 01:21:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:57.363 01:21:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:43:57.363 01:21:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:43:57.363 01:21:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:43:57.363 01:21:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 
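Before the fio invocation above, create_subsystems 0 ran with NULL_DIF=3. A condensed sketch of that setup, using only the RPCs visible in the trace:

    # Export one null bdev (size 64, block size 512, 16-byte metadata, DIF type 3)
    # over NVMe/TCP on 10.0.0.2:4420.
    rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420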
00:43:57.363 01:21:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:43:57.363 01:21:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:43:57.363 01:21:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:43:57.363 01:21:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:43:57.363 01:21:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:43:57.363 01:21:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:43:57.363 01:21:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:43:57.363 01:21:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:43:57.363 01:21:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:43:57.363 { 00:43:57.363 "params": { 00:43:57.363 "name": "Nvme$subsystem", 00:43:57.363 "trtype": "$TEST_TRANSPORT", 00:43:57.363 "traddr": "$NVMF_FIRST_TARGET_IP", 00:43:57.363 "adrfam": "ipv4", 00:43:57.363 "trsvcid": "$NVMF_PORT", 00:43:57.363 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:43:57.363 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:43:57.363 "hdgst": ${hdgst:-false}, 00:43:57.363 "ddgst": ${ddgst:-false} 00:43:57.363 }, 00:43:57.363 "method": "bdev_nvme_attach_controller" 00:43:57.363 } 00:43:57.363 EOF 00:43:57.363 )") 00:43:57.363 01:21:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:43:57.363 01:21:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:43:57.363 01:21:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:43:57.363 01:21:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:43:57.363 01:21:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:43:57.363 01:21:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:43:57.363 01:21:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:43:57.363 01:21:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:43:57.363 01:21:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:43:57.363 01:21:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:43:57.363 01:21:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:43:57.363 01:21:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:43:57.363 01:21:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
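The config=() / heredoc machinery above builds one JSON fragment per subsystem and hands the merged result to fio over an anonymous pipe. A hedged sketch of the calling pattern behind the /dev/fd/62 and /dev/fd/61 arguments seen in the trace:

    # gen_nvmf_target_json emits the bdev_nvme_attach_controller parameters,
    # gen_fio_conf emits the fio job file; both are handed to the plugin as
    # /dev/fd paths via process substitution, so no temporary files are needed.
    fio_bdev --ioengine=spdk_bdev \
        --spdk_json_conf <(gen_nvmf_target_json 0) \
        <(gen_fio_conf)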
00:43:57.363 01:21:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:43:57.363 01:21:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:43:57.363 "params": { 00:43:57.363 "name": "Nvme0", 00:43:57.363 "trtype": "tcp", 00:43:57.363 "traddr": "10.0.0.2", 00:43:57.363 "adrfam": "ipv4", 00:43:57.363 "trsvcid": "4420", 00:43:57.363 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:43:57.363 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:43:57.363 "hdgst": false, 00:43:57.363 "ddgst": false 00:43:57.363 }, 00:43:57.363 "method": "bdev_nvme_attach_controller" 00:43:57.363 }' 00:43:57.363 01:21:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:43:57.363 01:21:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:43:57.363 01:21:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1351 -- # break 00:43:57.363 01:21:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:43:57.363 01:21:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:43:57.621 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:43:57.621 ... 00:43:57.621 fio-3.35 00:43:57.621 Starting 3 threads 00:44:04.177 00:44:04.178 filename0: (groupid=0, jobs=1): err= 0: pid=3212457: Fri Dec 6 01:21:36 2024 00:44:04.178 read: IOPS=181, BW=22.7MiB/s (23.8MB/s)(114MiB/5007msec) 00:44:04.178 slat (nsec): min=7074, max=56956, avg=27075.09, stdev=5854.96 00:44:04.178 clat (usec): min=6226, max=55859, avg=16453.87, stdev=5274.61 00:44:04.178 lat (usec): min=6250, max=55878, avg=16480.95, stdev=5274.77 00:44:04.178 clat percentiles (usec): 00:44:04.178 | 1.00th=[ 6783], 5.00th=[11076], 10.00th=[13173], 20.00th=[14222], 00:44:04.178 | 30.00th=[15008], 40.00th=[15533], 50.00th=[16319], 60.00th=[16712], 00:44:04.178 | 70.00th=[17171], 80.00th=[17695], 90.00th=[18744], 95.00th=[19792], 00:44:04.178 | 99.00th=[49546], 99.50th=[55313], 99.90th=[55837], 99.95th=[55837], 00:44:04.178 | 99.99th=[55837] 00:44:04.178 bw ( KiB/s): min=21504, max=25856, per=35.10%, avg=23244.80, stdev=1309.80, samples=10 00:44:04.178 iops : min= 168, max= 202, avg=181.60, stdev=10.23, samples=10 00:44:04.178 lat (msec) : 10=3.29%, 20=92.65%, 50=3.07%, 100=0.99% 00:44:04.178 cpu : usr=93.63%, sys=5.29%, ctx=154, majf=0, minf=1634 00:44:04.178 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:44:04.178 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:04.178 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:04.178 issued rwts: total=911,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:04.178 latency : target=0, window=0, percentile=100.00%, depth=3 00:44:04.178 filename0: (groupid=0, jobs=1): err= 0: pid=3212458: Fri Dec 6 01:21:36 2024 00:44:04.178 read: IOPS=169, BW=21.2MiB/s (22.2MB/s)(107MiB/5045msec) 00:44:04.178 slat (nsec): min=6190, max=45277, avg=21036.17, stdev=4762.52 00:44:04.178 clat (usec): min=8410, max=55365, avg=17647.82, stdev=5676.31 00:44:04.178 lat (usec): min=8445, max=55385, avg=17668.85, stdev=5676.11 00:44:04.178 clat percentiles (usec): 00:44:04.178 | 1.00th=[ 9896], 5.00th=[13173], 10.00th=[13960], 20.00th=[15008], 00:44:04.178 | 
30.00th=[15795], 40.00th=[16450], 50.00th=[16909], 60.00th=[17433], 00:44:04.178 | 70.00th=[18220], 80.00th=[19006], 90.00th=[20317], 95.00th=[21103], 00:44:04.178 | 99.00th=[52167], 99.50th=[52691], 99.90th=[55313], 99.95th=[55313], 00:44:04.178 | 99.99th=[55313] 00:44:04.178 bw ( KiB/s): min=14592, max=24064, per=32.94%, avg=21811.20, stdev=2714.08, samples=10 00:44:04.178 iops : min= 114, max= 188, avg=170.40, stdev=21.20, samples=10 00:44:04.178 lat (msec) : 10=1.05%, 20=87.12%, 50=10.54%, 100=1.29% 00:44:04.178 cpu : usr=94.07%, sys=5.37%, ctx=8, majf=0, minf=1632 00:44:04.178 IO depths : 1=0.7%, 2=99.3%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:44:04.178 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:04.178 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:04.178 issued rwts: total=854,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:04.178 latency : target=0, window=0, percentile=100.00%, depth=3 00:44:04.178 filename0: (groupid=0, jobs=1): err= 0: pid=3212459: Fri Dec 6 01:21:36 2024 00:44:04.178 read: IOPS=167, BW=20.9MiB/s (22.0MB/s)(106MiB/5045msec) 00:44:04.178 slat (nsec): min=7649, max=48899, avg=21075.63, stdev=5018.43 00:44:04.178 clat (usec): min=6222, max=57059, avg=17833.72, stdev=5742.22 00:44:04.178 lat (usec): min=6233, max=57077, avg=17854.80, stdev=5742.08 00:44:04.178 clat percentiles (usec): 00:44:04.178 | 1.00th=[11731], 5.00th=[13304], 10.00th=[13960], 20.00th=[15139], 00:44:04.178 | 30.00th=[16057], 40.00th=[16909], 50.00th=[17171], 60.00th=[17695], 00:44:04.178 | 70.00th=[18482], 80.00th=[19006], 90.00th=[20055], 95.00th=[20841], 00:44:04.178 | 99.00th=[51643], 99.50th=[52691], 99.90th=[56886], 99.95th=[56886], 00:44:04.178 | 99.99th=[56886] 00:44:04.178 bw ( KiB/s): min=14877, max=25344, per=32.59%, avg=21583.70, stdev=2690.58, samples=10 00:44:04.178 iops : min= 116, max= 198, avg=168.60, stdev=21.08, samples=10 00:44:04.178 lat (msec) : 10=0.71%, 20=89.11%, 50=8.64%, 100=1.54% 00:44:04.178 cpu : usr=94.45%, sys=5.00%, ctx=6, majf=0, minf=1634 00:44:04.178 IO depths : 1=1.4%, 2=98.6%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:44:04.178 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:04.178 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:04.178 issued rwts: total=845,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:04.178 latency : target=0, window=0, percentile=100.00%, depth=3 00:44:04.178 00:44:04.178 Run status group 0 (all jobs): 00:44:04.178 READ: bw=64.7MiB/s (67.8MB/s), 20.9MiB/s-22.7MiB/s (22.0MB/s-23.8MB/s), io=326MiB (342MB), run=5007-5045msec 00:44:04.746 ----------------------------------------------------- 00:44:04.746 Suppressions used: 00:44:04.746 count bytes template 00:44:04.746 5 44 /usr/src/fio/parse.c 00:44:04.746 1 8 libtcmalloc_minimal.so 00:44:04.746 1 904 libcrypto.so 00:44:04.746 ----------------------------------------------------- 00:44:04.746 00:44:04.746 01:21:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:44:04.746 01:21:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:44:04.746 01:21:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:44:04.746 01:21:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:44:04.746 01:21:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:44:04.746 01:21:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem 
nqn.2016-06.io.spdk:cnode0 00:44:04.747 01:21:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:04.747 01:21:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:05.006 01:21:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:05.006 01:21:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:44:05.006 01:21:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:05.006 01:21:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:05.006 01:21:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:05.006 01:21:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:44:05.006 01:21:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:44:05.006 01:21:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:44:05.006 01:21:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:44:05.006 01:21:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:44:05.006 01:21:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:44:05.006 01:21:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:44:05.006 01:21:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:44:05.006 01:21:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:44:05.006 01:21:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:44:05.006 01:21:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:44:05.006 01:21:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:44:05.007 01:21:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:05.007 01:21:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:05.007 bdev_null0 00:44:05.007 01:21:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:05.007 01:21:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:44:05.007 01:21:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:05.007 01:21:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:05.007 01:21:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:05.007 01:21:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:44:05.007 01:21:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:05.007 01:21:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:05.007 01:21:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:05.007 01:21:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:44:05.007 01:21:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:05.007 01:21:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:05.007 [2024-12-06 01:21:37.492550] tcp.c:1099:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:44:05.007 01:21:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:05.007 01:21:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:44:05.007 01:21:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:44:05.007 01:21:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:44:05.007 01:21:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:44:05.007 01:21:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:05.007 01:21:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:05.007 bdev_null1 00:44:05.007 01:21:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:05.007 01:21:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:44:05.007 01:21:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:05.007 01:21:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:05.007 01:21:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:05.007 01:21:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:44:05.007 01:21:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:05.007 01:21:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:05.007 01:21:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:05.007 01:21:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:44:05.007 01:21:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:05.007 01:21:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:05.007 01:21:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:05.007 01:21:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:44:05.007 01:21:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:44:05.007 01:21:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:44:05.007 01:21:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:44:05.007 01:21:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:05.007 01:21:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:05.007 bdev_null2 00:44:05.007 01:21:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:05.007 01:21:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:44:05.007 01:21:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:05.007 01:21:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:05.007 01:21:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:05.007 01:21:37 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:44:05.007 01:21:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:05.007 01:21:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:05.007 01:21:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:05.007 01:21:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:44:05.007 01:21:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:05.007 01:21:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:05.007 01:21:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:05.007 01:21:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:44:05.007 01:21:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:44:05.007 01:21:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:44:05.007 01:21:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:44:05.007 01:21:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:44:05.007 01:21:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:44:05.007 01:21:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:44:05.007 01:21:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:44:05.007 01:21:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:44:05.007 01:21:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:44:05.007 01:21:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:44:05.007 { 00:44:05.007 "params": { 00:44:05.007 "name": "Nvme$subsystem", 00:44:05.007 "trtype": "$TEST_TRANSPORT", 00:44:05.007 "traddr": "$NVMF_FIRST_TARGET_IP", 00:44:05.007 "adrfam": "ipv4", 00:44:05.007 "trsvcid": "$NVMF_PORT", 00:44:05.007 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:44:05.007 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:44:05.007 "hdgst": ${hdgst:-false}, 00:44:05.007 "ddgst": ${ddgst:-false} 00:44:05.007 }, 00:44:05.007 "method": "bdev_nvme_attach_controller" 00:44:05.007 } 00:44:05.007 EOF 00:44:05.007 )") 00:44:05.007 01:21:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:44:05.007 01:21:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:44:05.007 01:21:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:44:05.007 01:21:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:44:05.007 01:21:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:44:05.007 01:21:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:44:05.007 01:21:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:44:05.007 01:21:37 nvmf_dif.fio_dif_rand_params 
-- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:44:05.007 01:21:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:44:05.007 01:21:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:44:05.007 01:21:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:44:05.007 01:21:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:44:05.007 01:21:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:44:05.007 01:21:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:44:05.007 01:21:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:44:05.007 01:21:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:44:05.007 01:21:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:44:05.007 { 00:44:05.007 "params": { 00:44:05.007 "name": "Nvme$subsystem", 00:44:05.008 "trtype": "$TEST_TRANSPORT", 00:44:05.008 "traddr": "$NVMF_FIRST_TARGET_IP", 00:44:05.008 "adrfam": "ipv4", 00:44:05.008 "trsvcid": "$NVMF_PORT", 00:44:05.008 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:44:05.008 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:44:05.008 "hdgst": ${hdgst:-false}, 00:44:05.008 "ddgst": ${ddgst:-false} 00:44:05.008 }, 00:44:05.008 "method": "bdev_nvme_attach_controller" 00:44:05.008 } 00:44:05.008 EOF 00:44:05.008 )") 00:44:05.008 01:21:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:44:05.008 01:21:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:44:05.008 01:21:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:44:05.008 01:21:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:44:05.008 01:21:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:44:05.008 01:21:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:44:05.008 { 00:44:05.008 "params": { 00:44:05.008 "name": "Nvme$subsystem", 00:44:05.008 "trtype": "$TEST_TRANSPORT", 00:44:05.008 "traddr": "$NVMF_FIRST_TARGET_IP", 00:44:05.008 "adrfam": "ipv4", 00:44:05.008 "trsvcid": "$NVMF_PORT", 00:44:05.008 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:44:05.008 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:44:05.008 "hdgst": ${hdgst:-false}, 00:44:05.008 "ddgst": ${ddgst:-false} 00:44:05.008 }, 00:44:05.008 "method": "bdev_nvme_attach_controller" 00:44:05.008 } 00:44:05.008 EOF 00:44:05.008 )") 00:44:05.008 01:21:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:44:05.008 01:21:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:44:05.008 01:21:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:44:05.008 01:21:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
00:44:05.008 01:21:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:44:05.008 01:21:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:44:05.008 "params": { 00:44:05.008 "name": "Nvme0", 00:44:05.008 "trtype": "tcp", 00:44:05.008 "traddr": "10.0.0.2", 00:44:05.008 "adrfam": "ipv4", 00:44:05.008 "trsvcid": "4420", 00:44:05.008 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:44:05.008 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:44:05.008 "hdgst": false, 00:44:05.008 "ddgst": false 00:44:05.008 }, 00:44:05.008 "method": "bdev_nvme_attach_controller" 00:44:05.008 },{ 00:44:05.008 "params": { 00:44:05.008 "name": "Nvme1", 00:44:05.008 "trtype": "tcp", 00:44:05.008 "traddr": "10.0.0.2", 00:44:05.008 "adrfam": "ipv4", 00:44:05.008 "trsvcid": "4420", 00:44:05.008 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:44:05.008 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:44:05.008 "hdgst": false, 00:44:05.008 "ddgst": false 00:44:05.008 }, 00:44:05.008 "method": "bdev_nvme_attach_controller" 00:44:05.008 },{ 00:44:05.008 "params": { 00:44:05.008 "name": "Nvme2", 00:44:05.008 "trtype": "tcp", 00:44:05.008 "traddr": "10.0.0.2", 00:44:05.008 "adrfam": "ipv4", 00:44:05.008 "trsvcid": "4420", 00:44:05.008 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:44:05.008 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:44:05.008 "hdgst": false, 00:44:05.008 "ddgst": false 00:44:05.008 }, 00:44:05.008 "method": "bdev_nvme_attach_controller" 00:44:05.008 }' 00:44:05.008 01:21:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:44:05.008 01:21:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:44:05.008 01:21:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1351 -- # break 00:44:05.008 01:21:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:44:05.008 01:21:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:44:05.267 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:44:05.267 ... 00:44:05.267 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:44:05.267 ... 00:44:05.267 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:44:05.267 ... 
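fio reports "Starting 24 threads" next because this phase runs bs=4k, numjobs=8, iodepth=16 against three null-bdev subsystems. A hedged bash sketch of the kind of job file gen_fio_conf would emit here (the NvmeXn1 bdev names are assumed, not shown in the trace):

    gen_fio_conf_sketch() {
        # global options shared by every job in this phase
        printf '[global]\nioengine=spdk_bdev\nrw=randread\nbs=4k\niodepth=16\nnumjobs=8\n'
        # one job section per exported namespace: 3 sections x numjobs=8 = 24 workers
        for f in 0 1 2; do
            printf '[filename%d]\nfilename=Nvme%dn1\n' "$f" "$f"
        done
    }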
00:44:05.267 fio-3.35 00:44:05.267 Starting 24 threads 00:44:17.469 00:44:17.469 filename0: (groupid=0, jobs=1): err= 0: pid=3213443: Fri Dec 6 01:21:49 2024 00:44:17.469 read: IOPS=345, BW=1384KiB/s (1417kB/s)(13.6MiB/10083msec) 00:44:17.469 slat (nsec): min=8180, max=97406, avg=41684.00, stdev=11606.41 00:44:17.469 clat (msec): min=34, max=130, avg=45.87, stdev= 6.83 00:44:17.469 lat (msec): min=34, max=130, avg=45.91, stdev= 6.83 00:44:17.469 clat percentiles (msec): 00:44:17.469 | 1.00th=[ 44], 5.00th=[ 44], 10.00th=[ 45], 20.00th=[ 45], 00:44:17.469 | 30.00th=[ 45], 40.00th=[ 45], 50.00th=[ 46], 60.00th=[ 46], 00:44:17.469 | 70.00th=[ 46], 80.00th=[ 46], 90.00th=[ 47], 95.00th=[ 48], 00:44:17.469 | 99.00th=[ 50], 99.50th=[ 100], 99.90th=[ 130], 99.95th=[ 130], 00:44:17.469 | 99.99th=[ 131] 00:44:17.469 bw ( KiB/s): min= 1152, max= 1536, per=4.16%, avg=1388.80, stdev=75.15, samples=20 00:44:17.469 iops : min= 288, max= 384, avg=347.20, stdev=18.79, samples=20 00:44:17.469 lat (msec) : 50=99.03%, 100=0.52%, 250=0.46% 00:44:17.469 cpu : usr=98.09%, sys=1.40%, ctx=24, majf=0, minf=1635 00:44:17.469 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:44:17.469 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:17.469 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:17.469 issued rwts: total=3488,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:17.469 latency : target=0, window=0, percentile=100.00%, depth=16 00:44:17.469 filename0: (groupid=0, jobs=1): err= 0: pid=3213444: Fri Dec 6 01:21:49 2024 00:44:17.469 read: IOPS=350, BW=1403KiB/s (1437kB/s)(13.9MiB/10124msec) 00:44:17.469 slat (nsec): min=5941, max=85937, avg=37428.99, stdev=11256.52 00:44:17.469 clat (msec): min=6, max=129, avg=45.28, stdev= 6.63 00:44:17.469 lat (msec): min=6, max=129, avg=45.32, stdev= 6.63 00:44:17.469 clat percentiles (msec): 00:44:17.469 | 1.00th=[ 27], 5.00th=[ 44], 10.00th=[ 45], 20.00th=[ 45], 00:44:17.469 | 30.00th=[ 45], 40.00th=[ 45], 50.00th=[ 46], 60.00th=[ 46], 00:44:17.469 | 70.00th=[ 46], 80.00th=[ 46], 90.00th=[ 47], 95.00th=[ 48], 00:44:17.469 | 99.00th=[ 49], 99.50th=[ 50], 99.90th=[ 130], 99.95th=[ 130], 00:44:17.469 | 99.99th=[ 130] 00:44:17.469 bw ( KiB/s): min= 1280, max= 1664, per=4.24%, avg=1414.40, stdev=77.42, samples=20 00:44:17.469 iops : min= 320, max= 416, avg=353.60, stdev=19.35, samples=20 00:44:17.469 lat (msec) : 10=0.45%, 20=0.45%, 50=98.65%, 250=0.45% 00:44:17.469 cpu : usr=95.97%, sys=2.28%, ctx=169, majf=0, minf=1634 00:44:17.469 IO depths : 1=6.2%, 2=12.4%, 4=24.9%, 8=50.2%, 16=6.3%, 32=0.0%, >=64=0.0% 00:44:17.469 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:17.469 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:17.469 issued rwts: total=3552,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:17.469 latency : target=0, window=0, percentile=100.00%, depth=16 00:44:17.469 filename0: (groupid=0, jobs=1): err= 0: pid=3213445: Fri Dec 6 01:21:49 2024 00:44:17.469 read: IOPS=346, BW=1387KiB/s (1420kB/s)(13.7MiB/10107msec) 00:44:17.469 slat (nsec): min=5024, max=60426, avg=28596.77, stdev=9386.27 00:44:17.469 clat (msec): min=24, max=130, avg=45.89, stdev= 6.07 00:44:17.469 lat (msec): min=24, max=130, avg=45.92, stdev= 6.07 00:44:17.469 clat percentiles (msec): 00:44:17.469 | 1.00th=[ 44], 5.00th=[ 44], 10.00th=[ 45], 20.00th=[ 45], 00:44:17.469 | 30.00th=[ 45], 40.00th=[ 46], 50.00th=[ 46], 60.00th=[ 46], 00:44:17.469 | 70.00th=[ 46], 80.00th=[ 46], 
90.00th=[ 47], 95.00th=[ 48], 00:44:17.469 | 99.00th=[ 50], 99.50th=[ 79], 99.90th=[ 128], 99.95th=[ 131], 00:44:17.469 | 99.99th=[ 131] 00:44:17.469 bw ( KiB/s): min= 1280, max= 1536, per=4.18%, avg=1394.90, stdev=57.19, samples=20 00:44:17.469 iops : min= 320, max= 384, avg=348.70, stdev=14.29, samples=20 00:44:17.469 lat (msec) : 50=99.03%, 100=0.51%, 250=0.46% 00:44:17.469 cpu : usr=97.38%, sys=1.63%, ctx=79, majf=0, minf=1635 00:44:17.469 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:44:17.469 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:17.469 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:17.469 issued rwts: total=3504,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:17.469 latency : target=0, window=0, percentile=100.00%, depth=16 00:44:17.469 filename0: (groupid=0, jobs=1): err= 0: pid=3213446: Fri Dec 6 01:21:49 2024 00:44:17.470 read: IOPS=351, BW=1407KiB/s (1440kB/s)(13.8MiB/10010msec) 00:44:17.470 slat (usec): min=6, max=112, avg=42.74, stdev=25.28 00:44:17.470 clat (usec): min=25844, max=49144, avg=45126.12, stdev=1642.42 00:44:17.470 lat (usec): min=25879, max=49189, avg=45168.86, stdev=1637.92 00:44:17.470 clat percentiles (usec): 00:44:17.470 | 1.00th=[42730], 5.00th=[43254], 10.00th=[43779], 20.00th=[44303], 00:44:17.470 | 30.00th=[44827], 40.00th=[44827], 50.00th=[45351], 60.00th=[45351], 00:44:17.470 | 70.00th=[45876], 80.00th=[45876], 90.00th=[46400], 95.00th=[46924], 00:44:17.470 | 99.00th=[48497], 99.50th=[48497], 99.90th=[49021], 99.95th=[49021], 00:44:17.470 | 99.99th=[49021] 00:44:17.470 bw ( KiB/s): min= 1280, max= 1536, per=4.22%, avg=1408.00, stdev=42.67, samples=19 00:44:17.470 iops : min= 320, max= 384, avg=352.00, stdev=10.67, samples=19 00:44:17.470 lat (msec) : 50=100.00% 00:44:17.470 cpu : usr=97.75%, sys=1.47%, ctx=71, majf=0, minf=1636 00:44:17.470 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:44:17.470 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:17.470 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:17.470 issued rwts: total=3520,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:17.470 latency : target=0, window=0, percentile=100.00%, depth=16 00:44:17.470 filename0: (groupid=0, jobs=1): err= 0: pid=3213447: Fri Dec 6 01:21:49 2024 00:44:17.470 read: IOPS=345, BW=1384KiB/s (1417kB/s)(13.6MiB/10082msec) 00:44:17.470 slat (usec): min=13, max=117, avg=68.40, stdev=11.81 00:44:17.470 clat (msec): min=42, max=137, avg=45.59, stdev= 6.56 00:44:17.470 lat (msec): min=42, max=137, avg=45.66, stdev= 6.56 00:44:17.470 clat percentiles (msec): 00:44:17.470 | 1.00th=[ 44], 5.00th=[ 44], 10.00th=[ 44], 20.00th=[ 45], 00:44:17.470 | 30.00th=[ 45], 40.00th=[ 45], 50.00th=[ 45], 60.00th=[ 46], 00:44:17.470 | 70.00th=[ 46], 80.00th=[ 46], 90.00th=[ 47], 95.00th=[ 47], 00:44:17.470 | 99.00th=[ 50], 99.50th=[ 89], 99.90th=[ 130], 99.95th=[ 138], 00:44:17.470 | 99.99th=[ 138] 00:44:17.470 bw ( KiB/s): min= 1280, max= 1536, per=4.16%, avg=1388.80, stdev=62.64, samples=20 00:44:17.470 iops : min= 320, max= 384, avg=347.20, stdev=15.66, samples=20 00:44:17.470 lat (msec) : 50=99.08%, 100=0.46%, 250=0.46% 00:44:17.470 cpu : usr=98.06%, sys=1.39%, ctx=14, majf=0, minf=1633 00:44:17.470 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:44:17.470 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:17.470 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 
32=0.0%, 64=0.0%, >=64=0.0% 00:44:17.470 issued rwts: total=3488,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:17.470 latency : target=0, window=0, percentile=100.00%, depth=16 00:44:17.470 filename0: (groupid=0, jobs=1): err= 0: pid=3213448: Fri Dec 6 01:21:49 2024 00:44:17.470 read: IOPS=353, BW=1414KiB/s (1448kB/s)(13.9MiB/10091msec) 00:44:17.470 slat (nsec): min=7975, max=91828, avg=24196.26, stdev=12408.61 00:44:17.470 clat (msec): min=6, max=104, avg=45.03, stdev= 5.91 00:44:17.470 lat (msec): min=6, max=104, avg=45.05, stdev= 5.91 00:44:17.470 clat percentiles (msec): 00:44:17.470 | 1.00th=[ 17], 5.00th=[ 44], 10.00th=[ 45], 20.00th=[ 45], 00:44:17.470 | 30.00th=[ 45], 40.00th=[ 45], 50.00th=[ 46], 60.00th=[ 46], 00:44:17.470 | 70.00th=[ 46], 80.00th=[ 46], 90.00th=[ 47], 95.00th=[ 47], 00:44:17.470 | 99.00th=[ 49], 99.50th=[ 58], 99.90th=[ 105], 99.95th=[ 105], 00:44:17.470 | 99.99th=[ 105] 00:44:17.470 bw ( KiB/s): min= 1280, max= 1792, per=4.26%, avg=1420.80, stdev=91.93, samples=20 00:44:17.470 iops : min= 320, max= 448, avg=355.20, stdev=22.98, samples=20 00:44:17.470 lat (msec) : 10=0.78%, 20=0.56%, 50=98.15%, 100=0.06%, 250=0.45% 00:44:17.470 cpu : usr=98.35%, sys=1.13%, ctx=19, majf=0, minf=1636 00:44:17.470 IO depths : 1=6.1%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.4%, 32=0.0%, >=64=0.0% 00:44:17.470 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:17.470 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:17.470 issued rwts: total=3568,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:17.470 latency : target=0, window=0, percentile=100.00%, depth=16 00:44:17.470 filename0: (groupid=0, jobs=1): err= 0: pid=3213449: Fri Dec 6 01:21:49 2024 00:44:17.470 read: IOPS=346, BW=1384KiB/s (1417kB/s)(13.6MiB/10079msec) 00:44:17.470 slat (nsec): min=12593, max=91120, avg=33045.57, stdev=11334.38 00:44:17.470 clat (msec): min=25, max=127, avg=45.93, stdev= 6.66 00:44:17.470 lat (msec): min=25, max=127, avg=45.96, stdev= 6.66 00:44:17.470 clat percentiles (msec): 00:44:17.470 | 1.00th=[ 44], 5.00th=[ 44], 10.00th=[ 45], 20.00th=[ 45], 00:44:17.470 | 30.00th=[ 45], 40.00th=[ 45], 50.00th=[ 46], 60.00th=[ 46], 00:44:17.470 | 70.00th=[ 46], 80.00th=[ 46], 90.00th=[ 47], 95.00th=[ 48], 00:44:17.470 | 99.00th=[ 50], 99.50th=[ 99], 99.90th=[ 128], 99.95th=[ 128], 00:44:17.470 | 99.99th=[ 128] 00:44:17.470 bw ( KiB/s): min= 1277, max= 1408, per=4.16%, avg=1388.65, stdev=47.26, samples=20 00:44:17.470 iops : min= 319, max= 352, avg=347.15, stdev=11.85, samples=20 00:44:17.470 lat (msec) : 50=99.03%, 100=0.52%, 250=0.46% 00:44:17.470 cpu : usr=97.10%, sys=1.90%, ctx=87, majf=0, minf=1633 00:44:17.470 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:44:17.470 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:17.470 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:17.470 issued rwts: total=3488,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:17.470 latency : target=0, window=0, percentile=100.00%, depth=16 00:44:17.470 filename0: (groupid=0, jobs=1): err= 0: pid=3213450: Fri Dec 6 01:21:49 2024 00:44:17.470 read: IOPS=347, BW=1388KiB/s (1421kB/s)(13.7MiB/10097msec) 00:44:17.470 slat (nsec): min=8110, max=80980, avg=33355.71, stdev=8993.26 00:44:17.470 clat (msec): min=43, max=127, avg=45.80, stdev= 5.86 00:44:17.470 lat (msec): min=43, max=127, avg=45.84, stdev= 5.86 00:44:17.470 clat percentiles (msec): 00:44:17.470 | 1.00th=[ 44], 5.00th=[ 44], 10.00th=[ 45], 20.00th=[ 45], 
00:44:17.470 | 30.00th=[ 45], 40.00th=[ 45], 50.00th=[ 46], 60.00th=[ 46], 00:44:17.470 | 70.00th=[ 46], 80.00th=[ 46], 90.00th=[ 47], 95.00th=[ 48], 00:44:17.470 | 99.00th=[ 50], 99.50th=[ 71], 99.90th=[ 128], 99.95th=[ 128], 00:44:17.470 | 99.99th=[ 128] 00:44:17.470 bw ( KiB/s): min= 1280, max= 1536, per=4.19%, avg=1395.20, stdev=57.24, samples=20 00:44:17.470 iops : min= 320, max= 384, avg=348.80, stdev=14.31, samples=20 00:44:17.470 lat (msec) : 50=99.09%, 100=0.46%, 250=0.46% 00:44:17.470 cpu : usr=97.97%, sys=1.47%, ctx=52, majf=0, minf=1633 00:44:17.470 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:44:17.470 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:17.470 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:17.470 issued rwts: total=3504,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:17.470 latency : target=0, window=0, percentile=100.00%, depth=16 00:44:17.470 filename1: (groupid=0, jobs=1): err= 0: pid=3213451: Fri Dec 6 01:21:49 2024 00:44:17.470 read: IOPS=347, BW=1388KiB/s (1422kB/s)(13.7MiB/10095msec) 00:44:17.470 slat (nsec): min=4987, max=98544, avg=32880.12, stdev=11903.97 00:44:17.470 clat (msec): min=24, max=127, avg=45.80, stdev= 6.01 00:44:17.470 lat (msec): min=24, max=127, avg=45.83, stdev= 6.01 00:44:17.470 clat percentiles (msec): 00:44:17.470 | 1.00th=[ 44], 5.00th=[ 44], 10.00th=[ 45], 20.00th=[ 45], 00:44:17.470 | 30.00th=[ 45], 40.00th=[ 46], 50.00th=[ 46], 60.00th=[ 46], 00:44:17.470 | 70.00th=[ 46], 80.00th=[ 46], 90.00th=[ 47], 95.00th=[ 48], 00:44:17.470 | 99.00th=[ 63], 99.50th=[ 96], 99.90th=[ 128], 99.95th=[ 128], 00:44:17.470 | 99.99th=[ 128] 00:44:17.470 bw ( KiB/s): min= 1280, max= 1536, per=4.19%, avg=1395.20, stdev=57.24, samples=20 00:44:17.470 iops : min= 320, max= 384, avg=348.80, stdev=14.31, samples=20 00:44:17.470 lat (msec) : 50=98.97%, 100=0.57%, 250=0.46% 00:44:17.470 cpu : usr=98.31%, sys=1.18%, ctx=35, majf=0, minf=1633 00:44:17.470 IO depths : 1=5.9%, 2=12.1%, 4=25.0%, 8=50.4%, 16=6.6%, 32=0.0%, >=64=0.0% 00:44:17.470 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:17.470 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:17.470 issued rwts: total=3504,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:17.470 latency : target=0, window=0, percentile=100.00%, depth=16 00:44:17.470 filename1: (groupid=0, jobs=1): err= 0: pid=3213452: Fri Dec 6 01:21:49 2024 00:44:17.470 read: IOPS=346, BW=1385KiB/s (1418kB/s)(13.6MiB/10075msec) 00:44:17.470 slat (nsec): min=8243, max=74220, avg=35401.16, stdev=9557.63 00:44:17.470 clat (msec): min=34, max=130, avg=45.89, stdev= 6.60 00:44:17.470 lat (msec): min=34, max=130, avg=45.92, stdev= 6.60 00:44:17.470 clat percentiles (msec): 00:44:17.470 | 1.00th=[ 44], 5.00th=[ 44], 10.00th=[ 45], 20.00th=[ 45], 00:44:17.470 | 30.00th=[ 45], 40.00th=[ 45], 50.00th=[ 46], 60.00th=[ 46], 00:44:17.470 | 70.00th=[ 46], 80.00th=[ 46], 90.00th=[ 47], 95.00th=[ 48], 00:44:17.470 | 99.00th=[ 50], 99.50th=[ 103], 99.90th=[ 131], 99.95th=[ 131], 00:44:17.470 | 99.99th=[ 131] 00:44:17.470 bw ( KiB/s): min= 1280, max= 1408, per=4.16%, avg=1388.90, stdev=46.65, samples=20 00:44:17.470 iops : min= 320, max= 352, avg=347.20, stdev=11.72, samples=20 00:44:17.470 lat (msec) : 50=99.08%, 100=0.40%, 250=0.52% 00:44:17.470 cpu : usr=98.18%, sys=1.33%, ctx=17, majf=0, minf=1631 00:44:17.470 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:44:17.470 submit : 0=0.0%, 
4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:17.470 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:17.470 issued rwts: total=3488,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:17.470 latency : target=0, window=0, percentile=100.00%, depth=16 00:44:17.470 filename1: (groupid=0, jobs=1): err= 0: pid=3213453: Fri Dec 6 01:21:49 2024 00:44:17.470 read: IOPS=368, BW=1473KiB/s (1509kB/s)(14.5MiB/10078msec) 00:44:17.470 slat (nsec): min=8291, max=79542, avg=27141.90, stdev=11270.29 00:44:17.470 clat (msec): min=20, max=127, avg=43.21, stdev= 9.57 00:44:17.470 lat (msec): min=20, max=127, avg=43.23, stdev= 9.58 00:44:17.470 clat percentiles (msec): 00:44:17.470 | 1.00th=[ 28], 5.00th=[ 30], 10.00th=[ 31], 20.00th=[ 39], 00:44:17.470 | 30.00th=[ 45], 40.00th=[ 45], 50.00th=[ 45], 60.00th=[ 46], 00:44:17.470 | 70.00th=[ 46], 80.00th=[ 46], 90.00th=[ 46], 95.00th=[ 47], 00:44:17.470 | 99.00th=[ 64], 99.50th=[ 116], 99.90th=[ 128], 99.95th=[ 128], 00:44:17.470 | 99.99th=[ 128] 00:44:17.470 bw ( KiB/s): min= 1154, max= 1888, per=4.43%, avg=1478.50, stdev=178.69, samples=20 00:44:17.470 iops : min= 288, max= 472, avg=369.60, stdev=44.72, samples=20 00:44:17.470 lat (msec) : 50=97.17%, 100=1.97%, 250=0.86% 00:44:17.470 cpu : usr=97.09%, sys=1.80%, ctx=202, majf=0, minf=1633 00:44:17.471 IO depths : 1=3.6%, 2=8.1%, 4=19.7%, 8=59.4%, 16=9.2%, 32=0.0%, >=64=0.0% 00:44:17.471 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:17.471 complete : 0=0.0%, 4=92.7%, 8=1.8%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:17.471 issued rwts: total=3712,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:17.471 latency : target=0, window=0, percentile=100.00%, depth=16 00:44:17.471 filename1: (groupid=0, jobs=1): err= 0: pid=3213454: Fri Dec 6 01:21:49 2024 00:44:17.471 read: IOPS=346, BW=1387KiB/s (1420kB/s)(13.7MiB/10107msec) 00:44:17.471 slat (usec): min=11, max=105, avg=41.96, stdev=12.46 00:44:17.471 clat (msec): min=39, max=137, avg=45.74, stdev= 6.13 00:44:17.471 lat (msec): min=39, max=137, avg=45.78, stdev= 6.13 00:44:17.471 clat percentiles (msec): 00:44:17.471 | 1.00th=[ 44], 5.00th=[ 44], 10.00th=[ 45], 20.00th=[ 45], 00:44:17.471 | 30.00th=[ 45], 40.00th=[ 45], 50.00th=[ 46], 60.00th=[ 46], 00:44:17.471 | 70.00th=[ 46], 80.00th=[ 46], 90.00th=[ 47], 95.00th=[ 48], 00:44:17.471 | 99.00th=[ 50], 99.50th=[ 73], 99.90th=[ 130], 99.95th=[ 138], 00:44:17.471 | 99.99th=[ 138] 00:44:17.471 bw ( KiB/s): min= 1280, max= 1536, per=4.19%, avg=1395.20, stdev=57.24, samples=20 00:44:17.471 iops : min= 320, max= 384, avg=348.80, stdev=14.31, samples=20 00:44:17.471 lat (msec) : 50=99.09%, 100=0.46%, 250=0.46% 00:44:17.471 cpu : usr=97.60%, sys=1.62%, ctx=66, majf=0, minf=1633 00:44:17.471 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:44:17.471 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:17.471 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:17.471 issued rwts: total=3504,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:17.471 latency : target=0, window=0, percentile=100.00%, depth=16 00:44:17.471 filename1: (groupid=0, jobs=1): err= 0: pid=3213455: Fri Dec 6 01:21:49 2024 00:44:17.471 read: IOPS=346, BW=1385KiB/s (1418kB/s)(13.6MiB/10077msec) 00:44:17.471 slat (nsec): min=12002, max=95877, avg=40830.53, stdev=10982.70 00:44:17.471 clat (msec): min=34, max=130, avg=45.84, stdev= 6.65 00:44:17.471 lat (msec): min=34, max=130, avg=45.88, stdev= 6.65 00:44:17.471 clat 
percentiles (msec): 00:44:17.471 | 1.00th=[ 44], 5.00th=[ 44], 10.00th=[ 45], 20.00th=[ 45], 00:44:17.471 | 30.00th=[ 45], 40.00th=[ 45], 50.00th=[ 46], 60.00th=[ 46], 00:44:17.471 | 70.00th=[ 46], 80.00th=[ 46], 90.00th=[ 47], 95.00th=[ 48], 00:44:17.471 | 99.00th=[ 50], 99.50th=[ 104], 99.90th=[ 130], 99.95th=[ 131], 00:44:17.471 | 99.99th=[ 131] 00:44:17.471 bw ( KiB/s): min= 1280, max= 1408, per=4.16%, avg=1388.80, stdev=46.89, samples=20 00:44:17.471 iops : min= 320, max= 352, avg=347.20, stdev=11.72, samples=20 00:44:17.471 lat (msec) : 50=99.08%, 100=0.40%, 250=0.52% 00:44:17.471 cpu : usr=98.18%, sys=1.31%, ctx=15, majf=0, minf=1633 00:44:17.471 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:44:17.471 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:17.471 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:17.471 issued rwts: total=3488,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:17.471 latency : target=0, window=0, percentile=100.00%, depth=16 00:44:17.471 filename1: (groupid=0, jobs=1): err= 0: pid=3213456: Fri Dec 6 01:21:49 2024 00:44:17.471 read: IOPS=351, BW=1406KiB/s (1439kB/s)(13.8MiB/10028msec) 00:44:17.471 slat (nsec): min=11935, max=80987, avg=31320.52, stdev=10992.01 00:44:17.471 clat (msec): min=24, max=104, avg=45.26, stdev= 7.05 00:44:17.471 lat (msec): min=24, max=104, avg=45.29, stdev= 7.05 00:44:17.471 clat percentiles (msec): 00:44:17.471 | 1.00th=[ 29], 5.00th=[ 35], 10.00th=[ 44], 20.00th=[ 45], 00:44:17.471 | 30.00th=[ 45], 40.00th=[ 45], 50.00th=[ 46], 60.00th=[ 46], 00:44:17.471 | 70.00th=[ 46], 80.00th=[ 46], 90.00th=[ 47], 95.00th=[ 48], 00:44:17.471 | 99.00th=[ 74], 99.50th=[ 95], 99.90th=[ 105], 99.95th=[ 105], 00:44:17.471 | 99.99th=[ 105] 00:44:17.471 bw ( KiB/s): min= 1152, max= 1776, per=4.21%, avg=1403.20, stdev=109.52, samples=20 00:44:17.471 iops : min= 288, max= 444, avg=350.80, stdev=27.38, samples=20 00:44:17.471 lat (msec) : 50=96.42%, 100=3.12%, 250=0.45% 00:44:17.471 cpu : usr=98.39%, sys=1.10%, ctx=14, majf=0, minf=1631 00:44:17.471 IO depths : 1=4.2%, 2=9.8%, 4=23.0%, 8=54.6%, 16=8.4%, 32=0.0%, >=64=0.0% 00:44:17.471 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:17.471 complete : 0=0.0%, 4=93.6%, 8=0.7%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:17.471 issued rwts: total=3524,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:17.471 latency : target=0, window=0, percentile=100.00%, depth=16 00:44:17.471 filename1: (groupid=0, jobs=1): err= 0: pid=3213457: Fri Dec 6 01:21:49 2024 00:44:17.471 read: IOPS=347, BW=1391KiB/s (1424kB/s)(13.8MiB/10122msec) 00:44:17.471 slat (usec): min=4, max=108, avg=44.20, stdev=14.31 00:44:17.471 clat (msec): min=28, max=129, avg=45.62, stdev= 6.06 00:44:17.471 lat (msec): min=28, max=129, avg=45.67, stdev= 6.05 00:44:17.471 clat percentiles (msec): 00:44:17.471 | 1.00th=[ 43], 5.00th=[ 44], 10.00th=[ 44], 20.00th=[ 45], 00:44:17.471 | 30.00th=[ 45], 40.00th=[ 45], 50.00th=[ 46], 60.00th=[ 46], 00:44:17.471 | 70.00th=[ 46], 80.00th=[ 46], 90.00th=[ 47], 95.00th=[ 48], 00:44:17.471 | 99.00th=[ 50], 99.50th=[ 68], 99.90th=[ 130], 99.95th=[ 130], 00:44:17.471 | 99.99th=[ 130] 00:44:17.471 bw ( KiB/s): min= 1280, max= 1536, per=4.20%, avg=1401.60, stdev=50.44, samples=20 00:44:17.471 iops : min= 320, max= 384, avg=350.40, stdev=12.61, samples=20 00:44:17.471 lat (msec) : 50=99.09%, 100=0.45%, 250=0.45% 00:44:17.471 cpu : usr=98.41%, sys=1.08%, ctx=21, majf=0, minf=1633 00:44:17.471 IO depths : 1=6.2%, 
2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:44:17.471 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:17.471 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:17.471 issued rwts: total=3520,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:17.471 latency : target=0, window=0, percentile=100.00%, depth=16 00:44:17.471 filename1: (groupid=0, jobs=1): err= 0: pid=3213458: Fri Dec 6 01:21:49 2024 00:44:17.471 read: IOPS=346, BW=1387KiB/s (1420kB/s)(13.7MiB/10105msec) 00:44:17.471 slat (usec): min=5, max=113, avg=66.21, stdev=11.40 00:44:17.471 clat (msec): min=25, max=129, avg=45.53, stdev= 6.03 00:44:17.471 lat (msec): min=25, max=129, avg=45.59, stdev= 6.03 00:44:17.471 clat percentiles (msec): 00:44:17.471 | 1.00th=[ 43], 5.00th=[ 44], 10.00th=[ 44], 20.00th=[ 45], 00:44:17.471 | 30.00th=[ 45], 40.00th=[ 45], 50.00th=[ 45], 60.00th=[ 46], 00:44:17.471 | 70.00th=[ 46], 80.00th=[ 46], 90.00th=[ 47], 95.00th=[ 47], 00:44:17.471 | 99.00th=[ 64], 99.50th=[ 77], 99.90th=[ 128], 99.95th=[ 129], 00:44:17.471 | 99.99th=[ 130] 00:44:17.471 bw ( KiB/s): min= 1280, max= 1536, per=4.19%, avg=1395.20, stdev=57.24, samples=20 00:44:17.471 iops : min= 320, max= 384, avg=348.80, stdev=14.31, samples=20 00:44:17.471 lat (msec) : 50=98.97%, 100=0.57%, 250=0.46% 00:44:17.471 cpu : usr=97.97%, sys=1.46%, ctx=14, majf=0, minf=1631 00:44:17.471 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:44:17.471 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:17.471 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:17.471 issued rwts: total=3504,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:17.471 latency : target=0, window=0, percentile=100.00%, depth=16 00:44:17.471 filename2: (groupid=0, jobs=1): err= 0: pid=3213459: Fri Dec 6 01:21:49 2024 00:44:17.471 read: IOPS=346, BW=1385KiB/s (1418kB/s)(13.6MiB/10073msec) 00:44:17.471 slat (nsec): min=11937, max=88312, avg=38833.18, stdev=10771.31 00:44:17.471 clat (msec): min=33, max=130, avg=45.84, stdev= 6.52 00:44:17.471 lat (msec): min=33, max=130, avg=45.88, stdev= 6.52 00:44:17.471 clat percentiles (msec): 00:44:17.471 | 1.00th=[ 44], 5.00th=[ 44], 10.00th=[ 45], 20.00th=[ 45], 00:44:17.471 | 30.00th=[ 45], 40.00th=[ 45], 50.00th=[ 46], 60.00th=[ 46], 00:44:17.471 | 70.00th=[ 46], 80.00th=[ 46], 90.00th=[ 47], 95.00th=[ 48], 00:44:17.471 | 99.00th=[ 50], 99.50th=[ 100], 99.90th=[ 130], 99.95th=[ 131], 00:44:17.471 | 99.99th=[ 131] 00:44:17.471 bw ( KiB/s): min= 1280, max= 1536, per=4.16%, avg=1388.90, stdev=62.46, samples=20 00:44:17.471 iops : min= 320, max= 384, avg=347.20, stdev=15.66, samples=20 00:44:17.471 lat (msec) : 50=99.08%, 100=0.46%, 250=0.46% 00:44:17.471 cpu : usr=98.01%, sys=1.26%, ctx=54, majf=0, minf=1631 00:44:17.471 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:44:17.471 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:17.471 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:17.471 issued rwts: total=3488,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:17.471 latency : target=0, window=0, percentile=100.00%, depth=16 00:44:17.471 filename2: (groupid=0, jobs=1): err= 0: pid=3213460: Fri Dec 6 01:21:49 2024 00:44:17.471 read: IOPS=347, BW=1391KiB/s (1424kB/s)(13.8MiB/10124msec) 00:44:17.471 slat (usec): min=7, max=105, avg=40.50, stdev=14.28 00:44:17.471 clat (msec): min=28, max=129, avg=45.67, stdev= 6.06 00:44:17.471 lat 
(msec): min=28, max=129, avg=45.72, stdev= 6.06 00:44:17.471 clat percentiles (msec): 00:44:17.471 | 1.00th=[ 43], 5.00th=[ 44], 10.00th=[ 45], 20.00th=[ 45], 00:44:17.471 | 30.00th=[ 45], 40.00th=[ 45], 50.00th=[ 46], 60.00th=[ 46], 00:44:17.471 | 70.00th=[ 46], 80.00th=[ 46], 90.00th=[ 47], 95.00th=[ 48], 00:44:17.471 | 99.00th=[ 50], 99.50th=[ 70], 99.90th=[ 130], 99.95th=[ 130], 00:44:17.471 | 99.99th=[ 130] 00:44:17.471 bw ( KiB/s): min= 1280, max= 1536, per=4.20%, avg=1401.60, stdev=50.44, samples=20 00:44:17.471 iops : min= 320, max= 384, avg=350.40, stdev=12.61, samples=20 00:44:17.471 lat (msec) : 50=99.09%, 100=0.45%, 250=0.45% 00:44:17.471 cpu : usr=98.03%, sys=1.45%, ctx=16, majf=0, minf=1634 00:44:17.471 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:44:17.471 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:17.471 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:17.471 issued rwts: total=3520,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:17.471 latency : target=0, window=0, percentile=100.00%, depth=16 00:44:17.471 filename2: (groupid=0, jobs=1): err= 0: pid=3213461: Fri Dec 6 01:21:49 2024 00:44:17.471 read: IOPS=346, BW=1386KiB/s (1419kB/s)(13.6MiB/10068msec) 00:44:17.471 slat (usec): min=14, max=116, avg=46.22, stdev=18.75 00:44:17.471 clat (msec): min=42, max=130, avg=45.74, stdev= 6.34 00:44:17.471 lat (msec): min=42, max=130, avg=45.79, stdev= 6.34 00:44:17.471 clat percentiles (msec): 00:44:17.471 | 1.00th=[ 44], 5.00th=[ 44], 10.00th=[ 44], 20.00th=[ 45], 00:44:17.472 | 30.00th=[ 45], 40.00th=[ 45], 50.00th=[ 46], 60.00th=[ 46], 00:44:17.472 | 70.00th=[ 46], 80.00th=[ 46], 90.00th=[ 47], 95.00th=[ 47], 00:44:17.472 | 99.00th=[ 50], 99.50th=[ 84], 99.90th=[ 130], 99.95th=[ 131], 00:44:17.472 | 99.99th=[ 131] 00:44:17.472 bw ( KiB/s): min= 1280, max= 1536, per=4.16%, avg=1388.80, stdev=62.64, samples=20 00:44:17.472 iops : min= 320, max= 384, avg=347.20, stdev=15.66, samples=20 00:44:17.472 lat (msec) : 50=99.08%, 100=0.46%, 250=0.46% 00:44:17.472 cpu : usr=96.96%, sys=1.89%, ctx=159, majf=0, minf=1633 00:44:17.472 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:44:17.472 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:17.472 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:17.472 issued rwts: total=3488,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:17.472 latency : target=0, window=0, percentile=100.00%, depth=16 00:44:17.472 filename2: (groupid=0, jobs=1): err= 0: pid=3213462: Fri Dec 6 01:21:49 2024 00:44:17.472 read: IOPS=346, BW=1385KiB/s (1418kB/s)(13.6MiB/10029msec) 00:44:17.472 slat (nsec): min=10073, max=75223, avg=28864.93, stdev=8321.31 00:44:17.472 clat (msec): min=31, max=116, avg=45.96, stdev= 6.32 00:44:17.472 lat (msec): min=31, max=116, avg=45.99, stdev= 6.32 00:44:17.472 clat percentiles (msec): 00:44:17.472 | 1.00th=[ 44], 5.00th=[ 44], 10.00th=[ 45], 20.00th=[ 45], 00:44:17.472 | 30.00th=[ 45], 40.00th=[ 46], 50.00th=[ 46], 60.00th=[ 46], 00:44:17.472 | 70.00th=[ 46], 80.00th=[ 46], 90.00th=[ 47], 95.00th=[ 47], 00:44:17.472 | 99.00th=[ 52], 99.50th=[ 105], 99.90th=[ 116], 99.95th=[ 116], 00:44:17.472 | 99.99th=[ 116] 00:44:17.472 bw ( KiB/s): min= 1152, max= 1536, per=4.15%, avg=1382.50, stdev=88.80, samples=20 00:44:17.472 iops : min= 288, max= 384, avg=345.60, stdev=22.27, samples=20 00:44:17.472 lat (msec) : 50=98.96%, 100=0.12%, 250=0.92% 00:44:17.472 cpu : usr=98.36%, 
sys=1.14%, ctx=15, majf=0, minf=1633 00:44:17.472 IO depths : 1=5.7%, 2=12.0%, 4=25.0%, 8=50.5%, 16=6.8%, 32=0.0%, >=64=0.0% 00:44:17.472 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:17.472 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:17.472 issued rwts: total=3472,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:17.472 latency : target=0, window=0, percentile=100.00%, depth=16 00:44:17.472 filename2: (groupid=0, jobs=1): err= 0: pid=3213463: Fri Dec 6 01:21:49 2024 00:44:17.472 read: IOPS=347, BW=1388KiB/s (1421kB/s)(13.6MiB/10051msec) 00:44:17.472 slat (nsec): min=13068, max=92852, avg=36924.63, stdev=16945.54 00:44:17.472 clat (msec): min=32, max=105, avg=45.78, stdev= 5.31 00:44:17.472 lat (msec): min=32, max=106, avg=45.82, stdev= 5.30 00:44:17.472 clat percentiles (msec): 00:44:17.472 | 1.00th=[ 44], 5.00th=[ 44], 10.00th=[ 44], 20.00th=[ 45], 00:44:17.472 | 30.00th=[ 45], 40.00th=[ 45], 50.00th=[ 46], 60.00th=[ 46], 00:44:17.472 | 70.00th=[ 46], 80.00th=[ 46], 90.00th=[ 47], 95.00th=[ 47], 00:44:17.472 | 99.00th=[ 50], 99.50th=[ 105], 99.90th=[ 105], 99.95th=[ 106], 00:44:17.472 | 99.99th=[ 107] 00:44:17.472 bw ( KiB/s): min= 1280, max= 1408, per=4.16%, avg=1388.80, stdev=46.89, samples=20 00:44:17.472 iops : min= 320, max= 352, avg=347.20, stdev=11.72, samples=20 00:44:17.472 lat (msec) : 50=99.08%, 100=0.40%, 250=0.52% 00:44:17.472 cpu : usr=98.21%, sys=1.28%, ctx=19, majf=0, minf=1632 00:44:17.472 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:44:17.472 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:17.472 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:17.472 issued rwts: total=3488,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:17.472 latency : target=0, window=0, percentile=100.00%, depth=16 00:44:17.472 filename2: (groupid=0, jobs=1): err= 0: pid=3213464: Fri Dec 6 01:21:49 2024 00:44:17.472 read: IOPS=347, BW=1390KiB/s (1424kB/s)(13.8MiB/10128msec) 00:44:17.472 slat (usec): min=8, max=104, avg=33.87, stdev=13.79 00:44:17.472 clat (msec): min=28, max=129, avg=45.75, stdev= 6.13 00:44:17.472 lat (msec): min=28, max=129, avg=45.78, stdev= 6.13 00:44:17.472 clat percentiles (msec): 00:44:17.472 | 1.00th=[ 43], 5.00th=[ 44], 10.00th=[ 45], 20.00th=[ 45], 00:44:17.472 | 30.00th=[ 45], 40.00th=[ 45], 50.00th=[ 46], 60.00th=[ 46], 00:44:17.472 | 70.00th=[ 46], 80.00th=[ 46], 90.00th=[ 47], 95.00th=[ 48], 00:44:17.472 | 99.00th=[ 50], 99.50th=[ 74], 99.90th=[ 130], 99.95th=[ 130], 00:44:17.472 | 99.99th=[ 130] 00:44:17.472 bw ( KiB/s): min= 1280, max= 1536, per=4.20%, avg=1401.60, stdev=50.44, samples=20 00:44:17.472 iops : min= 320, max= 384, avg=350.40, stdev=12.61, samples=20 00:44:17.472 lat (msec) : 50=99.09%, 100=0.45%, 250=0.45% 00:44:17.472 cpu : usr=96.95%, sys=1.80%, ctx=115, majf=0, minf=1632 00:44:17.472 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:44:17.472 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:17.472 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:17.472 issued rwts: total=3520,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:17.472 latency : target=0, window=0, percentile=100.00%, depth=16 00:44:17.472 filename2: (groupid=0, jobs=1): err= 0: pid=3213465: Fri Dec 6 01:21:49 2024 00:44:17.472 read: IOPS=347, BW=1389KiB/s (1422kB/s)(13.6MiB/10044msec) 00:44:17.472 slat (nsec): min=12241, max=93451, avg=33509.33, stdev=14321.11 
00:44:17.472 clat (msec): min=31, max=104, avg=45.76, stdev= 5.03 00:44:17.472 lat (msec): min=31, max=104, avg=45.79, stdev= 5.03 00:44:17.472 clat percentiles (msec): 00:44:17.472 | 1.00th=[ 44], 5.00th=[ 44], 10.00th=[ 45], 20.00th=[ 45], 00:44:17.472 | 30.00th=[ 45], 40.00th=[ 45], 50.00th=[ 46], 60.00th=[ 46], 00:44:17.472 | 70.00th=[ 46], 80.00th=[ 46], 90.00th=[ 47], 95.00th=[ 47], 00:44:17.472 | 99.00th=[ 50], 99.50th=[ 99], 99.90th=[ 105], 99.95th=[ 105], 00:44:17.472 | 99.99th=[ 105] 00:44:17.472 bw ( KiB/s): min= 1280, max= 1536, per=4.16%, avg=1388.80, stdev=62.85, samples=20 00:44:17.472 iops : min= 320, max= 384, avg=347.20, stdev=15.71, samples=20 00:44:17.472 lat (msec) : 50=99.03%, 100=0.52%, 250=0.46% 00:44:17.472 cpu : usr=97.13%, sys=1.77%, ctx=96, majf=0, minf=1632 00:44:17.472 IO depths : 1=6.1%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.4%, 32=0.0%, >=64=0.0% 00:44:17.472 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:17.472 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:17.472 issued rwts: total=3488,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:17.472 latency : target=0, window=0, percentile=100.00%, depth=16 00:44:17.472 filename2: (groupid=0, jobs=1): err= 0: pid=3213466: Fri Dec 6 01:21:49 2024 00:44:17.472 read: IOPS=353, BW=1414KiB/s (1448kB/s)(13.9MiB/10094msec) 00:44:17.472 slat (nsec): min=9501, max=87016, avg=23541.29, stdev=10593.68 00:44:17.472 clat (msec): min=6, max=104, avg=45.06, stdev= 5.87 00:44:17.472 lat (msec): min=6, max=104, avg=45.08, stdev= 5.87 00:44:17.472 clat percentiles (msec): 00:44:17.472 | 1.00th=[ 19], 5.00th=[ 44], 10.00th=[ 45], 20.00th=[ 45], 00:44:17.472 | 30.00th=[ 45], 40.00th=[ 46], 50.00th=[ 46], 60.00th=[ 46], 00:44:17.472 | 70.00th=[ 46], 80.00th=[ 46], 90.00th=[ 47], 95.00th=[ 47], 00:44:17.472 | 99.00th=[ 49], 99.50th=[ 57], 99.90th=[ 105], 99.95th=[ 105], 00:44:17.472 | 99.99th=[ 105] 00:44:17.472 bw ( KiB/s): min= 1280, max= 1792, per=4.26%, avg=1420.80, stdev=91.93, samples=20 00:44:17.472 iops : min= 320, max= 448, avg=355.20, stdev=22.98, samples=20 00:44:17.472 lat (msec) : 10=0.70%, 20=0.64%, 50=98.09%, 100=0.11%, 250=0.45% 00:44:17.472 cpu : usr=98.32%, sys=1.18%, ctx=14, majf=0, minf=1632 00:44:17.472 IO depths : 1=6.1%, 2=12.3%, 4=24.9%, 8=50.3%, 16=6.4%, 32=0.0%, >=64=0.0% 00:44:17.472 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:17.472 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:17.472 issued rwts: total=3568,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:17.472 latency : target=0, window=0, percentile=100.00%, depth=16 00:44:17.472 00:44:17.472 Run status group 0 (all jobs): 00:44:17.472 READ: bw=32.5MiB/s (34.1MB/s), 1384KiB/s-1473KiB/s (1417kB/s-1509kB/s), io=330MiB (346MB), run=10010-10128msec 00:44:18.408 ----------------------------------------------------- 00:44:18.408 Suppressions used: 00:44:18.408 count bytes template 00:44:18.408 45 402 /usr/src/fio/parse.c 00:44:18.408 1 8 libtcmalloc_minimal.so 00:44:18.408 1 904 libcrypto.so 00:44:18.408 ----------------------------------------------------- 00:44:18.408 00:44:18.408 01:21:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:44:18.408 01:21:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:44:18.408 01:21:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:44:18.408 01:21:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:44:18.408 
01:21:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:44:18.408 01:21:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:44:18.408 01:21:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:18.408 01:21:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:18.408 01:21:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:18.408 01:21:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:44:18.408 01:21:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:18.408 01:21:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:18.408 01:21:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:18.408 01:21:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:44:18.408 01:21:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:44:18.408 01:21:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:44:18.408 01:21:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:44:18.408 01:21:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:18.409 01:21:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:18.409 01:21:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:18.409 01:21:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:44:18.409 01:21:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:18.409 01:21:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:18.409 01:21:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:18.409 01:21:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:44:18.409 01:21:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:44:18.409 01:21:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:44:18.409 01:21:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:44:18.409 01:21:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:18.409 01:21:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:18.409 01:21:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:18.409 01:21:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:44:18.409 01:21:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:18.409 01:21:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:18.409 01:21:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:18.409 01:21:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:44:18.409 01:21:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:44:18.409 01:21:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:44:18.409 01:21:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:44:18.409 01:21:50 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:44:18.409 01:21:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:44:18.409 01:21:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:44:18.409 01:21:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:44:18.409 01:21:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:44:18.409 01:21:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:44:18.409 01:21:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:44:18.409 01:21:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:44:18.409 01:21:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:18.409 01:21:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:18.409 bdev_null0 00:44:18.409 01:21:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:18.409 01:21:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:44:18.409 01:21:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:18.409 01:21:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:18.409 01:21:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:18.409 01:21:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:44:18.409 01:21:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:18.409 01:21:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:18.409 01:21:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:18.409 01:21:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:44:18.409 01:21:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:18.409 01:21:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:18.409 [2024-12-06 01:21:50.868806] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:44:18.409 01:21:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:18.409 01:21:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:44:18.409 01:21:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:44:18.409 01:21:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:44:18.409 01:21:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:44:18.409 01:21:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:18.409 01:21:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:18.409 bdev_null1 00:44:18.409 01:21:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:18.409 01:21:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:44:18.409 
01:21:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:18.409 01:21:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:18.409 01:21:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:18.409 01:21:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:44:18.409 01:21:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:18.409 01:21:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:18.409 01:21:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:18.409 01:21:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:44:18.409 01:21:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:18.409 01:21:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:18.409 01:21:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:18.409 01:21:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:44:18.409 01:21:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:44:18.409 01:21:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:44:18.409 01:21:50 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:44:18.409 01:21:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:44:18.409 01:21:50 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:44:18.409 01:21:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:44:18.409 01:21:50 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:44:18.409 01:21:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:44:18.409 01:21:50 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:44:18.409 { 00:44:18.409 "params": { 00:44:18.409 "name": "Nvme$subsystem", 00:44:18.409 "trtype": "$TEST_TRANSPORT", 00:44:18.409 "traddr": "$NVMF_FIRST_TARGET_IP", 00:44:18.409 "adrfam": "ipv4", 00:44:18.409 "trsvcid": "$NVMF_PORT", 00:44:18.409 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:44:18.409 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:44:18.409 "hdgst": ${hdgst:-false}, 00:44:18.409 "ddgst": ${ddgst:-false} 00:44:18.409 }, 00:44:18.409 "method": "bdev_nvme_attach_controller" 00:44:18.409 } 00:44:18.409 EOF 00:44:18.409 )") 00:44:18.409 01:21:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:44:18.409 01:21:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:44:18.409 01:21:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:44:18.409 01:21:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:44:18.409 01:21:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:44:18.409 01:21:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local 
plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:44:18.409 01:21:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:44:18.409 01:21:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:44:18.409 01:21:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:44:18.409 01:21:50 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:44:18.409 01:21:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:44:18.409 01:21:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:44:18.409 01:21:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:44:18.409 01:21:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:44:18.409 01:21:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:44:18.409 01:21:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:44:18.409 01:21:50 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:44:18.409 01:21:50 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:44:18.409 { 00:44:18.409 "params": { 00:44:18.409 "name": "Nvme$subsystem", 00:44:18.409 "trtype": "$TEST_TRANSPORT", 00:44:18.409 "traddr": "$NVMF_FIRST_TARGET_IP", 00:44:18.409 "adrfam": "ipv4", 00:44:18.409 "trsvcid": "$NVMF_PORT", 00:44:18.409 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:44:18.409 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:44:18.409 "hdgst": ${hdgst:-false}, 00:44:18.409 "ddgst": ${ddgst:-false} 00:44:18.409 }, 00:44:18.409 "method": "bdev_nvme_attach_controller" 00:44:18.409 } 00:44:18.409 EOF 00:44:18.409 )") 00:44:18.409 01:21:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:44:18.409 01:21:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:44:18.409 01:21:50 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:44:18.409 01:21:50 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
00:44:18.409 01:21:50 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:44:18.409 01:21:50 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:44:18.409 "params": { 00:44:18.409 "name": "Nvme0", 00:44:18.409 "trtype": "tcp", 00:44:18.409 "traddr": "10.0.0.2", 00:44:18.409 "adrfam": "ipv4", 00:44:18.409 "trsvcid": "4420", 00:44:18.409 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:44:18.409 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:44:18.410 "hdgst": false, 00:44:18.410 "ddgst": false 00:44:18.410 }, 00:44:18.410 "method": "bdev_nvme_attach_controller" 00:44:18.410 },{ 00:44:18.410 "params": { 00:44:18.410 "name": "Nvme1", 00:44:18.410 "trtype": "tcp", 00:44:18.410 "traddr": "10.0.0.2", 00:44:18.410 "adrfam": "ipv4", 00:44:18.410 "trsvcid": "4420", 00:44:18.410 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:44:18.410 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:44:18.410 "hdgst": false, 00:44:18.410 "ddgst": false 00:44:18.410 }, 00:44:18.410 "method": "bdev_nvme_attach_controller" 00:44:18.410 }' 00:44:18.410 01:21:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:44:18.410 01:21:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:44:18.410 01:21:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1351 -- # break 00:44:18.410 01:21:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:44:18.410 01:21:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:44:18.668 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:44:18.668 ... 00:44:18.668 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:44:18.668 ... 
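The xtrace above records the entire target-side setup for this test case: two null bdevs created with 16-byte metadata and DIF type 1, each wrapped in an NVMe-oF subsystem and exposed over TCP on 10.0.0.2:4420. A minimal standalone sketch of the same sequence, assuming an already-running SPDK target and the stock scripts/rpc.py client in place of the script's rpc_cmd helper:

# Recreate subsystem 0 as traced above; subsystem 1 repeats the pattern with bdev_null1/cnode1.
scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420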
00:44:18.668 fio-3.35 00:44:18.668 Starting 4 threads 00:44:25.224 00:44:25.224 filename0: (groupid=0, jobs=1): err= 0: pid=3215072: Fri Dec 6 01:21:57 2024 00:44:25.224 read: IOPS=1473, BW=11.5MiB/s (12.1MB/s)(57.6MiB/5002msec) 00:44:25.224 slat (nsec): min=9679, max=83136, avg=27758.22, stdev=11064.73 00:44:25.224 clat (usec): min=1036, max=17565, avg=5326.31, stdev=661.40 00:44:25.224 lat (usec): min=1055, max=17611, avg=5354.07, stdev=661.76 00:44:25.224 clat percentiles (usec): 00:44:25.224 | 1.00th=[ 2638], 5.00th=[ 4883], 10.00th=[ 5080], 20.00th=[ 5145], 00:44:25.224 | 30.00th=[ 5211], 40.00th=[ 5276], 50.00th=[ 5342], 60.00th=[ 5342], 00:44:25.224 | 70.00th=[ 5407], 80.00th=[ 5473], 90.00th=[ 5604], 95.00th=[ 5669], 00:44:25.224 | 99.00th=[ 8356], 99.50th=[ 8848], 99.90th=[12911], 99.95th=[13042], 00:44:25.224 | 99.99th=[17695] 00:44:25.224 bw ( KiB/s): min=11504, max=11920, per=25.03%, avg=11779.56, stdev=136.89, samples=9 00:44:25.224 iops : min= 1438, max= 1490, avg=1472.44, stdev=17.11, samples=9 00:44:25.224 lat (msec) : 2=0.49%, 4=1.48%, 10=97.92%, 20=0.11% 00:44:25.224 cpu : usr=96.02%, sys=3.40%, ctx=9, majf=0, minf=1634 00:44:25.224 IO depths : 1=0.3%, 2=20.3%, 4=53.6%, 8=25.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:44:25.224 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:25.224 complete : 0=0.0%, 4=90.9%, 8=9.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:25.224 issued rwts: total=7370,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:25.224 latency : target=0, window=0, percentile=100.00%, depth=8 00:44:25.224 filename0: (groupid=0, jobs=1): err= 0: pid=3215073: Fri Dec 6 01:21:57 2024 00:44:25.224 read: IOPS=1479, BW=11.6MiB/s (12.1MB/s)(57.8MiB/5003msec) 00:44:25.224 slat (nsec): min=5683, max=71346, avg=25722.25, stdev=10391.69 00:44:25.224 clat (usec): min=1640, max=9110, avg=5319.07, stdev=370.09 00:44:25.224 lat (usec): min=1654, max=9127, avg=5344.79, stdev=370.56 00:44:25.224 clat percentiles (usec): 00:44:25.224 | 1.00th=[ 4015], 5.00th=[ 4883], 10.00th=[ 5080], 20.00th=[ 5145], 00:44:25.224 | 30.00th=[ 5276], 40.00th=[ 5276], 50.00th=[ 5342], 60.00th=[ 5407], 00:44:25.224 | 70.00th=[ 5407], 80.00th=[ 5473], 90.00th=[ 5604], 95.00th=[ 5669], 00:44:25.224 | 99.00th=[ 6063], 99.50th=[ 6652], 99.90th=[ 8029], 99.95th=[ 8160], 00:44:25.224 | 99.99th=[ 9110] 00:44:25.224 bw ( KiB/s): min=11648, max=11968, per=25.18%, avg=11847.11, stdev=113.73, samples=9 00:44:25.224 iops : min= 1456, max= 1496, avg=1480.89, stdev=14.22, samples=9 00:44:25.224 lat (msec) : 2=0.16%, 4=0.80%, 10=99.04% 00:44:25.224 cpu : usr=96.56%, sys=2.82%, ctx=7, majf=0, minf=1636 00:44:25.224 IO depths : 1=0.7%, 2=16.3%, 4=57.1%, 8=25.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:44:25.224 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:25.224 complete : 0=0.0%, 4=91.3%, 8=8.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:25.224 issued rwts: total=7400,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:25.224 latency : target=0, window=0, percentile=100.00%, depth=8 00:44:25.224 filename1: (groupid=0, jobs=1): err= 0: pid=3215074: Fri Dec 6 01:21:57 2024 00:44:25.224 read: IOPS=1459, BW=11.4MiB/s (12.0MB/s)(57.1MiB/5004msec) 00:44:25.224 slat (nsec): min=5887, max=93287, avg=28426.63, stdev=11534.16 00:44:25.224 clat (usec): min=969, max=17284, avg=5367.99, stdev=734.23 00:44:25.224 lat (usec): min=994, max=17305, avg=5396.41, stdev=733.85 00:44:25.224 clat percentiles (usec): 00:44:25.224 | 1.00th=[ 2868], 5.00th=[ 5014], 10.00th=[ 5080], 20.00th=[ 5145], 00:44:25.224 | 
30.00th=[ 5211], 40.00th=[ 5276], 50.00th=[ 5342], 60.00th=[ 5342], 00:44:25.224 | 70.00th=[ 5407], 80.00th=[ 5473], 90.00th=[ 5604], 95.00th=[ 5800], 00:44:25.224 | 99.00th=[ 8717], 99.50th=[ 9241], 99.90th=[15795], 99.95th=[15926], 00:44:25.224 | 99.99th=[17171] 00:44:25.224 bw ( KiB/s): min=11424, max=11904, per=24.82%, avg=11678.22, stdev=149.33, samples=9 00:44:25.224 iops : min= 1428, max= 1488, avg=1459.78, stdev=18.67, samples=9 00:44:25.224 lat (usec) : 1000=0.01% 00:44:25.224 lat (msec) : 2=0.63%, 4=0.63%, 10=98.62%, 20=0.11% 00:44:25.225 cpu : usr=96.32%, sys=3.06%, ctx=12, majf=0, minf=1634 00:44:25.225 IO depths : 1=1.2%, 2=19.8%, 4=54.4%, 8=24.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:44:25.225 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:25.225 complete : 0=0.0%, 4=90.6%, 8=9.4%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:25.225 issued rwts: total=7305,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:25.225 latency : target=0, window=0, percentile=100.00%, depth=8 00:44:25.225 filename1: (groupid=0, jobs=1): err= 0: pid=3215075: Fri Dec 6 01:21:57 2024 00:44:25.225 read: IOPS=1471, BW=11.5MiB/s (12.1MB/s)(57.5MiB/5002msec) 00:44:25.225 slat (nsec): min=5845, max=93309, avg=25255.66, stdev=9475.23 00:44:25.225 clat (usec): min=1021, max=17448, avg=5348.83, stdev=528.26 00:44:25.225 lat (usec): min=1051, max=17470, avg=5374.08, stdev=528.25 00:44:25.225 clat percentiles (usec): 00:44:25.225 | 1.00th=[ 3982], 5.00th=[ 4948], 10.00th=[ 5080], 20.00th=[ 5211], 00:44:25.225 | 30.00th=[ 5276], 40.00th=[ 5276], 50.00th=[ 5342], 60.00th=[ 5407], 00:44:25.225 | 70.00th=[ 5407], 80.00th=[ 5473], 90.00th=[ 5604], 95.00th=[ 5669], 00:44:25.225 | 99.00th=[ 7046], 99.50th=[ 8455], 99.90th=[12649], 99.95th=[12649], 00:44:25.225 | 99.99th=[17433] 00:44:25.225 bw ( KiB/s): min=11543, max=11952, per=25.03%, avg=11776.78, stdev=128.63, samples=9 00:44:25.225 iops : min= 1442, max= 1494, avg=1472.00, stdev=16.28, samples=9 00:44:25.225 lat (msec) : 2=0.20%, 4=0.82%, 10=98.87%, 20=0.11% 00:44:25.225 cpu : usr=95.30%, sys=3.68%, ctx=118, majf=0, minf=1636 00:44:25.225 IO depths : 1=0.3%, 2=17.5%, 4=55.7%, 8=26.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:44:25.225 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:25.225 complete : 0=0.0%, 4=91.5%, 8=8.5%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:25.225 issued rwts: total=7359,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:25.225 latency : target=0, window=0, percentile=100.00%, depth=8 00:44:25.225 00:44:25.225 Run status group 0 (all jobs): 00:44:25.225 READ: bw=46.0MiB/s (48.2MB/s), 11.4MiB/s-11.6MiB/s (12.0MB/s-12.1MB/s), io=230MiB (241MB), run=5002-5004msec 00:44:26.160 ----------------------------------------------------- 00:44:26.160 Suppressions used: 00:44:26.160 count bytes template 00:44:26.160 6 52 /usr/src/fio/parse.c 00:44:26.160 1 8 libtcmalloc_minimal.so 00:44:26.160 1 904 libcrypto.so 00:44:26.160 ----------------------------------------------------- 00:44:26.160 00:44:26.160 01:21:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:44:26.160 01:21:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:44:26.160 01:21:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:44:26.160 01:21:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:44:26.160 01:21:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:44:26.160 01:21:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # 
rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:44:26.160 01:21:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:26.160 01:21:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:26.160 01:21:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:26.160 01:21:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:44:26.160 01:21:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:26.160 01:21:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:26.160 01:21:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:26.160 01:21:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:44:26.160 01:21:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:44:26.160 01:21:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:44:26.160 01:21:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:44:26.160 01:21:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:26.160 01:21:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:26.160 01:21:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:26.160 01:21:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:44:26.160 01:21:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:26.160 01:21:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:26.160 01:21:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:26.160 00:44:26.160 real 0m28.860s 00:44:26.160 user 4m39.262s 00:44:26.160 sys 0m6.599s 00:44:26.160 01:21:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1130 -- # xtrace_disable 00:44:26.160 01:21:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:26.160 ************************************ 00:44:26.160 END TEST fio_dif_rand_params 00:44:26.160 ************************************ 00:44:26.160 01:21:58 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:44:26.160 01:21:58 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:44:26.160 01:21:58 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:44:26.160 01:21:58 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:44:26.160 ************************************ 00:44:26.160 START TEST fio_dif_digest 00:44:26.160 ************************************ 00:44:26.160 01:21:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1129 -- # fio_dif_digest 00:44:26.160 01:21:58 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:44:26.160 01:21:58 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:44:26.160 01:21:58 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:44:26.160 01:21:58 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:44:26.160 01:21:58 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:44:26.160 01:21:58 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:44:26.160 01:21:58 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:44:26.160 
01:21:58 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:44:26.160 01:21:58 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:44:26.160 01:21:58 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:44:26.160 01:21:58 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:44:26.160 01:21:58 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:44:26.160 01:21:58 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:44:26.160 01:21:58 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:44:26.160 01:21:58 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:44:26.160 01:21:58 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:44:26.160 01:21:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:26.160 01:21:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:44:26.160 bdev_null0 00:44:26.160 01:21:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:26.160 01:21:58 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:44:26.160 01:21:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:26.160 01:21:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:44:26.160 01:21:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:26.160 01:21:58 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:44:26.160 01:21:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:26.160 01:21:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:44:26.160 01:21:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:26.160 01:21:58 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:44:26.160 01:21:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:26.160 01:21:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:44:26.160 [2024-12-06 01:21:58.778054] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:44:26.160 01:21:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:26.160 01:21:58 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:44:26.160 01:21:58 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:44:26.160 01:21:58 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:44:26.160 01:21:58 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # config=() 00:44:26.160 01:21:58 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # local subsystem config 00:44:26.160 01:21:58 nvmf_dif.fio_dif_digest -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:44:26.160 01:21:58 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:44:26.160 { 00:44:26.160 "params": { 00:44:26.160 "name": "Nvme$subsystem", 00:44:26.160 "trtype": "$TEST_TRANSPORT", 00:44:26.160 "traddr": "$NVMF_FIRST_TARGET_IP", 00:44:26.160 "adrfam": "ipv4", 00:44:26.160 "trsvcid": "$NVMF_PORT", 00:44:26.160 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:44:26.160 
"hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:44:26.160 "hdgst": ${hdgst:-false}, 00:44:26.160 "ddgst": ${ddgst:-false} 00:44:26.160 }, 00:44:26.160 "method": "bdev_nvme_attach_controller" 00:44:26.160 } 00:44:26.160 EOF 00:44:26.160 )") 00:44:26.160 01:21:58 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:44:26.160 01:21:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:44:26.160 01:21:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:44:26.160 01:21:58 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:44:26.160 01:21:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:44:26.160 01:21:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local sanitizers 00:44:26.160 01:21:58 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:44:26.160 01:21:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:44:26.160 01:21:58 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:44:26.160 01:21:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # shift 00:44:26.160 01:21:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # local asan_lib= 00:44:26.160 01:21:58 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # cat 00:44:26.160 01:21:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:44:26.160 01:21:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:44:26.160 01:21:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libasan 00:44:26.160 01:21:58 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:44:26.160 01:21:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:44:26.160 01:21:58 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:44:26.160 01:21:58 nvmf_dif.fio_dif_digest -- nvmf/common.sh@584 -- # jq . 
00:44:26.160 01:21:58 nvmf_dif.fio_dif_digest -- nvmf/common.sh@585 -- # IFS=, 00:44:26.160 01:21:58 nvmf_dif.fio_dif_digest -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:44:26.160 "params": { 00:44:26.160 "name": "Nvme0", 00:44:26.160 "trtype": "tcp", 00:44:26.160 "traddr": "10.0.0.2", 00:44:26.160 "adrfam": "ipv4", 00:44:26.160 "trsvcid": "4420", 00:44:26.160 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:44:26.160 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:44:26.160 "hdgst": true, 00:44:26.160 "ddgst": true 00:44:26.160 }, 00:44:26.160 "method": "bdev_nvme_attach_controller" 00:44:26.160 }' 00:44:26.160 01:21:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:44:26.160 01:21:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:44:26.160 01:21:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1351 -- # break 00:44:26.160 01:21:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:44:26.160 01:21:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:44:26.418 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:44:26.418 ... 00:44:26.418 fio-3.35 00:44:26.419 Starting 3 threads 00:44:38.615 00:44:38.615 filename0: (groupid=0, jobs=1): err= 0: pid=3216045: Fri Dec 6 01:22:10 2024 00:44:38.615 read: IOPS=156, BW=19.5MiB/s (20.5MB/s)(196MiB/10047msec) 00:44:38.615 slat (nsec): min=5687, max=52680, avg=20434.61, stdev=4333.74 00:44:38.615 clat (usec): min=14459, max=57134, avg=19145.85, stdev=1702.86 00:44:38.615 lat (usec): min=14476, max=57152, avg=19166.28, stdev=1702.83 00:44:38.615 clat percentiles (usec): 00:44:38.615 | 1.00th=[16909], 5.00th=[17433], 10.00th=[17695], 20.00th=[18220], 00:44:38.615 | 30.00th=[18482], 40.00th=[18744], 50.00th=[19006], 60.00th=[19268], 00:44:38.615 | 70.00th=[19530], 80.00th=[20055], 90.00th=[20579], 95.00th=[21103], 00:44:38.615 | 99.00th=[22414], 99.50th=[22938], 99.90th=[48497], 99.95th=[56886], 00:44:38.615 | 99.99th=[56886] 00:44:38.615 bw ( KiB/s): min=18688, max=20992, per=33.06%, avg=20070.40, stdev=650.82, samples=20 00:44:38.615 iops : min= 146, max= 164, avg=156.80, stdev= 5.08, samples=20 00:44:38.615 lat (msec) : 20=80.70%, 50=19.24%, 100=0.06% 00:44:38.615 cpu : usr=92.64%, sys=6.80%, ctx=19, majf=0, minf=1632 00:44:38.615 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:44:38.615 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:38.615 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:38.615 issued rwts: total=1570,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:38.615 latency : target=0, window=0, percentile=100.00%, depth=3 00:44:38.615 filename0: (groupid=0, jobs=1): err= 0: pid=3216046: Fri Dec 6 01:22:10 2024 00:44:38.615 read: IOPS=164, BW=20.5MiB/s (21.5MB/s)(205MiB/10008msec) 00:44:38.615 slat (nsec): min=8664, max=65362, avg=24078.33, stdev=7129.62 00:44:38.615 clat (usec): min=10754, max=29604, avg=18255.37, stdev=1389.54 00:44:38.615 lat (usec): min=10782, max=29647, avg=18279.45, stdev=1389.59 00:44:38.615 clat percentiles (usec): 00:44:38.615 | 1.00th=[15270], 5.00th=[16057], 10.00th=[16450], 20.00th=[17171], 00:44:38.615 | 30.00th=[17695], 40.00th=[17957], 
50.00th=[18220], 60.00th=[18482], 00:44:38.615 | 70.00th=[19006], 80.00th=[19268], 90.00th=[19792], 95.00th=[20579], 00:44:38.615 | 99.00th=[21627], 99.50th=[21890], 99.90th=[27395], 99.95th=[29492], 00:44:38.615 | 99.99th=[29492] 00:44:38.615 bw ( KiB/s): min=19968, max=22272, per=34.56%, avg=20979.20, stdev=702.19, samples=20 00:44:38.615 iops : min= 156, max= 174, avg=163.90, stdev= 5.49, samples=20 00:44:38.615 lat (msec) : 20=91.41%, 50=8.59% 00:44:38.615 cpu : usr=92.83%, sys=6.59%, ctx=15, majf=0, minf=1634 00:44:38.615 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:44:38.616 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:38.616 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:38.616 issued rwts: total=1642,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:38.616 latency : target=0, window=0, percentile=100.00%, depth=3 00:44:38.616 filename0: (groupid=0, jobs=1): err= 0: pid=3216047: Fri Dec 6 01:22:10 2024 00:44:38.616 read: IOPS=154, BW=19.3MiB/s (20.3MB/s)(194MiB/10049msec) 00:44:38.616 slat (nsec): min=6009, max=53016, avg=21639.25, stdev=4740.56 00:44:38.616 clat (usec): min=14558, max=52750, avg=19346.46, stdev=1817.37 00:44:38.616 lat (usec): min=14583, max=52795, avg=19368.10, stdev=1817.24 00:44:38.616 clat percentiles (usec): 00:44:38.616 | 1.00th=[15795], 5.00th=[17171], 10.00th=[17695], 20.00th=[18220], 00:44:38.616 | 30.00th=[18744], 40.00th=[19006], 50.00th=[19268], 60.00th=[19530], 00:44:38.616 | 70.00th=[19792], 80.00th=[20317], 90.00th=[20841], 95.00th=[21627], 00:44:38.616 | 99.00th=[22676], 99.50th=[22938], 99.90th=[49546], 99.95th=[52691], 00:44:38.616 | 99.99th=[52691] 00:44:38.616 bw ( KiB/s): min=18468, max=21760, per=32.70%, avg=19854.60, stdev=736.85, samples=20 00:44:38.616 iops : min= 144, max= 170, avg=155.10, stdev= 5.78, samples=20 00:44:38.616 lat (msec) : 20=72.14%, 50=27.80%, 100=0.06% 00:44:38.616 cpu : usr=93.41%, sys=6.02%, ctx=16, majf=0, minf=1636 00:44:38.616 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:44:38.616 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:38.616 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:38.616 issued rwts: total=1554,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:38.616 latency : target=0, window=0, percentile=100.00%, depth=3 00:44:38.616 00:44:38.616 Run status group 0 (all jobs): 00:44:38.616 READ: bw=59.3MiB/s (62.2MB/s), 19.3MiB/s-20.5MiB/s (20.3MB/s-21.5MB/s), io=596MiB (625MB), run=10008-10049msec 00:44:38.616 ----------------------------------------------------- 00:44:38.616 Suppressions used: 00:44:38.616 count bytes template 00:44:38.616 5 44 /usr/src/fio/parse.c 00:44:38.616 1 8 libtcmalloc_minimal.so 00:44:38.616 1 904 libcrypto.so 00:44:38.616 ----------------------------------------------------- 00:44:38.616 00:44:38.616 01:22:11 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:44:38.616 01:22:11 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:44:38.616 01:22:11 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:44:38.616 01:22:11 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:44:38.616 01:22:11 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:44:38.616 01:22:11 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:44:38.616 01:22:11 nvmf_dif.fio_dif_digest -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:44:38.616 01:22:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:44:38.616 01:22:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:38.616 01:22:11 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:44:38.616 01:22:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:38.616 01:22:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:44:38.616 01:22:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:38.616 00:44:38.616 real 0m12.330s 00:44:38.616 user 0m30.240s 00:44:38.616 sys 0m2.364s 00:44:38.616 01:22:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:44:38.616 01:22:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:44:38.616 ************************************ 00:44:38.616 END TEST fio_dif_digest 00:44:38.616 ************************************ 00:44:38.616 01:22:11 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:44:38.616 01:22:11 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:44:38.616 01:22:11 nvmf_dif -- nvmf/common.sh@516 -- # nvmfcleanup 00:44:38.616 01:22:11 nvmf_dif -- nvmf/common.sh@121 -- # sync 00:44:38.616 01:22:11 nvmf_dif -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:44:38.616 01:22:11 nvmf_dif -- nvmf/common.sh@124 -- # set +e 00:44:38.616 01:22:11 nvmf_dif -- nvmf/common.sh@125 -- # for i in {1..20} 00:44:38.616 01:22:11 nvmf_dif -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:44:38.616 rmmod nvme_tcp 00:44:38.616 rmmod nvme_fabrics 00:44:38.616 rmmod nvme_keyring 00:44:38.616 01:22:11 nvmf_dif -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:44:38.616 01:22:11 nvmf_dif -- nvmf/common.sh@128 -- # set -e 00:44:38.616 01:22:11 nvmf_dif -- nvmf/common.sh@129 -- # return 0 00:44:38.616 01:22:11 nvmf_dif -- nvmf/common.sh@517 -- # '[' -n 3208519 ']' 00:44:38.616 01:22:11 nvmf_dif -- nvmf/common.sh@518 -- # killprocess 3208519 00:44:38.616 01:22:11 nvmf_dif -- common/autotest_common.sh@954 -- # '[' -z 3208519 ']' 00:44:38.616 01:22:11 nvmf_dif -- common/autotest_common.sh@958 -- # kill -0 3208519 00:44:38.616 01:22:11 nvmf_dif -- common/autotest_common.sh@959 -- # uname 00:44:38.616 01:22:11 nvmf_dif -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:44:38.616 01:22:11 nvmf_dif -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3208519 00:44:38.616 01:22:11 nvmf_dif -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:44:38.616 01:22:11 nvmf_dif -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:44:38.616 01:22:11 nvmf_dif -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3208519' 00:44:38.616 killing process with pid 3208519 00:44:38.616 01:22:11 nvmf_dif -- common/autotest_common.sh@973 -- # kill 3208519 00:44:38.616 01:22:11 nvmf_dif -- common/autotest_common.sh@978 -- # wait 3208519 00:44:39.991 01:22:12 nvmf_dif -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:44:39.991 01:22:12 nvmf_dif -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:44:40.928 Waiting for block devices as requested 00:44:40.928 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:44:40.928 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:44:40.928 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:44:41.187 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:44:41.187 
0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:44:41.187 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:44:41.446 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:44:41.446 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:44:41.446 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:44:41.446 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:44:41.446 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:44:41.705 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:44:41.705 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:44:41.705 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:44:41.705 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:44:41.964 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:44:41.964 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:44:41.964 01:22:14 nvmf_dif -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:44:41.964 01:22:14 nvmf_dif -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:44:41.964 01:22:14 nvmf_dif -- nvmf/common.sh@297 -- # iptr 00:44:41.964 01:22:14 nvmf_dif -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:44:41.964 01:22:14 nvmf_dif -- nvmf/common.sh@791 -- # iptables-save 00:44:41.964 01:22:14 nvmf_dif -- nvmf/common.sh@791 -- # iptables-restore 00:44:41.964 01:22:14 nvmf_dif -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:44:41.964 01:22:14 nvmf_dif -- nvmf/common.sh@302 -- # remove_spdk_ns 00:44:41.964 01:22:14 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:44:41.964 01:22:14 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:44:41.964 01:22:14 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:44:44.504 01:22:16 nvmf_dif -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:44:44.504 00:44:44.504 real 1m16.944s 00:44:44.504 user 6m50.307s 00:44:44.504 sys 0m17.887s 00:44:44.504 01:22:16 nvmf_dif -- common/autotest_common.sh@1130 -- # xtrace_disable 00:44:44.504 01:22:16 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:44:44.504 ************************************ 00:44:44.504 END TEST nvmf_dif 00:44:44.504 ************************************ 00:44:44.504 01:22:16 -- spdk/autotest.sh@290 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:44:44.504 01:22:16 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:44:44.504 01:22:16 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:44:44.504 01:22:16 -- common/autotest_common.sh@10 -- # set +x 00:44:44.504 ************************************ 00:44:44.504 START TEST nvmf_abort_qd_sizes 00:44:44.504 ************************************ 00:44:44.504 01:22:16 nvmf_abort_qd_sizes -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:44:44.504 * Looking for test storage... 
00:44:44.504 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:44:44.504 01:22:16 nvmf_abort_qd_sizes -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:44:44.504 01:22:16 nvmf_abort_qd_sizes -- common/autotest_common.sh@1711 -- # lcov --version 00:44:44.504 01:22:16 nvmf_abort_qd_sizes -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:44:44.504 01:22:16 nvmf_abort_qd_sizes -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:44:44.504 01:22:16 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:44:44.504 01:22:16 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 00:44:44.504 01:22:16 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 00:44:44.504 01:22:16 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 00:44:44.504 01:22:16 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 00:44:44.504 01:22:16 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 00:44:44.504 01:22:16 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 00:44:44.504 01:22:16 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 00:44:44.504 01:22:16 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2 00:44:44.504 01:22:16 nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1 00:44:44.504 01:22:16 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:44:44.504 01:22:16 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in 00:44:44.504 01:22:16 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 00:44:44.504 01:22:16 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 )) 00:44:44.504 01:22:16 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:44:44.504 01:22:16 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 00:44:44.504 01:22:16 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 00:44:44.504 01:22:16 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:44:44.504 01:22:16 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 00:44:44.504 01:22:16 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 00:44:44.504 01:22:16 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 00:44:44.504 01:22:16 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 00:44:44.504 01:22:16 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:44:44.504 01:22:16 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 00:44:44.504 01:22:16 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 00:44:44.504 01:22:16 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:44:44.504 01:22:16 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:44:44.504 01:22:16 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 00:44:44.504 01:22:16 nvmf_abort_qd_sizes -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:44:44.504 01:22:16 nvmf_abort_qd_sizes -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:44:44.504 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:44.504 --rc genhtml_branch_coverage=1 00:44:44.504 --rc genhtml_function_coverage=1 00:44:44.504 --rc genhtml_legend=1 00:44:44.504 --rc geninfo_all_blocks=1 00:44:44.504 --rc geninfo_unexecuted_blocks=1 00:44:44.504 00:44:44.504 ' 00:44:44.504 01:22:16 nvmf_abort_qd_sizes -- common/autotest_common.sh@1724 -- # 
LCOV_OPTS=' 00:44:44.504 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:44.504 --rc genhtml_branch_coverage=1 00:44:44.504 --rc genhtml_function_coverage=1 00:44:44.504 --rc genhtml_legend=1 00:44:44.504 --rc geninfo_all_blocks=1 00:44:44.504 --rc geninfo_unexecuted_blocks=1 00:44:44.504 00:44:44.504 ' 00:44:44.504 01:22:16 nvmf_abort_qd_sizes -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:44:44.504 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:44.504 --rc genhtml_branch_coverage=1 00:44:44.504 --rc genhtml_function_coverage=1 00:44:44.504 --rc genhtml_legend=1 00:44:44.504 --rc geninfo_all_blocks=1 00:44:44.504 --rc geninfo_unexecuted_blocks=1 00:44:44.504 00:44:44.504 ' 00:44:44.504 01:22:16 nvmf_abort_qd_sizes -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:44:44.504 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:44.504 --rc genhtml_branch_coverage=1 00:44:44.504 --rc genhtml_function_coverage=1 00:44:44.504 --rc genhtml_legend=1 00:44:44.504 --rc geninfo_all_blocks=1 00:44:44.504 --rc geninfo_unexecuted_blocks=1 00:44:44.504 00:44:44.504 ' 00:44:44.504 01:22:16 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:44:44.504 01:22:16 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:44:44.504 01:22:16 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:44:44.504 01:22:16 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:44:44.504 01:22:16 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:44:44.504 01:22:16 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:44:44.504 01:22:16 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:44:44.504 01:22:16 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:44:44.504 01:22:16 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:44:44.504 01:22:16 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:44:44.504 01:22:16 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:44:44.504 01:22:16 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:44:44.504 01:22:16 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:44:44.505 01:22:16 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:44:44.505 01:22:16 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:44:44.505 01:22:16 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:44:44.505 01:22:16 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:44:44.505 01:22:16 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:44:44.505 01:22:16 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:44:44.505 01:22:16 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 00:44:44.505 01:22:16 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:44:44.505 01:22:16 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:44:44.505 01:22:16 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:44:44.505 01:22:16 
nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:44.505 01:22:16 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:44.505 01:22:16 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:44.505 01:22:16 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:44:44.505 01:22:16 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:44.505 01:22:16 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # : 0 00:44:44.505 01:22:16 nvmf_abort_qd_sizes -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:44:44.505 01:22:16 nvmf_abort_qd_sizes -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:44:44.505 01:22:16 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:44:44.505 01:22:16 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:44:44.505 01:22:16 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:44:44.505 01:22:16 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:44:44.505 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:44:44.505 01:22:16 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:44:44.505 01:22:16 nvmf_abort_qd_sizes -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:44:44.505 01:22:16 nvmf_abort_qd_sizes -- nvmf/common.sh@55 -- # have_pci_nics=0 00:44:44.505 01:22:16 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:44:44.505 01:22:16 nvmf_abort_qd_sizes -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:44:44.505 01:22:16 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:44:44.505 01:22:16 nvmf_abort_qd_sizes -- nvmf/common.sh@476 -- # prepare_net_devs 00:44:44.505 01:22:16 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # local -g is_hw=no 00:44:44.505 01:22:16 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # remove_spdk_ns 00:44:44.505 01:22:16 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:44:44.505 01:22:16 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:44:44.505 01:22:16 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:44:44.505 01:22:16 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:44:44.505 01:22:16 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:44:44.505 01:22:16 nvmf_abort_qd_sizes -- nvmf/common.sh@309 -- # xtrace_disable 00:44:44.505 01:22:16 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:44:46.407 01:22:18 nvmf_abort_qd_sizes -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:44:46.407 01:22:18 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # pci_devs=() 00:44:46.407 01:22:18 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # local -a pci_devs 00:44:46.407 01:22:18 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # pci_net_devs=() 00:44:46.407 01:22:18 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:44:46.407 01:22:18 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # pci_drivers=() 00:44:46.407 01:22:18 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # local -A pci_drivers 00:44:46.407 01:22:18 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # net_devs=() 00:44:46.407 01:22:18 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # local -ga net_devs 00:44:46.407 01:22:18 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # e810=() 00:44:46.407 01:22:18 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # local -ga e810 00:44:46.407 01:22:18 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # x722=() 00:44:46.407 01:22:18 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # local -ga x722 00:44:46.407 01:22:18 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # mlx=() 00:44:46.407 01:22:18 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # local -ga mlx 00:44:46.407 01:22:18 nvmf_abort_qd_sizes -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:44:46.407 01:22:18 nvmf_abort_qd_sizes -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:44:46.407 01:22:18 nvmf_abort_qd_sizes -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:44:46.407 01:22:18 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:44:46.407 01:22:18 nvmf_abort_qd_sizes -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:44:46.407 01:22:18 nvmf_abort_qd_sizes -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:44:46.407 01:22:18 nvmf_abort_qd_sizes -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:44:46.407 01:22:18 nvmf_abort_qd_sizes -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:44:46.407 01:22:18 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:44:46.407 01:22:18 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:44:46.407 01:22:18 nvmf_abort_qd_sizes -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:44:46.407 01:22:18 nvmf_abort_qd_sizes -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:44:46.407 01:22:18 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:44:46.407 01:22:18 nvmf_abort_qd_sizes -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:44:46.407 01:22:18 nvmf_abort_qd_sizes -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:44:46.407 01:22:18 nvmf_abort_qd_sizes -- nvmf/common.sh@355 -- 
# [[ e810 == e810 ]] 00:44:46.407 01:22:18 nvmf_abort_qd_sizes -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:44:46.407 01:22:18 nvmf_abort_qd_sizes -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:44:46.407 01:22:18 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:44:46.407 01:22:18 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:44:46.407 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:44:46.407 01:22:18 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:44:46.407 01:22:18 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:44:46.407 01:22:18 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:44:46.407 01:22:18 nvmf_abort_qd_sizes -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:44:46.407 01:22:18 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:44:46.407 01:22:18 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:44:46.407 01:22:18 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:44:46.407 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:44:46.407 01:22:18 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:44:46.407 01:22:18 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:44:46.407 01:22:18 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:44:46.407 01:22:18 nvmf_abort_qd_sizes -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:44:46.407 01:22:18 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:44:46.407 01:22:18 nvmf_abort_qd_sizes -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:44:46.407 01:22:18 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:44:46.407 01:22:18 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:44:46.407 01:22:18 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:44:46.407 01:22:18 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:44:46.407 01:22:18 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:44:46.407 01:22:18 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:44:46.407 01:22:18 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up == up ]] 00:44:46.407 01:22:18 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:44:46.407 01:22:18 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:44:46.407 01:22:18 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:44:46.407 Found net devices under 0000:0a:00.0: cvl_0_0 00:44:46.407 01:22:18 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:44:46.407 01:22:18 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:44:46.407 01:22:18 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:44:46.407 01:22:18 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:44:46.407 01:22:18 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:44:46.407 01:22:18 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up == up ]] 00:44:46.408 01:22:18 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:44:46.408 01:22:18 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 
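
The NIC discovery traced here is plain sysfs walking: for each supported PCI function, list the net interfaces registered under it and keep the ones whose link is up. The same lookup in a few lines, with this host's 0000:0a:00.x E810 functions hard-coded; reading operstate for the up check is an assumption, the trace only shows the resulting up == up comparison:

# Map E810 PCI functions to their kernel net interfaces, as
# gather_supported_nvmf_pci_devs does via /sys/bus/pci/devices/$pci/net/.
for pci in 0000:0a:00.0 0000:0a:00.1; do
    for dev in /sys/bus/pci/devices/"$pci"/net/*; do
        [[ -e "$dev" ]] || continue
        name=${dev##*/}                      # cvl_0_0 / cvl_0_1 on this host
        state=$(< "$dev/operstate")          # interfaces must report "up"
        echo "Found net device under $pci: $name ($state)"
    done
done
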
00:44:46.408 01:22:18 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:44:46.408 Found net devices under 0000:0a:00.1: cvl_0_1 00:44:46.408 01:22:18 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:44:46.408 01:22:18 nvmf_abort_qd_sizes -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:44:46.408 01:22:18 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # is_hw=yes 00:44:46.408 01:22:18 nvmf_abort_qd_sizes -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:44:46.408 01:22:18 nvmf_abort_qd_sizes -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:44:46.408 01:22:18 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:44:46.408 01:22:18 nvmf_abort_qd_sizes -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:44:46.408 01:22:18 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:44:46.408 01:22:18 nvmf_abort_qd_sizes -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:44:46.408 01:22:18 nvmf_abort_qd_sizes -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:44:46.408 01:22:18 nvmf_abort_qd_sizes -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:44:46.408 01:22:18 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:44:46.408 01:22:18 nvmf_abort_qd_sizes -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:44:46.408 01:22:18 nvmf_abort_qd_sizes -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:44:46.408 01:22:18 nvmf_abort_qd_sizes -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:44:46.408 01:22:18 nvmf_abort_qd_sizes -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:44:46.408 01:22:18 nvmf_abort_qd_sizes -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:44:46.408 01:22:18 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:44:46.408 01:22:18 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:44:46.408 01:22:18 nvmf_abort_qd_sizes -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:44:46.408 01:22:18 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:44:46.408 01:22:18 nvmf_abort_qd_sizes -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:44:46.408 01:22:18 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:44:46.408 01:22:18 nvmf_abort_qd_sizes -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:44:46.408 01:22:18 nvmf_abort_qd_sizes -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:44:46.408 01:22:18 nvmf_abort_qd_sizes -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:44:46.408 01:22:18 nvmf_abort_qd_sizes -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:44:46.408 01:22:18 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:44:46.408 01:22:18 nvmf_abort_qd_sizes -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:44:46.408 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:44:46.408 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.205 ms 00:44:46.408 00:44:46.408 --- 10.0.0.2 ping statistics --- 00:44:46.408 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:44:46.408 rtt min/avg/max/mdev = 0.205/0.205/0.205/0.000 ms 00:44:46.408 01:22:18 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:44:46.408 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:44:46.408 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.097 ms 00:44:46.408 00:44:46.408 --- 10.0.0.1 ping statistics --- 00:44:46.408 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:44:46.408 rtt min/avg/max/mdev = 0.097/0.097/0.097/0.000 ms 00:44:46.408 01:22:18 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:44:46.408 01:22:18 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # return 0 00:44:46.408 01:22:18 nvmf_abort_qd_sizes -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:44:46.408 01:22:18 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:44:47.790 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:44:47.790 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:44:47.790 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:44:47.790 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:44:47.790 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:44:47.790 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:44:47.790 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:44:47.790 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:44:47.790 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:44:47.790 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:44:47.790 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:44:47.790 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:44:47.790 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:44:47.790 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:44:47.790 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:44:47.790 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:44:48.731 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:44:48.731 01:22:21 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:44:48.731 01:22:21 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:44:48.731 01:22:21 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:44:48.731 01:22:21 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:44:48.731 01:22:21 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:44:48.731 01:22:21 nvmf_abort_qd_sizes -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:44:48.731 01:22:21 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:44:48.731 01:22:21 nvmf_abort_qd_sizes -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:44:48.731 01:22:21 nvmf_abort_qd_sizes -- common/autotest_common.sh@726 -- # xtrace_disable 00:44:48.731 01:22:21 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:44:48.731 01:22:21 nvmf_abort_qd_sizes -- nvmf/common.sh@509 -- # nvmfpid=3220990 00:44:48.731 01:22:21 nvmf_abort_qd_sizes -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:44:48.731 01:22:21 nvmf_abort_qd_sizes -- nvmf/common.sh@510 -- # waitforlisten 3220990 00:44:48.731 01:22:21 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # '[' -z 3220990 ']' 00:44:48.731 01:22:21 
nvmf_abort_qd_sizes -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:44:48.731 01:22:21 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # local max_retries=100 00:44:48.731 01:22:21 nvmf_abort_qd_sizes -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:44:48.731 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:44:48.731 01:22:21 nvmf_abort_qd_sizes -- common/autotest_common.sh@844 -- # xtrace_disable 00:44:48.731 01:22:21 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:44:48.731 [2024-12-06 01:22:21.379477] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 24.03.0 initialization... 00:44:48.731 [2024-12-06 01:22:21.379639] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:44:48.990 [2024-12-06 01:22:21.527780] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:44:48.990 [2024-12-06 01:22:21.666402] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:44:48.990 [2024-12-06 01:22:21.666485] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:44:48.990 [2024-12-06 01:22:21.666506] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:44:48.990 [2024-12-06 01:22:21.666526] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:44:48.990 [2024-12-06 01:22:21.666543] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
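
The nvmf_tgt starting here runs inside the cvl_0_0_ns_spdk namespace that nvmf_tcp_init set up a few steps earlier: one E810 port stays in the default namespace as the initiator side, the other is moved into the namespace as the target side, so both ends run on one host. The traced iproute2/iptables sequence, condensed:

# Isolate the target-side port and address both ends (10.0.0.1 = initiator,
# 10.0.0.2 = target), exactly as traced in nvmf/common.sh above.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

# Connectivity check in both directions, then launch the target in the namespace.
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf &
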
00:44:48.990 [2024-12-06 01:22:21.669262] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:44:48.990 [2024-12-06 01:22:21.669335] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:44:48.990 [2024-12-06 01:22:21.669421] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:44:48.990 [2024-12-06 01:22:21.669427] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:44:49.924 01:22:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:44:49.924 01:22:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@868 -- # return 0 00:44:49.924 01:22:22 nvmf_abort_qd_sizes -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:44:49.924 01:22:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@732 -- # xtrace_disable 00:44:49.924 01:22:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:44:49.924 01:22:22 nvmf_abort_qd_sizes -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:44:49.924 01:22:22 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:44:49.924 01:22:22 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:44:49.924 01:22:22 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:44:49.924 01:22:22 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs 00:44:49.924 01:22:22 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes 00:44:49.924 01:22:22 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n 0000:88:00.0 ]] 00:44:49.924 01:22:22 nvmf_abort_qd_sizes -- scripts/common.sh@316 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:44:49.924 01:22:22 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:44:49.924 01:22:22 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:88:00.0 ]] 00:44:49.924 01:22:22 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:44:49.924 01:22:22 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:44:49.924 01:22:22 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:44:49.924 01:22:22 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 1 )) 00:44:49.924 01:22:22 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:88:00.0 00:44:49.924 01:22:22 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:44:49.924 01:22:22 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:88:00.0 00:44:49.924 01:22:22 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:44:49.924 01:22:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:44:49.924 01:22:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:44:49.924 01:22:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:44:49.924 ************************************ 00:44:49.924 START TEST spdk_target_abort 00:44:49.924 ************************************ 00:44:49.924 01:22:22 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1129 -- # spdk_target 00:44:49.924 01:22:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:44:49.924 01:22:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:88:00.0 -b 
spdk_target 00:44:49.924 01:22:22 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:49.924 01:22:22 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:44:53.218 spdk_targetn1 00:44:53.218 01:22:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:53.218 01:22:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:44:53.218 01:22:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:53.218 01:22:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:44:53.218 [2024-12-06 01:22:25.303330] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:44:53.218 01:22:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:53.218 01:22:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:44:53.218 01:22:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:53.218 01:22:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:44:53.218 01:22:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:53.218 01:22:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:44:53.218 01:22:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:53.218 01:22:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:44:53.218 01:22:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:53.218 01:22:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:44:53.218 01:22:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:53.218 01:22:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:44:53.218 [2024-12-06 01:22:25.350150] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:44:53.218 01:22:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:53.218 01:22:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:44:53.218 01:22:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:44:53.218 01:22:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:44:53.218 01:22:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:44:53.218 01:22:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:44:53.218 01:22:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:44:53.218 01:22:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:44:53.218 01:22:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 
-- # local target r 00:44:53.218 01:22:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:44:53.218 01:22:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:44:53.218 01:22:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:44:53.218 01:22:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:44:53.218 01:22:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:44:53.218 01:22:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:44:53.218 01:22:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:44:53.219 01:22:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:44:53.219 01:22:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:44:53.219 01:22:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:44:53.219 01:22:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:44:53.219 01:22:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:44:53.219 01:22:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:44:56.516 Initializing NVMe Controllers 00:44:56.516 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:44:56.516 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:44:56.516 Initialization complete. Launching workers. 00:44:56.516 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8698, failed: 0 00:44:56.516 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1166, failed to submit 7532 00:44:56.516 success 723, unsuccessful 443, failed 0 00:44:56.516 01:22:28 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:44:56.516 01:22:28 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:44:59.885 Initializing NVMe Controllers 00:44:59.885 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:44:59.885 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:44:59.885 Initialization complete. Launching workers. 
00:44:59.885 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8609, failed: 0 00:44:59.885 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1215, failed to submit 7394 00:44:59.885 success 304, unsuccessful 911, failed 0 00:44:59.885 01:22:32 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:44:59.885 01:22:32 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:45:03.175 Initializing NVMe Controllers 00:45:03.175 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:45:03.175 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:45:03.175 Initialization complete. Launching workers. 00:45:03.175 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 27271, failed: 0 00:45:03.175 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2647, failed to submit 24624 00:45:03.175 success 188, unsuccessful 2459, failed 0 00:45:03.175 01:22:35 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:45:03.175 01:22:35 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:03.175 01:22:35 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:45:03.175 01:22:35 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:03.175 01:22:35 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:45:03.175 01:22:35 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:03.175 01:22:35 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:45:04.548 01:22:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:04.548 01:22:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 3220990 00:45:04.548 01:22:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # '[' -z 3220990 ']' 00:45:04.548 01:22:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # kill -0 3220990 00:45:04.548 01:22:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # uname 00:45:04.548 01:22:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:45:04.548 01:22:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3220990 00:45:04.548 01:22:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:45:04.548 01:22:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:45:04.548 01:22:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3220990' 00:45:04.548 killing process with pid 3220990 00:45:04.548 01:22:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@973 -- # kill 3220990 00:45:04.548 01:22:36 nvmf_abort_qd_sizes.spdk_target_abort -- 
common/autotest_common.sh@978 -- # wait 3220990 00:45:05.486 00:45:05.486 real 0m15.424s 00:45:05.486 user 1m0.392s 00:45:05.486 sys 0m2.756s 00:45:05.486 01:22:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:45:05.486 01:22:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:45:05.486 ************************************ 00:45:05.486 END TEST spdk_target_abort 00:45:05.486 ************************************ 00:45:05.486 01:22:37 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:45:05.486 01:22:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:45:05.486 01:22:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:45:05.486 01:22:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:45:05.486 ************************************ 00:45:05.486 START TEST kernel_target_abort 00:45:05.486 ************************************ 00:45:05.486 01:22:37 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1129 -- # kernel_target 00:45:05.486 01:22:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:45:05.486 01:22:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@769 -- # local ip 00:45:05.486 01:22:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # ip_candidates=() 00:45:05.486 01:22:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # local -A ip_candidates 00:45:05.486 01:22:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:45:05.486 01:22:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:45:05.486 01:22:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:45:05.486 01:22:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:45:05.486 01:22:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:45:05.486 01:22:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:45:05.486 01:22:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:45:05.486 01:22:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:45:05.486 01:22:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:45:05.486 01:22:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:45:05.486 01:22:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:45:05.486 01:22:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:45:05.486 01:22:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:45:05.486 01:22:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # local block nvme 00:45:05.486 01:22:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:45:05.486 01:22:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@670 -- # modprobe nvmet 00:45:05.486 01:22:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:45:05.486 01:22:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:45:06.422 Waiting for block devices as requested 00:45:06.422 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:45:06.680 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:45:06.680 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:45:06.680 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:45:06.939 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:45:06.939 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:45:06.939 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:45:06.939 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:45:07.198 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:45:07.198 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:45:07.198 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:45:07.198 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:45:07.458 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:45:07.458 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:45:07.458 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:45:07.717 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:45:07.717 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:45:07.975 01:22:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:45:07.976 01:22:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:45:07.976 01:22:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:45:07.976 01:22:40 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:45:07.976 01:22:40 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:45:07.976 01:22:40 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:45:07.976 01:22:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:45:07.976 01:22:40 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:45:07.976 01:22:40 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:45:08.235 No valid GPT data, bailing 00:45:08.235 01:22:40 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:45:08.235 01:22:40 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:45:08.235 01:22:40 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:45:08.235 01:22:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:45:08.235 01:22:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:45:08.236 01:22:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:45:08.236 01:22:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:45:08.236 01:22:40 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:45:08.236 01:22:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:45:08.236 01:22:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # echo 1 00:45:08.236 01:22:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:45:08.236 01:22:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@697 -- # echo 1 00:45:08.236 01:22:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:45:08.236 01:22:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@700 -- # echo tcp 00:45:08.236 01:22:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@701 -- # echo 4420 00:45:08.236 01:22:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@702 -- # echo ipv4 00:45:08.236 01:22:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:45:08.236 01:22:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:45:08.236 00:45:08.236 Discovery Log Number of Records 2, Generation counter 2 00:45:08.236 =====Discovery Log Entry 0====== 00:45:08.236 trtype: tcp 00:45:08.236 adrfam: ipv4 00:45:08.236 subtype: current discovery subsystem 00:45:08.236 treq: not specified, sq flow control disable supported 00:45:08.236 portid: 1 00:45:08.236 trsvcid: 4420 00:45:08.236 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:45:08.236 traddr: 10.0.0.1 00:45:08.236 eflags: none 00:45:08.236 sectype: none 00:45:08.236 =====Discovery Log Entry 1====== 00:45:08.236 trtype: tcp 00:45:08.236 adrfam: ipv4 00:45:08.236 subtype: nvme subsystem 00:45:08.236 treq: not specified, sq flow control disable supported 00:45:08.236 portid: 1 00:45:08.236 trsvcid: 4420 00:45:08.236 subnqn: nqn.2016-06.io.spdk:testnqn 00:45:08.236 traddr: 10.0.0.1 00:45:08.236 eflags: none 00:45:08.236 sectype: none 00:45:08.236 01:22:40 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:45:08.236 01:22:40 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:45:08.236 01:22:40 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:45:08.236 01:22:40 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:45:08.236 01:22:40 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:45:08.236 01:22:40 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:45:08.236 01:22:40 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:45:08.236 01:22:40 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:45:08.236 01:22:40 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:45:08.236 01:22:40 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:45:08.236 01:22:40 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:45:08.236 01:22:40 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:45:08.236 01:22:40 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:45:08.236 01:22:40 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:45:08.236 01:22:40 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:45:08.236 01:22:40 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:45:08.236 01:22:40 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:45:08.236 01:22:40 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:45:08.236 01:22:40 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:45:08.236 01:22:40 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:45:08.236 01:22:40 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:45:11.518 Initializing NVMe Controllers 00:45:11.518 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:45:11.518 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:45:11.518 Initialization complete. Launching workers. 00:45:11.518 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 36690, failed: 0 00:45:11.518 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 36690, failed to submit 0 00:45:11.518 success 0, unsuccessful 36690, failed 0 00:45:11.518 01:22:44 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:45:11.518 01:22:44 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:45:14.804 Initializing NVMe Controllers 00:45:14.804 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:45:14.804 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:45:14.804 Initialization complete. Launching workers. 
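Note: the kernel-mode target this abort loop is driving was assembled a few lines above by configure_kernel_target through nvmet configfs. A minimal sketch of those steps, assuming the standard Linux nvmet attribute names (xtrace records the mkdir/ln -s commands and the echoed values, but not the redirection targets, so the attribute paths below are inferred, not taken from the trace):

  nqn=nqn.2016-06.io.spdk:testnqn
  sub=/sys/kernel/config/nvmet/subsystems/$nqn
  port=/sys/kernel/config/nvmet/ports/1
  mkdir "$sub"; mkdir "$sub/namespaces/1"; mkdir "$port"
  echo 1            > "$sub/attr_allow_any_host"        # assumed target of the first 'echo 1'
  echo /dev/nvme0n1 > "$sub/namespaces/1/device_path"
  echo 1            > "$sub/namespaces/1/enable"
  echo 10.0.0.1     > "$port/addr_traddr"
  echo tcp          > "$port/addr_trtype"
  echo 4420         > "$port/addr_trsvcid"
  echo ipv4         > "$port/addr_adrfam"
  ln -s "$sub" "$port/subsystems/"

The rabort helper then simply loops over qds=(4 24 64) and re-runs build/examples/abort with -q set to each depth against that listener, which is the sequence of runs shown here.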
00:45:14.804 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 71118, failed: 0 00:45:14.804 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 17934, failed to submit 53184 00:45:14.804 success 0, unsuccessful 17934, failed 0 00:45:14.804 01:22:47 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:45:14.804 01:22:47 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:45:18.100 Initializing NVMe Controllers 00:45:18.100 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:45:18.100 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:45:18.100 Initialization complete. Launching workers. 00:45:18.100 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 67191, failed: 0 00:45:18.100 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 16782, failed to submit 50409 00:45:18.100 success 0, unsuccessful 16782, failed 0 00:45:18.100 01:22:50 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:45:18.100 01:22:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:45:18.100 01:22:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@714 -- # echo 0 00:45:18.100 01:22:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:45:18.100 01:22:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:45:18.100 01:22:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:45:18.100 01:22:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:45:18.100 01:22:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:45:18.100 01:22:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:45:18.100 01:22:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:45:19.036 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:45:19.036 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:45:19.036 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:45:19.036 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:45:19.036 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:45:19.036 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:45:19.036 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:45:19.036 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:45:19.036 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:45:19.036 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:45:19.036 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:45:19.036 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:45:19.036 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:45:19.036 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:45:19.036 0000:80:04.1 (8086 0e21): ioatdma -> 
vfio-pci 00:45:19.295 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:45:20.234 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:45:20.234 00:45:20.234 real 0m14.897s 00:45:20.234 user 0m7.257s 00:45:20.234 sys 0m3.443s 00:45:20.234 01:22:52 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:45:20.234 01:22:52 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:45:20.234 ************************************ 00:45:20.234 END TEST kernel_target_abort 00:45:20.234 ************************************ 00:45:20.234 01:22:52 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:45:20.234 01:22:52 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:45:20.234 01:22:52 nvmf_abort_qd_sizes -- nvmf/common.sh@516 -- # nvmfcleanup 00:45:20.234 01:22:52 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # sync 00:45:20.234 01:22:52 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:45:20.234 01:22:52 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set +e 00:45:20.234 01:22:52 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # for i in {1..20} 00:45:20.234 01:22:52 nvmf_abort_qd_sizes -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:45:20.234 rmmod nvme_tcp 00:45:20.234 rmmod nvme_fabrics 00:45:20.234 rmmod nvme_keyring 00:45:20.234 01:22:52 nvmf_abort_qd_sizes -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:45:20.234 01:22:52 nvmf_abort_qd_sizes -- nvmf/common.sh@128 -- # set -e 00:45:20.234 01:22:52 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # return 0 00:45:20.234 01:22:52 nvmf_abort_qd_sizes -- nvmf/common.sh@517 -- # '[' -n 3220990 ']' 00:45:20.234 01:22:52 nvmf_abort_qd_sizes -- nvmf/common.sh@518 -- # killprocess 3220990 00:45:20.234 01:22:52 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # '[' -z 3220990 ']' 00:45:20.234 01:22:52 nvmf_abort_qd_sizes -- common/autotest_common.sh@958 -- # kill -0 3220990 00:45:20.234 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (3220990) - No such process 00:45:20.234 01:22:52 nvmf_abort_qd_sizes -- common/autotest_common.sh@981 -- # echo 'Process with pid 3220990 is not found' 00:45:20.234 Process with pid 3220990 is not found 00:45:20.234 01:22:52 nvmf_abort_qd_sizes -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:45:20.234 01:22:52 nvmf_abort_qd_sizes -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:45:21.613 Waiting for block devices as requested 00:45:21.613 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:45:21.613 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:45:21.613 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:45:21.872 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:45:21.872 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:45:21.872 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:45:21.872 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:45:22.133 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:45:22.133 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:45:22.133 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:45:22.133 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:45:22.393 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:45:22.393 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:45:22.393 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:45:22.393 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:45:22.652 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:45:22.652 0000:80:04.0 
(8086 0e20): vfio-pci -> ioatdma 00:45:22.652 01:22:55 nvmf_abort_qd_sizes -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:45:22.652 01:22:55 nvmf_abort_qd_sizes -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:45:22.652 01:22:55 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # iptr 00:45:22.652 01:22:55 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-save 00:45:22.652 01:22:55 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:45:22.652 01:22:55 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-restore 00:45:22.652 01:22:55 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:45:22.652 01:22:55 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # remove_spdk_ns 00:45:22.652 01:22:55 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:45:22.652 01:22:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:45:22.652 01:22:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:45:25.190 01:22:57 nvmf_abort_qd_sizes -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:45:25.190 00:45:25.190 real 0m40.615s 00:45:25.190 user 1m10.110s 00:45:25.190 sys 0m9.722s 00:45:25.190 01:22:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:45:25.190 01:22:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:45:25.190 ************************************ 00:45:25.190 END TEST nvmf_abort_qd_sizes 00:45:25.190 ************************************ 00:45:25.190 01:22:57 -- spdk/autotest.sh@292 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:45:25.190 01:22:57 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:45:25.190 01:22:57 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:45:25.190 01:22:57 -- common/autotest_common.sh@10 -- # set +x 00:45:25.190 ************************************ 00:45:25.190 START TEST keyring_file 00:45:25.190 ************************************ 00:45:25.190 01:22:57 keyring_file -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:45:25.190 * Looking for test storage... 
00:45:25.190 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:45:25.190 01:22:57 keyring_file -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:45:25.190 01:22:57 keyring_file -- common/autotest_common.sh@1711 -- # lcov --version 00:45:25.190 01:22:57 keyring_file -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:45:25.190 01:22:57 keyring_file -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:45:25.190 01:22:57 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:45:25.190 01:22:57 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l 00:45:25.190 01:22:57 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l 00:45:25.190 01:22:57 keyring_file -- scripts/common.sh@336 -- # IFS=.-: 00:45:25.190 01:22:57 keyring_file -- scripts/common.sh@336 -- # read -ra ver1 00:45:25.190 01:22:57 keyring_file -- scripts/common.sh@337 -- # IFS=.-: 00:45:25.190 01:22:57 keyring_file -- scripts/common.sh@337 -- # read -ra ver2 00:45:25.190 01:22:57 keyring_file -- scripts/common.sh@338 -- # local 'op=<' 00:45:25.190 01:22:57 keyring_file -- scripts/common.sh@340 -- # ver1_l=2 00:45:25.190 01:22:57 keyring_file -- scripts/common.sh@341 -- # ver2_l=1 00:45:25.190 01:22:57 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:45:25.190 01:22:57 keyring_file -- scripts/common.sh@344 -- # case "$op" in 00:45:25.190 01:22:57 keyring_file -- scripts/common.sh@345 -- # : 1 00:45:25.190 01:22:57 keyring_file -- scripts/common.sh@364 -- # (( v = 0 )) 00:45:25.190 01:22:57 keyring_file -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:45:25.190 01:22:57 keyring_file -- scripts/common.sh@365 -- # decimal 1 00:45:25.190 01:22:57 keyring_file -- scripts/common.sh@353 -- # local d=1 00:45:25.190 01:22:57 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:45:25.190 01:22:57 keyring_file -- scripts/common.sh@355 -- # echo 1 00:45:25.190 01:22:57 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1 00:45:25.190 01:22:57 keyring_file -- scripts/common.sh@366 -- # decimal 2 00:45:25.190 01:22:57 keyring_file -- scripts/common.sh@353 -- # local d=2 00:45:25.190 01:22:57 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:45:25.190 01:22:57 keyring_file -- scripts/common.sh@355 -- # echo 2 00:45:25.190 01:22:57 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2 00:45:25.190 01:22:57 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:45:25.190 01:22:57 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:45:25.190 01:22:57 keyring_file -- scripts/common.sh@368 -- # return 0 00:45:25.190 01:22:57 keyring_file -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:45:25.190 01:22:57 keyring_file -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:45:25.190 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:45:25.190 --rc genhtml_branch_coverage=1 00:45:25.190 --rc genhtml_function_coverage=1 00:45:25.190 --rc genhtml_legend=1 00:45:25.190 --rc geninfo_all_blocks=1 00:45:25.190 --rc geninfo_unexecuted_blocks=1 00:45:25.190 00:45:25.190 ' 00:45:25.190 01:22:57 keyring_file -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:45:25.190 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:45:25.190 --rc genhtml_branch_coverage=1 00:45:25.190 --rc genhtml_function_coverage=1 00:45:25.190 --rc genhtml_legend=1 00:45:25.190 --rc geninfo_all_blocks=1 
00:45:25.190 --rc geninfo_unexecuted_blocks=1 00:45:25.190 00:45:25.190 ' 00:45:25.190 01:22:57 keyring_file -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:45:25.190 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:45:25.190 --rc genhtml_branch_coverage=1 00:45:25.190 --rc genhtml_function_coverage=1 00:45:25.190 --rc genhtml_legend=1 00:45:25.190 --rc geninfo_all_blocks=1 00:45:25.190 --rc geninfo_unexecuted_blocks=1 00:45:25.190 00:45:25.190 ' 00:45:25.190 01:22:57 keyring_file -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:45:25.190 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:45:25.190 --rc genhtml_branch_coverage=1 00:45:25.190 --rc genhtml_function_coverage=1 00:45:25.190 --rc genhtml_legend=1 00:45:25.190 --rc geninfo_all_blocks=1 00:45:25.190 --rc geninfo_unexecuted_blocks=1 00:45:25.190 00:45:25.190 ' 00:45:25.190 01:22:57 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:45:25.190 01:22:57 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:45:25.190 01:22:57 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:45:25.190 01:22:57 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:45:25.190 01:22:57 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:45:25.190 01:22:57 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:45:25.190 01:22:57 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:45:25.190 01:22:57 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:45:25.190 01:22:57 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:45:25.190 01:22:57 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:45:25.190 01:22:57 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:45:25.190 01:22:57 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:45:25.190 01:22:57 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:45:25.190 01:22:57 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:45:25.190 01:22:57 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:45:25.191 01:22:57 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:45:25.191 01:22:57 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:45:25.191 01:22:57 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:45:25.191 01:22:57 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:45:25.191 01:22:57 keyring_file -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:45:25.191 01:22:57 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob 00:45:25.191 01:22:57 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:45:25.191 01:22:57 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:45:25.191 01:22:57 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:45:25.191 01:22:57 keyring_file -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:25.191 01:22:57 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:25.191 01:22:57 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:25.191 01:22:57 keyring_file -- paths/export.sh@5 -- # export PATH 00:45:25.191 01:22:57 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:25.191 01:22:57 keyring_file -- nvmf/common.sh@51 -- # : 0 00:45:25.191 01:22:57 keyring_file -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:45:25.191 01:22:57 keyring_file -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:45:25.191 01:22:57 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:45:25.191 01:22:57 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:45:25.191 01:22:57 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:45:25.191 01:22:57 keyring_file -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:45:25.191 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:45:25.191 01:22:57 keyring_file -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:45:25.191 01:22:57 keyring_file -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:45:25.191 01:22:57 keyring_file -- nvmf/common.sh@55 -- # have_pci_nics=0 00:45:25.191 01:22:57 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:45:25.191 01:22:57 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:45:25.191 01:22:57 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:45:25.191 01:22:57 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:45:25.191 01:22:57 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:45:25.191 01:22:57 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:45:25.191 01:22:57 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:45:25.191 01:22:57 keyring_file -- keyring/common.sh@15 -- # local name key digest path 
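Note: prep_key, traced in full just below, reduces to a small sequence: create a temp file, convert the raw hex key into the NVMe TLS PSK interchange format (the NVMeTLSkey-1 string produced by the python one-liner), write it to that file, and chmod it 0600. A sketch of the pattern, using the values from this run (the redirection into the temp file is assumed; xtrace does not show it):

  path=$(mktemp)                                               # /tmp/tmp.2l1lihVyHa here
  format_interchange_psk 00112233445566778899aabbccddeeff 0 > "$path"
  chmod 0600 "$path"

The 0600 mode is significant: later in this run the same file is deliberately chmod'ed to 0660 and keyring_file_add_key rejects it with "Invalid permissions for key file".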
00:45:25.191 01:22:57 keyring_file -- keyring/common.sh@17 -- # name=key0 00:45:25.191 01:22:57 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:45:25.191 01:22:57 keyring_file -- keyring/common.sh@17 -- # digest=0 00:45:25.191 01:22:57 keyring_file -- keyring/common.sh@18 -- # mktemp 00:45:25.191 01:22:57 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.2l1lihVyHa 00:45:25.191 01:22:57 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:45:25.191 01:22:57 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:45:25.191 01:22:57 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:45:25.191 01:22:57 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:45:25.191 01:22:57 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:45:25.191 01:22:57 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:45:25.191 01:22:57 keyring_file -- nvmf/common.sh@733 -- # python - 00:45:25.191 01:22:57 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.2l1lihVyHa 00:45:25.191 01:22:57 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.2l1lihVyHa 00:45:25.191 01:22:57 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.2l1lihVyHa 00:45:25.191 01:22:57 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:45:25.191 01:22:57 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:45:25.191 01:22:57 keyring_file -- keyring/common.sh@17 -- # name=key1 00:45:25.191 01:22:57 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:45:25.191 01:22:57 keyring_file -- keyring/common.sh@17 -- # digest=0 00:45:25.191 01:22:57 keyring_file -- keyring/common.sh@18 -- # mktemp 00:45:25.191 01:22:57 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.u3aMDh7cHW 00:45:25.191 01:22:57 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:45:25.191 01:22:57 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:45:25.191 01:22:57 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:45:25.191 01:22:57 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:45:25.191 01:22:57 keyring_file -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:45:25.191 01:22:57 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:45:25.191 01:22:57 keyring_file -- nvmf/common.sh@733 -- # python - 00:45:25.191 01:22:57 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.u3aMDh7cHW 00:45:25.191 01:22:57 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.u3aMDh7cHW 00:45:25.191 01:22:57 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.u3aMDh7cHW 00:45:25.191 01:22:57 keyring_file -- keyring/file.sh@30 -- # tgtpid=3227474 00:45:25.191 01:22:57 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:45:25.191 01:22:57 keyring_file -- keyring/file.sh@32 -- # waitforlisten 3227474 00:45:25.191 01:22:57 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 3227474 ']' 00:45:25.191 01:22:57 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:45:25.191 01:22:57 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:45:25.191 01:22:57 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for 
process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:45:25.191 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:45:25.191 01:22:57 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:45:25.191 01:22:57 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:45:25.191 [2024-12-06 01:22:57.759554] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 24.03.0 initialization... 00:45:25.191 [2024-12-06 01:22:57.759675] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3227474 ] 00:45:25.191 [2024-12-06 01:22:57.893996] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:45:25.451 [2024-12-06 01:22:58.024209] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:45:26.386 01:22:58 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:45:26.386 01:22:58 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:45:26.386 01:22:58 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:45:26.386 01:22:58 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:26.386 01:22:58 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:45:26.386 [2024-12-06 01:22:58.904930] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:45:26.386 null0 00:45:26.386 [2024-12-06 01:22:58.936939] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:45:26.386 [2024-12-06 01:22:58.937570] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:45:26.386 01:22:58 keyring_file -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:26.386 01:22:58 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:45:26.386 01:22:58 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:45:26.386 01:22:58 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:45:26.386 01:22:58 keyring_file -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:45:26.386 01:22:58 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:45:26.386 01:22:58 keyring_file -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:45:26.386 01:22:58 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:45:26.386 01:22:58 keyring_file -- common/autotest_common.sh@655 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:45:26.386 01:22:58 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:26.386 01:22:58 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:45:26.386 [2024-12-06 01:22:58.964983] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:45:26.386 request: 00:45:26.386 { 00:45:26.386 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:45:26.386 "secure_channel": false, 00:45:26.386 "listen_address": { 00:45:26.386 "trtype": "tcp", 00:45:26.386 "traddr": "127.0.0.1", 00:45:26.386 "trsvcid": "4420" 00:45:26.386 }, 00:45:26.386 "method": "nvmf_subsystem_add_listener", 00:45:26.386 "req_id": 1 00:45:26.386 } 00:45:26.386 Got JSON-RPC error response 00:45:26.386 response: 00:45:26.386 { 00:45:26.386 
"code": -32602, 00:45:26.386 "message": "Invalid parameters" 00:45:26.386 } 00:45:26.386 01:22:58 keyring_file -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:45:26.386 01:22:58 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:45:26.386 01:22:58 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:45:26.386 01:22:58 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:45:26.386 01:22:58 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:45:26.386 01:22:58 keyring_file -- keyring/file.sh@47 -- # bperfpid=3227619 00:45:26.386 01:22:58 keyring_file -- keyring/file.sh@49 -- # waitforlisten 3227619 /var/tmp/bperf.sock 00:45:26.386 01:22:58 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 3227619 ']' 00:45:26.386 01:22:58 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:45:26.386 01:22:58 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:45:26.386 01:22:58 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:45:26.386 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:45:26.386 01:22:58 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:45:26.386 01:22:58 keyring_file -- keyring/file.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:45:26.386 01:22:58 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:45:26.386 [2024-12-06 01:22:59.053452] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 24.03.0 initialization... 00:45:26.386 [2024-12-06 01:22:59.053586] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3227619 ] 00:45:26.646 [2024-12-06 01:22:59.186697] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:45:26.646 [2024-12-06 01:22:59.309425] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:45:27.581 01:23:00 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:45:27.581 01:23:00 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:45:27.581 01:23:00 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.2l1lihVyHa 00:45:27.581 01:23:00 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.2l1lihVyHa 00:45:27.581 01:23:00 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.u3aMDh7cHW 00:45:27.581 01:23:00 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.u3aMDh7cHW 00:45:27.840 01:23:00 keyring_file -- keyring/file.sh@52 -- # get_key key0 00:45:27.840 01:23:00 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:45:27.840 01:23:00 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:45:27.840 01:23:00 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:45:27.840 01:23:00 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 
00:45:28.407 01:23:00 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.2l1lihVyHa == \/\t\m\p\/\t\m\p\.\2\l\1\l\i\h\V\y\H\a ]] 00:45:28.407 01:23:00 keyring_file -- keyring/file.sh@53 -- # get_key key1 00:45:28.407 01:23:00 keyring_file -- keyring/file.sh@53 -- # jq -r .path 00:45:28.407 01:23:00 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:45:28.407 01:23:00 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:45:28.407 01:23:00 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:45:28.407 01:23:01 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.u3aMDh7cHW == \/\t\m\p\/\t\m\p\.\u\3\a\M\D\h\7\c\H\W ]] 00:45:28.407 01:23:01 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0 00:45:28.407 01:23:01 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:45:28.407 01:23:01 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:45:28.407 01:23:01 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:45:28.407 01:23:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:45:28.407 01:23:01 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:45:28.973 01:23:01 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:45:28.973 01:23:01 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1 00:45:28.973 01:23:01 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:45:28.973 01:23:01 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:45:28.973 01:23:01 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:45:28.973 01:23:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:45:28.973 01:23:01 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:45:28.973 01:23:01 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 )) 00:45:28.973 01:23:01 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:45:28.973 01:23:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:45:29.232 [2024-12-06 01:23:01.903517] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:45:29.500 nvme0n1 00:45:29.500 01:23:02 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0 00:45:29.500 01:23:02 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:45:29.500 01:23:02 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:45:29.500 01:23:02 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:45:29.500 01:23:02 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:45:29.500 01:23:02 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:45:29.821 01:23:02 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 )) 00:45:29.821 01:23:02 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1 00:45:29.821 01:23:02 keyring_file 
-- keyring/common.sh@12 -- # get_key key1 00:45:29.821 01:23:02 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:45:29.821 01:23:02 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:45:29.821 01:23:02 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:45:29.821 01:23:02 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:45:30.081 01:23:02 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 )) 00:45:30.081 01:23:02 keyring_file -- keyring/file.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:45:30.081 Running I/O for 1 seconds... 00:45:31.016 6457.00 IOPS, 25.22 MiB/s 00:45:31.016 Latency(us) 00:45:31.016 [2024-12-06T00:23:03.725Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:45:31.016 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:45:31.016 nvme0n1 : 1.01 6507.67 25.42 0.00 0.00 19573.03 6456.51 29709.65 00:45:31.016 [2024-12-06T00:23:03.725Z] =================================================================================================================== 00:45:31.016 [2024-12-06T00:23:03.725Z] Total : 6507.67 25.42 0.00 0.00 19573.03 6456.51 29709.65 00:45:31.016 { 00:45:31.016 "results": [ 00:45:31.016 { 00:45:31.016 "job": "nvme0n1", 00:45:31.016 "core_mask": "0x2", 00:45:31.016 "workload": "randrw", 00:45:31.016 "percentage": 50, 00:45:31.016 "status": "finished", 00:45:31.016 "queue_depth": 128, 00:45:31.016 "io_size": 4096, 00:45:31.016 "runtime": 1.01219, 00:45:31.016 "iops": 6507.671484602693, 00:45:31.016 "mibps": 25.42059173672927, 00:45:31.016 "io_failed": 0, 00:45:31.016 "io_timeout": 0, 00:45:31.016 "avg_latency_us": 19573.028801736305, 00:45:31.016 "min_latency_us": 6456.50962962963, 00:45:31.016 "max_latency_us": 29709.653333333332 00:45:31.016 } 00:45:31.016 ], 00:45:31.016 "core_count": 1 00:45:31.016 } 00:45:31.016 01:23:03 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:45:31.016 01:23:03 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:45:31.275 01:23:03 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0 00:45:31.275 01:23:03 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:45:31.275 01:23:03 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:45:31.275 01:23:03 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:45:31.275 01:23:03 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:45:31.275 01:23:03 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:45:31.842 01:23:04 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:45:31.842 01:23:04 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1 00:45:31.842 01:23:04 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:45:31.842 01:23:04 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:45:31.842 01:23:04 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:45:31.842 01:23:04 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:45:31.842 01:23:04 keyring_file 
-- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:45:31.842 01:23:04 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 )) 00:45:31.842 01:23:04 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:45:31.842 01:23:04 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:45:31.842 01:23:04 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:45:31.842 01:23:04 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:45:31.843 01:23:04 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:45:31.843 01:23:04 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:45:31.843 01:23:04 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:45:31.843 01:23:04 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:45:31.843 01:23:04 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:45:32.101 [2024-12-06 01:23:04.773955] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:45:32.101 [2024-12-06 01:23:04.774218] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7780 (107): Transport endpoint is not connected 00:45:32.101 [2024-12-06 01:23:04.775185] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7780 (9): Bad file descriptor 00:45:32.101 [2024-12-06 01:23:04.776183] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:45:32.101 [2024-12-06 01:23:04.776218] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:45:32.101 [2024-12-06 01:23:04.776244] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:45:32.101 [2024-12-06 01:23:04.776270] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
00:45:32.101 request: 00:45:32.101 { 00:45:32.101 "name": "nvme0", 00:45:32.101 "trtype": "tcp", 00:45:32.101 "traddr": "127.0.0.1", 00:45:32.101 "adrfam": "ipv4", 00:45:32.101 "trsvcid": "4420", 00:45:32.101 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:45:32.101 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:45:32.101 "prchk_reftag": false, 00:45:32.101 "prchk_guard": false, 00:45:32.101 "hdgst": false, 00:45:32.101 "ddgst": false, 00:45:32.101 "psk": "key1", 00:45:32.101 "allow_unrecognized_csi": false, 00:45:32.101 "method": "bdev_nvme_attach_controller", 00:45:32.101 "req_id": 1 00:45:32.101 } 00:45:32.101 Got JSON-RPC error response 00:45:32.101 response: 00:45:32.101 { 00:45:32.101 "code": -5, 00:45:32.101 "message": "Input/output error" 00:45:32.101 } 00:45:32.101 01:23:04 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:45:32.101 01:23:04 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:45:32.101 01:23:04 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:45:32.101 01:23:04 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:45:32.101 01:23:04 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0 00:45:32.101 01:23:04 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:45:32.101 01:23:04 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:45:32.101 01:23:04 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:45:32.101 01:23:04 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:45:32.101 01:23:04 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:45:32.361 01:23:05 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:45:32.618 01:23:05 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1 00:45:32.619 01:23:05 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:45:32.619 01:23:05 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:45:32.619 01:23:05 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:45:32.619 01:23:05 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:45:32.619 01:23:05 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:45:32.876 01:23:05 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 )) 00:45:32.876 01:23:05 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0 00:45:32.876 01:23:05 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:45:33.134 01:23:05 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1 00:45:33.134 01:23:05 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:45:33.391 01:23:05 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys 00:45:33.391 01:23:05 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:45:33.391 01:23:05 keyring_file -- keyring/file.sh@78 -- # jq length 00:45:33.649 01:23:06 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 )) 00:45:33.649 01:23:06 keyring_file -- keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.2l1lihVyHa 00:45:33.649 01:23:06 keyring_file -- keyring/file.sh@82 -- # 
NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.2l1lihVyHa 00:45:33.649 01:23:06 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:45:33.649 01:23:06 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.2l1lihVyHa 00:45:33.649 01:23:06 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:45:33.649 01:23:06 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:45:33.649 01:23:06 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:45:33.649 01:23:06 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:45:33.649 01:23:06 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.2l1lihVyHa 00:45:33.649 01:23:06 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.2l1lihVyHa 00:45:33.906 [2024-12-06 01:23:06.418289] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.2l1lihVyHa': 0100660 00:45:33.906 [2024-12-06 01:23:06.418348] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:45:33.906 request: 00:45:33.906 { 00:45:33.906 "name": "key0", 00:45:33.906 "path": "/tmp/tmp.2l1lihVyHa", 00:45:33.906 "method": "keyring_file_add_key", 00:45:33.906 "req_id": 1 00:45:33.906 } 00:45:33.906 Got JSON-RPC error response 00:45:33.906 response: 00:45:33.906 { 00:45:33.906 "code": -1, 00:45:33.906 "message": "Operation not permitted" 00:45:33.906 } 00:45:33.906 01:23:06 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:45:33.906 01:23:06 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:45:33.906 01:23:06 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:45:33.906 01:23:06 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:45:33.906 01:23:06 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.2l1lihVyHa 00:45:33.906 01:23:06 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.2l1lihVyHa 00:45:33.906 01:23:06 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.2l1lihVyHa 00:45:34.164 01:23:06 keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.2l1lihVyHa 00:45:34.164 01:23:06 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0 00:45:34.164 01:23:06 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:45:34.164 01:23:06 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:45:34.164 01:23:06 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:45:34.164 01:23:06 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:45:34.164 01:23:06 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:45:34.422 01:23:06 keyring_file -- keyring/file.sh@89 -- # (( 1 == 1 )) 00:45:34.422 01:23:06 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:45:34.422 01:23:06 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:45:34.422 01:23:06 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:45:34.422 01:23:06 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:45:34.422 01:23:06 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:45:34.422 01:23:06 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:45:34.422 01:23:06 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:45:34.422 01:23:06 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:45:34.422 01:23:06 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:45:34.680 [2024-12-06 01:23:07.236617] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.2l1lihVyHa': No such file or directory 00:45:34.680 [2024-12-06 01:23:07.236685] nvme_tcp.c:2498:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:45:34.680 [2024-12-06 01:23:07.236725] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:45:34.680 [2024-12-06 01:23:07.236750] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device 00:45:34.680 [2024-12-06 01:23:07.236772] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:45:34.680 [2024-12-06 01:23:07.236794] bdev_nvme.c:6796:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:45:34.680 request: 00:45:34.680 { 00:45:34.680 "name": "nvme0", 00:45:34.680 "trtype": "tcp", 00:45:34.680 "traddr": "127.0.0.1", 00:45:34.680 "adrfam": "ipv4", 00:45:34.680 "trsvcid": "4420", 00:45:34.680 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:45:34.680 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:45:34.680 "prchk_reftag": false, 00:45:34.680 "prchk_guard": false, 00:45:34.680 "hdgst": false, 00:45:34.680 "ddgst": false, 00:45:34.680 "psk": "key0", 00:45:34.680 "allow_unrecognized_csi": false, 00:45:34.680 "method": "bdev_nvme_attach_controller", 00:45:34.680 "req_id": 1 00:45:34.680 } 00:45:34.680 Got JSON-RPC error response 00:45:34.680 response: 00:45:34.680 { 00:45:34.680 "code": -19, 00:45:34.680 "message": "No such device" 00:45:34.680 } 00:45:34.680 01:23:07 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:45:34.680 01:23:07 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:45:34.680 01:23:07 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:45:34.680 01:23:07 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:45:34.680 01:23:07 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0 00:45:34.680 01:23:07 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:45:34.938 01:23:07 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:45:34.938 01:23:07 keyring_file -- keyring/common.sh@15 -- # local name 
key digest path 00:45:34.938 01:23:07 keyring_file -- keyring/common.sh@17 -- # name=key0 00:45:34.938 01:23:07 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:45:34.938 01:23:07 keyring_file -- keyring/common.sh@17 -- # digest=0 00:45:34.938 01:23:07 keyring_file -- keyring/common.sh@18 -- # mktemp 00:45:34.938 01:23:07 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.K7hosbL7Ww 00:45:34.938 01:23:07 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:45:34.938 01:23:07 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:45:34.938 01:23:07 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:45:34.938 01:23:07 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:45:34.938 01:23:07 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:45:34.938 01:23:07 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:45:34.938 01:23:07 keyring_file -- nvmf/common.sh@733 -- # python - 00:45:34.938 01:23:07 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.K7hosbL7Ww 00:45:34.938 01:23:07 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.K7hosbL7Ww 00:45:34.938 01:23:07 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.K7hosbL7Ww 00:45:34.938 01:23:07 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.K7hosbL7Ww 00:45:34.938 01:23:07 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.K7hosbL7Ww 00:45:35.195 01:23:07 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:45:35.195 01:23:07 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:45:35.760 nvme0n1 00:45:35.760 01:23:08 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0 00:45:35.760 01:23:08 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:45:35.760 01:23:08 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:45:35.760 01:23:08 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:45:35.760 01:23:08 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:45:35.760 01:23:08 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:45:35.760 01:23:08 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 )) 00:45:35.760 01:23:08 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0 00:45:35.760 01:23:08 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:45:36.324 01:23:08 keyring_file -- keyring/file.sh@102 -- # get_key key0 00:45:36.324 01:23:08 keyring_file -- keyring/file.sh@102 -- # jq -r .removed 00:45:36.324 01:23:08 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:45:36.324 01:23:08 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock 
keyring_get_keys 00:45:36.324 01:23:08 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:45:36.324 01:23:09 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]] 00:45:36.324 01:23:09 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0 00:45:36.324 01:23:09 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:45:36.324 01:23:09 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:45:36.324 01:23:09 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:45:36.324 01:23:09 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:45:36.324 01:23:09 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:45:36.582 01:23:09 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 )) 00:45:36.582 01:23:09 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:45:36.582 01:23:09 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:45:37.151 01:23:09 keyring_file -- keyring/file.sh@105 -- # bperf_cmd keyring_get_keys 00:45:37.151 01:23:09 keyring_file -- keyring/file.sh@105 -- # jq length 00:45:37.151 01:23:09 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:45:37.151 01:23:09 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 )) 00:45:37.151 01:23:09 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.K7hosbL7Ww 00:45:37.151 01:23:09 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.K7hosbL7Ww 00:45:37.410 01:23:10 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.u3aMDh7cHW 00:45:37.410 01:23:10 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.u3aMDh7cHW 00:45:37.980 01:23:10 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:45:37.980 01:23:10 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:45:38.238 nvme0n1 00:45:38.238 01:23:10 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config 00:45:38.238 01:23:10 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:45:38.497 01:23:11 keyring_file -- keyring/file.sh@113 -- # config='{ 00:45:38.497 "subsystems": [ 00:45:38.497 { 00:45:38.497 "subsystem": "keyring", 00:45:38.497 "config": [ 00:45:38.497 { 00:45:38.497 "method": "keyring_file_add_key", 00:45:38.497 "params": { 00:45:38.497 "name": "key0", 00:45:38.497 "path": "/tmp/tmp.K7hosbL7Ww" 00:45:38.497 } 00:45:38.497 }, 00:45:38.497 { 00:45:38.497 "method": "keyring_file_add_key", 00:45:38.497 "params": { 00:45:38.497 "name": "key1", 00:45:38.497 "path": "/tmp/tmp.u3aMDh7cHW" 00:45:38.497 } 00:45:38.497 } 00:45:38.497 ] 
00:45:38.497 }, 00:45:38.497 { 00:45:38.497 "subsystem": "iobuf", 00:45:38.497 "config": [ 00:45:38.497 { 00:45:38.497 "method": "iobuf_set_options", 00:45:38.497 "params": { 00:45:38.497 "small_pool_count": 8192, 00:45:38.497 "large_pool_count": 1024, 00:45:38.497 "small_bufsize": 8192, 00:45:38.497 "large_bufsize": 135168, 00:45:38.497 "enable_numa": false 00:45:38.497 } 00:45:38.497 } 00:45:38.497 ] 00:45:38.497 }, 00:45:38.497 { 00:45:38.497 "subsystem": "sock", 00:45:38.497 "config": [ 00:45:38.497 { 00:45:38.497 "method": "sock_set_default_impl", 00:45:38.497 "params": { 00:45:38.497 "impl_name": "posix" 00:45:38.497 } 00:45:38.497 }, 00:45:38.497 { 00:45:38.497 "method": "sock_impl_set_options", 00:45:38.497 "params": { 00:45:38.497 "impl_name": "ssl", 00:45:38.497 "recv_buf_size": 4096, 00:45:38.497 "send_buf_size": 4096, 00:45:38.497 "enable_recv_pipe": true, 00:45:38.497 "enable_quickack": false, 00:45:38.497 "enable_placement_id": 0, 00:45:38.497 "enable_zerocopy_send_server": true, 00:45:38.497 "enable_zerocopy_send_client": false, 00:45:38.497 "zerocopy_threshold": 0, 00:45:38.497 "tls_version": 0, 00:45:38.497 "enable_ktls": false 00:45:38.497 } 00:45:38.497 }, 00:45:38.497 { 00:45:38.497 "method": "sock_impl_set_options", 00:45:38.497 "params": { 00:45:38.497 "impl_name": "posix", 00:45:38.497 "recv_buf_size": 2097152, 00:45:38.497 "send_buf_size": 2097152, 00:45:38.497 "enable_recv_pipe": true, 00:45:38.497 "enable_quickack": false, 00:45:38.497 "enable_placement_id": 0, 00:45:38.497 "enable_zerocopy_send_server": true, 00:45:38.497 "enable_zerocopy_send_client": false, 00:45:38.497 "zerocopy_threshold": 0, 00:45:38.497 "tls_version": 0, 00:45:38.497 "enable_ktls": false 00:45:38.497 } 00:45:38.497 } 00:45:38.497 ] 00:45:38.497 }, 00:45:38.497 { 00:45:38.497 "subsystem": "vmd", 00:45:38.497 "config": [] 00:45:38.497 }, 00:45:38.497 { 00:45:38.497 "subsystem": "accel", 00:45:38.497 "config": [ 00:45:38.497 { 00:45:38.497 "method": "accel_set_options", 00:45:38.497 "params": { 00:45:38.497 "small_cache_size": 128, 00:45:38.497 "large_cache_size": 16, 00:45:38.497 "task_count": 2048, 00:45:38.497 "sequence_count": 2048, 00:45:38.497 "buf_count": 2048 00:45:38.497 } 00:45:38.497 } 00:45:38.497 ] 00:45:38.497 }, 00:45:38.497 { 00:45:38.497 "subsystem": "bdev", 00:45:38.497 "config": [ 00:45:38.497 { 00:45:38.497 "method": "bdev_set_options", 00:45:38.497 "params": { 00:45:38.497 "bdev_io_pool_size": 65535, 00:45:38.497 "bdev_io_cache_size": 256, 00:45:38.497 "bdev_auto_examine": true, 00:45:38.497 "iobuf_small_cache_size": 128, 00:45:38.497 "iobuf_large_cache_size": 16 00:45:38.497 } 00:45:38.497 }, 00:45:38.497 { 00:45:38.497 "method": "bdev_raid_set_options", 00:45:38.497 "params": { 00:45:38.497 "process_window_size_kb": 1024, 00:45:38.497 "process_max_bandwidth_mb_sec": 0 00:45:38.497 } 00:45:38.498 }, 00:45:38.498 { 00:45:38.498 "method": "bdev_iscsi_set_options", 00:45:38.498 "params": { 00:45:38.498 "timeout_sec": 30 00:45:38.498 } 00:45:38.498 }, 00:45:38.498 { 00:45:38.498 "method": "bdev_nvme_set_options", 00:45:38.498 "params": { 00:45:38.498 "action_on_timeout": "none", 00:45:38.498 "timeout_us": 0, 00:45:38.498 "timeout_admin_us": 0, 00:45:38.498 "keep_alive_timeout_ms": 10000, 00:45:38.498 "arbitration_burst": 0, 00:45:38.498 "low_priority_weight": 0, 00:45:38.498 "medium_priority_weight": 0, 00:45:38.498 "high_priority_weight": 0, 00:45:38.498 "nvme_adminq_poll_period_us": 10000, 00:45:38.498 "nvme_ioq_poll_period_us": 0, 00:45:38.498 "io_queue_requests": 512, 
00:45:38.498 "delay_cmd_submit": true, 00:45:38.498 "transport_retry_count": 4, 00:45:38.498 "bdev_retry_count": 3, 00:45:38.498 "transport_ack_timeout": 0, 00:45:38.498 "ctrlr_loss_timeout_sec": 0, 00:45:38.498 "reconnect_delay_sec": 0, 00:45:38.498 "fast_io_fail_timeout_sec": 0, 00:45:38.498 "disable_auto_failback": false, 00:45:38.498 "generate_uuids": false, 00:45:38.498 "transport_tos": 0, 00:45:38.498 "nvme_error_stat": false, 00:45:38.498 "rdma_srq_size": 0, 00:45:38.498 "io_path_stat": false, 00:45:38.498 "allow_accel_sequence": false, 00:45:38.498 "rdma_max_cq_size": 0, 00:45:38.498 "rdma_cm_event_timeout_ms": 0, 00:45:38.498 "dhchap_digests": [ 00:45:38.498 "sha256", 00:45:38.498 "sha384", 00:45:38.498 "sha512" 00:45:38.498 ], 00:45:38.498 "dhchap_dhgroups": [ 00:45:38.498 "null", 00:45:38.498 "ffdhe2048", 00:45:38.498 "ffdhe3072", 00:45:38.498 "ffdhe4096", 00:45:38.498 "ffdhe6144", 00:45:38.498 "ffdhe8192" 00:45:38.498 ] 00:45:38.498 } 00:45:38.498 }, 00:45:38.498 { 00:45:38.498 "method": "bdev_nvme_attach_controller", 00:45:38.498 "params": { 00:45:38.498 "name": "nvme0", 00:45:38.498 "trtype": "TCP", 00:45:38.498 "adrfam": "IPv4", 00:45:38.498 "traddr": "127.0.0.1", 00:45:38.498 "trsvcid": "4420", 00:45:38.498 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:45:38.498 "prchk_reftag": false, 00:45:38.498 "prchk_guard": false, 00:45:38.498 "ctrlr_loss_timeout_sec": 0, 00:45:38.498 "reconnect_delay_sec": 0, 00:45:38.498 "fast_io_fail_timeout_sec": 0, 00:45:38.498 "psk": "key0", 00:45:38.498 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:45:38.498 "hdgst": false, 00:45:38.498 "ddgst": false, 00:45:38.498 "multipath": "multipath" 00:45:38.498 } 00:45:38.498 }, 00:45:38.498 { 00:45:38.498 "method": "bdev_nvme_set_hotplug", 00:45:38.498 "params": { 00:45:38.498 "period_us": 100000, 00:45:38.498 "enable": false 00:45:38.498 } 00:45:38.498 }, 00:45:38.498 { 00:45:38.498 "method": "bdev_wait_for_examine" 00:45:38.498 } 00:45:38.498 ] 00:45:38.498 }, 00:45:38.498 { 00:45:38.498 "subsystem": "nbd", 00:45:38.498 "config": [] 00:45:38.498 } 00:45:38.498 ] 00:45:38.498 }' 00:45:38.498 01:23:11 keyring_file -- keyring/file.sh@115 -- # killprocess 3227619 00:45:38.498 01:23:11 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 3227619 ']' 00:45:38.498 01:23:11 keyring_file -- common/autotest_common.sh@958 -- # kill -0 3227619 00:45:38.498 01:23:11 keyring_file -- common/autotest_common.sh@959 -- # uname 00:45:38.498 01:23:11 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:45:38.498 01:23:11 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3227619 00:45:38.498 01:23:11 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:45:38.498 01:23:11 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:45:38.498 01:23:11 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3227619' 00:45:38.498 killing process with pid 3227619 00:45:38.498 01:23:11 keyring_file -- common/autotest_common.sh@973 -- # kill 3227619 00:45:38.498 Received shutdown signal, test time was about 1.000000 seconds 00:45:38.498 00:45:38.498 Latency(us) 00:45:38.498 [2024-12-06T00:23:11.207Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:45:38.498 [2024-12-06T00:23:11.207Z] =================================================================================================================== 00:45:38.498 [2024-12-06T00:23:11.207Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 
00:45:38.498 01:23:11 keyring_file -- common/autotest_common.sh@978 -- # wait 3227619 00:45:39.435 01:23:11 keyring_file -- keyring/file.sh@118 -- # bperfpid=3229215 00:45:39.435 01:23:11 keyring_file -- keyring/file.sh@120 -- # waitforlisten 3229215 /var/tmp/bperf.sock 00:45:39.435 01:23:11 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 3229215 ']' 00:45:39.435 01:23:11 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:45:39.435 01:23:11 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:45:39.435 01:23:11 keyring_file -- keyring/file.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:45:39.435 01:23:11 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:45:39.435 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:45:39.435 01:23:11 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:45:39.435 01:23:11 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:45:39.435 01:23:11 keyring_file -- keyring/file.sh@116 -- # echo '{ 00:45:39.435 "subsystems": [ 00:45:39.435 { 00:45:39.435 "subsystem": "keyring", 00:45:39.435 "config": [ 00:45:39.435 { 00:45:39.435 "method": "keyring_file_add_key", 00:45:39.435 "params": { 00:45:39.435 "name": "key0", 00:45:39.435 "path": "/tmp/tmp.K7hosbL7Ww" 00:45:39.435 } 00:45:39.435 }, 00:45:39.435 { 00:45:39.435 "method": "keyring_file_add_key", 00:45:39.435 "params": { 00:45:39.435 "name": "key1", 00:45:39.436 "path": "/tmp/tmp.u3aMDh7cHW" 00:45:39.436 } 00:45:39.436 } 00:45:39.436 ] 00:45:39.436 }, 00:45:39.436 { 00:45:39.436 "subsystem": "iobuf", 00:45:39.436 "config": [ 00:45:39.436 { 00:45:39.436 "method": "iobuf_set_options", 00:45:39.436 "params": { 00:45:39.436 "small_pool_count": 8192, 00:45:39.436 "large_pool_count": 1024, 00:45:39.436 "small_bufsize": 8192, 00:45:39.436 "large_bufsize": 135168, 00:45:39.436 "enable_numa": false 00:45:39.436 } 00:45:39.436 } 00:45:39.436 ] 00:45:39.436 }, 00:45:39.436 { 00:45:39.436 "subsystem": "sock", 00:45:39.436 "config": [ 00:45:39.436 { 00:45:39.436 "method": "sock_set_default_impl", 00:45:39.436 "params": { 00:45:39.436 "impl_name": "posix" 00:45:39.436 } 00:45:39.436 }, 00:45:39.436 { 00:45:39.436 "method": "sock_impl_set_options", 00:45:39.436 "params": { 00:45:39.436 "impl_name": "ssl", 00:45:39.436 "recv_buf_size": 4096, 00:45:39.436 "send_buf_size": 4096, 00:45:39.436 "enable_recv_pipe": true, 00:45:39.436 "enable_quickack": false, 00:45:39.436 "enable_placement_id": 0, 00:45:39.436 "enable_zerocopy_send_server": true, 00:45:39.436 "enable_zerocopy_send_client": false, 00:45:39.436 "zerocopy_threshold": 0, 00:45:39.436 "tls_version": 0, 00:45:39.436 "enable_ktls": false 00:45:39.436 } 00:45:39.436 }, 00:45:39.436 { 00:45:39.436 "method": "sock_impl_set_options", 00:45:39.436 "params": { 00:45:39.436 "impl_name": "posix", 00:45:39.436 "recv_buf_size": 2097152, 00:45:39.436 "send_buf_size": 2097152, 00:45:39.436 "enable_recv_pipe": true, 00:45:39.436 "enable_quickack": false, 00:45:39.436 "enable_placement_id": 0, 00:45:39.436 "enable_zerocopy_send_server": true, 00:45:39.436 "enable_zerocopy_send_client": false, 00:45:39.436 "zerocopy_threshold": 0, 00:45:39.436 "tls_version": 0, 00:45:39.436 "enable_ktls": false 00:45:39.436 } 00:45:39.436 } 00:45:39.436 ] 
00:45:39.436 }, 00:45:39.436 { 00:45:39.436 "subsystem": "vmd", 00:45:39.436 "config": [] 00:45:39.436 }, 00:45:39.436 { 00:45:39.436 "subsystem": "accel", 00:45:39.436 "config": [ 00:45:39.436 { 00:45:39.436 "method": "accel_set_options", 00:45:39.436 "params": { 00:45:39.436 "small_cache_size": 128, 00:45:39.436 "large_cache_size": 16, 00:45:39.436 "task_count": 2048, 00:45:39.436 "sequence_count": 2048, 00:45:39.436 "buf_count": 2048 00:45:39.436 } 00:45:39.436 } 00:45:39.436 ] 00:45:39.436 }, 00:45:39.436 { 00:45:39.436 "subsystem": "bdev", 00:45:39.436 "config": [ 00:45:39.436 { 00:45:39.436 "method": "bdev_set_options", 00:45:39.436 "params": { 00:45:39.436 "bdev_io_pool_size": 65535, 00:45:39.436 "bdev_io_cache_size": 256, 00:45:39.436 "bdev_auto_examine": true, 00:45:39.436 "iobuf_small_cache_size": 128, 00:45:39.436 "iobuf_large_cache_size": 16 00:45:39.436 } 00:45:39.436 }, 00:45:39.436 { 00:45:39.436 "method": "bdev_raid_set_options", 00:45:39.436 "params": { 00:45:39.436 "process_window_size_kb": 1024, 00:45:39.436 "process_max_bandwidth_mb_sec": 0 00:45:39.436 } 00:45:39.436 }, 00:45:39.436 { 00:45:39.436 "method": "bdev_iscsi_set_options", 00:45:39.436 "params": { 00:45:39.436 "timeout_sec": 30 00:45:39.436 } 00:45:39.436 }, 00:45:39.436 { 00:45:39.436 "method": "bdev_nvme_set_options", 00:45:39.436 "params": { 00:45:39.436 "action_on_timeout": "none", 00:45:39.436 "timeout_us": 0, 00:45:39.436 "timeout_admin_us": 0, 00:45:39.436 "keep_alive_timeout_ms": 10000, 00:45:39.436 "arbitration_burst": 0, 00:45:39.436 "low_priority_weight": 0, 00:45:39.436 "medium_priority_weight": 0, 00:45:39.436 "high_priority_weight": 0, 00:45:39.436 "nvme_adminq_poll_period_us": 10000, 00:45:39.436 "nvme_ioq_poll_period_us": 0, 00:45:39.436 "io_queue_requests": 512, 00:45:39.436 "delay_cmd_submit": true, 00:45:39.436 "transport_retry_count": 4, 00:45:39.436 "bdev_retry_count": 3, 00:45:39.436 "transport_ack_timeout": 0, 00:45:39.436 "ctrlr_loss_timeout_sec": 0, 00:45:39.436 "reconnect_delay_sec": 0, 00:45:39.436 "fast_io_fail_timeout_sec": 0, 00:45:39.436 "disable_auto_failback": false, 00:45:39.436 "generate_uuids": false, 00:45:39.436 "transport_tos": 0, 00:45:39.436 "nvme_error_stat": false, 00:45:39.436 "rdma_srq_size": 0, 00:45:39.436 "io_path_stat": false, 00:45:39.436 "allow_accel_sequence": false, 00:45:39.436 "rdma_max_cq_size": 0, 00:45:39.436 "rdma_cm_event_timeout_ms": 0, 00:45:39.436 "dhchap_digests": [ 00:45:39.436 "sha256", 00:45:39.436 "sha384", 00:45:39.436 "sha512" 00:45:39.436 ], 00:45:39.436 "dhchap_dhgroups": [ 00:45:39.436 "null", 00:45:39.436 "ffdhe2048", 00:45:39.436 "ffdhe3072", 00:45:39.436 "ffdhe4096", 00:45:39.436 "ffdhe6144", 00:45:39.436 "ffdhe8192" 00:45:39.436 ] 00:45:39.436 } 00:45:39.436 }, 00:45:39.436 { 00:45:39.436 "method": "bdev_nvme_attach_controller", 00:45:39.436 "params": { 00:45:39.436 "name": "nvme0", 00:45:39.436 "trtype": "TCP", 00:45:39.436 "adrfam": "IPv4", 00:45:39.436 "traddr": "127.0.0.1", 00:45:39.436 "trsvcid": "4420", 00:45:39.436 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:45:39.436 "prchk_reftag": false, 00:45:39.436 "prchk_guard": false, 00:45:39.436 "ctrlr_loss_timeout_sec": 0, 00:45:39.436 "reconnect_delay_sec": 0, 00:45:39.436 "fast_io_fail_timeout_sec": 0, 00:45:39.436 "psk": "key0", 00:45:39.436 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:45:39.436 "hdgst": false, 00:45:39.436 "ddgst": false, 00:45:39.436 "multipath": "multipath" 00:45:39.436 } 00:45:39.436 }, 00:45:39.436 { 00:45:39.436 "method": "bdev_nvme_set_hotplug", 00:45:39.436 
"params": { 00:45:39.436 "period_us": 100000, 00:45:39.436 "enable": false 00:45:39.436 } 00:45:39.436 }, 00:45:39.436 { 00:45:39.436 "method": "bdev_wait_for_examine" 00:45:39.436 } 00:45:39.436 ] 00:45:39.436 }, 00:45:39.436 { 00:45:39.436 "subsystem": "nbd", 00:45:39.436 "config": [] 00:45:39.436 } 00:45:39.436 ] 00:45:39.436 }' 00:45:39.436 [2024-12-06 01:23:12.078532] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 24.03.0 initialization... 00:45:39.436 [2024-12-06 01:23:12.078677] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3229215 ] 00:45:39.696 [2024-12-06 01:23:12.229160] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:45:39.696 [2024-12-06 01:23:12.365606] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:45:40.265 [2024-12-06 01:23:12.830077] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:45:40.524 01:23:13 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:45:40.524 01:23:13 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:45:40.524 01:23:13 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys 00:45:40.524 01:23:13 keyring_file -- keyring/file.sh@121 -- # jq length 00:45:40.524 01:23:13 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:45:40.783 01:23:13 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:45:40.783 01:23:13 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0 00:45:40.783 01:23:13 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:45:40.783 01:23:13 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:45:40.783 01:23:13 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:45:40.783 01:23:13 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:45:40.783 01:23:13 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:45:41.040 01:23:13 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 )) 00:45:41.041 01:23:13 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1 00:45:41.041 01:23:13 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:45:41.041 01:23:13 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:45:41.041 01:23:13 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:45:41.041 01:23:13 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:45:41.041 01:23:13 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:45:41.297 01:23:13 keyring_file -- keyring/file.sh@123 -- # (( 1 == 1 )) 00:45:41.297 01:23:13 keyring_file -- keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers 00:45:41.297 01:23:13 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name' 00:45:41.297 01:23:13 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:45:41.555 01:23:14 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]] 00:45:41.555 01:23:14 keyring_file -- keyring/file.sh@1 -- # cleanup 00:45:41.555 01:23:14 
keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.K7hosbL7Ww /tmp/tmp.u3aMDh7cHW 00:45:41.555 01:23:14 keyring_file -- keyring/file.sh@20 -- # killprocess 3229215 00:45:41.555 01:23:14 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 3229215 ']' 00:45:41.555 01:23:14 keyring_file -- common/autotest_common.sh@958 -- # kill -0 3229215 00:45:41.555 01:23:14 keyring_file -- common/autotest_common.sh@959 -- # uname 00:45:41.555 01:23:14 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:45:41.556 01:23:14 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3229215 00:45:41.556 01:23:14 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:45:41.556 01:23:14 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:45:41.556 01:23:14 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3229215' 00:45:41.556 killing process with pid 3229215 00:45:41.556 01:23:14 keyring_file -- common/autotest_common.sh@973 -- # kill 3229215 00:45:41.556 Received shutdown signal, test time was about 1.000000 seconds 00:45:41.556 00:45:41.556 Latency(us) 00:45:41.556 [2024-12-06T00:23:14.265Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:45:41.556 [2024-12-06T00:23:14.265Z] =================================================================================================================== 00:45:41.556 [2024-12-06T00:23:14.265Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:45:41.556 01:23:14 keyring_file -- common/autotest_common.sh@978 -- # wait 3229215 00:45:42.494 01:23:15 keyring_file -- keyring/file.sh@21 -- # killprocess 3227474 00:45:42.494 01:23:15 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 3227474 ']' 00:45:42.494 01:23:15 keyring_file -- common/autotest_common.sh@958 -- # kill -0 3227474 00:45:42.494 01:23:15 keyring_file -- common/autotest_common.sh@959 -- # uname 00:45:42.494 01:23:15 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:45:42.494 01:23:15 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3227474 00:45:42.494 01:23:15 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:45:42.494 01:23:15 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:45:42.494 01:23:15 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3227474' 00:45:42.494 killing process with pid 3227474 00:45:42.494 01:23:15 keyring_file -- common/autotest_common.sh@973 -- # kill 3227474 00:45:42.494 01:23:15 keyring_file -- common/autotest_common.sh@978 -- # wait 3227474 00:45:45.020 00:45:45.020 real 0m20.200s 00:45:45.020 user 0m45.669s 00:45:45.020 sys 0m3.782s 00:45:45.020 01:23:17 keyring_file -- common/autotest_common.sh@1130 -- # xtrace_disable 00:45:45.020 01:23:17 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:45:45.020 ************************************ 00:45:45.020 END TEST keyring_file 00:45:45.020 ************************************ 00:45:45.020 01:23:17 -- spdk/autotest.sh@293 -- # [[ y == y ]] 00:45:45.020 01:23:17 -- spdk/autotest.sh@294 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:45:45.020 01:23:17 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:45:45.020 01:23:17 -- common/autotest_common.sh@1111 -- # xtrace_disable 
00:45:45.020 01:23:17 -- common/autotest_common.sh@10 -- # set +x 00:45:45.020 ************************************ 00:45:45.020 START TEST keyring_linux 00:45:45.020 ************************************ 00:45:45.020 01:23:17 keyring_linux -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:45:45.020 Joined session keyring: 203388396 00:45:45.020 * Looking for test storage... 00:45:45.020 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:45:45.279 01:23:17 keyring_linux -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:45:45.279 01:23:17 keyring_linux -- common/autotest_common.sh@1711 -- # lcov --version 00:45:45.279 01:23:17 keyring_linux -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:45:45.279 01:23:17 keyring_linux -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:45:45.279 01:23:17 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:45:45.279 01:23:17 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l 00:45:45.279 01:23:17 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l 00:45:45.279 01:23:17 keyring_linux -- scripts/common.sh@336 -- # IFS=.-: 00:45:45.279 01:23:17 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1 00:45:45.279 01:23:17 keyring_linux -- scripts/common.sh@337 -- # IFS=.-: 00:45:45.279 01:23:17 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2 00:45:45.279 01:23:17 keyring_linux -- scripts/common.sh@338 -- # local 'op=<' 00:45:45.279 01:23:17 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2 00:45:45.279 01:23:17 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1 00:45:45.279 01:23:17 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:45:45.279 01:23:17 keyring_linux -- scripts/common.sh@344 -- # case "$op" in 00:45:45.279 01:23:17 keyring_linux -- scripts/common.sh@345 -- # : 1 00:45:45.279 01:23:17 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 )) 00:45:45.279 01:23:17 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:45:45.279 01:23:17 keyring_linux -- scripts/common.sh@365 -- # decimal 1 00:45:45.279 01:23:17 keyring_linux -- scripts/common.sh@353 -- # local d=1 00:45:45.279 01:23:17 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:45:45.279 01:23:17 keyring_linux -- scripts/common.sh@355 -- # echo 1 00:45:45.279 01:23:17 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1 00:45:45.279 01:23:17 keyring_linux -- scripts/common.sh@366 -- # decimal 2 00:45:45.279 01:23:17 keyring_linux -- scripts/common.sh@353 -- # local d=2 00:45:45.279 01:23:17 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:45:45.279 01:23:17 keyring_linux -- scripts/common.sh@355 -- # echo 2 00:45:45.279 01:23:17 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2 00:45:45.279 01:23:17 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:45:45.279 01:23:17 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:45:45.279 01:23:17 keyring_linux -- scripts/common.sh@368 -- # return 0 00:45:45.279 01:23:17 keyring_linux -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:45:45.279 01:23:17 keyring_linux -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:45:45.279 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:45:45.279 --rc genhtml_branch_coverage=1 00:45:45.279 --rc genhtml_function_coverage=1 00:45:45.279 --rc genhtml_legend=1 00:45:45.279 --rc geninfo_all_blocks=1 00:45:45.279 --rc geninfo_unexecuted_blocks=1 00:45:45.279 00:45:45.279 ' 00:45:45.279 01:23:17 keyring_linux -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:45:45.279 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:45:45.279 --rc genhtml_branch_coverage=1 00:45:45.279 --rc genhtml_function_coverage=1 00:45:45.279 --rc genhtml_legend=1 00:45:45.280 --rc geninfo_all_blocks=1 00:45:45.280 --rc geninfo_unexecuted_blocks=1 00:45:45.280 00:45:45.280 ' 00:45:45.280 01:23:17 keyring_linux -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:45:45.280 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:45:45.280 --rc genhtml_branch_coverage=1 00:45:45.280 --rc genhtml_function_coverage=1 00:45:45.280 --rc genhtml_legend=1 00:45:45.280 --rc geninfo_all_blocks=1 00:45:45.280 --rc geninfo_unexecuted_blocks=1 00:45:45.280 00:45:45.280 ' 00:45:45.280 01:23:17 keyring_linux -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:45:45.280 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:45:45.280 --rc genhtml_branch_coverage=1 00:45:45.280 --rc genhtml_function_coverage=1 00:45:45.280 --rc genhtml_legend=1 00:45:45.280 --rc geninfo_all_blocks=1 00:45:45.280 --rc geninfo_unexecuted_blocks=1 00:45:45.280 00:45:45.280 ' 00:45:45.280 01:23:17 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:45:45.280 01:23:17 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:45:45.280 01:23:17 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:45:45.280 01:23:17 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:45:45.280 01:23:17 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:45:45.280 01:23:17 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:45:45.280 01:23:17 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:45:45.280 01:23:17 keyring_linux -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:45:45.280 01:23:17 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:45:45.280 01:23:17 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:45:45.280 01:23:17 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:45:45.280 01:23:17 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:45:45.280 01:23:17 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:45:45.280 01:23:17 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:45:45.280 01:23:17 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:45:45.280 01:23:17 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:45:45.280 01:23:17 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:45:45.280 01:23:17 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:45:45.280 01:23:17 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:45:45.280 01:23:17 keyring_linux -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:45:45.280 01:23:17 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob 00:45:45.280 01:23:17 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:45:45.280 01:23:17 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:45:45.280 01:23:17 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:45:45.280 01:23:17 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:45.280 01:23:17 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:45.280 01:23:17 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:45.280 01:23:17 keyring_linux -- paths/export.sh@5 -- # export PATH 00:45:45.280 01:23:17 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:45:45.280 01:23:17 keyring_linux -- nvmf/common.sh@51 -- # : 0 00:45:45.280 01:23:17 keyring_linux -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:45:45.280 01:23:17 keyring_linux -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:45:45.280 01:23:17 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:45:45.280 01:23:17 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:45:45.280 01:23:17 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:45:45.280 01:23:17 keyring_linux -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:45:45.280 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:45:45.280 01:23:17 keyring_linux -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:45:45.280 01:23:17 keyring_linux -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:45:45.280 01:23:17 keyring_linux -- nvmf/common.sh@55 -- # have_pci_nics=0 00:45:45.280 01:23:17 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:45:45.280 01:23:17 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:45:45.280 01:23:17 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:45:45.280 01:23:17 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:45:45.280 01:23:17 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:45:45.280 01:23:17 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:45:45.280 01:23:17 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:45:45.280 01:23:17 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:45:45.280 01:23:17 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:45:45.280 01:23:17 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:45:45.280 01:23:17 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:45:45.280 01:23:17 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:45:45.280 01:23:17 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:45:45.280 01:23:17 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:45:45.280 01:23:17 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:45:45.280 01:23:17 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:45:45.280 01:23:17 keyring_linux -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:45:45.280 01:23:17 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:45:45.280 01:23:17 keyring_linux -- nvmf/common.sh@733 -- # python - 00:45:45.280 01:23:17 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:45:45.280 01:23:17 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:45:45.280 /tmp/:spdk-test:key0 00:45:45.280 01:23:17 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:45:45.280 01:23:17 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:45:45.280 01:23:17 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:45:45.280 01:23:17 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:45:45.280 01:23:17 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:45:45.280 01:23:17 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:45:45.280 
01:23:17 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:45:45.280 01:23:17 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:45:45.280 01:23:17 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:45:45.280 01:23:17 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:45:45.280 01:23:17 keyring_linux -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:45:45.280 01:23:17 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:45:45.280 01:23:17 keyring_linux -- nvmf/common.sh@733 -- # python - 00:45:45.280 01:23:17 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:45:45.280 01:23:17 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:45:45.280 /tmp/:spdk-test:key1 00:45:45.280 01:23:17 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=3230100 00:45:45.280 01:23:17 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:45:45.280 01:23:17 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 3230100 00:45:45.280 01:23:17 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 3230100 ']' 00:45:45.280 01:23:17 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:45:45.280 01:23:17 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:45:45.280 01:23:17 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:45:45.280 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:45:45.280 01:23:17 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:45:45.280 01:23:17 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:45:45.540 [2024-12-06 01:23:18.021641] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 24.03.0 initialization... 
00:45:45.540 [2024-12-06 01:23:18.021775] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3230100 ] 00:45:45.540 [2024-12-06 01:23:18.159190] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:45:45.800 [2024-12-06 01:23:18.281486] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:45:46.736 01:23:19 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:45:46.736 01:23:19 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:45:46.736 01:23:19 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:45:46.736 01:23:19 keyring_linux -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:46.736 01:23:19 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:45:46.736 [2024-12-06 01:23:19.123758] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:45:46.736 null0 00:45:46.736 [2024-12-06 01:23:19.155776] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:45:46.736 [2024-12-06 01:23:19.156419] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:45:46.736 01:23:19 keyring_linux -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:46.736 01:23:19 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:45:46.736 545448674 00:45:46.736 01:23:19 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:45:46.736 193308828 00:45:46.736 01:23:19 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=3230249 00:45:46.736 01:23:19 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 3230249 /var/tmp/bperf.sock 00:45:46.736 01:23:19 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:45:46.736 01:23:19 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 3230249 ']' 00:45:46.736 01:23:19 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:45:46.736 01:23:19 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:45:46.736 01:23:19 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:45:46.736 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:45:46.736 01:23:19 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:45:46.737 01:23:19 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:45:46.737 [2024-12-06 01:23:19.261300] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 24.03.0 initialization... 
00:45:46.737 [2024-12-06 01:23:19.261428] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3230249 ] 00:45:46.737 [2024-12-06 01:23:19.394330] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:45:46.997 [2024-12-06 01:23:19.522643] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:45:47.564 01:23:20 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:45:47.564 01:23:20 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:45:47.564 01:23:20 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:45:47.564 01:23:20 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:45:47.822 01:23:20 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:45:47.822 01:23:20 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:45:48.757 01:23:21 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:45:48.757 01:23:21 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:45:48.757 [2024-12-06 01:23:21.388195] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:45:49.015 nvme0n1 00:45:49.015 01:23:21 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:45:49.015 01:23:21 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:45:49.015 01:23:21 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:45:49.015 01:23:21 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:45:49.015 01:23:21 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:45:49.015 01:23:21 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:45:49.274 01:23:21 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:45:49.274 01:23:21 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:45:49.274 01:23:21 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:45:49.274 01:23:21 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:45:49.274 01:23:21 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:45:49.274 01:23:21 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:45:49.274 01:23:21 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:45:49.534 01:23:22 keyring_linux -- keyring/linux.sh@25 -- # sn=545448674 00:45:49.534 01:23:22 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:45:49.534 01:23:22 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:45:49.534 01:23:22 keyring_linux -- 
keyring/linux.sh@26 -- # [[ 545448674 == \5\4\5\4\4\8\6\7\4 ]] 00:45:49.534 01:23:22 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 545448674 00:45:49.534 01:23:22 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:45:49.534 01:23:22 keyring_linux -- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:45:49.534 Running I/O for 1 seconds... 00:45:50.473 7036.00 IOPS, 27.48 MiB/s 00:45:50.473 Latency(us) 00:45:50.473 [2024-12-06T00:23:23.182Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:45:50.473 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:45:50.473 nvme0n1 : 1.02 7057.22 27.57 0.00 0.00 17982.71 5728.33 25049.32 00:45:50.473 [2024-12-06T00:23:23.182Z] =================================================================================================================== 00:45:50.473 [2024-12-06T00:23:23.182Z] Total : 7057.22 27.57 0.00 0.00 17982.71 5728.33 25049.32 00:45:50.473 { 00:45:50.473 "results": [ 00:45:50.473 { 00:45:50.473 "job": "nvme0n1", 00:45:50.473 "core_mask": "0x2", 00:45:50.473 "workload": "randread", 00:45:50.473 "status": "finished", 00:45:50.473 "queue_depth": 128, 00:45:50.473 "io_size": 4096, 00:45:50.473 "runtime": 1.015273, 00:45:50.473 "iops": 7057.215152968709, 00:45:50.473 "mibps": 27.56724669128402, 00:45:50.473 "io_failed": 0, 00:45:50.473 "io_timeout": 0, 00:45:50.473 "avg_latency_us": 17982.705079734307, 00:45:50.473 "min_latency_us": 5728.331851851852, 00:45:50.473 "max_latency_us": 25049.315555555557 00:45:50.473 } 00:45:50.473 ], 00:45:50.473 "core_count": 1 00:45:50.473 } 00:45:50.473 01:23:23 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:45:50.473 01:23:23 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:45:51.042 01:23:23 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:45:51.042 01:23:23 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:45:51.042 01:23:23 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:45:51.042 01:23:23 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:45:51.042 01:23:23 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:45:51.042 01:23:23 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:45:51.042 01:23:23 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:45:51.042 01:23:23 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:45:51.042 01:23:23 keyring_linux -- keyring/linux.sh@23 -- # return 00:45:51.042 01:23:23 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:45:51.042 01:23:23 keyring_linux -- common/autotest_common.sh@652 -- # local es=0 00:45:51.042 01:23:23 keyring_linux -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk 
:spdk-test:key1 00:45:51.042 01:23:23 keyring_linux -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:45:51.042 01:23:23 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:45:51.042 01:23:23 keyring_linux -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:45:51.042 01:23:23 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:45:51.042 01:23:23 keyring_linux -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:45:51.042 01:23:23 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:45:51.301 [2024-12-06 01:23:23.989983] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7780 (107): Transport endpoint is not connected 00:45:51.301 [2024-12-06 01:23:23.989984] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:45:51.301 [2024-12-06 01:23:23.990957] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7780 (9): Bad file descriptor 00:45:51.301 [2024-12-06 01:23:23.991953] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:45:51.301 [2024-12-06 01:23:23.991981] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:45:51.301 [2024-12-06 01:23:23.992012] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:45:51.301 [2024-12-06 01:23:23.992034] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state.
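The attach failure above is the test's expected negative path (the call is wrapped in the NOT helper), and the JSON-RPC request and error that follow record it. For readers reproducing the keyring flow by hand, a minimal sketch follows, pieced together from the commands traced above; the keyctl add step and its key payload are placeholders not shown in this run, and the relative scripts/rpc.py path is an assumption about the working directory:

    # register a TLS PSK in the Linux session keyring under the name the test uses
    # (payload below is a placeholder, not the value from this run)
    keyctl add user :spdk-test:key0 'NVMeTLSkey-1:00:<placeholder>' @s

    # enable the Linux keyring plugin on the bperf app and finish framework init
    scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable
    scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init

    # attach an NVMe/TCP controller, authenticating with the keyring-resident PSK
    scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 \
        -t tcp -a 127.0.0.1 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 \
        --psk :spdk-test:key0

    # afterwards, look the key up by name and drop it from the session keyring
    sn=$(keyctl search @s user :spdk-test:key0)
    keyctl unlink "$sn"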
00:45:51.301 request: 00:45:51.301 { 00:45:51.301 "name": "nvme0", 00:45:51.301 "trtype": "tcp", 00:45:51.301 "traddr": "127.0.0.1", 00:45:51.301 "adrfam": "ipv4", 00:45:51.301 "trsvcid": "4420", 00:45:51.301 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:45:51.301 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:45:51.301 "prchk_reftag": false, 00:45:51.301 "prchk_guard": false, 00:45:51.301 "hdgst": false, 00:45:51.301 "ddgst": false, 00:45:51.301 "psk": ":spdk-test:key1", 00:45:51.301 "allow_unrecognized_csi": false, 00:45:51.301 "method": "bdev_nvme_attach_controller", 00:45:51.301 "req_id": 1 00:45:51.301 } 00:45:51.301 Got JSON-RPC error response 00:45:51.301 response: 00:45:51.301 { 00:45:51.301 "code": -5, 00:45:51.301 "message": "Input/output error" 00:45:51.301 } 00:45:51.301 01:23:24 keyring_linux -- common/autotest_common.sh@655 -- # es=1 00:45:51.301 01:23:24 keyring_linux -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:45:51.301 01:23:24 keyring_linux -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:45:51.301 01:23:24 keyring_linux -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:45:51.301 01:23:24 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:45:51.301 01:23:24 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:45:51.301 01:23:24 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:45:51.301 01:23:24 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:45:51.560 01:23:24 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:45:51.560 01:23:24 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:45:51.560 01:23:24 keyring_linux -- keyring/linux.sh@33 -- # sn=545448674 00:45:51.560 01:23:24 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 545448674 00:45:51.560 1 links removed 00:45:51.560 01:23:24 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:45:51.560 01:23:24 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:45:51.560 01:23:24 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:45:51.560 01:23:24 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:45:51.560 01:23:24 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:45:51.560 01:23:24 keyring_linux -- keyring/linux.sh@33 -- # sn=193308828 00:45:51.560 01:23:24 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 193308828 00:45:51.560 1 links removed 00:45:51.560 01:23:24 keyring_linux -- keyring/linux.sh@41 -- # killprocess 3230249 00:45:51.560 01:23:24 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 3230249 ']' 00:45:51.560 01:23:24 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 3230249 00:45:51.560 01:23:24 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:45:51.560 01:23:24 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:45:51.560 01:23:24 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3230249 00:45:51.560 01:23:24 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:45:51.560 01:23:24 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:45:51.560 01:23:24 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3230249' 00:45:51.560 killing process with pid 3230249 00:45:51.560 01:23:24 keyring_linux -- common/autotest_common.sh@973 -- # kill 3230249 00:45:51.560 Received shutdown signal, test time was about 1.000000 seconds 00:45:51.560 00:45:51.560 
Latency(us) 00:45:51.560 [2024-12-06T00:23:24.269Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:45:51.560 [2024-12-06T00:23:24.269Z] =================================================================================================================== 00:45:51.560 [2024-12-06T00:23:24.269Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:45:51.560 01:23:24 keyring_linux -- common/autotest_common.sh@978 -- # wait 3230249 00:45:52.506 01:23:24 keyring_linux -- keyring/linux.sh@42 -- # killprocess 3230100 00:45:52.506 01:23:24 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 3230100 ']' 00:45:52.506 01:23:24 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 3230100 00:45:52.506 01:23:24 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:45:52.506 01:23:24 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:45:52.506 01:23:24 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3230100 00:45:52.506 01:23:25 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:45:52.506 01:23:25 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:45:52.506 01:23:25 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3230100' 00:45:52.506 killing process with pid 3230100 00:45:52.506 01:23:25 keyring_linux -- common/autotest_common.sh@973 -- # kill 3230100 00:45:52.506 01:23:25 keyring_linux -- common/autotest_common.sh@978 -- # wait 3230100 00:45:55.043 00:45:55.043 real 0m9.779s 00:45:55.043 user 0m16.901s 00:45:55.043 sys 0m1.892s 00:45:55.043 01:23:27 keyring_linux -- common/autotest_common.sh@1130 -- # xtrace_disable 00:45:55.043 01:23:27 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:45:55.043 ************************************ 00:45:55.043 END TEST keyring_linux 00:45:55.043 ************************************ 00:45:55.043 01:23:27 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:45:55.043 01:23:27 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:45:55.043 01:23:27 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:45:55.043 01:23:27 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:45:55.043 01:23:27 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:45:55.043 01:23:27 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:45:55.043 01:23:27 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:45:55.043 01:23:27 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:45:55.043 01:23:27 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:45:55.043 01:23:27 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:45:55.043 01:23:27 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:45:55.043 01:23:27 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:45:55.043 01:23:27 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:45:55.043 01:23:27 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:45:55.043 01:23:27 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:45:55.043 01:23:27 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT 00:45:55.043 01:23:27 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:45:55.043 01:23:27 -- common/autotest_common.sh@726 -- # xtrace_disable 00:45:55.043 01:23:27 -- common/autotest_common.sh@10 -- # set +x 00:45:55.043 01:23:27 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:45:55.043 01:23:27 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:45:55.043 01:23:27 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:45:55.043 01:23:27 -- common/autotest_common.sh@10 -- # set +x 00:45:56.949 INFO: APP EXITING 
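The two killprocess calls above (pid 3230249 for the bperf app, pid 3230100 for the target) follow the usual autotest_common.sh helper pattern. A loose reconstruction from the xtrace output, not the actual source, with the sudo-wrapped branch left out as an assumption:

    killprocess() {
        local pid=$1
        [ -z "$pid" ] && return 1              # no pid supplied
        kill -0 "$pid" || return 0             # process already gone, nothing to do
        local process_name=""
        if [ "$(uname)" = Linux ]; then
            process_name=$(ps --no-headers -o comm= "$pid")
        fi
        if [ "$process_name" = sudo ]; then
            : # real helper signals through sudo here; omitted in this sketch
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                            # reap it and propagate its exit status
    }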
00:45:56.949 INFO: killing all VMs 00:45:56.949 INFO: killing vhost app 00:45:56.949 INFO: EXIT DONE 00:45:57.889 0000:88:00.0 (8086 0a54): Already using the nvme driver 00:45:57.889 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:45:57.889 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:45:57.889 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:45:57.889 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:45:57.889 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:45:57.889 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:45:57.889 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:45:57.889 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:45:57.889 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:45:57.889 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:45:57.889 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:45:57.889 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:45:57.889 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:45:57.889 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:45:57.889 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:45:57.889 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:45:59.265 Cleaning 00:45:59.265 Removing: /var/run/dpdk/spdk0/config 00:45:59.265 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:45:59.265 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:45:59.265 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:45:59.265 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:45:59.265 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:45:59.265 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:45:59.265 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:45:59.265 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:45:59.265 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:45:59.265 Removing: /var/run/dpdk/spdk0/hugepage_info 00:45:59.265 Removing: /var/run/dpdk/spdk1/config 00:45:59.265 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:45:59.265 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:45:59.265 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:45:59.265 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:45:59.265 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:45:59.265 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:45:59.265 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:45:59.265 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:45:59.265 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:45:59.265 Removing: /var/run/dpdk/spdk1/hugepage_info 00:45:59.265 Removing: /var/run/dpdk/spdk2/config 00:45:59.265 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:45:59.265 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:45:59.265 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:45:59.265 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:45:59.265 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:45:59.265 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:45:59.265 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:45:59.265 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:45:59.265 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:45:59.265 Removing: /var/run/dpdk/spdk2/hugepage_info 00:45:59.265 Removing: /var/run/dpdk/spdk3/config 00:45:59.265 Removing: 
/var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:45:59.265 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:45:59.266 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:45:59.266 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:45:59.266 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:45:59.266 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:45:59.266 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:45:59.266 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:45:59.266 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:45:59.266 Removing: /var/run/dpdk/spdk3/hugepage_info 00:45:59.266 Removing: /var/run/dpdk/spdk4/config 00:45:59.266 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:45:59.266 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:45:59.266 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:45:59.266 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:45:59.266 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:45:59.266 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:45:59.266 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:45:59.266 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:45:59.266 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:45:59.266 Removing: /var/run/dpdk/spdk4/hugepage_info 00:45:59.266 Removing: /dev/shm/bdev_svc_trace.1 00:45:59.266 Removing: /dev/shm/nvmf_trace.0 00:45:59.266 Removing: /dev/shm/spdk_tgt_trace.pid2815568 00:45:59.266 Removing: /var/run/dpdk/spdk0 00:45:59.266 Removing: /var/run/dpdk/spdk1 00:45:59.266 Removing: /var/run/dpdk/spdk2 00:45:59.266 Removing: /var/run/dpdk/spdk3 00:45:59.266 Removing: /var/run/dpdk/spdk4 00:45:59.266 Removing: /var/run/dpdk/spdk_pid2812571 00:45:59.266 Removing: /var/run/dpdk/spdk_pid2813706 00:45:59.266 Removing: /var/run/dpdk/spdk_pid2815568 00:45:59.266 Removing: /var/run/dpdk/spdk_pid2816614 00:45:59.266 Removing: /var/run/dpdk/spdk_pid2817755 00:45:59.266 Removing: /var/run/dpdk/spdk_pid2818224 00:45:59.266 Removing: /var/run/dpdk/spdk_pid2819155 00:45:59.266 Removing: /var/run/dpdk/spdk_pid2819329 00:45:59.266 Removing: /var/run/dpdk/spdk_pid2819943 00:45:59.266 Removing: /var/run/dpdk/spdk_pid2821409 00:45:59.266 Removing: /var/run/dpdk/spdk_pid2822596 00:45:59.266 Removing: /var/run/dpdk/spdk_pid2823191 00:45:59.266 Removing: /var/run/dpdk/spdk_pid2823665 00:45:59.266 Removing: /var/run/dpdk/spdk_pid2824273 00:45:59.266 Removing: /var/run/dpdk/spdk_pid2824869 00:45:59.266 Removing: /var/run/dpdk/spdk_pid2825027 00:45:59.266 Removing: /var/run/dpdk/spdk_pid2825311 00:45:59.266 Removing: /var/run/dpdk/spdk_pid2825511 00:45:59.266 Removing: /var/run/dpdk/spdk_pid2825954 00:45:59.266 Removing: /var/run/dpdk/spdk_pid2828715 00:45:59.266 Removing: /var/run/dpdk/spdk_pid2829277 00:45:59.266 Removing: /var/run/dpdk/spdk_pid2829830 00:45:59.266 Removing: /var/run/dpdk/spdk_pid2829974 00:45:59.266 Removing: /var/run/dpdk/spdk_pid2831210 00:45:59.266 Removing: /var/run/dpdk/spdk_pid2831457 00:45:59.266 Removing: /var/run/dpdk/spdk_pid2832711 00:45:59.266 Removing: /var/run/dpdk/spdk_pid2832849 00:45:59.266 Removing: /var/run/dpdk/spdk_pid2833312 00:45:59.266 Removing: /var/run/dpdk/spdk_pid2833542 00:45:59.266 Removing: /var/run/dpdk/spdk_pid2833850 00:45:59.266 Removing: /var/run/dpdk/spdk_pid2834074 00:45:59.266 Removing: /var/run/dpdk/spdk_pid2835155 00:45:59.266 Removing: /var/run/dpdk/spdk_pid2835315 00:45:59.266 Removing: /var/run/dpdk/spdk_pid2835643 00:45:59.266 Removing: 
/var/run/dpdk/spdk_pid2838180 00:45:59.266 Removing: /var/run/dpdk/spdk_pid2841068 00:45:59.266 Removing: /var/run/dpdk/spdk_pid2848986 00:45:59.266 Removing: /var/run/dpdk/spdk_pid2849497 00:45:59.266 Removing: /var/run/dpdk/spdk_pid2852162 00:45:59.266 Removing: /var/run/dpdk/spdk_pid2852439 00:45:59.266 Removing: /var/run/dpdk/spdk_pid2855359 00:45:59.266 Removing: /var/run/dpdk/spdk_pid2859345 00:45:59.266 Removing: /var/run/dpdk/spdk_pid2861672 00:45:59.266 Removing: /var/run/dpdk/spdk_pid2868904 00:45:59.266 Removing: /var/run/dpdk/spdk_pid2874654 00:45:59.266 Removing: /var/run/dpdk/spdk_pid2876102 00:45:59.266 Removing: /var/run/dpdk/spdk_pid2877409 00:45:59.266 Removing: /var/run/dpdk/spdk_pid2888714 00:45:59.266 Removing: /var/run/dpdk/spdk_pid2891276 00:45:59.266 Removing: /var/run/dpdk/spdk_pid2949255 00:45:59.266 Removing: /var/run/dpdk/spdk_pid2952707 00:45:59.266 Removing: /var/run/dpdk/spdk_pid2956931 00:45:59.266 Removing: /var/run/dpdk/spdk_pid2963166 00:45:59.266 Removing: /var/run/dpdk/spdk_pid2993178 00:45:59.266 Removing: /var/run/dpdk/spdk_pid2996385 00:45:59.266 Removing: /var/run/dpdk/spdk_pid2997566 00:45:59.266 Removing: /var/run/dpdk/spdk_pid2999021 00:45:59.266 Removing: /var/run/dpdk/spdk_pid2999417 00:45:59.266 Removing: /var/run/dpdk/spdk_pid2999691 00:45:59.266 Removing: /var/run/dpdk/spdk_pid2999973 00:45:59.266 Removing: /var/run/dpdk/spdk_pid3000815 00:45:59.266 Removing: /var/run/dpdk/spdk_pid3002268 00:45:59.266 Removing: /var/run/dpdk/spdk_pid3003779 00:45:59.266 Removing: /var/run/dpdk/spdk_pid3004479 00:45:59.266 Removing: /var/run/dpdk/spdk_pid3006367 00:45:59.266 Removing: /var/run/dpdk/spdk_pid3007179 00:45:59.266 Removing: /var/run/dpdk/spdk_pid3007894 00:45:59.266 Removing: /var/run/dpdk/spdk_pid3010680 00:45:59.266 Removing: /var/run/dpdk/spdk_pid3014359 00:45:59.266 Removing: /var/run/dpdk/spdk_pid3014360 00:45:59.266 Removing: /var/run/dpdk/spdk_pid3014361 00:45:59.266 Removing: /var/run/dpdk/spdk_pid3016852 00:45:59.266 Removing: /var/run/dpdk/spdk_pid3019207 00:45:59.266 Removing: /var/run/dpdk/spdk_pid3023344 00:45:59.266 Removing: /var/run/dpdk/spdk_pid3046991 00:45:59.266 Removing: /var/run/dpdk/spdk_pid3050527 00:45:59.266 Removing: /var/run/dpdk/spdk_pid3054550 00:45:59.266 Removing: /var/run/dpdk/spdk_pid3056032 00:45:59.525 Removing: /var/run/dpdk/spdk_pid3057707 00:45:59.525 Removing: /var/run/dpdk/spdk_pid3059265 00:45:59.525 Removing: /var/run/dpdk/spdk_pid3062293 00:45:59.525 Removing: /var/run/dpdk/spdk_pid3065404 00:45:59.525 Removing: /var/run/dpdk/spdk_pid3068124 00:45:59.525 Removing: /var/run/dpdk/spdk_pid3072799 00:45:59.525 Removing: /var/run/dpdk/spdk_pid3072804 00:45:59.525 Removing: /var/run/dpdk/spdk_pid3075966 00:45:59.525 Removing: /var/run/dpdk/spdk_pid3076105 00:45:59.525 Removing: /var/run/dpdk/spdk_pid3076239 00:45:59.525 Removing: /var/run/dpdk/spdk_pid3076629 00:45:59.525 Removing: /var/run/dpdk/spdk_pid3076639 00:45:59.525 Removing: /var/run/dpdk/spdk_pid3077838 00:45:59.525 Removing: /var/run/dpdk/spdk_pid3079081 00:45:59.525 Removing: /var/run/dpdk/spdk_pid3080816 00:45:59.525 Removing: /var/run/dpdk/spdk_pid3082002 00:45:59.525 Removing: /var/run/dpdk/spdk_pid3083177 00:45:59.525 Removing: /var/run/dpdk/spdk_pid3084470 00:45:59.525 Removing: /var/run/dpdk/spdk_pid3088414 00:45:59.525 Removing: /var/run/dpdk/spdk_pid3088860 00:45:59.525 Removing: /var/run/dpdk/spdk_pid3090260 00:45:59.525 Removing: /var/run/dpdk/spdk_pid3091136 00:45:59.525 Removing: /var/run/dpdk/spdk_pid3095127 00:45:59.525 Removing: 
/var/run/dpdk/spdk_pid3097237 00:45:59.525 Removing: /var/run/dpdk/spdk_pid3101039 00:45:59.525 Removing: /var/run/dpdk/spdk_pid3104634 00:45:59.525 Removing: /var/run/dpdk/spdk_pid3112119 00:45:59.525 Removing: /var/run/dpdk/spdk_pid3116859 00:45:59.525 Removing: /var/run/dpdk/spdk_pid3116868 00:45:59.525 Removing: /var/run/dpdk/spdk_pid3129896 00:45:59.525 Removing: /var/run/dpdk/spdk_pid3130558 00:45:59.525 Removing: /var/run/dpdk/spdk_pid3131229 00:45:59.525 Removing: /var/run/dpdk/spdk_pid3131888 00:45:59.525 Removing: /var/run/dpdk/spdk_pid3132871 00:45:59.525 Removing: /var/run/dpdk/spdk_pid3133413 00:45:59.525 Removing: /var/run/dpdk/spdk_pid3134075 00:45:59.525 Removing: /var/run/dpdk/spdk_pid3134626 00:45:59.525 Removing: /var/run/dpdk/spdk_pid3137523 00:45:59.525 Removing: /var/run/dpdk/spdk_pid3137849 00:45:59.525 Removing: /var/run/dpdk/spdk_pid3142459 00:45:59.525 Removing: /var/run/dpdk/spdk_pid3142764 00:45:59.525 Removing: /var/run/dpdk/spdk_pid3146276 00:45:59.525 Removing: /var/run/dpdk/spdk_pid3149032 00:45:59.525 Removing: /var/run/dpdk/spdk_pid3156188 00:45:59.525 Removing: /var/run/dpdk/spdk_pid3156599 00:45:59.526 Removing: /var/run/dpdk/spdk_pid3159239 00:45:59.526 Removing: /var/run/dpdk/spdk_pid3159517 00:45:59.526 Removing: /var/run/dpdk/spdk_pid3162412 00:45:59.526 Removing: /var/run/dpdk/spdk_pid3166365 00:45:59.526 Removing: /var/run/dpdk/spdk_pid3168719 00:45:59.526 Removing: /var/run/dpdk/spdk_pid3176433 00:45:59.526 Removing: /var/run/dpdk/spdk_pid3182018 00:45:59.526 Removing: /var/run/dpdk/spdk_pid3183326 00:45:59.526 Removing: /var/run/dpdk/spdk_pid3184124 00:45:59.526 Removing: /var/run/dpdk/spdk_pid3195087 00:45:59.526 Removing: /var/run/dpdk/spdk_pid3197614 00:45:59.526 Removing: /var/run/dpdk/spdk_pid3199849 00:45:59.526 Removing: /var/run/dpdk/spdk_pid3205323 00:45:59.526 Removing: /var/run/dpdk/spdk_pid3205332 00:45:59.526 Removing: /var/run/dpdk/spdk_pid3208731 00:45:59.526 Removing: /var/run/dpdk/spdk_pid3210754 00:45:59.526 Removing: /var/run/dpdk/spdk_pid3212274 00:45:59.526 Removing: /var/run/dpdk/spdk_pid3213256 00:45:59.526 Removing: /var/run/dpdk/spdk_pid3214883 00:45:59.526 Removing: /var/run/dpdk/spdk_pid3215877 00:45:59.526 Removing: /var/run/dpdk/spdk_pid3221416 00:45:59.526 Removing: /var/run/dpdk/spdk_pid3222042 00:45:59.526 Removing: /var/run/dpdk/spdk_pid3222458 00:45:59.526 Removing: /var/run/dpdk/spdk_pid3224346 00:45:59.526 Removing: /var/run/dpdk/spdk_pid3224747 00:45:59.526 Removing: /var/run/dpdk/spdk_pid3225025 00:45:59.526 Removing: /var/run/dpdk/spdk_pid3227474 00:45:59.526 Removing: /var/run/dpdk/spdk_pid3227619 00:45:59.526 Removing: /var/run/dpdk/spdk_pid3229215 00:45:59.526 Removing: /var/run/dpdk/spdk_pid3230100 00:45:59.526 Removing: /var/run/dpdk/spdk_pid3230249 00:45:59.526 Clean 00:45:59.526 01:23:32 -- common/autotest_common.sh@1453 -- # return 0 00:45:59.526 01:23:32 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup 00:45:59.526 01:23:32 -- common/autotest_common.sh@732 -- # xtrace_disable 00:45:59.526 01:23:32 -- common/autotest_common.sh@10 -- # set +x 00:45:59.526 01:23:32 -- spdk/autotest.sh@391 -- # timing_exit autotest 00:45:59.526 01:23:32 -- common/autotest_common.sh@732 -- # xtrace_disable 00:45:59.526 01:23:32 -- common/autotest_common.sh@10 -- # set +x 00:45:59.526 01:23:32 -- spdk/autotest.sh@392 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:45:59.785 01:23:32 -- spdk/autotest.sh@394 -- # [[ -f 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:45:59.785 01:23:32 -- spdk/autotest.sh@394 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:45:59.785 01:23:32 -- spdk/autotest.sh@396 -- # [[ y == y ]] 00:45:59.785 01:23:32 -- spdk/autotest.sh@398 -- # hostname 00:45:59.785 01:23:32 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-gp-11 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:45:59.785 geninfo: WARNING: invalid characters removed from testname! 00:46:31.915 01:24:02 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:46:33.822 01:24:06 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:46:38.062 01:24:10 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:46:41.353 01:24:13 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:46:45.551 01:24:17 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:46:48.847 01:24:21 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:46:53.045 01:24:25 -- 
spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:46:53.045 01:24:25 -- spdk/autorun.sh@1 -- $ timing_finish 00:46:53.045 01:24:25 -- common/autotest_common.sh@738 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt ]] 00:46:53.045 01:24:25 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:46:53.045 01:24:25 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:46:53.045 01:24:25 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:46:53.045 + [[ -n 2741118 ]] 00:46:53.045 + sudo kill 2741118 00:46:53.054 [Pipeline] } 00:46:53.068 [Pipeline] // stage 00:46:53.072 [Pipeline] } 00:46:53.085 [Pipeline] // timeout 00:46:53.089 [Pipeline] } 00:46:53.101 [Pipeline] // catchError 00:46:53.106 [Pipeline] } 00:46:53.119 [Pipeline] // wrap 00:46:53.124 [Pipeline] } 00:46:53.135 [Pipeline] // catchError 00:46:53.143 [Pipeline] stage 00:46:53.144 [Pipeline] { (Epilogue) 00:46:53.155 [Pipeline] catchError 00:46:53.157 [Pipeline] { 00:46:53.168 [Pipeline] echo 00:46:53.169 Cleanup processes 00:46:53.175 [Pipeline] sh 00:46:53.460 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:46:53.460 3244600 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:46:53.473 [Pipeline] sh 00:46:53.755 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:46:53.755 ++ grep -v 'sudo pgrep' 00:46:53.755 ++ awk '{print $1}' 00:46:53.755 + sudo kill -9 00:46:53.755 + true 00:46:53.766 [Pipeline] sh 00:46:54.049 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:47:06.256 [Pipeline] sh 00:47:06.546 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:47:06.546 Artifacts sizes are good 00:47:06.561 [Pipeline] archiveArtifacts 00:47:06.567 Archiving artifacts 00:47:06.759 [Pipeline] sh 00:47:07.072 + sudo chown -R sys_sgci: /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:47:07.085 [Pipeline] cleanWs 00:47:07.094 [WS-CLEANUP] Deleting project workspace... 00:47:07.094 [WS-CLEANUP] Deferred wipeout is used... 00:47:07.101 [WS-CLEANUP] done 00:47:07.102 [Pipeline] } 00:47:07.118 [Pipeline] // catchError 00:47:07.128 [Pipeline] sh 00:47:07.405 + logger -p user.info -t JENKINS-CI 00:47:07.414 [Pipeline] } 00:47:07.426 [Pipeline] // stage 00:47:07.431 [Pipeline] } 00:47:07.444 [Pipeline] // node 00:47:07.449 [Pipeline] End of Pipeline 00:47:07.481 Finished: SUCCESS
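For reference, the coverage post-processing traced just before this epilogue (the autotest.sh@398 through @408 steps above) reduces to the following lcov sequence. The --rc switches are copied from the trace; collecting them into one RC variable, shortening the output paths, and the earlier creation of cov_base.info are assumptions made for brevity:

    # same lcov/genhtml/geninfo switches as the trace, kept in one variable
    RC='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1'

    # capture test-time counters against the source tree (autotest.sh@398)
    lcov $RC -q -c --no-external -d spdk -t spdk-gp-11 -o cov_test.info

    # merge the pre-test baseline with the test capture (@399)
    lcov $RC -q -a cov_base.info -a cov_test.info -o cov_total.info

    # strip DPDK, system headers and example/app code from the report (@400, @404-@407)
    lcov $RC -q -r cov_total.info '*/dpdk/*' -o cov_total.info
    lcov $RC -q -r cov_total.info --ignore-errors unused,unused '/usr/*' -o cov_total.info
    lcov $RC -q -r cov_total.info '*/examples/vmd/*' -o cov_total.info
    lcov $RC -q -r cov_total.info '*/app/spdk_lspci/*' -o cov_total.info
    lcov $RC -q -r cov_total.info '*/app/spdk_top/*' -o cov_total.info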